High-precision mechanical sensors are critical for devices and systems that demand extremely high accuracy. By combining the mechanical and optical properties of selected materials with ultraprecise micro/nano fabrication and optical sensing methods, sensors can be tailored to meet specific performance requirements. In this work, we introduce a probe fabricated on the end face of an optical fiber using two-photon polymerization 3D printing technology. The 3D-printed probe and the optical fiber form a Fabry–Pérot cavity, which converts the minute mechanical signals received by the probe into optical signals for demodulation. The sensitivity of the probe depends on the material properties, structure, and dimensions. The material we used has a lower Young’s modulus than conventional 3D-printing photoresists, so the probe can achieve higher resolution. Depending on the requirements of different applications, various materials and designs for 3D printing can be selected. We demonstrate the structure of this nano-mechanical sensor and characterize it through mechanical testing. The verification results show that the sensor achieves ultra-high resolution. This high-force-resolution optical fiber nano-mechanical sensor has potential for high-accuracy measurement applications, and the results reveal a design strategy for specialized optical probes with unique physical properties.
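As a rough illustration of how a Fabry–Pérot cavity turns probe displacement into an optical signal, the sketch below evaluates the reflected intensity in the two-beam (low-finesse) approximation. The wavelength, cavity length, and reflection coefficients are assumed illustrative values, not parameters of the fabricated probe.

```python
import numpy as np

def fp_reflection(cavity_length, wavelength, n=1.0, r1=0.2, r2=0.2):
    """Normalized reflected intensity of a low-finesse Fabry-Perot cavity
    in the two-beam approximation. r1 and r2 are illustrative reflection
    coefficients, not measured values for this probe."""
    phase = 4 * np.pi * n * cavity_length / wavelength
    return r1**2 + r2**2 + 2 * r1 * r2 * np.cos(phase)

lam = 1550e-9   # a common fiber-sensing wavelength (assumption)
L0 = 50 * lam   # assumed cavity length sitting on an interference maximum

# A probe deflection of lambda/4 shifts the round-trip phase by pi,
# moving the reflected intensity from a fringe maximum to a minimum.
I_max = fp_reflection(L0, lam)
I_min = fp_reflection(L0 + lam / 4, lam)
```

In a real interrogation scheme, demodulation tracks this phase (and hence the cavity length) rather than a single intensity value, but the intensity-versus-length fringe is the underlying transduction mechanism.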
For U-Net-based low-dose CT (LDCT) imaging, an interesting question remains: can an LDCT imaging neural network trained at one image resolution be transferred and applied directly to another LDCT imaging application at a different image resolution, provided that both the noise level and the structural content are similar? To answer this question, numerical simulations were performed with high-resolution (HR) and low-resolution (LR) LDCT images having comparable noise levels. The results demonstrate that a U-Net trained on LR CT images can effectively reduce noise in HR CT images, and vice versa. However, additional artifacts may be generated when transferring the same U-Net to an LDCT imaging task with a different image spatial resolution, due to noise-induced 2D features. For example, noticeable bright spots were generated at the edges of the field of view (FOV) when the HR CT image was denoised by the U-Net trained on LR CT images. In conclusion, this study suggests that it is necessary to retrain the U-Net for each dedicated LDCT imaging application.
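One reason cross-resolution transfer is possible at all is that a U-Net is fully convolutional, so the same trained kernels apply to inputs of any size. A minimal sketch of this property, using a single hand-set smoothing kernel as a stand-in for trained weights:

```python
import numpy as np

def conv2d_same(img, kernel):
    """Zero-padded 'same' 2D filtering -- the building block that makes a
    fully convolutional network such as U-Net resolution-agnostic."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# The same 3x3 kernel (a stand-in for trained U-Net weights) applies
# unchanged to images of different resolution:
kernel = np.full((3, 3), 1.0 / 9.0)
lr_img = np.random.rand(32, 32)   # "low-resolution" stand-in
hr_img = np.random.rand(64, 64)   # "high-resolution" stand-in
lr_out = conv2d_same(lr_img, kernel)
hr_out = conv2d_same(hr_img, kernel)
```

The catch, as the study shows, is that the kernels see noise at a fixed pixel scale: noise texture that spans one pixel at LR spans several at HR, so weights tuned to one scale can misfire at the other.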
As a quantitative CT imaging technique, dual-energy CT (DECT) has attracted considerable research interest. However, material decomposition from high-energy (HE) and low-energy (LE) data may suffer from magnified noise, resulting in severe degradation of image quality and decomposition accuracy. To overcome these challenges, this study presents a novel DECT material decomposition method based on a deep neural network (DNN). In particular, this new DNN integrates the CT image reconstruction task and the nonlinear material decomposition procedures into a single network. This end-to-end network consists of three compartments: the sinogram-domain decomposition compartment, the user-defined analytical domain transformation operation (OP) compartment, and the image-domain decomposition compartment. By design, the first and third compartments are responsible for the complicated nonlinear material decomposition while denoising the DECT images. Natural images are used to synthesize the dual-energy data with assumed volume fractions and density distributions. By doing so, the burden of collecting clinical DECT data can be significantly reduced, making the new DECT reconstruction framework much easier to implement. Both numerical and experimental validation results demonstrate that the proposed DNN-based DECT reconstruction algorithm can generate high-quality basis images with improved accuracy.
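For context on what the network generalizes, image-domain two-material decomposition in its simplest linear form is a per-pixel solve against a 2x2 attenuation matrix; the same relation also shows how dual-energy data can be synthesized from assumed basis maps, as is done here with natural images. The attenuation values below are placeholders, not measured coefficients:

```python
import numpy as np

# Illustrative attenuation matrix for two basis materials at HE/LE
# (placeholder numbers, not measured mass-attenuation coefficients).
M = np.array([[0.20, 0.18],   # HE row: [water, bone]
              [0.25, 0.40]])  # LE row: [water, bone]

def decompose(mu_he, mu_le):
    """Per-pixel linear two-material decomposition: solve M @ x = mu.
    A DNN replaces this with a learned nonlinear, denoising mapping."""
    mu = np.stack([mu_he.ravel(), mu_le.ravel()])
    x = np.linalg.solve(M, mu)
    return x[0].reshape(mu_he.shape), x[1].reshape(mu_he.shape)

# Round trip: synthesize HE/LE data from known basis maps (as natural
# images are used in the paper), then recover the maps.
water = np.array([[1.0, 0.5], [0.0, 0.2]])
bone  = np.array([[0.0, 0.5], [1.0, 0.1]])
mu_he = M[0, 0] * water + M[0, 1] * bone
mu_le = M[1, 0] * water + M[1, 1] * bone
w, b = decompose(mu_he, mu_le)
```

The noise magnification mentioned above comes from this inversion: when the HE and LE rows of M are nearly parallel, small noise in mu_he and mu_le is strongly amplified in the recovered basis images, which is what the learned decomposition is designed to suppress.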
Reducing the radiation dose is always an important topic in modern computed tomography (CT) imaging. As the dose level decreases, the conventional analytical filtered backprojection (FBP) reconstruction algorithm fails to generate CT images satisfactory for clinical applications. To overcome this difficulty, in this study we developed a novel deep neural network (DNN) for low-dose CT image reconstruction that exploits simultaneous sinogram-domain and image-domain denoising. The key idea is to jointly denoise the acquired sinogram and the reconstructed CT image while reconstructing the CT image in an end-to-end manner with the help of the DNN. Specifically, this new DNN contains three compartments: the sinogram-domain denoising compartment, the sinogram-to-image reconstruction compartment, and the image-domain denoising compartment. This novel sinogram- and image-domain-based CT reconstruction network is named ADAPTIVE-NET. By design, the first and third compartments of ADAPTIVE-NET mutually update their parameters for CT image denoising during network training. One clear advantage of ADAPTIVE-NET is that the unique information stored in the sinogram can be accessed directly during network training. Validation results from numerical simulations demonstrate that the newly proposed ADAPTIVE-NET can effectively improve the quality of CT images acquired at low radiation dose levels.
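The three-compartment composition can be sketched with simple stand-ins: a toy linear operator plays the role of the fixed sinogram-to-image reconstruction step, and hand-set smoothing blends stand in for the two learned denoising compartments. All names and values below are illustrative, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model A: image (n pixels) -> "sinogram" (m samples).
# A stands in for the CT projection geometry; its pseudo-inverse plays the
# role of the fixed sinogram-to-image reconstruction compartment.
n, m = 16, 32
A = rng.standard_normal((m, n))
A_pinv = np.linalg.pinv(A)

def sino_denoise(y, alpha=0.9):
    """Compartment 1 (stand-in): blend the sinogram with a smoothed copy."""
    return alpha * y + (1 - alpha) * np.convolve(y, np.ones(3) / 3, mode="same")

def img_denoise(x, beta=0.9):
    """Compartment 3 (stand-in): mild image-domain smoothing."""
    return beta * x + (1 - beta) * np.convolve(x, np.ones(3) / 3, mode="same")

def adaptive_net_like(y):
    """Sinogram denoising -> fixed reconstruction -> image denoising,
    composed end to end as in the three-compartment design."""
    return img_denoise(A_pinv @ sino_denoise(y))

x_true = rng.standard_normal(n)
y_noisy = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = adaptive_net_like(y_noisy)
```

Because the whole chain is differentiable, gradients from an image-domain loss flow back through the reconstruction operator into the sinogram-domain compartment, which is what lets the two denoisers update their parameters jointly during training.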
Low-dose computed tomography (CT) has attracted much attention in clinical applications, since X-ray radiation can pose serious health risks to patients. Sparse-view CT imaging is one of the major ways to reduce radiation dose. However, when sparse-view CT images are reconstructed with the conventional filtered backprojection (FBP) algorithm, image quality may be significantly degraded by severe streaking artifacts. Therefore, iterative reconstruction (IR) algorithms have been developed to improve sparse-view CT image quality. One drawback of IR algorithms is their long computation time. Additionally, adjusting and optimizing the hyper-parameters needed during the iterations is also time-consuming, and sometimes may even depend on individual experience. These drawbacks strongly limit the wide application of IR algorithms. Aiming to partially overcome such difficulties, in the present work we propose a deep iterative reconstruction (DIR) framework that generalizes conventional IR algorithms by mapping them onto deep neural networks (DNNs). In the proposed DIR algorithms, the prior term, the data-fidelity term, and the hyper-parameters can all be represented and learned by the network. By doing so, generalized iterative models can be used to perform high-quality sparse-view CT image reconstruction. Numerical experiments based on clinical patient data demonstrate that the proposed DIR algorithms mitigate streaking artifacts more effectively while preserving subtle structures well.
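The idea of mapping an IR algorithm into a network can be illustrated by unrolling plain gradient descent on a least-squares fidelity term with a Tikhonov prior stand-in. In a DIR network the step sizes, prior weight, and the prior gradient itself would be learnable per iteration; here they are fixed, hand-set values on a toy system matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 40
A = rng.standard_normal((m, n))   # toy system matrix (stand-in for the projector)
x_true = rng.standard_normal(n)
y = A @ x_true                    # noiseless toy "sparse-view" data

def unrolled_recon(y, n_iters=1000, step=None, prior_weight=0.0):
    """Unrolled gradient iterations:
        x <- x - step * (A^T (A x - y) + prior_weight * grad_prior(x)).
    In a DIR network, step, prior_weight, and grad_prior would be learned
    per iteration; here they are fixed, hand-set stand-ins."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # step size ensuring convergence
    x = np.zeros(n)
    for _ in range(n_iters):
        grad_fidelity = A.T @ (A @ x - y)        # data-fidelity gradient
        grad_prior = x                           # Tikhonov (L2) prior stand-in
        x = x - step * (grad_fidelity + prior_weight * grad_prior)
    return x

x_hat = unrolled_recon(y)
```

Replacing the fixed `grad_prior` with a small trained subnetwork at each unrolled iteration, and making `step` and `prior_weight` trainable scalars, yields the kind of generalized iterative model the DIR framework learns end to end.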
In dental computed tomography (CT) scanning, high-quality images are crucial for oral disease diagnosis and treatment. In practice, however, many artifacts, such as metal artifacts, downsampling artifacts, and motion artifacts, can degrade image quality. The main purpose of this article is to reduce motion artifacts, which are caused by patient movement during data acquisition in the dental CT scan. To this end, we developed a dental CT motion artifact-correction algorithm based on a deep learning approach. We used dental CT data with motion artifacts, reconstructed by conventional filtered back-projection (FBP), as inputs to a deep neural network, with the corresponding high-quality CT data as labels during training. We propose training a generative adversarial network (GAN) with the Wasserstein distance and a mean squared error (MSE) loss, termed m-WGAN, to remove motion artifacts and obtain high-quality dental CT images. To improve the generator structure, the generator uses a cascaded CNN-style network with residual blocks. To the best of our knowledge, this work describes the first deep learning method used with a commercial cone-beam dental CT scanner. We compared the performance of a general GAN and the m-WGAN. The experimental results confirm that the proposed algorithm effectively removes motion artifacts from dental CT scans. The proposed m-WGAN method achieved a higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) and a lower root-mean-squared error (RMSE) than the general GAN method.
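The evaluation metrics and the combined objective can be sketched as follows. The trade-off weight `lam` in the generator loss is an assumed value, and the loss shown is a generic WGAN-plus-MSE form for illustration rather than the paper's exact training objective:

```python
import numpy as np

def rmse(img, ref):
    """Root-mean-squared error between a result and its reference."""
    return float(np.sqrt(np.mean((img - ref) ** 2)))

def psnr(img, ref, data_range=1.0):
    """Peak signal-to-noise ratio in dB for a given intensity data range."""
    mse = np.mean((img - ref) ** 2)
    return float(10 * np.log10(data_range ** 2 / mse))

def generator_loss(d_fake_scores, fake, target, lam=1.0):
    """Generic WGAN-plus-MSE generator objective: the Wasserstein term
    -E[D(G(z))] plus an MSE data-fidelity term. `lam` is an assumed
    trade-off weight, not a value reported in the paper."""
    wasserstein = -np.mean(d_fake_scores)
    mse = np.mean((fake - target) ** 2)
    return float(wasserstein + lam * mse)
```

The MSE term anchors the generator's output to the motion-free label, while the Wasserstein term pushes its texture toward the distribution of artifact-free images; a higher PSNR/SSIM and lower RMSE on held-out scans indicate the combination outperformed the plain GAN loss.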