Although radiographs are the most frequently used medical imaging modality worldwide due to their cost-effectiveness and widespread accessibility, the structural superposition along the X-ray paths often renders lung nodules difficult to detect. In this study, we apply “X-ray dissectography” to digitally dissect the lungs in a few radiographic projections, suppress the interference of irrelevant structures, and improve lung nodule detectability. We then design a novel collaborative detection network to localize lung nodules in both the dissected 2D projections and the 3D physical space. Our experimental results show that our approach improves average precision by more than 20% compared with detecting lung nodules in the original projections using a popular detection network. The proposed approach and results suggest the potential to redesign current X-ray imaging protocols and workflows and to improve the diagnostic performance of chest radiographs for lung diseases.
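The abstract does not spell out how 2D detections are fused into 3D. As a minimal illustration only, assuming an idealized parallel-beam geometry with two orthogonal dissected projections (frontal and lateral), corresponding 2D nodule detections can be triangulated into a 3D position; the function name, coordinate conventions, and geometry below are assumptions for this sketch, not the paper's method.

```python
import numpy as np

def fuse_detections_parallel_beam(frontal_uv, lateral_uv):
    """Fuse 2D nodule detections from two orthogonal parallel-beam
    projections into one 3D position (hypothetical geometry).

    frontal_uv: (u, v) in the frontal view, mapping to (x, z)
    lateral_uv: (u, v) in the lateral view, mapping to (y, z)
    """
    x, z_frontal = frontal_uv
    y, z_lateral = lateral_uv
    # The vertical coordinate is shared by both views under this
    # geometry; average the two estimates to reduce localization noise.
    z = 0.5 * (z_frontal + z_lateral)
    return np.array([x, y, z])

# Toy usage: detections in pixel coordinates from the two views.
print(fuse_detections_parallel_beam((120.0, 80.5), (64.0, 79.5)))
```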
X-ray imaging is the most popular medical imaging technology. While X-ray radiography is rather cost-effective, tissue structures are superimposed along the X-ray paths. On the other hand, computed tomography (CT) resolves internal structures, but increases radiation dose and is complicated and expensive. Here we propose “X-ray dissectography” to digitally extract a target organ from a few radiographic projections for stereographic and tomographic analysis in the deep learning framework. As an exemplary embodiment, we propose a general X-ray dissectography network, a dedicated X-ray stereotography network, and the X-ray imaging systems to implement these functionalities. Our experiments show that X-ray stereography can be achieved for an isolated organ, such as the lungs in this case, suggesting the feasibility of transforming conventional radiographic reading into stereographic examination of the isolated organ, which potentially allows higher sensitivity and specificity, and even tomographic visualization of the target. With further improvements, X-ray dissectography promises to become a new X-ray imaging modality for CT-grade diagnosis at a radiation dose and system cost comparable to those of radiographic or tomosynthetic imaging.
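For intuition about the dissection step, here is a minimal, hypothetical PyTorch sketch of an image-to-image network that maps a full radiograph to a lungs-only projection. The class name, layer sizes, and architecture are inventions for illustration; the paper's actual dissectography network is more elaborate.

```python
import torch
import torch.nn as nn

class DissectNet(nn.Module):
    """Toy encoder-decoder: full radiograph in, organ-only radiograph
    out. A stand-in for the dissectography idea, not the paper's model."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(),  # downsample 2x
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(),  # upsample 2x
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

net = DissectNet()
proj = torch.randn(1, 1, 256, 256)   # one radiographic projection
lungs_only = net(proj)               # dissected projection, same size
print(lungs_only.shape)              # torch.Size([1, 1, 256, 256])
```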
Radiation dose reduction is one of the most important topics in the field of computed tomography (CT). In recent years, deep-learning-based denoising methods have proven effective in reducing radiation dose while improving image quality. Since paired low-dose and normal-dose CT scans are usually unavailable in clinical scenarios, various learning paradigms have been studied, including fully-supervised learning based on simulation data, weakly-supervised learning based on unpaired noise-clean or paired noise-noise data, and self-supervised learning based on noisy data only. When neither clean nor noisy reference data are available, unsupervised/self-supervised low-dose CT (LDCT) denoising methods are promising for processing real data and images. In this study, we propose the first-of-its-kind Self-Supervised Dual-Domain Network (SSDDNet) for LDCT denoising. SSDDNet consists of three modules: a projection-domain network, a reconstruction layer, and an image-domain network. During training, a projection-domain loss, a reconstruction loss, and an image-domain loss are used simultaneously to optimize the denoising model end-to-end on a single LDCT scan. Our experimental results show that the dual-domain network is effective and superior to single-domain networks in the self-supervised learning setting.
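The abstract names three loss terms optimized end-to-end; below is a minimal PyTorch sketch of such a composite objective. The function and tensor names are assumptions, and the reference tensors are placeholders: in the self-supervised setting they would be derived from the single LDCT scan itself (e.g., by splitting the noisy data), not from clean ground truth.

```python
import torch

def ssddnet_loss(proj_dn, proj_ref, img_rec, img_ref, img_dn,
                 w_proj=1.0, w_rec=1.0, w_img=1.0):
    """Composite objective combining the three terms named in the
    abstract (a sketch; the paper's exact self-supervised targets are
    not reproduced here)."""
    l_proj = torch.mean((proj_dn - proj_ref) ** 2)  # projection-domain loss
    l_rec  = torch.mean((img_rec - img_ref) ** 2)   # reconstruction loss
    l_img  = torch.mean((img_dn - img_ref) ** 2)    # image-domain loss
    return w_proj * l_proj + w_rec * l_rec + w_img * l_img

# Toy shapes: 1 scan, 1 channel, 360 views x 512 detectors; 512^2 image.
p = torch.randn(1, 1, 360, 512)
i = torch.randn(1, 1, 512, 512)
print(ssddnet_loss(p, 0.9 * p, i, 0.9 * i, 1.1 * i).item())
```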
Material decomposition algorithms enable discrimination and quantification of multiple contrast agent and tissue compositions in spectral image datasets acquired by photon-counting computed tomography (PCCT). Image denoising has been shown to improve PCCT image reconstruction quality and feature recognition while preserving fine image detail. Reduction of image artifacts and noise could also improve the accuracy of material decomposition, but the effects of denoising on material decomposition have not been investigated. In particular, deep learning methods can reduce inherent PCCT image noise without requiring a system-based or assumed prior noise model. Therefore, the objective of this study was to investigate the effects of image denoising on quantitative material decomposition in the absence of an influence of spatial resolution on feature recognition. Phantoms comprising multiple pure and spatially uniform contrast agent (gadolinium, iodine) and tissue (calcium, water) compositions were imaged by PCCT with four energy thresholds chosen to normalize photon counts and leverage contrast agent K-edges. Image denoising was performed by the established block-matching and 3D filtering (BM3D) algorithm or by deep learning using convolutional neural networks. Material decomposition was performed on as-acquired, BM3D-denoised, and deep-learning-denoised datasets using constrained maximum likelihood estimation and compared to known material concentrations in the phantom. Image denoising by BM3D and deep learning improved the quantitative accuracy of material concentrations determined by material decomposition, as measured by the root-mean-squared error against ground truth. Material classification was not improved by image denoising compared with as-acquired images, suggesting that material decomposition was robust against inherent acquisition noise when feature recognition was not challenged by the system spatial resolution. Deep-learning-denoised images better preserved local detail than the more aggressive smoothing of BM3D, as measured by line profiles across features.
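As a simplified stand-in for the constrained maximum likelihood estimation used in the study, the sketch below decomposes one voxel's four-bin measurements into material concentrations using non-negativity-constrained least squares; the basis matrix values are placeholders, not measured attenuation data, and the constraint here is only non-negativity rather than the study's full formulation.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical 4-threshold PCCT forward model: each column holds the
# effective attenuation of one basis material (Gd, I, Ca, water) in
# each of the four energy bins. Values are placeholders.
A = np.array([
    [0.9, 1.2, 1.5, 1.0],
    [1.4, 0.8, 1.1, 0.9],
    [0.7, 1.6, 0.9, 0.8],
    [0.5, 0.6, 0.7, 0.7],
])

def decompose(bin_measurements):
    """Non-negativity-constrained least-squares decomposition of one
    voxel into material concentrations; a simplified surrogate for
    constrained maximum likelihood estimation, which would also model
    Poisson counting statistics."""
    conc, _residual = nnls(A, bin_measurements)
    return conc

measured = A @ np.array([2.0, 0.0, 1.0, 0.5])  # synthetic noiseless voxel
print(decompose(measured))  # recovers ~ [2.0, 0.0, 1.0, 0.5]
```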
Deep-learning-based methods have achieved promising results for CT metal artifact reduction (MAR) by learning to map an artifact-affected image or projection data to the artifact-free image in a data-driven manner. Basically, existing methods simply select a single window of Hounsfield unit (HU) values, followed by a normalization operation, to preprocess all training and testing images, and then train a neural network on these images to reduce metal artifacts. However, if the selected window covers the whole range of HU values, the model is challenged to predict dedicated narrow windows accurately, since the contribution of small HU values to the training loss may not be sufficiently weighted relative to that of large HU values. On the other hand, if the selected window is small, the opportunity is lost to train the network effectively on features with large HU values. In practice, various tissues and organs in CT images are inspected with different window settings. Therefore, we propose a multiple-window learning method for CT MAR. The basic idea of multiple-window learning is that content with large HU values may help improve features with small HU values, and vice versa. Our method can precisely process multiple specified windows by simultaneously and interactively learning to remove metal artifacts within multiple windows. Experimental results on both simulated and clinical datasets demonstrate the effectiveness of the proposed method. Thanks to its simplicity, the proposed multiple-window network can easily be incorporated into other deep learning frameworks for CT MAR.
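For concreteness, below is a minimal sketch of HU windowing and normalization, producing one normalized channel per window such as a multiple-window network might consume. The window settings are common display values assumed for illustration, not the paper's configuration.

```python
import numpy as np

def window_normalize(hu, center, width):
    """Clip a HU image to a display window and rescale it to [0, 1]."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

# Hypothetical (center, width) window settings in HU.
WINDOWS = {"soft_tissue": (40, 400), "lung": (-600, 1500), "bone": (400, 1800)}

hu_image = np.random.uniform(-1000, 2000, size=(256, 256))  # toy CT slice
multi_window = np.stack(
    [window_normalize(hu_image, c, w) for c, w in WINDOWS.values()], axis=0
)
print(multi_window.shape)  # (3, 256, 256): one channel per window
```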