As DECT becomes widely accepted in diagnostic radiology, there is growing interest in extending dual-energy imaging to other scenarios. In this context, a new mobile dual-source dual-energy CBCT system is being developed for applications such as radiotherapy and interventional radiology. The device performs dual-energy measurements with two X-ray sources mounted side by side along the z-axis, which causes a mismatch between the fields of view of the high-energy and low-energy sources in the z-direction. To solve this problem, this study proposes a deep-learning-based method to generate high-energy and low-energy CT images in the missing fields of view. The method generates high-energy (or low-energy) images from low-energy (or high-energy) images and thereby completes the information in the missing fields of view. Furthermore, to enhance the quality of the generated images, a plug-and-play frequency-domain Mamba module is designed to extract frequency-domain features in the latent space, and redundant feature maps are then filtered out by the proposed frequency channel filtering module so that the model focuses on learning and extracting the effective features. Experimental results on simulated data show that the proposed method can effectively generate the missing low- and high-energy CT images, with SSIM, PSNR, and MAE reaching up to 99.3%, 48.1 dB, and 6.3 HU, respectively. Moreover, the generated images maintain good continuity along the z-axis, indicating that our method effectively ensures consistency between the fields of view of the two sources. In addition, when dealing with data from unseen patients, the model can be further fine-tuned online using the paired dual-energy data in the overlapping fields of view, yielding a patient-specific model that is robust across different samples.
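As a rough illustration of the frequency channel filtering idea, a minimal PyTorch sketch is given below. The module name, layer shapes, and the use of per-channel FFT-magnitude statistics as the filtering signal are assumptions made for illustration, and the state-space (Mamba) scan itself is deliberately omitted rather than reproduced from the paper.

    # Hypothetical sketch of frequency-domain channel filtering on latent
    # feature maps of shape (B, C, H, W). Not the authors' implementation;
    # the Mamba/state-space block is omitted here.
    import torch
    import torch.nn as nn

    class FrequencyChannelFilter(nn.Module):
        def __init__(self, channels, reduction=4):
            super().__init__()
            # Channel gate driven by per-channel frequency-magnitude statistics.
            self.gate = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):
            # 2D FFT of each channel; use the mean magnitude as a descriptor.
            mag = torch.fft.fft2(x, norm="ortho").abs()      # (B, C, H, W)
            stats = mag.mean(dim=(-2, -1))                   # (B, C)
            weights = self.gate(stats).unsqueeze(-1).unsqueeze(-1)
            # Down-weight channels whose frequency content is judged redundant.
            return x * weights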
Removing ring artifacts is a significant challenge in X-ray computed tomography (CT) systems, particularly those using photon-counting detectors. To address this problem, this study proposes the learning-based Inter-slice Complementarity Enhanced Ring Artifact Removal (ICE-RAR) algorithm. Because the variability and complexity of detector responses make it difficult to acquire enough paired training data in real-world scenarios, the study first introduces a data simulation strategy that incorporates the characteristics of specific systems in accordance with the principles of ring artifact formation. A dual-branch neural network is then designed, consisting of a global artifact removal branch and a central region enhancement branch, to improve artifact removal, especially in the central region of interest where artifacts are more difficult to eliminate. Additionally, exploiting the independence of different detector element responses, the study proposes leveraging inter-slice complementarity to improve image restoration. The effectiveness of the central region enhancement and inter-slice complementarity was confirmed through ablation experiments on simulated data. Both simulated and real-world results demonstrate that ICE-RAR effectively reduces ring artifacts while preserving image details. More importantly, because specific system characteristics are incorporated into the data simulation process, models trained on simulated data can be applied directly to unseen real data, showing significant potential for addressing the ring artifact removal (RAR) problem in practical CT systems.
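A minimal PyTorch sketch of such a dual-branch layout is given below, assuming the network receives the current slice stacked with its two neighbouring slices (the inter-slice complementarity input) as a 3-channel image. The branch depths, crop size, and residual fusion rule are illustrative assumptions, not the authors' design.

    # Hypothetical dual-branch sketch: a global branch over the full slice and
    # a second branch refining a central crop, fused as a residual correction.
    import torch
    import torch.nn as nn

    def conv_block(c_in, c_out):
        return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                             nn.ReLU(inplace=True))

    class DualBranchRAR(nn.Module):
        def __init__(self, in_ch=3, feat=32, center=128):
            super().__init__()
            self.center = center
            self.global_branch = nn.Sequential(conv_block(in_ch, feat),
                                               conv_block(feat, feat),
                                               nn.Conv2d(feat, 1, 3, padding=1))
            self.center_branch = nn.Sequential(conv_block(in_ch, feat),
                                               conv_block(feat, feat),
                                               nn.Conv2d(feat, 1, 3, padding=1))

        def forward(self, slices):                 # slices: (B, 3, H, W), H, W >= center
            out = self.global_branch(slices)       # full-image artifact removal
            h, w = slices.shape[-2:]
            cy, cx, r = h // 2, w // 2, self.center // 2
            crop = slices[..., cy - r:cy + r, cx - r:cx + r]
            # Refine the central region, where ring artifacts are hardest to
            # remove, and add the correction back as a residual.
            refined = out.clone()
            refined[..., cy - r:cy + r, cx - r:cx + r] = (
                out[..., cy - r:cy + r, cx - r:cx + r] + self.center_branch(crop))
            return refined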
Parallel imaging is widely used in the clinic to accelerate magnetic resonance imaging (MRI) data collection. However, conventional parallel imaging reconstruction techniques still struggle to achieve satisfactory performance at high acceleration rates, resulting in artifacts and noise that affect subsequent diagnosis. Recently, implicit neural representation (INR) has emerged as a new deep learning paradigm that represents an object as a continuous function of spatial coordinates. The continuity of this representation enhances the model's capacity to capture redundant information within the object; however, INR typically requires thousands of training iterations to reconstruct an image. In this work, we propose a method that speeds up INR for parallel MRI reconstruction using hash-mapping and a pre-trained encoder, enabling INR to achieve better results with fewer training iterations. Benefiting from INR's powerful representation, the proposed method outperforms existing methods in removing aliasing artifacts and noise. Experimental results on simulated and real undersampled data demonstrate the model's potential for further accelerating parallel MRI.
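For illustration only, a minimal PyTorch sketch of a hash-encoded coordinate network is given below. The single-level hash grid without interpolation, the table size, the hash function, and the MLP width are all assumptions, and the paper's pre-trained encoder is not modeled.

    # Hypothetical hash-encoded INR over 2D spatial coordinates in [0, 1),
    # mapping each coordinate to, e.g., real/imaginary image values.
    import torch
    import torch.nn as nn

    class HashINR(nn.Module):
        def __init__(self, table_size=2**16, feat_dim=16, grid_res=256, out_ch=2):
            super().__init__()
            self.grid_res = grid_res
            self.table = nn.Embedding(table_size, feat_dim)
            nn.init.uniform_(self.table.weight, -1e-4, 1e-4)
            self.mlp = nn.Sequential(nn.Linear(feat_dim, 64),
                                     nn.ReLU(inplace=True),
                                     nn.Linear(64, out_ch))

        def _hash(self, ij):
            # Simple spatial hash of integer grid indices (prime multiply + XOR).
            primes = torch.tensor([1, 2654435761], device=ij.device)
            mixed = (ij * primes).sum(-1) ^ (ij[..., 0] * 805459861)
            return mixed % self.table.num_embeddings

        def forward(self, coords):                 # coords: (N, 2) in [0, 1)
            ij = (coords * self.grid_res).long()   # nearest grid vertex (no interpolation)
            feats = self.table(self._hash(ij))
            return self.mlp(feats)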
Due to the high cost of high-field MRI equipment, low-field MRI systems are still widely used in small and medium-sized hospitals. Compared with high-field MRI, images acquired from low-field MRI often suffer from lower resolution and lower signal-to-noise ratios, and analysis of clinical data reveals that noise levels can vary significantly across different low-field MRI protocols. In this study, we propose an effective super-resolution reconstruction model based on generative adversarial networks (GANs). The proposed model can implicitly differentiate between sequence types, allowing it to adapt to different scan protocols during the reconstruction process. To further enhance image detail, a one-to-many supervision strategy that exploits similar patches within a single image is employed during training. Additionally, the number of basic blocks in the model is reduced through knowledge distillation to meet the speed requirements for clinical use. Experimental results on actual 0.35T low-field MR images suggest that the proposed method holds substantial potential for clinical application.
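A minimal PyTorch sketch of a knowledge-distillation training objective is given below, assuming a frozen teacher generator and a smaller student with fewer basic blocks. The loss terms and weights are illustrative assumptions; they do not reproduce the paper's exact objectives or its one-to-many patch supervision.

    # Hypothetical distillation step: pixel loss to the ground truth, an L1
    # term matching the teacher's outputs, and a non-saturating GAN term.
    import torch
    import torch.nn.functional as F

    def distillation_step(student, teacher, discriminator, lr_batch, hr_batch,
                          w_pix=1.0, w_distill=0.5, w_adv=0.01):
        with torch.no_grad():
            teacher_sr = teacher(lr_batch)           # teacher output as soft target
        student_sr = student(lr_batch)
        loss_pix = F.l1_loss(student_sr, hr_batch)   # supervision from ground truth
        loss_kd = F.l1_loss(student_sr, teacher_sr)  # match the teacher's outputs
        loss_adv = F.softplus(-discriminator(student_sr)).mean()
        return w_pix * loss_pix + w_distill * loss_kd + w_adv * loss_adv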