Photon-counting CT (PCCT) has the potential to provide superior spectral separation and to improve material decomposition accuracy. However, photon-counting detectors (PCDs) face challenges of pixel-level inhomogeneity and instability, necessitating frequent spectral calibration and specialized processing adapted to different spectral responses. In this work, we introduce a generalized supervised training method for adaptive spectral harmonization in PCCT. A multi-layer perceptron (MLP) network is trained to achieve material decomposition regardless of the individual PCD pixel response, yielding artifact-free images and accurate material quantification, as validated in simulations and experiments.
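The core idea of response-adaptive decomposition can be sketched in a toy form: train an MLP that maps per-pixel bin counts *plus* a descriptor of that pixel's spectral response to basis-material thicknesses. The sketch below is a minimal, hypothetical illustration (a two-bin, two-material model with made-up attenuation values and per-pixel bin gains standing in for detector inhomogeneity), not the paper's network or training pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-bin forward model (all numbers are illustrative, not the
# paper's): MU[bin, material] is attenuation per unit thickness.
MU = np.array([[0.40, 0.90],    # low-energy bin: water, bone
               [0.20, 0.50]])   # high-energy bin: water, bone

def counts(thickness, gain):
    """Transmitted fraction per bin for a pixel with its own bin gains."""
    return gain * np.exp(-(thickness @ MU.T))

# Training data: random thickness pairs and random per-pixel bin gains that
# stand in for pixel-level spectral-response inhomogeneity.
n = 4000
t = rng.uniform(0.0, 2.0, size=(n, 2))
g = rng.uniform(0.8, 1.2, size=(n, 2))
x = np.hstack([counts(t, g), g])        # network input: counts + response
x = (x - x.mean(0)) / x.std(0)          # standardize features
y = t

# Tiny one-hidden-layer MLP trained with full-batch gradient descent.
h = 32
W1 = rng.normal(0, 0.5, (4, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, (h, 2)); b2 = np.zeros(2)

def forward(x):
    a = np.tanh(x @ W1 + b1)
    return a, a @ W2 + b2

lr = 0.05
for _ in range(4000):
    a, pred = forward(x)
    err = pred - y
    gW2 = a.T @ err / n; gb2 = err.mean(0)
    da = (err @ W2.T) * (1 - a**2)      # backprop through tanh
    gW1 = x.T @ da / n; gb1 = da.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(x)
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"decomposition RMSE: {rmse:.3f} (thickness range 0-2)")
```

Because the response descriptor is an input rather than baked into the weights, one network can serve pixels with different spectral behavior, which is the harmonization idea in miniature.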
Dual-energy CT (DECT) provides additional material-based contrast using spectral information. Realizing DECT with rotation-to-rotation kVp switching may suffer from structural misalignment due to patient motion, and therefore requires deformable image registration (DIR) between the two kVp images. Recent studies in DIR have highlighted deep-learning-based methods, which can achieve superior registration accuracy within reasonable computational time. However, current deep-learning-based DIR methods may eliminate important anatomical features or hallucinate fake structures, and their lack of interpretability complicates robustness verification. Alternatively, recent studies have introduced algorithm unrolling, which provides a concrete and systematic connection between model-based iterative methods and data-driven methods. In this work, we present an unsupervised Model-Based deep Unrolling Registration Network (MBURegNet) for DIR in DECT. MBURegNet comprises a sequence of stacked update blocks that unroll the Large Deformation Diffeomorphic Metric Mapping (LDDMM) method, where each block samples a velocity field that follows the physics of diffeomorphisms. Preliminary studies using clinical data have shown that the proposed network achieves superior performance compared to a baseline deep-learning-based method, as evidenced by both qualitative and quantitative analyses. Additionally, the network can generate a sequence of intermediate images connecting the initial and final motion states, effectively illustrating the continuous flow of diffeomorphisms.
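The unrolling structure can be illustrated with a loose 1-D sketch: a fixed number of "blocks", each computing a smoothed velocity update from the image dissimilarity and composing it into the deformation map, with every block's warped image serving as an intermediate motion state. This is a demons-style gradient update under Gaussian regularization, chosen only to make the loop concrete; it is not MBURegNet or a faithful LDDMM solver, and all sizes and step lengths are illustrative.

```python
import numpy as np

def gaussian_smooth(v, sigma=3.0):
    """Gaussian smoothing of a 1-D field (the diffeomorphism regularizer)."""
    r = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    return np.convolve(v, k / k.sum(), mode="same")

def warp(img, phi):
    """Sample img at the backward map phi by linear interpolation."""
    return np.interp(phi, np.arange(img.size), img)

# Moving and fixed 1-D "images": a Gaussian bump shifted by 6 samples.
x = np.arange(128, dtype=float)
moving = np.exp(-0.5 * ((x - 58) / 5.0) ** 2)
fixed = np.exp(-0.5 * ((x - 64) / 5.0) ** 2)

# Unrolled loop: each "block" computes a smoothed velocity update and
# composes it into the deformation map, so the stored warped images form a
# sequence of intermediate motion states.
phi = x.copy()                          # start from the identity map
intermediates = []
for _ in range(60):                     # number of unrolled blocks
    warped = warp(moving, phi)
    force = (warped - fixed) * np.gradient(warped)  # SSD gradient w.r.t. phi
    v = gaussian_smooth(force)                      # smoothed velocity field
    phi = phi - 4.0 * v                 # small steps keep the map invertible
    intermediates.append(warped)

ssd_before = np.sum((moving - fixed) ** 2)
ssd_after = np.sum((warp(moving, phi) - fixed) ** 2)
print(ssd_before, ssd_after)
```

In a learned unrolled network, the hand-tuned smoothing and step size of each block would be replaced by trainable modules, while the compose-small-velocity-fields structure (and hence the interpretable sequence of intermediate states) is kept.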
Artifact correction is a major challenge in cardiac imaging. When correcting coronary tissue affected by motion-induced artifacts, the spatial distribution of CT values not only shifts according to the motion vector field (MVF) but also scales with the local volume change rate of the voxels. Traditional interpolation methods, however, do not conserve the CT value during motion compensation. We develop a new interpolation algorithm based on the constraint that the CT value is conserved before and after image deformation. The algorithm modifies existing interpolation schemes and can be embedded into neural networks with deterministic backpropagation. Comparative experimental results show that the method not only corrects motion-induced artifacts but also ensures conservation of the CT value in the region of interest (ROI), yielding corrected images with clinically accepted CT values. Both effectiveness and efficiency are demonstrated in the forward motion-correction process and in the backward training steps of deep learning. At the same time, the visualized motion vector field makes the correction process transparent, rendering this method more interpretable than existing image-based end-to-end deep learning methods.
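The conservation constraint can be made concrete in one dimension: plain interpolation of a deformed image changes the integral of the CT values wherever the deformation locally compresses or stretches the grid, while rescaling the interpolated values by the local volume change (the Jacobian of the deformation map) restores it. The sketch below is a minimal numpy illustration of that principle, not the paper's algorithm.

```python
import numpy as np

def warp_conserving(img, phi):
    """Warp a 1-D image by the backward map phi and rescale by the local
    volume change d(phi)/dx so the total CT value is conserved."""
    warped = np.interp(phi, np.arange(img.size), img)  # plain interpolation
    jac = np.gradient(phi)                             # local volume change
    return warped * jac

x = np.arange(200, dtype=float)
img = np.exp(-0.5 * ((x - 100) / 8.0) ** 2)            # a bright structure

# A smooth, spatially varying deformation that compresses the center region.
phi = x + 10.0 * np.sin(2 * np.pi * x / 200)

plain = np.interp(phi, x, img)          # does NOT conserve the total value
conserving = warp_conserving(img, phi)  # does, up to discretization error
print(img.sum(), plain.sum(), conserving.sum())
```

Because the Jacobian rescaling is a differentiable elementwise operation, a warp of this form can be embedded in a network and trained with ordinary backpropagation, which is the property the abstract relies on.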
Patient motion during a computed tomography (CT) scan can seriously degrade image quality, and is of increasing concern due to the aging population and associated diseases. In this paper, we address this problem by focusing on the reduction of head motion artifacts. To achieve this, we introduce a head motion simulation system and a multi-scale deep learning architecture. The motion simulation system can simulate rigid movement, including translation and rotation. Images with simulated motion serve as the training set for the network, and the original motion-free images serve as the gold standard. Motion artifacts manifest in the image space as streaks and patchy shadows. We propose a multi-scale neural network to learn these artifacts: with different branches equipped with ResBlocks and down-sampling, the network can learn both long-scale streak and short-scale shadow artifacts. Although the network is trained on simulated images, we find that it generalizes well to images with real motion artifacts.
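The building block of such a rigid-motion simulator is a resampling step that applies a head pose (rotation about the image center plus translation) to an image; applying a different pose per projection view then yields inconsistent data and hence motion artifacts. A minimal bilinear-interpolation sketch of that building block, assuming nothing about the paper's actual simulator:

```python
import numpy as np

def rigid_transform(img, angle_deg, tx, ty):
    """Apply a 2-D rigid motion (rotation about the image center plus
    translation) via inverse mapping with bilinear interpolation."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    th = np.deg2rad(angle_deg)
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Inverse of p' = R(p - c) + c + t: source coordinate of each output pixel.
    xs = np.cos(th) * (xx - cx - tx) + np.sin(th) * (yy - cy - ty) + cx
    ys = -np.sin(th) * (xx - cx - tx) + np.cos(th) * (yy - cy - ty) + cy
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    fx, fy = xs - x0, ys - y0
    out = ((1 - fy) * (1 - fx) * img[y0, x0]
           + (1 - fy) * fx * img[y0, x0 + 1]
           + fy * (1 - fx) * img[y0 + 1, x0]
           + fy * fx * img[y0 + 1, x0 + 1])
    # Zero out samples whose source fell outside the image.
    out[(xs < 0) | (xs > w - 1) | (ys < 0) | (ys > h - 1)] = 0.0
    return out
```

Pairing each motion-corrupted result with its untouched original gives exactly the (input, gold-standard) training pairs the abstract describes.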
Bone-induced artifacts caused by the spectral absorption of the skull are intrinsic to head CT images. These artifacts blur the images and degrade the diagnostic power of CT. Several algorithms have been proposed to address them, but most are complex and time-consuming. In the past decade, deep learning (DL) has demonstrated excellent performance in image processing. In this work, we present a two-step convolutional neural network (CNN) approach that reduces these artifacts. The first step uses a U-shaped network (U-Net) to learn and correct the low-frequency artifacts; the second step uses a residual network (ResNet) to extract the high-frequency artifacts. The proposed method eliminates bone-induced artifacts at relatively low computational cost. Promising results have been obtained in our experiments on a large number of head CT images.
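The two-step design rests on splitting the artifact into a smooth low-frequency shading component and a high-frequency residual, so that each network handles one band. As a purely illustrative stand-in for that decomposition (not the CNNs themselves), a Gaussian blur can play the role of the low-frequency extractor, with the remainder as the high-frequency part:

```python
import numpy as np

def gaussian_kernel(sigma):
    r = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    return k / k.sum()

def split_bands(artifact, sigma=8.0):
    """Separable Gaussian blur gives the low-frequency shading; the
    remainder is the high-frequency (streak-like) component."""
    k = gaussian_kernel(sigma)
    low = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"),
                              1, artifact)
    low = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"),
                              0, low)
    return low, artifact - low
```

By construction the two bands sum back to the original, so correcting each band separately and recombining loses nothing; the U-Net and ResNet in the paper replace the fixed filter with learned, band-specific corrections.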
The beating of the heart is the type of motion that is most difficult to control during a cardiac CT scan, and it causes significant artifacts. The heart moves least at the systolic and diastolic phases, which for an average heart occur at approximately the 45% and 75% points of the cardiac cycle, respectively. In practice, however, this is not guaranteed, so physicians sometimes reconstruct several phases, review all of the images, and make a diagnosis from the phase with the fewest artifacts. Our new method for automatic dynamic optimal-phase reconstruction achieves image quality comparable to manual phase selection while also significantly reducing exam time by omitting the review of unnecessary phases.
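The selection step of any automatic optimal-phase scheme reduces to scoring each candidate phase reconstruction with an artifact metric and keeping the best one. The sketch below uses image entropy as one possible surrogate metric (motion streaks spread the gray-level histogram and raise entropy); the abstract does not specify the metric, so this choice is an assumption for illustration only.

```python
import numpy as np

def artifact_score(img, bins=64):
    """Image entropy as a simple surrogate artifact metric: motion streaks
    spread the histogram and raise entropy (one of several possible metrics)."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def best_phase(recons_by_phase):
    """Pick the cardiac phase whose reconstruction scores lowest, replacing
    the manual review of every candidate phase."""
    scores = {ph: artifact_score(im) for ph, im in recons_by_phase.items()}
    return min(scores, key=scores.get)
```

With such a score, only the winning phase needs to be fully reviewed, which is where the exam-time saving comes from.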