The pinwheel artifact arises in multidetector computed tomography (CT) helical scans due to under-sampling along the z-direction during thin-slice reconstruction. Hardware-based solutions such as the flying focal spot (FFS) generate additional samples during acquisition, enabling aliasing-free thin-slice images. However, these methods are expensive in terms of both manufacturing complexity and impact on hardware life. Deep learning (DL) based methods have shown significant improvement in pinwheel artifact reduction, but most of them rely on images generated with FFS or similar hardware enhancements to train the network, restricting their usability on systems without such hardware. This work proposes a novel DL method to generate pinwheel-free thin-slice images from helical scans on systems not equipped with these hardware capabilities. Artifact-free thin-slice images, which serve as targets for the artifact-reduction network, are generated through DL-based super-resolution along the z-direction from thick-slice images reconstructed from the same scan. The framework is trained with ~16000 coronal/sagittal slices from a GE Revolution system. Clinical image review and statistical analysis of the inference results show significant artifact reduction and improved diagnostic image quality while also reducing noise. A Likert-score study shows significant improvement of the proposed method over other available image-processing-based solutions.
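The abstract does not give implementation details, but the two-stage training idea (a z-super-resolution network producing artifact-free thin-slice targets, and an artifact-reduction network trained against them) can be illustrated with a minimal PyTorch-style sketch. The class names, shapes, simple residual CNN bodies, and the assumption that thick-slice inputs are pre-interpolated to the thin-slice grid are all illustrative assumptions, not the authors' actual architecture.

```python
# Minimal sketch of the two-stage training idea (assumed, not the authors' code).
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Small 2D residual CNN used as a stand-in for both stages (assumed architecture)."""
    def __init__(self, channels=32, layers=5):
        super().__init__()
        blocks = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(layers - 2):
            blocks += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        blocks += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*blocks)

    def forward(self, x):
        return x + self.body(x)  # residual prediction

# Stage 1 (assumed): z-super-resolution network turns thick-slice coronal/sagittal
# images (pinwheel-free but blurred in z, assumed interpolated to the thin-slice
# grid) into artifact-free thin-slice targets.
sr_net = SimpleCNN()

# Stage 2: artifact-reduction network maps aliased thin-slice reconstructions
# to the SR-generated targets.
ar_net = SimpleCNN()
optimizer = torch.optim.Adam(ar_net.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

def train_step(thin_slice_aliased, thick_slice):
    """One training step; tensors are (batch, 1, H, W) coronal/sagittal slices."""
    with torch.no_grad():
        target = sr_net(thick_slice)       # artifact-free thin-slice target
    pred = ar_net(thin_slice_aliased)      # pinwheel-reduced prediction
    loss = loss_fn(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```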
CT systems with large detector cell size suffer from lower z-resolution, leading to pixelated images and an inability to detect small structures, which adversely impacts diagnosis and screening. Overlapped reconstruction can partially reduce stair-step artifacts but does not mitigate the effect of a wider slice sensitivity profile (SSP), so visibility of smaller structures remains reduced. In this work, we propose a supervised deep learning method for z-resolution enhancement such that (a) the effective SSP of the resulting image is reduced, (b) quantitative tissue values (CT numbers) and tissue contrast are preserved, (c) noise enhancement is very limited, and (d) bone/soft-tissue interfaces are improved. The proposed method uses a super-resolution (SURE) network trained to map low-resolution (LR) slices to the corresponding high-resolution (HR) slices. A 2D network is trained on sagittal and coronal slices using LR-HR pairs. Ground-truth HR slices are obtained from high-end systems, and the corresponding LR slices are synthesized either by retro-reconstruction with larger slice thickness and spacing or by averaging HR slices along the z-direction. The network is trained with both types of LR images using helical acquisition volumes from a range of scanners. Qualitative and quantitative analyses are performed on the predicted HR images and compared with the original HR images. The FWHM of the SSP for the predicted HR images was reduced from ~0.98 to ~0.73 (target: 0.64), improving the true z-resolution. The HU distributions of different tissue types remained stable in terms of mean value. Noise, measured as standard deviation, was slightly higher than in the LR images but lower than in the original HR images. PSNR also showed consistent improvement across all cases on three different systems.
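One of the two LR-synthesis routes described above, averaging thin HR slices along z to emulate a wider SSP and then pairing LR and HR sagittal/coronal slices, can be sketched as below. The variable names, averaging factor, and the choice to repeat the LR volume along z so both images share the same grid are assumptions for illustration only, not the authors' exact pipeline.

```python
# Illustrative NumPy sketch of LR synthesis by z-averaging (assumed details).
import numpy as np

def synthesize_lr_volume(hr_volume: np.ndarray, z_factor: int = 2) -> np.ndarray:
    """Average every `z_factor` adjacent HR axial slices (axis 0 = z)."""
    nz = (hr_volume.shape[0] // z_factor) * z_factor
    grouped = hr_volume[:nz].reshape(-1, z_factor, *hr_volume.shape[1:])
    return grouped.mean(axis=1)

def sagittal_coronal_pairs(hr_volume: np.ndarray, z_factor: int = 2):
    """Yield (LR, HR) 2D slice pairs in sagittal and coronal orientations.

    The LR volume is repeated along z so both images share the same grid,
    mimicking an overlapped/interpolated LR reconstruction (assumption).
    """
    lr_volume = np.repeat(synthesize_lr_volume(hr_volume, z_factor), z_factor, axis=0)
    nz = lr_volume.shape[0]
    for x in range(hr_volume.shape[2]):      # sagittal slices (z, y)
        yield lr_volume[:, :, x], hr_volume[:nz, :, x]
    for y in range(hr_volume.shape[1]):      # coronal slices (z, x)
        yield lr_volume[:, y, :], hr_volume[:nz, y, :]

# Example with a synthetic HR volume (placeholder values, not clinical data).
hr = np.random.normal(0.0, 20.0, size=(128, 64, 64)).astype(np.float32)
lr_slice, hr_slice = next(sagittal_coronal_pairs(hr, z_factor=2))
print(lr_slice.shape, hr_slice.shape)
```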