Clinical dual-energy computed tomography (DECT) scanners offer a material decomposition application that displays a contrast-enhanced computed tomography (CT) scan as if it had been acquired without contrast agent: virtual non-contrast (VNC) imaging. The clinical benefit of VNC imaging can potentially be increased using photon-counting detector-based multi-energy CT (MECT) scanners. Furthermore, dose efficiency and contrast-to-noise ratio (CNR) may be improved in MECT. Effectively, the material decomposition can be performed in the image domain. However, material decomposition increases the noise of the material images. Therefore, we generalize an image filter to obtain less noisy decomposed material images. Image-based noise reduction for the material images is achieved by adding the high-pass of the CNR-optimized energy image to the low-pass filtered material image. In this way, the image-based noise reduction has the potential to recover some subtle structures that are less visible in the unfiltered images. In this study, we generalize the measurement-dependent filter of Macovski et al. to the case of MECT. The method is evaluated using phantom measurements from a Siemens SOMATOM Definition Flash scanner operated in single-energy scan mode at tube voltages of 80 kV, 100 kV, 120 kV, and 140 kV to mimic the four energy bins of a photon-counting CT. Using the image-based noise reduction, a noise reduction by a factor of 4 is achieved in the material images.
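The frequency-split idea described above (low-pass of the noisy decomposed material image plus high-pass of the CNR-optimized energy image) can be sketched in a few lines of Python. The Gaussian kernel, the choice of sigma, and the synthetic phantom below are illustrative assumptions; the filter used in the paper is measurement dependent, following Macovski et al., and is not reproduced here.

import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_split_denoise(material_img, energy_img, sigma=3.0):
    # Low-pass the noisy material image and restore fine structure from the
    # high-pass of the CNR-optimized energy image (illustrative sketch only).
    lowpass_material = gaussian_filter(material_img, sigma=sigma)
    highpass_energy = energy_img - gaussian_filter(energy_img, sigma=sigma)
    return lowpass_material + highpass_energy

# Synthetic example: a square "phantom" with a very noisy material image
# and a less noisy, CNR-optimized energy image.
rng = np.random.default_rng(0)
phantom = np.zeros((128, 128))
phantom[32:96, 32:96] = 1.0
material = phantom + rng.normal(0.0, 0.4, phantom.shape)
energy = phantom + rng.normal(0.0, 0.1, phantom.shape)
denoised = frequency_split_denoise(material, energy)
print(material[:32, :32].std(), denoised[:32, :32].std())  # background noise before/after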
Sabrina Dorn, Shuqing Chen, Stefan Sawall, David Simons, Matthias May, Joscha Maier, Michael Knaup, Heinz-Peter Schlemmer, Andreas Maier, Michael Lell, Marc Kachelrieß
In this work, we present a novel method to combine mutually exclusive CT image properties that emerge from different reconstruction kernels and display settings into a single organ-specific image reconstruction and display. We propose a context-sensitive reconstruction that locally emphasizes desired image properties by exploiting prior anatomical knowledge. Furthermore, we introduce an organ-specific windowing and display method that aims at providing superior image visualization. Using a coarse-to-fine hierarchical 3D fully convolutional network (3D U-Net), the CT data set is segmented and classified into different organs, e.g. the heart, vasculature, liver, kidney, spleen, and lung, as well as into the tissue types bone, fat, soft tissue, and vessels. Reconstruction and display parameters most suitable for the organ, tissue type, and clinical indication are chosen automatically from a predefined set of reconstruction parameters on a per-voxel basis. The approach is evaluated using patient data acquired with a dual-source CT system. The final context-sensitive images combine the indication-specific advantages of the different parameter settings and thus unite the desired tissue-specific image properties in a single volume. A comparison with conventionally reconstructed and displayed images reveals improved spatial resolution in highly attenuating objects and in air, while a low noise level is maintained in soft tissue in the compound image. The images present significantly more information to the reader at once, so that dealing with multiple volumes may no longer be necessary. The presented method is useful for the clinical workflow and bears the potential to increase the rate of incidental findings.
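To illustrate the per-voxel selection step, the following Python sketch composes a context-sensitive volume from a stack of co-registered reconstructions using a tissue label map. The label codes, the parameter table, and the hard per-label switching are assumptions made for illustration; in the paper, the labels come from a hierarchical 3D U-Net and the parameters are chosen according to organ, tissue type, and clinical indication.

import numpy as np

# Hypothetical tissue labels; the paper distinguishes organs (heart, liver,
# kidney, spleen, lung, vasculature) as well as bone, fat, soft tissue, vessels.
BONE, FAT, SOFT_TISSUE, VESSEL, LUNG = range(5)

# Illustrative per-label parameters: which reconstruction to use
# (0 = sharp kernel, 1 = soft kernel) and a display window (center, width) in HU.
PARAMS = {
    BONE:        {"recon": 0, "window": (450, 1500)},
    FAT:         {"recon": 1, "window": (40, 400)},
    SOFT_TISSUE: {"recon": 1, "window": (40, 400)},
    VESSEL:      {"recon": 1, "window": (100, 700)},
    LUNG:        {"recon": 0, "window": (-600, 1600)},
}

def compose_context_sensitive(recons, labels):
    # recons: stack of co-registered volumes reconstructed with different kernels
    # labels: per-voxel tissue/organ classification (e.g. from a 3D U-Net)
    composite = np.empty_like(recons[0])
    for label, p in PARAMS.items():
        mask = labels == label
        composite[mask] = recons[p["recon"]][mask]
    return composite

# Toy example with two surrogate "reconstructions" and a random label map.
rng = np.random.default_rng(0)
sharp = rng.normal(0.0, 50.0, (64, 64, 64))   # high resolution, high noise
soft = rng.normal(0.0, 10.0, (64, 64, 64))    # low noise, lower resolution
labels = rng.integers(0, 5, (64, 64, 64))
composite = compose_context_sensitive(np.stack([sharp, soft]), labels)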