Robust training of a deep convolutional neural network (DCNN) requires a very large amount of annotated data that is currently not available in CT colonography (CTC). We previously demonstrated that deep transfer learning provides an effective approach for robust application of a DCNN in CTC. However, at high detection accuracy, the differentiation of small polyps from non-polyps was still challenging. In this study, we developed and evaluated a deep ensemble learning (DEL) scheme for the review of virtual endoluminal (VE) images to improve the performance of computer-aided detection (CADe) of polyps in CTC. Nine different types of image renderings were generated from the VE images of polyp candidates detected by a conventional CADe system. Eleven DCNNs that represented three types of publicly available pre-trained DCNN models were re-trained by transfer learning to identify polyps from the VE images. A DEL scheme that determines the final detected polyps by a review of the nine types of VE images was developed by combining the DCNNs with a random forest classifier as a meta-classifier. For evaluation, we sampled 154 CTC cases from a large CTC screening trial and divided the cases randomly into a training dataset and a test dataset. At 3.9 false-positive (FP) detections per patient on average, the detection sensitivities of the conventional CADe system, the highest-performing single DCNN, and the DEL scheme were 81.3%, 90.7%, and 93.5%, respectively, for polyps ≥6 mm in size. For small polyps, the DEL scheme reduced the number of false positives by up to 83% compared with the use of a single DCNN alone. These preliminary results indicate that the DEL scheme provides an effective approach for improving the polyp detection performance of CADe in CTC, especially for small polyps.
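The following is a minimal sketch of the meta-classification step described above: a random forest combines the per-candidate polyp-likelihood scores produced by several transfer-learned DCNNs over multiple VE rendering types. The array shapes, placeholder data, and variable names are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a DEL-style meta-classifier: a random forest reviews the scores
# of 11 DCNNs x 9 rendering types per polyp candidate (assumed layout).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

n_candidates, n_dcnns, n_renderings = 200, 11, 9
# Placeholder DCNN output scores, flattened into one feature vector per candidate.
dcnn_scores = rng.random((n_candidates, n_dcnns * n_renderings))
labels = rng.integers(0, 2, n_candidates)   # 1 = true polyp, 0 = false positive

# Random forest meta-classifier combines all DCNN scores jointly.
meta_clf = RandomForestClassifier(n_estimators=100, random_state=0)
meta_clf.fit(dcnn_scores, labels)

# Final polyp likelihood per candidate; thresholding this score sets the
# operating point (e.g., an average FP rate per patient).
polyp_likelihood = meta_clf.predict_proba(dcnn_scores)[:, 1]
```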
As the capability of high-resolution displays grows, high-resolution images are often required in computed tomography (CT). However, acquiring high-resolution images requires a higher radiation dose and a longer scanning time. In this study, we applied the sparse-coding-based super-resolution (ScSR) method to generate high-resolution images without increasing the radiation dose. We prepared an over-complete dictionary that learned the mapping between low- and high-resolution patches and sought a sparse representation of each patch of the low-resolution input. These coefficients were then used to generate the high-resolution output. For evaluation, 44 CT cases were used as the test dataset. We up-sampled images by a factor of 2 or 4 and compared the image quality of the ScSR scheme with that of bilinear and bicubic interpolation, which are traditional interpolation schemes. We also compared the image quality obtained with three learning datasets: a total of 45 CT images, 91 non-medical images, and 93 chest radiographs were used for dictionary preparation, respectively. The image quality was evaluated by measuring the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The differences in PSNR and SSIM between the ScSR method and the interpolation methods were statistically significant. Visual assessment confirmed that the ScSR method generated sharp high-resolution images, whereas the conventional interpolation methods generated over-smoothed images. Among the three training datasets, there were no significant differences between the CT, chest radiograph, and non-medical datasets. These results suggest that ScSR provides a robust approach for up-sampling CT images and yields substantially higher image quality for the enlarged images in CT.
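Below is a hedged sketch of the ScSR reconstruction step, assuming a pair of coupled low-/high-resolution dictionaries (D_lr, D_hr) has already been learned from training patches. Patch sizes, dictionary sizes, and the placeholder data are assumptions for illustration only.

```python
# ScSR-style reconstruction: sparse-code each LR patch over the LR dictionary,
# then reuse the same coefficients with the HR dictionary to synthesize HR patches.
import numpy as np
from sklearn.decomposition import sparse_encode

rng = np.random.default_rng(0)

n_atoms = 512
lr_patch_dim, hr_patch_dim = 5 * 5, 10 * 10   # e.g., 5x5 LR and 10x10 HR patches (2x up-sampling)

# Assumed coupled dictionaries (rows = atoms); normally learned jointly from training images.
D_lr = rng.standard_normal((n_atoms, lr_patch_dim))
D_hr = rng.standard_normal((n_atoms, hr_patch_dim))

# Low-resolution patches extracted from the input CT image (placeholder data).
lr_patches = rng.standard_normal((100, lr_patch_dim))

# 1) Seek a sparse representation of each LR patch over the LR dictionary.
alpha = sparse_encode(lr_patches, D_lr, algorithm="omp", n_nonzero_coefs=5)

# 2) Generate the corresponding HR patches from the same sparse coefficients.
hr_patches = alpha @ D_hr   # (100, hr_patch_dim); tile these back into the HR image
```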
A single-image super-resolution (SR) method can generate a high-resolution (HR) image from a low-resolution (LR) image by enhancing the image resolution. In medical imaging, HR images are expected to provide a more accurate diagnosis with the practical application of HR displays. In recent years, the super-resolution convolutional neural network (SRCNN), one of the state-of-the-art deep-learning-based SR methods, has been proposed in computer vision. In this study, we applied and evaluated the SRCNN scheme to improve the image quality of magnified images in chest radiographs. For evaluation, a total of 247 chest X-rays were sampled from the JSRT database. The 247 chest X-rays were divided into 93 training cases without nodules and 152 test cases with lung nodules. The SRCNN was trained using the training dataset. With the trained SRCNN, the HR image was reconstructed from the LR one. We compared the image quality of the SRCNN with that of conventional image interpolation methods: nearest-neighbor, bilinear, and bicubic interpolation. For quantitative evaluation, we measured two image quality metrics, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). For the SRCNN scheme, the PSNR and SSIM were significantly higher than those of the three interpolation methods (p<0.001). Visual assessment confirmed that the SRCNN produced much sharper edges than the conventional interpolation methods without any obvious artifacts. These preliminary results indicate that the SRCNN scheme significantly outperforms conventional interpolation algorithms in enhancing image resolution and that the use of the SRCNN can yield a substantial improvement in the image quality of magnified images in chest radiographs.
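For reference, here is a minimal sketch of the three-layer SRCNN architecture (patch extraction, non-linear mapping, reconstruction) in its commonly used 9-1-5 kernel configuration; it illustrates the general scheme rather than the exact training setup used in this study.

```python
# SRCNN-style network: maps an interpolated LR radiograph to an HR estimate.
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),  # patch extraction and representation
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),            # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),  # HR image reconstruction
        )

    def forward(self, x):
        # Input: a bicubic-up-sampled LR chest radiograph; output: HR estimate of the same size.
        return self.body(x)

model = SRCNN()
lr_upsampled = torch.rand(1, 1, 256, 256)   # placeholder image tensor
hr_estimate = model(lr_upsampled)
```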
Accurate electronic cleansing (EC) for CT colonography (CTC) enables the visualization of the entire colonic surface without residual materials. In this study, we evaluated the accuracy of a novel multi-material electronic cleansing (MUMA-EC) scheme for non-cathartic ultra-low-dose dual-energy CTC (DE-CTC). The MUMA-EC performs a water-iodine material decomposition of the DE-CTC images and calculates virtual monochromatic images at multiple energies, after which a random forest classifier is used to label the images into the regions of lumen air, soft tissue, fecal tagging, and two types of partial-volume boundaries based on image-based features. After the labeling, materials other than soft tissue are subtracted from the CTC images. For pilot evaluation, 384 volumes of interest (VOIs), which represented sources of subtraction artifacts observed in current EC schemes, were sampled from 32 ultra-low-dose DE-CTC scans. The voxels in the VOIs were labeled manually to serve as a reference standard. The metric for EC accuracy was the mean overlap ratio between the labels of the reference standard and the labels generated by the MUMA-EC, a dual-energy EC (DE-EC), and a single-energy EC (SE-EC) scheme. Statistically significant differences were observed between the performance of the MUMA/DE-EC and the SE-EC methods (p<0.001). Visual assessment confirmed that the MUMA-EC generated fewer subtraction artifacts than the DE-EC and SE-EC schemes. Our MUMA-EC scheme yielded superior performance over the conventional SE-EC scheme in identifying and minimizing subtraction artifacts on non-cathartic ultra-low-dose DE-CTC images.
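The sketch below illustrates the voxel-labeling step in the style described above: a random forest assigns each voxel to one of five material classes from image-based features. The feature vectors, class names, and dataset sizes are assumptions for illustration, not the actual features used by the MUMA-EC scheme.

```python
# Random-forest voxel labeling for multi-material cleansing (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

CLASSES = ["lumen_air", "soft_tissue", "fecal_tagging", "boundary_air_tag", "boundary_tag_soft"]

# Placeholder training data: per-voxel feature vectors (e.g., virtual
# monochromatic intensities at several energies) and manual reference labels.
features = rng.random((5000, 6))
labels = rng.integers(0, len(CLASSES), 5000)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(features, labels)

# Label new voxels; everything not classified as soft tissue would then be
# subtracted (cleansed) from the CTC images.
voxel_labels = clf.predict(rng.random((1000, 6)))
soft_tissue_mask = voxel_labels == CLASSES.index("soft_tissue")
```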
The detection of very subtle lesions and/or lesions overlapping with vessels on CT images is a time-consuming and difficult task for radiologists. In this study, we developed a 3D temporal subtraction method to enhance interval changes between previous and current multislice CT images based on a nonlinear image warping technique. Our method provides a subtraction CT image obtained by subtracting a previous CT image from a current CT image. Reduction of misregistration artifacts is important in the temporal subtraction method. Therefore, our computerized method includes global and local image matching techniques for accurate registration of the current and previous CT images. For global image matching, we selected the corresponding previous section image for each current section image by using 2D cross-correlation between a blurred low-resolution current CT image and a blurred previous CT image. For local image matching, we applied a 3D template matching technique with translation and rotation of volumes of interest (VOIs) selected in the current and previous CT images. The local shift vector for each VOI pair was determined as the shift at which the cross-correlation value reached its maximum in the 3D template matching. The local shift vectors at all voxels were then determined by interpolation of the shift vectors of the VOIs, and the previous CT image was nonlinearly warped according to the shift vector at each voxel. Finally, the warped previous CT image was subtracted from the current CT image. The 3D temporal subtraction method was applied to 19 clinical cases. Normal background structures such as vessels, ribs, and the heart were removed without large misregistration artifacts. Thus, interval changes due to lung diseases were clearly enhanced as white shadows on the subtraction CT images.
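The following is an illustrative sketch of the local matching step: finding the translation that maximizes the cross-correlation between a VOI from the current scan and candidate VOIs in the previous scan. The rotation search and the subsequent interpolation/warping are omitted, and the volume sizes, search range, and names are placeholders.

```python
# Translation-only 3D template matching of a VOI by maximum cross-correlation.
import numpy as np
from itertools import product

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def local_shift(current_voi, previous_volume, corner, search=4):
    """Return the (dz, dy, dx) shift with the highest cross-correlation."""
    z0, y0, x0 = corner
    dz_, dy_, dx_ = current_voi.shape
    best_shift, best_cc = (0, 0, 0), -np.inf
    for dz, dy, dx in product(range(-search, search + 1), repeat=3):
        z, y, x = z0 + dz, y0 + dy, x0 + dx
        if min(z, y, x) < 0:
            continue
        candidate = previous_volume[z:z + dz_, y:y + dy_, x:x + dx_]
        if candidate.shape != current_voi.shape:
            continue
        cc = ncc(current_voi, candidate)
        if cc > best_cc:
            best_cc, best_shift = cc, (dz, dy, dx)
    return best_shift

rng = np.random.default_rng(0)
prev = rng.random((64, 64, 64))
curr_voi = prev[20:36, 20:36, 20:36]               # placeholder: a 16^3 VOI
print(local_shift(curr_voi, prev, (20, 20, 20)))   # ~ (0, 0, 0)
```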
The cardio-thoracic ratio (CTR) is commonly measured manually for the evaluation of cardiomegaly. To determine the CTR automatically, we developed a computerized scheme based on gray-level histogram analysis and an edge detection technique with feature analysis. The database used in this study consisted of 392 chest radiographs, which included 304 normal cases and 88 abnormal cases with cardiomegaly. The pixel size and the quantization level of the images were 0.175 mm and 1024, respectively. We performed a nonlinear density correction to maintain consistency in the density and contrast of the images. Initial heart edge detection was performed by selecting a certain range of pixel values in the histogram of a rectangular area at the center of a low-resolution image. Feature analysis using the edge gradient and orientation obtained with a Sobel operator was applied for accurate identification of the heart edges, which tend to have large edge gradients within a certain range of orientations. In addition, to determine the CTR, we detected the ribcage edges automatically by using image profile analysis. In 94.9% of all cases, the heart edges were detected accurately by this scheme. The area under the ROC curve (Az value) for distinguishing between normal cases and abnormal cases with cardiomegaly based on the CTR was 0.912. Because the CTR is measured automatically and quickly (in less than 1 s), radiologists could save reading time. The computerized scheme will be useful for the assessment of cardiomegaly on chest radiographs.
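A minimal sketch of the measurement idea follows, assuming the heart and ribcage edge x-coordinates have already been detected: the Sobel operator supplies the edge gradient magnitude and orientation, and the CTR is taken here as the ratio of the transverse cardiac width to the transverse thoracic width. This simplified ratio and all names are illustrative, not the authors' exact procedure.

```python
# Sobel gradient/orientation and a simplified CTR computation (illustrative).
import numpy as np
from scipy import ndimage

def edge_gradient_and_orientation(image):
    gx = ndimage.sobel(image, axis=1)
    gy = ndimage.sobel(image, axis=0)
    magnitude = np.hypot(gx, gy)          # heart edges: large magnitude within
    orientation = np.arctan2(gy, gx)      # a certain range of orientations
    return magnitude, orientation

def cardiothoracic_ratio(heart_left_x, heart_right_x, rib_left_x, rib_right_x):
    cardiac_width = np.max(heart_right_x) - np.min(heart_left_x)
    thoracic_width = np.max(rib_right_x) - np.min(rib_left_x)
    return cardiac_width / thoracic_width

magnitude, orientation = edge_gradient_and_orientation(np.random.rand(256, 256))

# Placeholder edge coordinates (pixels) per image row:
ctr = cardiothoracic_ratio(np.array([90, 92]), np.array([160, 158]),
                           np.array([40, 42]), np.array([210, 212]))
print(round(ctr, 2))   # e.g., 0.41; cardiomegaly is commonly flagged at CTR > 0.5
```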
For computerized detection of interstitial lung disease on chest radiographs, we developed three different methods: texture analysis based on the Fourier transform, geometric-pattern feature analysis, and artificial neural network (ANN) analysis of image data. With these computer-aided diagnostic methods, quantitative measures can be obtained. To improve the diagnostic accuracy, we investigated combined classification schemes that use the results obtained with the three methods for distinguishing between normal and abnormal chest radiographs with interstitial opacities. The sensitivities of the texture analysis, geometric analysis, and ANN analysis were 88.0 ± 1.6%, 91.0 ± 2.6%, and 87.5 ± 1.9%, respectively, at a specificity of 90.0%, whereas the sensitivity of a combined classification scheme based on the logical OR operation was improved to 97.1 ± 1.5% at the same specificity of 90.0%. The combined scheme can achieve higher accuracy than the individual methods for distinguishing between normal and abnormal cases with interstitial opacities.
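As a simple illustration of the combined rule, the sketch below reports a case as abnormal if any of the three individual methods calls it abnormal at its own threshold chosen for 90% specificity (logical OR). The per-case decisions shown are placeholders.

```python
# Logical-OR combination of three per-case classifier decisions (illustrative).
import numpy as np

def combine_or(texture_abnormal, geometric_abnormal, ann_abnormal):
    """Boolean arrays of per-case decisions -> combined decision."""
    return texture_abnormal | geometric_abnormal | ann_abnormal

# Placeholder decisions for five cases:
texture = np.array([True, False, False, True, False])
geometric = np.array([False, False, True, True, False])
ann = np.array([False, True, False, True, False])
print(combine_or(texture, geometric, ann))   # [ True  True  True  True False]
```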
KEYWORDS: Chest imaging, Chest, Lung, Databases, Radiography, Picture Archiving and Communication System, Computer aided diagnosis and therapy, CAD systems, Digital imaging, Medical imaging
For the implementation of computer-aided diagnostic systems for chest radiographs, it is important to correctly identify the view position, i.e., posteroanterior (PA) or lateral view. Our purpose was to develop an advanced computerized method based on a template matching technique for correctly identifying either the PA or the lateral view, and to apply this method to approximately 48,000 PA and 16,000 lateral chest radiographs. To evaluate the similarity with templates, the correlation values of a chest image with various templates were obtained and compared to determine whether the chest image is a PA or a lateral view. By considering the variation in patient size, lung opacity, and lung size, we produced 24 templates of PA and lateral views. In the first step, the two largest correlation values of an unknown case with 3 PA and 2 lateral templates for medium-sized patients were compared to determine the view position. In the second step, the cases that could not be identified in the first step were re-examined by comparing the correlation values with 11 to 19 templates for small and large patients. With this computerized method based on a template matching technique, 99.99% (63,788/63,791) of the chest images in the large database were correctly identified as PA or lateral views.
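Here is a hedged sketch of the first matching step: correlate a down-sampled unknown chest image with a few PA and lateral templates and compare the largest correlation values. The template generation and the second, size-specific step are omitted, and the image sizes and data are placeholders.

```python
# Template-matching view identification: PA vs. lateral (first-step sketch).
import numpy as np

def correlation(image, template):
    a = image.ravel() - image.mean()
    b = template.ravel() - template.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def identify_view(image, pa_templates, lateral_templates):
    pa_best = max(correlation(image, t) for t in pa_templates)
    lat_best = max(correlation(image, t) for t in lateral_templates)
    return "PA" if pa_best > lat_best else "lateral"

rng = np.random.default_rng(0)
pa_templates = [rng.random((64, 64)) for _ in range(3)]        # medium-size PA templates
lateral_templates = [rng.random((64, 64)) for _ in range(2)]   # medium-size lateral templates
unknown = pa_templates[0] + 0.1 * rng.random((64, 64))         # placeholder unknown case
print(identify_view(unknown, pa_templates, lateral_templates))  # "PA"
```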
The purpose of this study was to investigate the influence of scattered x rays on the signal sharpness of radiographs produced with a computed radiography (CR) system by measuring the spatial frequency spectra of the signal image. Using a 0.1 mm slit on polymethyl methacrylate (PMMA) with thicknesses of 0.5 cm to 20.5 cm, slit images were acquired as the signal with imaging plates at tube voltages of 50 kV to 120 kV. The relative exposure profiles of the slit images were Fourier transformed to obtain the spatial frequency spectra. For comparison of the frequency spectra with and without scattered x rays, we defined a scattered x-ray influence factor (SIF) representing the magnitude of the influence of the scattered x rays on the spatial frequency spectrum of the signal image. To investigate the contribution of the primary and scatter components to the degradation of the signal sharpness, we proposed a method for separating the spatial frequency spectrum of the signal image into primary and scatter components. From the SIF, we found that, at very low frequencies (less than about 0.3 mm^-1), the shape of the spatial frequency spectrum of the signal image depends on the scattered x rays, whereas at higher frequencies it hardly depends on them. From the separation of the frequency spectra of the signal image, we found that the contribution of the scatter component at very low frequencies (less than about 0.2 mm^-1) to the total spectrum of the signal image was not negligible and became greater as the scattering material thickness and the tube voltage increased. In contrast, at higher frequencies, the primary component was dominant compared with the scatter component for all thicknesses and tube voltages.
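The sketch below illustrates the spectral analysis step only: Fourier transforming a relative exposure profile of the slit image and comparing the spectra measured with and without scatter. The synthetic profiles and the spectral ratio used here as an "influence factor" are assumed forms for illustration, not necessarily the SIF definition used in the paper.

```python
# Spatial frequency spectrum of a slit profile and a spectral ratio (illustrative).
import numpy as np

def signal_spectrum(profile, pixel_pitch_mm):
    spectrum = np.abs(np.fft.rfft(profile))
    freqs = np.fft.rfftfreq(profile.size, d=pixel_pitch_mm)   # cycles/mm
    return freqs, spectrum / spectrum[0]                      # normalize at zero frequency

# Placeholder slit profiles (scatter adds a broad low-frequency tail).
x = np.arange(-256, 256) * 0.1          # mm, 0.1 mm sampling
primary = np.exp(-x**2 / (2 * 0.05**2))
with_scatter = primary + 0.2 * np.exp(-x**2 / (2 * 5.0**2))

freqs, s_primary = signal_spectrum(primary, 0.1)
_, s_total = signal_spectrum(with_scatter, 0.1)
influence = s_total / s_primary          # departs from 1 mainly at low frequencies
```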
The presampling modulation transfer function (MTF) can be determined from the edge spread function, whose sampling interval is made narrower than the pixel-to-pixel interval by using a slightly angled edge image. The precision of the presampling MTF depends on the precision of the edge angle. In this study, we developed an automated method, which includes a precise edge-angle determination process, for the measurement of the presampling MTF.
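For context, here is a compact sketch of the angled-edge (oversampled ESF) method that such a scheme builds on: project pixels onto the edge-normal direction using the estimated edge angle, bin them into a finely sampled edge spread function, differentiate to the line spread function, and Fourier transform to the presampling MTF. The angle estimation itself, the key step of the automated method, is assumed to have been done already, and all parameters are illustrative.

```python
# Angled-edge presampling MTF estimation (sketch under stated assumptions).
import numpy as np

def presampling_mtf(edge_image, edge_angle_rad, pixel_pitch_mm, oversample=10):
    rows, cols = np.indices(edge_image.shape)
    # Signed distance (in pixels) of each pixel from the slightly angled edge.
    distance = ((cols - cols.mean()) * np.cos(edge_angle_rad)
                - (rows - rows.mean()) * np.sin(edge_angle_rad)).ravel()
    values = edge_image.astype(float).ravel()
    bin_width = 1.0 / oversample                       # sub-pixel sampling interval
    bins = np.round(distance / bin_width).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins)
    sums = np.bincount(bins, weights=values)
    esf = sums[counts > 0] / counts[counts > 0]        # oversampled edge spread function
    lsf = np.gradient(esf) * np.hanning(esf.size)      # LSF = windowed derivative of the ESF
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=bin_width * pixel_pitch_mm)  # cycles/mm
    return freqs, mtf / mtf[0]

# Synthetic, slightly angled edge (about 2 degrees) for demonstration.
r, c = np.indices((128, 128))
edge = (c > 64 + (r - 64) * np.tan(np.deg2rad(2))).astype(float)
freqs, mtf = presampling_mtf(edge, np.deg2rad(2), pixel_pitch_mm=0.1)
```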
We developed a computerized scheme for the detection of interstitial lung disease by using artificial neural networks (ANNs) for quantitative analysis of digital image data. Three separate ANNs were applied in the ANN scheme. The first ANN was trained with horizontal profiles in ROIs selected from digital chest radiographs. The second ANN was trained with the vertical output pattern obtained from the first ANN in each ROI. The output from the second ANN was used to distinguish between normal and abnormal ROIs. To improve the performance, we attempted a density correction and rib-edge removal. The Az value was improved from 0.906 to 0.934 by incorporating the density correction. For the classification of each chest image, we employed a rule-based method and a rule-based method combined with a third ANN. A high Az value was obtained with the rule-based plus ANN method. The ANNs can learn certain statistical properties associated with patterns of interstitial infiltrates in chest radiographs.
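The following is a hedged sketch of the two-stage ROI classification idea: one network maps horizontal profiles to per-profile outputs, and a second network maps the vertical pattern of those outputs to a normal/abnormal ROI decision. The network sizes, the placeholder data, and the simplified first-stage training labels are illustrative assumptions only.

```python
# Two-stage ANN scheme for ROI classification (illustrative sketch).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

n_rois, n_profiles, profile_len = 200, 16, 32
profiles = rng.random((n_rois, n_profiles, profile_len))   # horizontal profiles per ROI
roi_labels = rng.integers(0, 2, n_rois)                    # 1 = interstitial disease

# First ANN: classify each horizontal profile (trained here with the ROI label
# propagated to its profiles, as a simple stand-in for the original training).
ann1 = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
ann1.fit(profiles.reshape(-1, profile_len), np.repeat(roi_labels, n_profiles))

# Second ANN: classify the vertical pattern of first-stage outputs in each ROI.
stage1_out = ann1.predict_proba(profiles.reshape(-1, profile_len))[:, 1].reshape(n_rois, n_profiles)
ann2 = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=0)
ann2.fit(stage1_out, roi_labels)
roi_decision = ann2.predict(stage1_out)    # normal vs. abnormal ROI
```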