Lung cancer is the deadliest cancer worldwide, and early detection of pulmonary nodules is the focus of many studies aiming to improve survival rates. As with many diseases, deep learning has become a common technique for computer-aided diagnosis (CAD) in detecting lung nodules. Most lung CAD systems rely on a detection module followed by a false positive (FP) reduction (FPR) module; however, FPR removes true positives (TPs) along with FPs, so as a tradeoff for retaining high sensitivity, a large number of FPs remain. In our experience, small pulmonary vessels have been the primary source of FPs. We therefore propose an additional module, cascaded onto the standard FPR module, that specifically reduces the number of FPs caused by pulmonary vessels. Using a 3D deep learning architecture, we find that including multiple fields of view (FOVs) improves the accuracy of the chosen model. We explore the impact of the FOV selection, the method used to integrate the features from each FOV, and the use of the FOV as a data augmentation method. We show that this vessel-specific FPR module significantly improves the CAD system's FP rate while sacrificing only 5% of the previously achieved sensitivity.
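The abstract does not give the FOV sizes or the fusion scheme, so the following is only a minimal numpy sketch of multi-FOV input construction: cubic crops with hypothetical side lengths are extracted around a candidate and resampled to a common grid so they can be stacked as channels for a 3D network.

```python
import numpy as np

def extract_fov_crops(volume, center, fovs=(20, 30, 40), out_size=20):
    """Extract cubic crops with different fields of view around a candidate
    nodule center, resampled (nearest neighbor) to a common grid so they can
    be stacked as input channels for a 3D CNN. FOV sizes are hypothetical."""
    crops = []
    for fov in fovs:
        half = fov / 2.0
        # nearest-neighbor sample coordinates along each axis, clipped to bounds
        idx = [np.clip(np.round(c - half + fov * (np.arange(out_size) + 0.5)
                                / out_size).astype(int), 0, s - 1)
               for c, s in zip(center, volume.shape)]
        crops.append(volume[np.ix_(idx[0], idx[1], idx[2])])
    return np.stack(crops, axis=0)  # (n_fovs, out_size, out_size, out_size)

vol = np.random.default_rng(0).normal(size=(64, 64, 64))
x = extract_fov_crops(vol, center=(32, 32, 32))
```

Each channel then sees the same candidate at a different spatial context, which is one simple way to realize the feature-integration choices the abstract discusses.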
Tomosynthesis images of the breast suffer from artifacts caused by the presence of highly absorbing materials. These can be induced either by metal objects, such as needles or clips inserted during biopsy procedures, or by larger calcifications inside the examined breast. Two main kinds of artifacts appear after the filtered backprojection procedure. The first type is undershooting artifacts near edges of high-contrast objects, caused by the filtering step. The second type is out-of-plane (ripple) artifacts that appear even in slices where the metal object or macrocalcification does not exist. Due to the limited angular range of tomosynthesis systems, overlapping structures strongly influence neighboring regions. To overcome these problems, the artifact-introducing objects are segmented in the projection images. Both projection versions, with and without high-contrast objects, are filtered independently to avoid undershooting. During backprojection, a decision is made for each reconstructed voxel as to whether it belongs to an artifact or a high-contrast object, based on a mask image obtained from the segmentation of high-contrast objects. This procedure avoids undershooting artifacts and additionally reduces out-of-plane ripple. Results are demonstrated for different kinds of artifact-inducing objects and calcifications.
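The per-voxel decision step can be illustrated with a simplified 2D stand-in. In the actual method the selection happens inside the backprojection loop; here two already-filtered reconstructions and a hypothetical mask stand in for that machinery.

```python
import numpy as np

# Simplified 2D stand-in: two reconstructions, one from the original
# projections and one from projections with high-contrast objects removed,
# plus a binary mask marking high-contrast voxels (all values hypothetical).
rng = np.random.default_rng(1)
recon_with = rng.normal(1.0, 0.1, size=(8, 8))      # keeps metal/calcification
recon_without = rng.normal(0.5, 0.1, size=(8, 8))   # free of undershooting
mask = np.zeros((8, 8), dtype=bool)
mask[3:5, 3:5] = True                               # hypothetical clip region

# Per-voxel decision: keep the high-contrast reconstruction inside the mask,
# the artifact-free reconstruction everywhere else.
fused = np.where(mask, recon_with, recon_without)
```

The same selection, applied during backprojection, is what suppresses both the undershooting and the out-of-plane ripple described above.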
In this work, we provide an initial characterization of a novel twin robotic X-ray system. This system is equipped
with two motor-driven telescopic arms carrying X-ray tube and flat-panel detector, respectively. 2D radiographs
and fluoroscopic image sequences can be obtained from different viewing angles. Projection data for 3D cone-beam
CT reconstruction can be acquired during simultaneous movement of the arms along dedicated scanning
trajectories. We provide an initial evaluation of the 3D image quality based on phantom scans and clinical
images. Furthermore, an initial evaluation of patient dose is conducted. The results show that the system delivers
high image quality for a range of medical applications. In particular, high spatial resolution enables adequate
visualization of bone structures. This system allows 3D X-ray scanning of patients in standing and weight-bearing
position. It could enable new 2D/3D imaging workflows in musculoskeletal imaging and improve diagnosis of
musculoskeletal disorders.
A new algorithm is suggested to compute one or several virtual projection images directly from cone-beam data
acquired in a tomosynthesis geometry. One main feature of this algorithm is that it does not involve the explicit
reconstruction of a 3D volume, and a subsequent forward-projection operation, but rather operates using solely
2D image processing steps. The required 2D processing is furthermore based on the use of pre-computed entities,
so that a significant speed-up in the computations can be obtained. The presented algorithm can be applied
to a variety of CT geometries, and is here investigated for a mammography application, to simulate virtual
mammograms from a set of low-dose tomosynthesis projection images. A first evaluation from real measured
data is given.
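The paper's specific precomputed operators are not given here, but one elementary example of producing a virtual projection from tomosynthesis projections using only 2D operations is shift-and-add averaging, sketched below purely as a generic illustration (the shifts and data are invented, and this is not the paper's algorithm):

```python
import numpy as np

def virtual_projection(projections, shifts):
    """Shift-and-add combination of tomosynthesis projections into one
    virtual view using only 2D operations (integer column shifts here).
    A generic projection-domain illustration, not the paper's method."""
    out = np.zeros_like(projections[0], dtype=float)
    for proj, s in zip(projections, shifts):
        out += np.roll(proj, s, axis=1)   # align each view, then accumulate
    return out / len(projections)

# Toy low-dose projections with constant values, so the result is their mean.
projs = [np.ones((4, 6)) * k for k in range(3)]
virt = virtual_projection(projs, shifts=[-1, 0, 1])
```

The per-view shifts play the role of the precomputed geometric entities: once tabulated, synthesizing a new view is pure 2D processing, which is the source of the speed-up claimed above.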
In breast tomosynthesis (BT) a number of 2D projection images are acquired from different angles along a limited arc.
The imaged breast volume is reconstructed from the projection images, providing 3D information. The purpose of the
study was to investigate and optimize different reconstruction methods for BT in terms of image quality using human
observers viewing clinical cases. Sixty-six cases with suspected masses and calcifications were collected from 55
patients.
KEYWORDS: Reconstruction algorithms, Breast, Digital breast tomosynthesis, Expectation maximization algorithms, Image restoration, Mammography, Tissues, Spatial resolution, Stochastic processes, Signal to noise ratio
Digital Breast Tomosynthesis (DBT) suffers from incomplete data and poor quantum statistics limited by the total dose absorbed in the breast. Hence, statistical reconstruction assuming the photon statistics to follow a Poisson distribution may have some advantages. This study investigates state-of-the-art iterative maximum likelihood (ML) statistical reconstruction algorithms for DBT and compares the results with simple backprojection (BP), filtered backprojection (FBP), and iFBP (FBP with a filter derived from iterative reconstruction).
The gradient-ascent and convex optimization variants of the transmission ML algorithm are evaluated with phantom
and clinical data. Convergence speed is very similar for both iterative statistical algorithms and after approximately 5
iterations all significant details are well displayed, although we notice increasing noise. We found empirically that a
relaxation factor between 0.25 and 0.5 provides the optimal trade-off between noise and contrast. The ML-convex
algorithm gives smoother results than the ML-gradient algorithm. The low-contrast CNR of the ML algorithms lies between that of simple backprojection (highest) and FBP (lowest). The spatial resolution of the iterative statistical and iFBP algorithms is similar to that of FBP, but the quantitative density representation better resembles conventional mammograms. The iFBP algorithm provides the benefits of statistical iterative reconstruction techniques while requiring much shorter computation time.
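As an illustration, a relaxed convex-type transmission ML update (in the style of Lange's convex algorithm) can be sketched on a toy system. The geometry, blank-scan counts, and phantom below are invented for the example, and the data are noiseless so the iteration's behavior is easy to check:

```python
import numpy as np

# Toy transmission ML-convex update under Poisson statistics, with a
# relaxation factor omega in the 0.25-0.5 range discussed above.
rng = np.random.default_rng(0)
n_rays, n_pix = 16, 4
A = rng.uniform(0.1, 0.5, size=(n_rays, n_pix))   # ray/pixel intersection lengths
mu_true = rng.uniform(0.5, 1.0, size=n_pix)       # attenuation phantom
blank = 1e5                                       # unattenuated (blank-scan) counts
y = blank * np.exp(-A @ mu_true)                  # noiseless transmission data

mu = np.full(n_pix, 0.1)                          # flat initial estimate
omega = 0.5                                       # relaxation factor
for _ in range(300):
    line = A @ mu                                 # line integrals
    yhat = blank * np.exp(-line)                  # expected counts
    grad = A.T @ (yhat - y)                       # d(log-likelihood)/d(mu_j)
    curv = A.T @ (line * yhat)                    # convex-algorithm denominator
    mu = np.maximum(mu + omega * mu * grad / curv, 1e-8)
```

With noiseless, well-determined data the iterate converges to the true attenuation values; with real low-dose data the noise/contrast trade-off governed by omega appears, as reported above.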
Characterization and quantification of the severity of diffuse parenchymal lung diseases (DPLDs) using Computed
Tomography (CT) is an important issue in clinical research. Recently, several classification-based computer-aided
diagnosis (CAD) systems [1-3] for DPLD have been proposed. For some of those systems, a degradation of performance
[2] was reported on unseen data because of considerable inter-patient variance in parenchymal tissue patterns.
We believe that a CAD system of real clinical value should be robust to inter-patient variance and be able to classify
unseen cases online more effectively. In this work, we have developed a novel adaptive knowledge-driven CT image
search engine that combines offline learning aspects of classification-based CAD systems with online learning aspects of
content-based image retrieval (CBIR) systems. Our system can seamlessly and adaptively fuse offline accumulated
knowledge with online feedback, leading to improved online performance in detecting DPLD in terms of both accuracy and
speed. Our contributions are: (1) newly developed 3D texture-based and morphology-based features; (2) a
multi-class offline feature selection method; and (3) a novel image search engine framework for detecting DPLD. Very
promising results have been obtained on a small test set.
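The abstract does not state the exact rule for fusing offline knowledge with online feedback. One standard CBIR mechanism for incorporating user feedback into retrieval is Rocchio-style query refinement, sketched here purely as an illustration of the general idea (the weights and feature vectors are hypothetical):

```python
import numpy as np

def rocchio_update(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.25):
    """Rocchio relevance feedback: move the query feature vector toward
    images the user marked relevant and away from non-relevant ones.
    A generic CBIR mechanism, not necessarily this paper's fusion rule."""
    q = alpha * query
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return q

query = np.array([0.0, 0.0])                 # initial (offline) query features
rel = np.array([[1.0, 0.0], [1.0, 0.2]])     # features of images marked relevant
nonrel = np.array([[0.0, 1.0]])              # features of images marked non-relevant
q_new = rocchio_update(query, rel, nonrel)
```

Repeated over feedback rounds, this kind of update is what lets a retrieval system adapt online to unseen cases without retraining the offline classifier.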
Segmentation of blood vessels is a challenging problem due to poor contrast, noise, and specifics of vessels'
branching and bending geometry. This paper describes a robust semi-automatic approach to extract the surface
between two or more user-supplied end points for tubular- or vessel-like structures. We first use a minimal path
technique to extract the shortest path between the user-supplied points. This path is the global minimizer of
an active contour model's energy along all possible paths joining the end-points. Subsequently, the surface of
interest is extracted using an edge-based level set segmentation approach. To prevent leakage into adjacent
tissues, the algorithm uses a diameter constraint that does not allow the moving front to grow wider than the
predefined diameter. Points constituting the extracted path(s) are automatically used as initialization seeds for
the evolving level set function. To cope with any further leaks that may occur in the case of large variations of
the vessel width between the user-supplied end-points, a freezing mechanism is designed to prevent the moving
front from leaking into undesired areas. The regions to be frozen are determined from a few clicks by the user. The
potential of the proposed approach is demonstrated on several synthetic and real images.
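The minimal-path step can be illustrated with a discrete stand-in: Dijkstra's algorithm on a 2D cost image between two user-supplied points. The actual method minimizes a continuous active-contour energy; the cost image below is a toy example with a low-cost "vessel" in a high-cost background.

```python
import heapq
import numpy as np

def minimal_path(cost, start, end):
    """Dijkstra shortest path on a 2D cost image (4-connected), a discrete
    analogue of the minimal-path extraction that seeds the level set."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
    path, node = [end], end
    while node != start:           # backtrack from end to start
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy image: a low-cost horizontal "vessel" in a high-cost background.
cost = np.full((5, 9), 10.0)
cost[2, :] = 1.0                   # the vessel centerline
path = minimal_path(cost, (2, 0), (2, 8))
```

The extracted path hugs the low-cost centerline, which is why its points make good initialization seeds for the level set evolution described above.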
The purpose of this study was to investigate the feasibility of computer-aided detection of masses and calcification clusters in breast tomosynthesis images and to obtain reliable estimates of the sensitivity and false positive rate on an independent test set. Automatic mass and calcification detection algorithms developed for film and digital mammography images were applied, without any adaptation or retraining, to tomosynthesis projection images. The test set contained 36 patients, including 16 patients with 20 known malignant lesions, 4 of which were missed by the radiologists in conventional mammography images and found only in retrospect in tomosynthesis. A median filter was applied to the tomosynthesis projection images. The detection algorithm yielded 80% sensitivity and 5.3 false positives per breast for the calcification and mass detection algorithms combined. Of the 4 masses missed by radiologists in conventional mammography images, 2 were found by the mass detection algorithm in tomosynthesis images.
For the detection of cancerous polyps in CT colonography, we investigate the sample variance of two methods for estimating sensitivity and specificity. The goal is to reduce the sample variance of both error estimates, as a first step towards comparison with other detection schemes. Our detection scheme is based on a committee of support vector machines. The two estimators of sensitivity and specificity studied here are a smoothed bootstrap (the 632+ estimator) and ten-fold cross-validation. It is shown that the 632+ estimator generally has lower sample variance than the usual cross-validation estimator. When the number of nonpolyps in the training set is relatively small, we obtain approximately 80% sensitivity and 50% specificity (for either method). On the other hand, when the number of nonpolyps in the training set is relatively large, the estimated sensitivity (for either method) drops considerably. Finally, we consider the intertwined roles of relative sample sizes (polyp/nonpolyp), misclassification costs, and bias-variance reduction.
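For reference, the .632+ estimator blends the optimistic resubstitution error with the pessimistic leave-one-out bootstrap error, with a weight driven by the estimated overfitting rate. The sketch below uses a nearest-centroid classifier on synthetic two-class data as a stand-in for the SVM committee:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic two-class data standing in for polyp / non-polyp feature vectors.
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(2.5, 1, (40, 2))])
y = np.array([0] * 40 + [1] * 40)

def fit_predict(Xtr, ytr, Xte):
    # Nearest-centroid classifier as a simple stand-in for the SVM committee.
    cents = np.array([Xtr[ytr == k].mean(axis=0) for k in (0, 1)])
    d = ((Xte[:, None, :] - cents[None]) ** 2).sum(-1)
    return d.argmin(axis=1)

err_train = np.mean(fit_predict(X, y, X) != y)        # resubstitution error

# Leave-one-out bootstrap error: average error on out-of-bag samples.
B, oob_errs = 50, []
for _ in range(B):
    idx = rng.integers(0, len(y), len(y))
    oob = np.setdiff1d(np.arange(len(y)), idx)
    if oob.size:
        oob_errs.append(np.mean(fit_predict(X[idx], y[idx], X[oob]) != y[oob]))
err_boot = np.mean(oob_errs)

# No-information error rate gamma and the .632+ weighting.
p = np.bincount(y) / len(y)
q = np.bincount(fit_predict(X, y, X), minlength=2) / len(y)
gamma = np.sum(p * (1 - q))
R = (err_boot - err_train) / (gamma - err_train) if gamma > err_train else 0.0
R = np.clip(R, 0.0, 1.0)
w = 0.632 / (1 - 0.368 * R)
err_632plus = (1 - w) * err_train + w * err_boot
```

Because the weight w stays in [0.632, 1], the estimate always lies between the resubstitution and bootstrap errors, which is the smoothing that reduces its sample variance relative to cross-validation.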
To improve computer-aided diagnosis (CAD) for CT colonography, we designed a hybrid classification scheme that uses a committee of support vector machines (SVMs) combined with a genetic algorithm (GA) for variable selection. The genetic algorithm selects subsets of four features, which are later combined to form a committee, with a majority vote for classification across the base classifiers. Cross-validation was used to predict the accuracy (sensitivity, specificity, and combined accuracy) of each base SVM classifier. As a comparison to the GA, we analyzed a popular approach to feature selection called forward stepwise search (FSS). We conclude that genetic algorithms are effective, compared to the forward search procedure, when used in conjunction with a committee of support vector machine classifiers for colonic polyp identification.
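A toy sketch of GA-based subset selection follows. The fitness function is a synthetic surrogate (counting pre-designated "informative" features) rather than the cross-validated SVM accuracy the paper uses, so the mechanics of selection, crossover, and mutation can be shown without a classifier:

```python
import random

random.seed(0)
M, N = 17, 4                      # feature pool size and subset size
informative = {0, 1, 2, 3}        # hypothetical informative features

def fitness(subset):
    # Stand-in for cross-validated SVM accuracy: simply the number of
    # informative features selected (a toy surrogate objective).
    return len(informative & set(subset))

def crossover(a, b):
    # Child draws N distinct features from the union of both parents.
    return tuple(sorted(random.sample(sorted(set(a) | set(b)), N)))

def mutate(subset):
    # Drop one feature at random and refill up to N distinct features.
    s = set(subset)
    s.discard(random.choice(tuple(s)))
    while len(s) < N:
        s.add(random.randrange(M))
    return tuple(sorted(s))

pop = [tuple(sorted(random.sample(range(M), N))) for _ in range(20)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:8]                               # elitist selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(12)]
    pop = parents + children
best = max(pop, key=fitness)
```

In the real system each candidate subset would be scored by training an SVM and cross-validating, and the surviving subsets form the committee that votes on each detection.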
We have developed a new method employing the Canny edge detector and the Radon transform to segment images of polyp candidates for CT colonography (CTC) computer-aided polyp detection, and to obtain features useful for distinguishing true polyps from false positive detections.
The technique is applied to two-dimensional subimages of polyp candidates selected using various 3-D shape and curvature characteristics. We detect boundaries using the Canny operator. The baseline of the colon wall is detected by applying the Radon transform to the edge image and locating the strongest peak in the resulting transform matrix. The following features are calculated and used to classify detections as true positives (TP) and false positives (FP): polyp boundary length, polyp base length, polyp internal area, average intensity, polyp height, and inscribed circle radius.
The segmentation technique was applied to a data set of 15 polyps larger than 3 mm and 617 false positives taken from 80 CTC studies (supine and prone screening of 40 patients). The sensitivity was 100% (15 of 15), and 58% of the FPs were eliminated, leaving an average of 3 false positives per study.
Our method is able to segment polyps and quantitatively measure polyp features independently of orientation and shape.
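The strongest-peak step can be illustrated on a synthetic edge image. For a binary point set, locating the peak of the Radon transform is equivalent to finding the peak of a Hough accumulator over (theta, rho); the toy "colon wall" below is a straight line at 45 degrees:

```python
import numpy as np

# Binary edge points along the line y = x, standing in for Canny output
# of a straight colon-wall segment.
pts = np.array([(i, i) for i in range(32)])

# Radon/Hough accumulator over (theta, rho), rho = x cos(theta) + y sin(theta),
# theta sampled in 1-degree steps so the bin index equals degrees.
thetas = np.deg2rad(np.arange(0, 180))
rho_max = 64
acc = np.zeros((len(thetas), 2 * rho_max))
for x, y in pts:
    rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + rho_max
    acc[np.arange(len(thetas)), rhos] += 1

t_idx, r_idx = np.unravel_index(acc.argmax(), acc.shape)
theta_deg = t_idx   # orientation of the strongest line (the wall baseline)
```

All 32 collinear points fall into a single bin only at the line's normal angle (135 degrees for y = x), so the strongest peak directly yields the baseline from which polyp height and base length can then be measured.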
A multi-network decision classification scheme for colonic polyp detection is presented. The approach is based on voting over several neural networks that use different variable sets of size N, selected randomly or by an expert from a general variable set of size M. Detection of colonic polyps is complicated by the large variety of polypoid-looking shapes (haustral folds, leftover stool) on the colon surface. Using various shape and curvature characteristics, intensity, size measurements, and texture features to distinguish real polyps from false positives leads to an intricate classification problem. We used 17 features, including region density, Gaussian and average curvature, sphericity, lesion size, colon wall thickness, and their means and standard deviations in the vicinity of the prospective polyp. Selecting the most important parameters to reduce a feature set to an acceptable size is a generally unsolved problem. The method suggested in this paper uses a collection of subsets of variables, weighted by their effectiveness. The effectiveness cost function is calculated from the training and test sample misclassification rates obtained by training a neural net with the given variable set. The final decision is based on a majority vote across the networks generated from the variable subsets, taking into account the weighted votes of all nets. This method reduces the false positive rate by a factor of 1.7 compared to single-net decisions. The overall sensitivity and specificity rates reached are 100% and 95%, respectively. The best specificity and sensitivity rates were achieved using back-propagation neural nets with one hidden layer trained with the Levenberg-Marquardt algorithm. Ten-fold cross-validation is used to better estimate the true error rates.
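The weighted majority vote itself is straightforward; a minimal sketch with hypothetical per-network votes and effectiveness-derived weights:

```python
import numpy as np

def weighted_vote(votes, weights):
    """Combine per-network polyp/non-polyp votes (+1 / -1) using weights
    derived from each network's effectiveness; sign of the weighted sum
    gives the committee decision."""
    return 1 if np.dot(votes, weights) > 0 else -1

# Hypothetical committee: three nets say "polyp" (+1), two say "non-polyp" (-1),
# and the dissenters carry low effectiveness weights.
votes = np.array([+1, +1, +1, -1, -1])
weights = np.array([0.9, 0.8, 0.7, 0.2, 0.3])
decision = weighted_vote(votes, weights)
```

Weighting lets a single highly effective network overrule several weak ones, which is the behavior that distinguishes this scheme from a plain unweighted majority.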
The paper describes a neural-based method for matching spatially distorted image sets. The matching of partially overlapping images is important in many applications: integrating information from images formed in different spectral ranges, detecting changes in a scene, and identifying objects of differing orientations and sizes. Our approach consists of extracting contour features from both images, describing the contour curves as sets of line segments, comparing these sets, determining the corresponding curves and their common reference points, and calculating the image-to-image coordinate transformation parameters on the basis of the most successful variant of the derived curve relationships. The main steps are performed by custom neural networks. The algorithms described in this paper have been successfully tested on a large set of images of the same terrain taken in different spectral ranges, in different seasons, and rotated by various angles. In general, this experimental verification indicates that the proposed method for image fusion allows the robust detection of similar objects in noisy, distorted scenes where traditional approaches often fail.
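The final step, estimating the coordinate transformation from matched reference points, can be sketched as a least-squares rigid fit (2D Kabsch/Procrustes). The neural matching stages are not reproduced; the point correspondences below are synthetic:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rotation + translation mapping matched reference points
    src -> dst (2D Kabsch/Procrustes). Illustrates only the parameter
    estimation step; the paper's neural matching is assumed done."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)       # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic correspondences: a known 30-degree rotation plus translation.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.random.default_rng(0).normal(size=(10, 2))
dst = src @ R_true.T + np.array([2.0, -1.0])
R, t = estimate_rigid_transform(src, dst)
```

With noiseless correspondences the fit recovers the transformation exactly; with the noisy, partially wrong matches produced in practice, the "most successful variant" selection above supplies the robustness.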