Schwannomas and meningiomas account for a large proportion of primary spinal tumors and often require surgical treatment. Although preoperative discrimination of schwannomas from meningiomas is crucial, differentiating the two on magnetic resonance imaging is challenging. The two differ not only in their magnetic resonance imaging patterns but also in their epidemiology. TabNet was recently introduced as a deep neural network for tabular data and achieved state-of-the-art results on several datasets. Because TabNet is a deep neural network, it can be trained simultaneously with a convolutional neural network, allowing joint analysis of image and tabular data. We aim to build a bi-modal model combining a convolutional neural network and TabNet and to evaluate its performance in differentiating between schwannomas and meningiomas based on integrated magnetic resonance imaging and clinical factors.
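A minimal sketch of the bi-modal idea is given below, assuming 2D MR slices, a ResNet-18 image branch, and a small MLP standing in for TabNet on the clinical factors; the backbones, feature sizes, and fusion by concatenation are illustrative assumptions, not the paper's exact configuration. Because every part is a differentiable network, the two branches can be optimized jointly with standard backpropagation.

```python
import torch
import torch.nn as nn
from torchvision import models

class BiModalNet(nn.Module):
    """Sketch of a bi-modal classifier: a CNN encodes the MR image and a
    tabular branch (TabNet in the paper; a small MLP stands in here) encodes
    the clinical factors, and the two feature vectors are fused."""
    def __init__(self, n_tabular_features, n_classes=2):
        super().__init__()
        cnn = models.resnet18(weights=None)   # image branch; expects 3-channel input
        cnn.fc = nn.Identity()                # keep the 512-dim image features
        self.image_branch = cnn
        self.tabular_branch = nn.Sequential(  # placeholder for TabNet
            nn.Linear(n_tabular_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU())
        self.classifier = nn.Linear(512 + 32, n_classes)

    def forward(self, image, tabular):
        f_img = self.image_branch(image)      # (N, 512)
        f_tab = self.tabular_branch(tabular)  # (N, 32)
        return self.classifier(torch.cat([f_img, f_tab], dim=1))
```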
Distinguishing the types of liver tumors is important for determining the treatment strategy. Dynamic contrast-enhanced computed tomography (DCE-CT), which captures images at multiple time points after injection of contrast medium, provides essential characteristics for distinguishing tumor types without biopsy. However, recognizing such characteristics takes radiologists a lot of time, because it requires distinguishing ambiguous image features while comparing multiple images. Although several studies have proposed systems that classify tumor types from DCE-CT images, these systems usually output only the classification result without the basis for it, such as the tumor characteristics. In this study, we propose a novel liver tumor characterization system that analyzes multi-phase DCE-CT images to help radiologists classify tumor types. We defined a list of eight essential tumor characteristics that radiologists commonly use to distinguish tumor types such as hepatocellular carcinomas (HCC), hemangiomas, and metastases. To deal with a variable number of input images, we propose three deep neural network classification models that can take both two- and three-phase DCE-CT images as input. Using a dataset consisting of 3,318 tumors with labeled characteristics, each model was trained to classify the eight characteristics and validated. Evaluation results showed high discrimination performance, exceeding 91% ROC-AUC on average.
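One simple way to accept either two- or three-phase input is to encode each phase with a shared 2D CNN and pool the per-phase features before a multi-label head. The sketch below illustrates that idea under assumed choices (ResNet-18 encoder, max pooling over phases, sigmoid outputs for the eight characteristics); it is not one of the paper's three specific models.

```python
import torch
import torch.nn as nn
from torchvision import models

class PhasePooledClassifier(nn.Module):
    """Accepts a variable number of DCE-CT phases by encoding each phase with
    a shared CNN and max-pooling the per-phase features."""
    def __init__(self, n_characteristics=8):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()
        self.encoder = backbone
        self.head = nn.Linear(512, n_characteristics)

    def forward(self, phases):
        # phases: list of (N, 3, H, W) tensors, length 2 or 3
        # (grayscale CT phases replicated to 3 channels for the ResNet stem)
        feats = torch.stack([self.encoder(p) for p in phases], dim=0)  # (P, N, 512)
        pooled, _ = feats.max(dim=0)             # order/number of phases no longer matters
        return torch.sigmoid(self.head(pooled))  # per-characteristic probabilities
```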
Diffuse lung diseases (DLD) are distributed widely throughout the lungs. Because the opacity patterns of DLD on CT images are complex and varied, diagnostic results may differ between doctors depending on their experience and subjective judgment. To address this problem, image analysis with computer-aided diagnosis (CAD) systems has attracted attention. To achieve high diagnostic performance with such CAD systems, lung region extraction must first be performed as preprocessing to limit the target domain. However, existing systems have difficulty extracting lung regions across all five typical DLD shadow patterns as well as normal lungs. In this study, we aimed to extract lung regions from CT slices containing DLD shadows using U-Net to improve CAD performance.
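As a rough illustration of the segmentation network involved, the sketch below is a reduced, two-level U-Net-style encoder-decoder producing a lung-probability map for a single CT slice; the paper presumably uses the full-depth U-Net, so the channel counts and depth here are assumptions.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Two-level U-Net-style network for binary lung-region masks."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.out = nn.Conv2d(32, 1, 1)          # 1-channel lung-probability map

    def forward(self, x):
        e1 = self.enc1(x)                       # skip-connection source
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return torch.sigmoid(self.out(d1))
```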
In this study, we aimed to classify lung cancers in chest CT images into adenocarcinoma (AD) and squamous cell carcinoma (SQ) using 3D convolutional neural networks (CNNs), and to visualize the grounds used by the CNN in the classification process. Although a CNN is a powerful tool for classifying types of lung cancer, it does not explicitly provide the grounds for its decisions, and doctors and patients may not be satisfied with a decision made by a CNN. First, we developed a CNN-based classifier to classify lung tumors into AD and SQ. The recognition rate of the proposed method was 69.9 ± 3.8%. Furthermore, the grounds for the classification by the CNN were visualized using Gradient-weighted Class Activation Mapping (Grad-CAM)[1].
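A minimal Grad-CAM sketch for a 3D CNN is given below, assuming a PyTorch model and a chosen target convolutional layer; the hook-based implementation and the normalization details are generic, not the authors' exact code.

```python
import torch
import torch.nn.functional as F

class GradCAM:
    """Minimal Grad-CAM: keep the target layer's activations and gradients via
    hooks, then weight each activation map by its spatially pooled gradient."""
    def __init__(self, model, target_layer):
        self.model = model.eval()
        self.acts, self.grads = None, None
        target_layer.register_forward_hook(self._save_acts)
        target_layer.register_full_backward_hook(self._save_grads)

    def _save_acts(self, module, inp, out):
        self.acts = out.detach()

    def _save_grads(self, module, grad_in, grad_out):
        self.grads = grad_out[0].detach()

    def __call__(self, volume, class_idx):
        logits = self.model(volume)                  # (1, num_classes)
        self.model.zero_grad()
        logits[0, class_idx].backward()
        weights = self.grads.mean(dim=(2, 3, 4), keepdim=True)  # pool over D, H, W
        cam = F.relu((weights * self.acts).sum(dim=1))          # (1, D, H, W)
        return cam / (cam.max() + 1e-8)              # normalized heat map
```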
A computer-aided diagnosis system for diffuse lung diseases (DLDs) is necessary for objective assessment of these diseases. In this paper, we develop a semantic segmentation model for five kinds of DLD patterns: consolidation, ground-glass opacity, honeycombing, emphysema, and normal. Convolutional neural networks (CNNs) are among the most promising machine learning techniques for semantic segmentation. While creating a fully annotated dataset for semantic segmentation is laborious and time-consuming, creating a partially annotated dataset, in which only one chosen class is annotated for each image, is easier, since annotators only need to focus on one class at a time during the annotation task. In this paper, we propose a new weak supervision technique that effectively utilizes such partially annotated datasets. Experiments using a partially annotated dataset composed of 372 CT images demonstrated that the proposed technique significantly improved segmentation accuracy.
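One common way to train on partial annotations, shown below as a hedged sketch rather than the paper's specific technique, is to compute the segmentation loss only on annotated pixels, with un-annotated pixels marked by an ignore label.

```python
import torch
import torch.nn as nn

IGNORE = 255  # hypothetical convention: pixels with no annotation carry this label
criterion = nn.CrossEntropyLoss(ignore_index=IGNORE)

def partial_annotation_loss(logits, partial_labels):
    """Cross-entropy over annotated pixels only.

    logits: (N, 5, H, W) class scores; partial_labels: (N, H, W) with values
    in {0..4} where the chosen class was annotated and IGNORE elsewhere, so
    un-annotated pixels contribute nothing to the gradient.
    """
    return criterion(logits, partial_labels)
```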
In recent years, many medical image analysis methods based on deep learning techniques have been proposed. Deep learning has been used for various medical applications such as organ segmentation and cancer detection. Segmentation of the lung region from chest X-ray (CXR) images is also an important task for computer-aided diagnosis (CAD). However, although many deep-learning-based methods have been proposed for this purpose, the regions where the lung and the heart overlap have been excluded from the extraction target, despite their importance for disease detection. The aim of this paper is to extract whole lung regions from CXR images using a U-Net-based method. As is widely known, U-Net shows high performance in a variety of applications. As a result of the experiment, the authors achieved an average Dice coefficient of 0.91.
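For reference, the Dice coefficient reported above is the standard overlap measure 2|A∩B| / (|A|+|B|); a small sketch of its computation on binary masks follows.

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask):
    """Dice coefficient between a predicted and a reference lung mask."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    denom = pred.sum() + true.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0
```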
Research on computer-aided diagnosis (CAD), which discriminates the presence or absence of disease by machine learning and supports doctors' diagnoses, has been actively conducted. However, training machine learning models requires a large amount of annotated training data. Since the annotations are made manually by radiologists, annotating hundreds to thousands of images is very hard work. This study proposes classifiers using convolutional neural networks (CNNs) with transfer learning for efficient opacity classification of diffuse lung diseases, and the effects of transfer learning are analyzed under various conditions. Specifically, classifiers trained under nine different transfer learning conditions and without transfer learning are compared to identify the best conditions.
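A generic transfer-learning setup is sketched below, assuming an ImageNet-pretrained VGG16 backbone whose convolutional layers are frozen and whose final layer is replaced for the opacity classes; the backbone, the number of classes, and which layers to freeze are assumptions, and varying such choices is exactly the kind of condition the study compares.

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained VGG16 (assumed backbone) and adapt it.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

for p in model.features.parameters():
    p.requires_grad = False                 # one condition: freeze the feature extractor

model.classifier[6] = nn.Linear(4096, 6)    # new head; 6 opacity classes assumed
# Only the unfrozen parameters are then fine-tuned on the lung CT ROIs.
```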
Research on computer-aided diagnosis (CAD) for medical images using machine learning has been actively conducted. However, machine learning, especially deep learning, requires a large amount of annotated training data. Deep learning often requires thousands of training samples, but it is tough work for radiologists to assign normal and abnormal labels to so many images. In this research, aiming at efficient opacity annotation of diffuse lung diseases, unsupervised and semi-supervised opacity annotation algorithms are introduced. Unsupervised learning forms clusters of opacities based on image features without using any opacity labels, and semi-supervised learning efficiently uses a small number of annotated training samples to train classifiers. The performance evaluation is carried out on the classification of six kinds of opacities of diffuse lung diseases: consolidation, ground-glass opacity, honeycombing, emphysema, nodular, and normal, and the effectiveness of the methods is clarified.
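The sketch below illustrates the two ingredients with generic scikit-learn components (k-means clustering and label spreading); the feature extraction step, the file names, and the specific unsupervised and semi-supervised algorithms used in the study are stand-in assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.semi_supervised import LabelSpreading

# X: feature vectors of ROI patches (hypothetical file); y: labels in {0..5}
# for the few annotated ROIs and -1 for unlabeled ones.
X = np.load("roi_features.npy")
y = np.load("roi_partial_labels.npy")

# Unsupervised: group opacities purely from image features, no labels used.
clusters = KMeans(n_clusters=6, random_state=0).fit_predict(X)

# Semi-supervised: propagate the small number of annotated labels to the rest.
semi = LabelSpreading(kernel="knn", n_neighbors=10).fit(X, y)
predicted_labels = semi.transduction_
```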
The burden on doctors performing chest X-ray (CXR) examinations has increased as the number of X-ray images grows. Furthermore, since diagnosis depends on each doctor's experience and subjectivity, misdiagnosis may occur. We therefore studied computer-aided diagnosis (CAD). In this study, we detected pulmonary nodules using R-CNN (Regions with Convolutional Neural Network features)[1], a deep learning method. First, we built a CNN (convolutional neural network) that classifies image regions into nodule opacities and non-nodule opacities. Next, we detected object candidate regions in the chest X-ray images with Selective Search[2], and applied the CNN to the candidate regions to classify them and estimate the detailed position of the object. In this way, we propose a method to detect pulmonary nodules in chest X-ray images.
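The proposal-then-classify pipeline can be sketched as below, assuming the OpenCV contrib implementation of Selective Search, a trained PyTorch nodule/non-nodule CNN called `model`, a 64x64 patch size, and a 0.9 score threshold; all of these, including the file name, are illustrative assumptions.

```python
import cv2
import torch

# Requires opencv-contrib-python for the ximgproc module.
cxr_gray = cv2.imread("cxr.png", cv2.IMREAD_GRAYSCALE)      # hypothetical image
cxr_rgb = cv2.cvtColor(cxr_gray, cv2.COLOR_GRAY2BGR)

ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
ss.setBaseImage(cxr_rgb)
ss.switchToSelectiveSearchFast()
rects = ss.process()                                         # candidate (x, y, w, h) boxes

detections = []
with torch.no_grad():
    for (x, y, w, h) in rects[:2000]:                        # limit the number of proposals
        patch = cv2.resize(cxr_rgb[y:y+h, x:x+w], (64, 64))  # assumed CNN input size
        t = torch.from_numpy(patch).permute(2, 0, 1).float().unsqueeze(0) / 255.0
        prob_nodule = torch.softmax(model(t), dim=1)[0, 1].item()
        if prob_nodule > 0.9:                                # assumed decision threshold
            detections.append((x, y, w, h, prob_nodule))
```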
We developed and evaluated the effect of our deep-learning-derived radiomic features, called deep radiomic features (DRFs), together with their combination with clinical predictors, on the prediction of the overall survival of patients with rheumatoid arthritis-associated interstitial lung disease (RA-ILD). We retrospectively identified 70 RA-ILD patients with thin-section lung CT and pulmonary function tests. An experienced observer delineated regions of interest (ROIs) in the lung regions on the CT images and labeled them as one of four ILD patterns (ground-glass opacity, reticulation, consolidation, and honeycombing) or a normal pattern. Small image patches centered at individual pixels in these ROIs were extracted and labeled with the class of the ROI to which each patch belonged. A deep convolutional neural network (DCNN), consisting of a series of convolutional layers for feature extraction followed by a series of fully connected layers, was trained and validated with 5-fold cross-validation for classifying the image patches into one of the above five patterns. A DRF vector for each patch was taken as the output of the last convolutional layer of the DCNN. Statistical moments of each element of the DRF vectors were computed to derive a DRF vector that characterizes the patient. This DRF vector was subjected to a Cox proportional hazards model with an elastic-net penalty for predicting the survival of the patient. Evaluation was performed by bootstrapping with 2,000 replications, with the concordance index (C-index) as the comparative performance metric. Preliminary results on clinical predictors, DRFs, and their combinations showed (a) gender and age: C-index 64.8% [95% confidence interval (CI): 51.7, 77.9]; (b) gender, age, and physiology (GAP index): C-index 78.5% [CI: 70.5, 86.5], P < 0.0001 in comparison with (a); (c) DRFs: C-index 85.5% [CI: 73.4, 99.6], P < 0.0001 in comparison with (b); and (d) DRFs and GAP: C-index 91.0% [CI: 84.6, 97.2], P < 0.0001 in comparison with (c). Kaplan-Meier survival curves of patients stratified into low- and high-risk groups based on the DRFs showed a statistically significant (P < 0.0001) difference. The DRFs outperform the clinical predictors in predicting patient survival, and a combination of the DRFs and the GAP index outperforms either predictor alone. Our results indicate that the DRFs and their combination with clinical predictors provide an accurate prognostic biomarker for patients with RA-ILD.
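A hedged sketch of the survival-modeling step follows, using lifelines' elastic-net-penalized Cox regression; the column names, the penalty strength, and the use of lifelines (rather than the authors' actual implementation) are assumptions.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical table: one row per patient with survival time (months), an
# event indicator, the GAP index, and the patient-level DRF moment features.
df = pd.read_csv("ra_ild_cohort.csv")

# Elastic-net-penalized Cox proportional hazards model (penalizer/l1_ratio assumed).
cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.5)
cph.fit(df, duration_col="survival_months", event_col="event")
print(cph.concordance_index_)   # C-index on the fitted data; the paper instead
                                # evaluates it with 2,000 bootstrap replications
```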
Consolidation and ground-glass opacity (GGO) are two major types of opacities associated with diffuse lung diseases. Accurate detection and classification of such opacities are crucially important in the diagnosis of lung diseases, but the process is subjective and suffers from interobserver variability. Our purpose was to develop a deep neural network convolution (NNC) system for distinguishing among consolidation, GGO, and normal lung tissue in high-resolution CT (HRCT). We developed an ensemble of two deep NNC models, each of which was composed of neural network regression (NNR) with an input layer, a convolution layer, a fully connected hidden layer, and a fully connected output layer followed by a thresholding layer. The output layer of each NNC provided a map of the likelihood of the corresponding lung opacity of interest. The two NNC models in the ensemble were connected by a class-selection layer. We trained our NNC ensemble with pairs of input 2D axial slices and "teaching" probability maps for the corresponding lung opacity, which were obtained by combining three radiologists' annotations. We randomly selected 10 and 40 slices from HRCT scans of 172 patients for each class as the training and test sets, respectively. Our NNC ensemble achieved areas under the receiver-operating-characteristic (ROC) curve (AUC) of 0.981 and 0.958 in distinguishing consolidation and GGO, respectively, from normal opacity, yielding a classification accuracy of 93.3% among the three classes. Thus, our deep-NNC-based system for classifying diffuse lung diseases achieved high accuracy for consolidation, GGO, and normal opacity.
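The class-selection step can be read as combining the two per-pixel likelihood maps into a single three-class decision; the sketch below is one such reading (pick the opacity with the larger likelihood when it exceeds a threshold, otherwise call the pixel normal) and is not the authors' exact rule.

```python
import numpy as np

def class_selection(p_consolidation, p_ggo, threshold=0.5):
    """Combine two per-pixel opacity likelihood maps into a 3-class label map:
    0 = normal, 1 = consolidation, 2 = GGO (threshold value assumed)."""
    labels = np.zeros(p_consolidation.shape, dtype=np.uint8)
    stronger = np.where(p_consolidation >= p_ggo, p_consolidation, p_ggo)
    which = np.where(p_consolidation >= p_ggo, 1, 2)
    keep = stronger > threshold
    labels[keep] = which[keep]
    return labels
```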
This research proposes a multi-channel deep convolutional neural network (DCNN) for computer-aided diagnosis (CAD) that classifies normal and abnormal opacities of diffuse lung diseases in computed tomography (CT) images. Because CT images are grayscale, a DCNN usually uses one channel for the input image data. In contrast, this research uses a multi-channel DCNN in which each channel corresponds to the original raw image or to an image transformed by a preprocessing technique. The information obtained from raw images alone is limited, and previous research has suggested that preprocessing of images contributes to improving classification accuracy; the combination of the original and preprocessed images is therefore expected to yield higher accuracy. The proposed method performs region-of-interest (ROI)-based opacity annotation. We used lung CT images taken at Yamaguchi University Hospital, Japan, divided into 32 × 32 ROI images. The ROIs contain six kinds of opacities: consolidation, ground-glass opacity (GGO), emphysema, honeycombing, nodular, and normal. The aim of the proposed method is to classify each ROI into one of the six opacities (classes). The DCNN structure is based on the VGG network, which secured the first and second places in ImageNet ILSVRC-2014. Experimental results showed that the classification accuracy of the proposed method was better than that of the conventional single-channel method, and the difference between them was significant.
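The channel construction can be sketched as below, where each 32 × 32 ROI is stacked with preprocessed versions of itself before being fed to the DCNN; the specific transforms (histogram equalization and a Sobel edge map) are illustrative assumptions, not the paper's chosen preprocessing.

```python
import numpy as np
from skimage import exposure, filters

def to_multichannel(roi):
    """Stack the raw 32x32 ROI with preprocessed versions of itself so each
    DCNN input channel carries a different representation of the same patch."""
    raw = roi.astype(np.float32)
    equalized = exposure.equalize_hist(raw)   # contrast-enhanced channel
    edges = filters.sobel(raw)                # edge-emphasized channel
    return np.stack([raw, equalized, edges], axis=0)   # (3, 32, 32) network input
```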
CT images are considered effective for the differential diagnosis of diffuse lung diseases. However, the diagnosis of diffuse lung diseases is a difficult problem for radiologists, because the diseases show a variety of patterns on CT images. Our purpose is therefore to construct a computer-aided diagnosis (CAD) system for classifying the patterns of diffuse lung diseases in thoracic CT images, which provides quantitative and objective information as a second opinion and decreases the burden on radiologists. In this article, we propose a CAD system based on the conventional pattern recognition framework, which consists of two sub-systems: a feature extraction part and a classification part. In the feature extraction part, we adopted a Gabor filter, which can extract patterns such as local edges and segments from input textures, for feature extraction from the CT images. In the recognition part, we used a boosting method. Boosting is a kind of voting method over several classifiers that improves decision precision; we applied the AdaBoost algorithm. First, we evaluated each boosting component classifier and confirmed that, individually, they did not perform well enough to classify the patterns of diffuse lung diseases. Next, we evaluated the performance of the boosting method. As a result, using our system, we were able to improve the classification rate of the patterns of diffuse lung diseases.
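Below is a compact sketch of the pipeline's two parts with generic components: Gabor-filter texture features (scikit-image) feeding an AdaBoost classifier (scikit-learn); the filter-bank parameters and the choice of weak learner are assumptions rather than the paper's exact settings.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.ensemble import AdaBoostClassifier

def gabor_features(patch, frequencies=(0.1, 0.2, 0.4), n_orientations=4):
    """Mean and variance of Gabor responses over several frequencies and
    orientations (a generic texture descriptor; the exact filter bank differs)."""
    feats = []
    for f in frequencies:
        for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
            real, _ = gabor(patch, frequency=f, theta=theta)
            feats += [real.mean(), real.var()]
    return np.array(feats)

# X_train/X_test: Gabor feature vectors of ROIs; y_train/y_test: pattern labels
# (both assumed to be prepared beforehand).
clf = AdaBoostClassifier(n_estimators=200)   # boosting over weak component classifiers
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```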
For the diagnosis of pulmonary diseases, it is important to measure the volume of accumulating pleural effusion quantitatively in three-dimensional thoracic CT images. However, correct automated extraction of pleural effusion is difficult. A conventional extraction algorithm using a gray-level threshold cannot correctly separate pleural effusion from the thoracic wall or mediastinum, because the density of pleural effusion in CT images is similar to theirs. We have therefore developed an automated extraction method for pleural effusion that extracts the lung area together with the effusion. Our method uses a lung template obtained from a normal lung to segment lungs with pleural effusion. The registration process consists of two steps. The first step is global matching between the normal and abnormal lungs based on organs such as the bronchi, the bones (ribs, sternum, and vertebrae), and the upper surface of the liver, which are extracted using a region-growing algorithm. The second step is local matching between the normal and abnormal lungs after deformation by the parameters obtained from the global matching. Finally, we segment the lung with pleural effusion using the template deformed by the two sets of parameters obtained from the global and local matching steps. We compared our method with a conventional gray-level-threshold extraction method and two published methods; the extraction rates of pleural effusion obtained with our method were much higher than those of the other methods. This automated extraction method, which extracts the lung area together with the pleural effusion, is promising for the diagnosis of pulmonary diseases because it provides a quantitative volume of the accumulating pleural effusion.
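The two-step template registration can be illustrated with a generic intensity-based scheme in SimpleITK, shown below; the paper matches organs (bronchi, bones, liver surface) extracted by region growing rather than raw intensities, so the metric, optimizers, file names, and B-spline mesh size here are all assumptions.

```python
import SimpleITK as sitk

# Hypothetical files: patient CT with effusion, a normal-lung template CT,
# and the template's lung mask.
fixed = sitk.ReadImage("patient_ct.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("normal_template_ct.nii.gz", sitk.sitkFloat32)
template_mask = sitk.ReadImage("normal_template_lung_mask.nii.gz", sitk.sitkUInt8)

# Step 1: global matching (affine) of the whole thorax.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(fixed, moving, sitk.AffineTransform(3),
                                      sitk.CenteredTransformInitializerFilter.GEOMETRY),
    inPlace=False)
global_tx = reg.Execute(fixed, moving)
moving_global = sitk.Resample(moving, fixed, global_tx, sitk.sitkLinear, 0.0)
mask_global = sitk.Resample(template_mask, fixed, global_tx, sitk.sitkNearestNeighbor, 0)

# Step 2: local (deformable, B-spline) matching refining the globally aligned template.
reg2 = sitk.ImageRegistrationMethod()
reg2.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg2.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5, numberOfIterations=100)
reg2.SetInterpolator(sitk.sitkLinear)
reg2.SetInitialTransform(sitk.BSplineTransformInitializer(fixed, [8, 8, 8]), inPlace=False)
local_tx = reg2.Execute(fixed, moving_global)

# Deform the template lung mask with both steps to segment the lung with effusion.
lung_with_effusion = sitk.Resample(mask_global, fixed, local_tx,
                                   sitk.sitkNearestNeighbor, 0)
```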
Accurate segmentation of small pulmonary nodules (SPNs) in thoracic CT images is an important technique for volumetric doubling-time estimation and feature characterization in the diagnosis of SPNs. Most previously presented nodule segmentation algorithms were designed to handle solid pulmonary nodules; however, SPNs with ground-glass opacity (GGO) also affect diagnosis. Therefore, we have developed an automated volumetric segmentation algorithm for SPNs with GGO in thoracic CT images. This paper presents our segmentation algorithm, which is based on multiple fixed thresholds, a template-matching method, a distance-transform method, and a watershed method. For quantitative evaluation of the algorithm's performance, we used the first dataset provided by the NCI Lung Image Database Consortium (LIDC). In the evaluation, we employed the coincident rate, calculated from both the computer-segmented region of an SPN and the matching probability map (pmap) images provided by the LIDC. For the 23 cases, the mean total coincident rate was 0.507 ± 0.219. From these results, we conclude that our algorithm is useful for extracting SPNs with GGO and solid patterns, as well as SPNs of a wide variety of sizes.
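The distance-transform and watershed steps can be illustrated as below, assuming a binary nodule-candidate mask as input; marker selection via distance-map peaks and the minimum peak distance are assumptions, and the thresholding and template-matching stages of the pipeline are not shown.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_nodule_from_attachments(binary_roi):
    """Distance-transform + watershed step: the distance map peaks inside the
    nodule core, and watershed lines cut attached structures such as vessels."""
    binary = binary_roi.astype(bool)
    distance = ndi.distance_transform_edt(binary)
    peaks = peak_local_max(distance, labels=binary.astype(int), min_distance=3)
    markers = np.zeros(distance.shape, dtype=int)
    for i, p in enumerate(peaks, start=1):
        markers[tuple(p)] = i
    return watershed(-distance, markers, mask=binary)
```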