Stroke is a major cause of death and permanent disability. Magnetic resonance imaging (MRI) is often the modality of choice for evaluating lesion extent, the affected brain area, and the classification between hemorrhagic and ischemic stroke, which are critical for treatment and rehabilitation decisions. Manual MRI evaluation and lesion delineation are time-consuming and subject to inter- and intra-observer variation. Although promising, convolutional neural network (CNN) approaches face challenges in dealing with small lesions, irregular morphology, and idiosyncratic appearance. The best published segmentation approaches based on the ATLAS dataset and CNNs are either fully 3D or use up to eight anatomically specific 2D CNNs, incurring high computational cost and limited deployment feasibility. We developed a more straightforward segmentation method using only three CNNs. First, an Attention UNet and an Attention ResUNet are trained only on a lesion-wise balanced sample of slice patches. By working at the patch level, these networks learn to segment the texture and shape irregularities of the lesions. Then, another Attention ResUNet takes the output patches of the previous two CNNs, reassembled and stacked along with the original slice. With the broader slice context available, this second step combines the first-step segmentations, handling disagreements while keeping the segmentation coherent at the slice level. We validated the method on 239 exams from the multicenter ATLAS dataset using 5-fold cross-validation. In the test set of 45,171 slices, we observed a mean slice Dice coefficient of 0.8070±0.05, a state-of-the-art result on this dataset, showing generalization capacity across different centers and acquisition conditions.
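A minimal sketch of the second-stage input construction described above, assuming non-overlapping patches laid out on a regular grid; the function and argument names (`stack_second_stage_input`, `patch_size`, `grid`) are illustrative, not the authors' implementation:

```python
import numpy as np

def stack_second_stage_input(slice_img, patches_a, patches_b, patch_size, grid):
    """Reassemble the patch-level predictions of the two first-stage CNNs and
    stack them with the original slice as a 3-channel input for the
    second-stage Attention ResUNet (illustrative sketch; the non-overlapping
    grid layout is an assumption)."""
    h, w = slice_img.shape
    seg_a = np.zeros((h, w), dtype=np.float32)  # Attention UNet predictions
    seg_b = np.zeros((h, w), dtype=np.float32)  # Attention ResUNet predictions
    idx = 0
    for r in range(grid[0]):
        for c in range(grid[1]):
            y, x = r * patch_size, c * patch_size
            seg_a[y:y + patch_size, x:x + patch_size] = patches_a[idx]
            seg_b[y:y + patch_size, x:x + patch_size] = patches_b[idx]
            idx += 1
    # Channels-last tensor: (original slice, first CNN output, second CNN output)
    return np.stack([slice_img, seg_a, seg_b], axis=-1)
```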
Chest X-ray (CXR) images have high potential for the monitoring and examination of various lung diseases, including COVID-19. However, screening a large number of patients with a diagnostic hypothesis of COVID-19 poses a major challenge for physicians. In this paper, we propose a deep learning-based approach that can simultaneously suggest a diagnosis and localize lung opacity areas in CXR images. We used a public dataset containing 5,639 posteroanterior CXR images. Due to class imbalance (69.2% of the images are COVID-19 positive), data augmentation was applied only to images belonging to the normal category. We split the dataset into train and test sets at a 90:10 ratio. For the classification task, we applied 5-fold cross-validation to the training set, using the EfficientNetB4 architecture. For the detection task, we used a YOLOv5 model pre-trained on the COCO dataset. Evaluations were based on accuracy and area under the ROC curve (AUROC) for the classification task and mean average precision (mAP) for the detection task. The classification task achieved an average accuracy of 0.83 ± 0.01 (95% CI [0.81, 0.84]) and AUROC of 0.88 ± 0.02 (95% CI [0.85, 0.89]) over the 5 folds on the test dataset. The best result was reached in fold 3 (accuracy of 0.84 and AUROC of 0.89). Images classified as positive were then evaluated by the opacity detector, which achieved a mAP of 59.51%. Thus, the good performance and rapid diagnostic prediction make the system a promising means to assist radiologists in decision-making tasks.
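A minimal sketch of the classification branch, assuming a standard Keras EfficientNetB4 backbone with a binary sigmoid head; the exact head, preprocessing, and training hyperparameters used by the authors are not reproduced here:

```python
import tensorflow as tf

def build_covid_classifier(input_shape=(380, 380, 3)):
    """EfficientNetB4 backbone with global average pooling and a single
    sigmoid output for COVID-19 positive vs. normal (illustrative sketch)."""
    base = tf.keras.applications.EfficientNetB4(
        include_top=False, weights="imagenet",
        input_shape=input_shape, pooling="avg")
    out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auroc")])
    return model
```

In the pipeline described above, images flagged as positive by a classifier like this would then be passed to the YOLOv5 opacity detector for localization.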
Dual-energy subtraction (DES) is a technique that separates soft tissue from bones in a chest radiograph (CR). As DES requires specialized equipment, we propose an automatic method based on convolutional neural networks (CNNs) to generate virtual soft-tissue images. A dataset comprising 35 pairs of CRs and their soft-tissue versions, split into training (28 image pairs) and testing (7 image pairs) sets, was used with data augmentation. We tested two types of input images: the cropped image of the lung region and the segmented lung image. Rib suppression was treated as a local problem, so each image was divided into 784 patches. The U-Net architecture was used to perform bone suppression. We tested two loss functions: the mean squared error (Lmse) and Lsm, which combines Lmse with the structural similarity index measure (SSIM). Because the patches overlap, it was necessary to interpolate the gray levels of the image reconstructed from the predicted patches. Evaluations were based on SSIM and root mean square error (RMSE) over the reconstructed lung area. The combination that presented the best results used the Lsm loss and the segmented lung image as input to the U-Net (SSIM of 0.858 and RMSE of 0.033). We observed that the U-Net performs poorly when trained on cropped images containing all the information from the chest cavity, and that a loss using local information can improve rib bone suppression in CRs. Our results suggest that it is possible to accurately remove the rib bones from CRs using a CNN and a patch-based approach.
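A sketch of the combined Lsm loss, assuming an equal-weight combination of the MSE term and an SSIM-based term; the weighting factor `alpha` is a placeholder, as the authors' exact formulation is not restated in this abstract:

```python
import tensorflow as tf

def lsm_loss(y_true, y_pred, alpha=0.5):
    """Combine the per-patch mean squared error with a structural term
    (1 - SSIM); `alpha` balances the two and is a placeholder value."""
    lmse = tf.reduce_mean(tf.square(y_true - y_pred))
    ssim = tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=1.0))
    return alpha * lmse + (1.0 - alpha) * (1.0 - ssim)
```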
Cardiomegaly is a medical condition that leads to an increase in cardiac size. It can be assessed manually using the cardiothoracic ratio from chest X-rays (CXRs). However, as that task can be challenging in such limited examinations, we propose fully automated cardiomegaly detection in CXRs. For this, we first trained convolutional networks (ConvNets) to classify a CXR as positive or negative for cardiomegaly and then evaluated the generalization potential of the trained ConvNets on independent cohorts. This work used frontal CXR images from one public dataset for training/testing and another public dataset plus one private dataset to test the models' generalization externally. Training and testing were performed using images cropped with a previously developed U-Net model. Experiments were performed with five topologically different ConvNets, data augmentation techniques, and a 50-50 class-weighting strategy to improve performance and reduce the possibility of bias toward the majority class. Model performance was assessed with the receiver operating characteristic (ROC) curve. DenseNet yielded the highest area under the curve (AUC) on the testing (0.818) and external validation (0.809) datasets. Moreover, DenseNet obtained the highest sensitivity overall, yielding up to 0.971 on the private dataset with patients from our hospital. Therefore, DenseNet had a statistically higher potential to identify cardiomegaly. The proposed models, especially those trained with a DenseNet convolutional core, automatically detected cardiomegaly with high sensitivity. To the best of our knowledge, this is the first work to design a general model for classifying deep-learning patterns of cardiomegaly in CXRs.
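A minimal sketch of the 50-50 class-weighting strategy and one possible DenseNet-based classifier, assuming a Keras DenseNet121 backbone with a sigmoid head; the actual DenseNet variant and training setup used by the authors may differ:

```python
import numpy as np
import tensorflow as tf

def balanced_class_weights(labels):
    """Weight each class inversely to its frequency so positives and negatives
    contribute equally (50-50) to the loss."""
    labels = np.asarray(labels)
    n, pos = len(labels), labels.sum()
    neg = n - pos
    return {0: n / (2.0 * neg), 1: n / (2.0 * pos)}

def build_cardiomegaly_classifier(input_shape=(224, 224, 3)):
    """DenseNet121 backbone with a sigmoid head for cardiomegaly vs. normal
    (illustrative sketch; the authors' exact DenseNet variant is an assumption)."""
    base = tf.keras.applications.DenseNet121(
        include_top=False, weights="imagenet",
        input_shape=input_shape, pooling="avg")
    out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model

# Usage sketch: model.fit(x_train, y_train, class_weight=balanced_class_weights(y_train))
```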