Deep learning approaches have been used extensively for medical image segmentation tasks. Training deep networks for segmentation, however, typically requires manually delineated examples that provide a ground truth for optimizing the network. In this work, we present a neural network architecture that segments vascular structures in retinal OCTA images without the need for direct supervision. Instead, we propose a variational intensity cross-channel encoder that finds vessel masks by exploiting the common underlying structure shared by two OCTA images of the same region acquired on different devices. Experimental results demonstrate significant improvement over three commonly used existing methods.
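As a rough illustration of the idea only (not the published architecture; the class name, layer sizes, and loss weight below are placeholders), a variational encoder can map one device's OCTA image to a latent vessel-mask map from which the paired image from the other device is reconstructed:

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossChannelVAE(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        # Encoder maps an OCTA image from device A to a vessel-mask-like latent map.
        self.enc = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.mu_head = nn.Conv2d(ch, 1, 1)      # latent mean (soft vessel-mask logits)
        self.logvar_head = nn.Conv2d(ch, 1, 1)  # latent log-variance
        # Decoder reconstructs the paired device-B image from the latent mask alone.
        self.dec = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, img_a):
        h = self.enc(img_a)
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        recon_b = self.dec(torch.sigmoid(z))  # sigmoid turns the latent into a soft mask
        return recon_b, mu, logvar

def vae_loss(recon_b, img_b, mu, logvar, beta=1e-3):
    # Reconstruct the second-device image; the KL term regularizes the latent mask.
    rec = F.mse_loss(recon_b, img_b)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl

Because the decoder sees only the latent mask, the shared vascular structure is what must survive the cross-device reconstruction.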
Monitoring retinal thickness of persons with multiple sclerosis (MS) provides important biomarkers for disease progression. However, changes in retinal thickness can be small and concealed by noise in the acquired data. Consistent longitudinal retinal layer segmentation methods for optical coherence tomography (OCT) images are crucial for identifying the real longitudinal retinal changes of individuals with MS. In this paper, we propose an iterative registration and deep learning based segmentation method for longitudinal 3D OCT scans. Since 3D OCT scans are usually anisotropic with large slice separation, we extract B-scan features using 2D deep networks and capture inter-B-scan context with a convolutional long short-term memory (LSTM) network. To incorporate longitudinal information, we perform fundus registration and interpolate the smooth retinal surfaces of the previous visit to use as a prior on the current visit.
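A minimal sketch of the inter-B-scan modeling, assuming a PyTorch implementation with placeholder channel counts and a hand-rolled ConvLSTM cell (the registration and longitudinal-prior steps are omitted):

import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, 3, padding=1)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

class BScanLSTMSegmenter(nn.Module):
    def __init__(self, feat_ch=32, n_classes=10):
        super().__init__()
        self.feat_ch = feat_ch
        # 2D feature extractor applied independently to each B-scan.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU())
        self.lstm = ConvLSTMCell(feat_ch, feat_ch)
        self.head = nn.Conv2d(feat_ch, n_classes, 1)  # per-pixel layer labels

    def forward(self, volume):  # volume: (batch, n_bscans, H, W)
        b, s, h, w = volume.shape
        state = (volume.new_zeros(b, self.feat_ch, h, w),
                 volume.new_zeros(b, self.feat_ch, h, w))
        logits = []
        for k in range(s):  # carry inter-B-scan context along the slice direction
            feat = self.backbone(volume[:, k:k + 1])
            hidden, state = self.lstm(feat, state)
            logits.append(self.head(hidden))
        return torch.stack(logits, dim=1)  # (batch, n_bscans, n_classes, H, W)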
Deep networks provide excellent image segmentation results given copious amounts of supervised training data (source data). However, when a trained network is applied to data acquired at a different clinical center or on a different imaging device (target data), a significant drop in performance can occur due to the domain shift between the test data and the network training data. To solve this problem, unsupervised domain adaptation methods retrain the model with labeled source data and unlabeled target data. In practice, retraining the model is time-consuming and the labeled source data may not be available to those deploying the model. In this paper, we propose a straightforward unsupervised domain adaptation method for multi-device retinal OCT image segmentation that requires neither labeled source data nor retraining of the segmentation model. The segmentation network is trained with labeled Spectralis images and tested on Cirrus images. The core idea is to use a domain adaptor to convert target-domain images (Cirrus) to a domain that can be segmented well by the already trained segmentation network. Unlabeled Spectralis and Cirrus images are used to train this domain adaptor. The domain adaptation block is placed before the trained network, and a discriminator is used to differentiate the segmentation results from Spectralis and Cirrus. The domain adaptation portion of our network is fully unsupervised and does not change the previously trained segmentation network.
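The training loop can be sketched roughly as follows; the adaptor, discriminator, and stand-in segmenter below are hypothetical placeholders, with the pretrained segmentation network kept frozen throughout:

import torch
import torch.nn as nn

adaptor = nn.Sequential(                 # maps Cirrus images toward the Spectralis domain
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1))
discriminator = nn.Sequential(           # real/fake on softmax segmentation maps
    nn.Conv2d(10, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1))
segmenter = nn.Sequential(               # stand-in for the frozen, Spectralis-trained segmenter
    nn.Conv2d(1, 10, 3, padding=1))
for p in segmenter.parameters():
    p.requires_grad_(False)

bce = nn.BCEWithLogitsLoss()
opt_a = torch.optim.Adam(adaptor.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def train_step(spectralis_img, cirrus_img):
    with torch.no_grad():
        seg_src = segmenter(spectralis_img).softmax(dim=1)   # "real" segmentations
    seg_tgt = segmenter(adaptor(cirrus_img)).softmax(dim=1)  # "fake" segmentations

    # Discriminator step: tell Spectralis and adapted-Cirrus segmentations apart.
    d_real = discriminator(seg_src)
    d_fake = discriminator(seg_tgt.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Adaptor step: fool the discriminator so adapted Cirrus segments like Spectralis.
    d_adapt = discriminator(seg_tgt)
    a_loss = bce(d_adapt, torch.ones_like(d_adapt))
    opt_a.zero_grad(); a_loss.backward(); opt_a.step()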
KEYWORDS: Image segmentation, Computed tomography, Pathology, Data modeling, Medical imaging, Brain, Magnetic resonance imaging, Neural networks, Statistical modeling, Head
Medical images are often used to detect and characterize pathology and disease; however, automatically identifying and segmenting pathology in medical images is challenging because the appearance of pathology across diseases varies widely. To address this challenge, we propose a Bayesian deep learning method that learns to translate healthy computed tomography images to magnetic resonance images and simultaneously calculates voxel-wise uncertainty. Since high uncertainty occurs in pathological regions of the image, this uncertainty can be used for unsupervised anomaly segmentation. We show encouraging experimental results on an unsupervised anomaly segmentation task by combining two types of uncertainty into a novel quantity we call scibilic uncertainty.
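As a hedged sketch of how two uncertainties might be combined, assuming an MC-dropout translation network that also predicts a per-voxel variance (the exact definition of scibilic uncertainty is given in the paper; the ratio below is only one plausible reading):

import torch
import torch.nn as nn

class TranslationNet(nn.Module):
    """Toy CT-to-MR translator predicting a mean and a log-variance per voxel."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(), nn.Dropout3d(0.2),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(), nn.Dropout3d(0.2))
        self.mean_head = nn.Conv3d(ch, 1, 1)
        self.logvar_head = nn.Conv3d(ch, 1, 1)   # aleatoric (data) uncertainty

    def forward(self, ct):
        h = self.body(ct)
        return self.mean_head(h), self.logvar_head(h)

@torch.no_grad()
def uncertainty_maps(net, ct, n_samples=20):
    net.train()  # keep dropout active at test time (MC dropout)
    means, alea = [], []
    for _ in range(n_samples):
        mu, logvar = net(ct)
        means.append(mu)
        alea.append(logvar.exp())
    epistemic = torch.stack(means).var(dim=0)      # model uncertainty across samples
    aleatoric = torch.stack(alea).mean(dim=0)      # average predicted data noise
    scibilic = epistemic / (aleatoric + 1e-8)      # high in pathology -> anomaly mask
    return epistemic, aleatoric, scibilic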
To better understand cerebellum-related diseases and functional mapping of the cerebellum, quantitative measurements of cerebellar regions in magnetic resonance (MR) images have been studied in both clinical and neurological studies. Such studies have revealed that different spinocerebellar ataxia (SCA) subtypes have different patterns of cerebellar atrophy and that atrophy of different cerebellar regions is correlated with specific functional losses. Previous methods to automatically parcellate the cerebellum, that is, to identify its sub-regions, have been largely based on multi-atlas segmentation. Recently, deep convolutional neural network (CNN) algorithms have been shown to have high speed and accuracy in cerebral sub-cortical structure segmentation from MR images. In this work, two three-dimensional CNNs were used to parcellate the cerebellum into 28 regions. First, a locating network was used to predict a bounding box around the cerebellum. Second, a parcellating network was used to parcellate the cerebellum using the entire region within the bounding box. A leave-one-out cross-validation of fifteen manually delineated images was performed. Compared with a previously reported state-of-the-art algorithm, the proposed algorithm shows superior Dice coefficients. The proposed algorithm was further applied to three MR images of a healthy subject and subjects with SCA6 and SCA8, respectively. A Singularity container of this algorithm is publicly available.
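The two-stage pipeline could look roughly like the following sketch, where layer choices, the bounding-box parameterization, and the class count (28 regions plus background) are assumptions rather than the published configuration:

import torch
import torch.nn as nn

class LocatingNet(nn.Module):
    """Regresses a bounding box (x, y, z, w, h, d) around the cerebellum."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))
        self.box = nn.Linear(16, 6)

    def forward(self, mri):
        return self.box(self.features(mri).flatten(1))

class ParcellatingNet(nn.Module):
    """Labels each voxel of the cropped cerebellum with one of 29 classes."""
    def __init__(self, n_classes=29):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, n_classes, 1))

    def forward(self, crop):
        return self.net(crop)

def parcellate(mri, locator, parceller):
    # Stage 1: find the cerebellum; stage 2: parcellate inside the box.
    # Clipping of the box to the image bounds is omitted for brevity.
    x, y, z, w, h, d = locator(mri)[0].round().long().clamp(min=1).tolist()
    crop = mri[:, :, x:x + w, y:y + h, z:z + d]
    return parceller(crop).argmax(dim=1)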
Purpose: OCT offers high in-plane micrometer resolution, enabling studies of neurodegenerative and ocular-disease mechanisms via low-cost imaging of the retina. An important component of such studies is inter-scanner deformable image registration. OCT image quality, however, is suboptimal, with poor signal-to-noise ratio and through-plane resolution, and the geometry of OCT is improperly defined. We developed a diffeomorphic deformable registration method incorporating constraints that accommodate the improper geometry and a decentralized modality-insensitive neighborhood descriptor (D-MIND) robust against degradation of OCT image quality and inter-scanner variability. Method: The method, called D-MIND Demons, estimates diffeomorphisms using D-MINDs under constraints on the direction of velocity fields in a MIND-Demons framework. The descriptiveness of D-MINDs with and without denoising was ranked against four other shape/texture-based descriptors. The performance of D-MIND Demons and its variants incorporating other descriptors was compared for cross-scanner, intra- and inter-subject deformable registration using clinical retinal OCT data. Result: D-MINDs outperformed the other descriptors, with a difference in mutual descriptiveness between high-contrast and homogeneous regions > 0.2. Among the Demons variants, D-MIND Demons was computationally efficient, demonstrating robustness against OCT image degradation (noise, speckle, intensity non-uniformity, and poor through-plane resolution) and consistent registration accuracy [(4±4 μm) and (4±6 μm) in cross-scanner intra- and inter-subject registration] regardless of denoising. Conclusions: A promising method for cross-scanner, intra- and inter-subject OCT image registration has been developed for ophthalmological and neurological studies of retinal structures. The approach could assist image segmentation, evaluation of longitudinal disease progression, and patient population analysis, which, in turn, facilitate diagnosis and patient-specific treatment.
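For intuition about the descriptor side of the method, a generic MIND-style self-similarity descriptor can be computed as below; the decentralized D-MIND variant of the paper differs in detail, and the offsets, patch filter, and normalization here are placeholder choices:

import torch
import torch.nn.functional as F

def mind_descriptor(img, sigma=0.5):
    """img: (1, 1, H, W) tensor; returns one descriptor channel per neighborhood
    offset, built from box-filtered patch distances between shifted copies."""
    offsets = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # 4-neighborhood offsets
    blur = lambda x: F.avg_pool2d(x, 3, stride=1, padding=1)  # approximate patch SSD
    dists = []
    for dy, dx in offsets:
        shifted = torch.roll(img, shifts=(dy, dx), dims=(2, 3))
        dists.append(blur((img - shifted) ** 2))
    dists = torch.cat(dists, dim=1)
    variance = dists.mean(dim=1, keepdim=True).clamp(min=1e-6)  # local noise estimate
    desc = torch.exp(-dists / (sigma * variance))
    return desc / desc.amax(dim=1, keepdim=True).clamp(min=1e-6)  # per-pixel normalization

Because the descriptor encodes local self-similarity rather than raw intensity, it is relatively insensitive to the scanner-dependent contrast and noise characteristics that the registration must tolerate.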