Multiple sclerosis (MS) is a disease with heterogeneous evolution among patients. Quantitative analysis of longitudinal Magnetic Resonance Images (MRI) provides a spatial analysis of brain tissues that may lead to the discovery of biomarkers of disease evolution. A better understanding of the disease will help uncover pathogenic mechanisms, allowing for patient-adapted therapeutic strategies. To characterize MS lesions, we propose a novel paradigm for detecting white matter lesions based on a statistical framework. It aims at studying the benefits of using multi-channel MRI to detect statistically significant differences between each individual MS patient and a database of control subjects. This framework consists of two components. First, intensity standardization is conducted to minimize inter-subject intensity differences arising from variability of the acquisition process and of the scanners. The intensity normalization relies on parameters obtained by a robust Gaussian Mixture Model (GMM) estimation that is not affected by the presence of MS lesions. The second component compares the multi-channel MRI of each MS patient with an atlas built from the control subjects, thereby allowing us to look for differences in normal-appearing white matter as well as in and around the lesions of each patient. Experimental results demonstrate that our technique accurately detects significant differences in lesions, consequently improving the results of MS lesion detection.
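The intensity standardization step described above can be illustrated with a minimal sketch: a three-class (CSF/GM/WM) Gaussian mixture is fitted to each subject's brain intensities and a piecewise-linear mapping aligns the estimated class means with those of a reference built from the controls. Function names and the plain (non-robust) GMM fit are assumptions made for illustration; the paper's estimator is additionally made robust to lesion voxels.

```python
# Sketch of GMM-based intensity standardization (illustrative, not the
# authors' implementation; the lesion-robust estimation is omitted).
import numpy as np
from sklearn.mixture import GaussianMixture

def tissue_means(intensities, n_classes=3):
    """Fit a GMM to brain intensities and return the sorted class means."""
    gmm = GaussianMixture(n_components=n_classes, random_state=0)
    gmm.fit(intensities.reshape(-1, 1))
    return np.sort(gmm.means_.ravel())

def standardize(image, brain_mask, ref_means):
    """Piecewise-linear mapping of the subject's class means onto the
    reference means; intensities beyond the extreme means are clipped to
    the end segments (a simplification acceptable for a sketch)."""
    subj_means = tissue_means(image[brain_mask])
    out = image.astype(float).copy()
    out[brain_mask] = np.interp(image[brain_mask], subj_means, ref_means)
    return out
```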
Multiple sclerosis (MS) is a disease with heterogeneous evolution among patients. Some classifications have
been carried out according to either the clinical course or the immunopathological profiles. Epidemiological and
imaging data show that MS is a two-phase neurodegenerative inflammatory disease: the early stage is dominated
by focal inflammation of the white matter (WM), and the later stage by diffuse lesions of the grey matter and
spinal cord. A Clinically Isolated Syndrome (CIS) is a first neurological episode caused by
inflammation/demyelination in the central nervous system which may lead to MS. Few studies have been carried
out so far on this initial stage. A better understanding of the disease at its onset will help uncover pathogenic
mechanisms and allow suitable therapies to be applied at an early stage.
We propose a new data processing framework able to provide an early characterization of CIS patients
according to lesion patterns, and more specifically according to the nature of the inflammatory patterns of these
lesions. The method is based on a two-layer classification. First, the spatio-temporal lesion patterns are
classified using a tensor-like representation. The discovered lesion patterns are then used to identify groups of
patients and to assess their correlation with the 15-month follow-up total lesion load (TLL), which is so far the only
image-based measure that can potentially predict the future evolution of the pathology.
We expect the proposed framework to infer new prognostic measures from the earliest imaging signs of
MS, since it can provide a classification of different lesion types across patients.
Shape, scale, orientation and position, the physical features associated with white matter DTI tracts, can, either individually or in combination, be used to define feature spaces. Recent work by Mani et al.1 describes a Riemannian framework in which these joint feature spaces are considered. In this paper, we use the tools and metrics defined within this mathematical framework to study morphological changes due to disease progression. We look at sections of the anterior corpus callosum, which describes a deep arc along the mid-sagittal plane, and show how multiple sclerosis and normal control populations have different joint shape-orientation signatures.
Two popular segmentation methods used today are atlas-based and graph-cut-based segmentation techniques. The
atlas-based method deforms a manually segmented image onto a target image, resulting in an automatic segmentation.
The graph-cut method treats image segmentation as a max-flow problem. A specialized form of this algorithm, called
the spectral graph cut algorithm, was developed by Lecoeur et al [1]. The goal of this paper is to combine both
methods, creating a more stable atlas-based segmentation algorithm that is less sensitive to the initial manual
segmentation. The registration algorithm is used to automate and initialize the spectral graph cut algorithm and to
add the needed spatial information, while the spectral graph cut algorithm is used to increase the robustness of the
atlas-based method. To assess the sensitivity of the algorithms, the initial manual segmentation of the atlas was both
dilated and eroded by 2 mm and the segmentation results were recomputed. Results show that the atlas-based method
segments the thalamus well, with an average Dice Similarity Coefficient (DSC) of 0.87. The spectral graph cut
method shows similar results, with an average DSC of 0.88 and no statistical difference between the two methods.
The atlas-based method's DSC, however, dropped to 0.76 and 0.67 under dilation and erosion respectively, while the
combined method retained DSC values of 0.81 and 0.74, with a statistically significant difference between the two
methods.
KEYWORDS: Tissues, Magnetic resonance imaging, Denoising, Signal to noise ratio, Medical imaging, Image processing, Current controlled current source, Blood, Spatial resolution, Blood circulation
Arterial spin labeling (ASL) is a noninvasive MRI method that uses magnetically labeled blood to measure cerebral perfusion.
The spatial resolution of ASL is relatively low, and as a consequence perfusion from different tissue types is mixed in each pixel.
The average ratio of gray matter (GM) to white matter (WM) blood flow is 3.2 to 1. Disregarding partial volume effects (PVE) can thus cause
serious errors in perfusion quantification. PVE also complicates spatial filtering of ASL images
since, apart from noise, there is spatial signal variation due to tissue partial volume.
Recently, an algorithm for correcting PVE has been published by Asllani et al. It represents the measured magnetization as a sum of the different
tissue magnetizations weighted by their fractional volumes in a pixel.
With knowledge of the partial volumes obtained from a high-resolution MRI image, it is possible to separate the individual tissue contributions by linear regression on a neighborhood
of each pixel.
We propose an extension of this algorithm that minimizes the
total variation of the tissue-specific magnetization. This makes the algorithm more flexible with respect to local changes in perfusion. We show that this
method can be used to denoise ASL images without mixing the WM and GM signals.
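As background, the Asllani-style local regression that this work extends can be sketched in a few lines: within a small window around each pixel, the ASL difference signal is regressed on the GM/WM/CSF fractional volumes from high-resolution MRI. The window size and names are assumptions, and the total-variation extension proposed in the abstract is not included.

```python
# Local-regression partial volume correction (baseline sketch only).
import numpy as np

def pve_regression(delta_m, pv_maps, half=2):
    """delta_m: 2D ASL difference image; pv_maps: (H, W, 3) GM/WM/CSF fractions.
    Returns per-pixel tissue-specific magnetizations of shape (H, W, 3)."""
    h, w = delta_m.shape
    out = np.zeros((h, w, 3))
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - half), min(h, i + half + 1)
            j0, j1 = max(0, j - half), min(w, j + half + 1)
            A = pv_maps[i0:i1, j0:j1].reshape(-1, 3)      # fractional volumes
            y = delta_m[i0:i1, j0:j1].ravel()             # measured signal
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # tissue magnetizations
            out[i, j] = coef
    return out
```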
KEYWORDS: Signal to noise ratio, Blood, Gaussian filters, Magnetic resonance imaging, Neuroimaging, Brain, Image quality, Medical imaging, Image processing, Current controlled current source
Arterial spin labeling (ASL) is an MRI method for imaging brain perfusion by magnetically
labeling blood in brain feeding arteries. The perfusion is obtained from the
difference between images with and without prior labeling.
Image noise is one of the main problems of ASL as the difference is around
0.5-2% of the image magnitude. Usually, 20-40 pairs of images need to be
acquired and averaged to reach a satisfactory quality.
The images are acquired shortly after the labeling to allow the labeled blood to reach the
imaged slice. A sequence of images with multiple delays is more suitable for quantification
of the cerebral blood flow as it gives more information about the blood arrival and relaxation.
Although the quantification methods are sensitive to noise, no filtering or only Gaussian filtering is
used to denoise the data in the temporal domain prior to quantification.
In this article, we propose an efficient way
to use the redundancy of information in the time sequence of each pixel
to suppress noise. For this purpose, the vectorial NL-means method is adapted to work in the temporal
domain. The proposed method is tested on simulated and real 3T MRI data. We demonstrate a clear improvement
in image quality as well as better performance compared to Gaussian filtering and to standard spatial NL-means
filtering.
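A possible reading of the temporal adaptation is sketched below: each pixel's whole time course is denoised as a weighted average of the time courses of nearby pixels, with weights given by the similarity of the temporal vectors. The search radius and bandwidth h are assumptions, and this simplified version omits the block-wise and automatic-bandwidth refinements usual in NL-means implementations.

```python
# Vectorial NL-means along the temporal axis of an ASL series (sketch).
import numpy as np

def temporal_nlmeans(series, search=3, h=0.1):
    """series: (H, W, T) ASL time series. Returns a denoised series."""
    hgt, wid, T = series.shape
    out = np.zeros_like(series, dtype=float)
    for i in range(hgt):
        for j in range(wid):
            i0, i1 = max(0, i - search), min(hgt, i + search + 1)
            j0, j1 = max(0, j - search), min(wid, j + search + 1)
            neighbours = series[i0:i1, j0:j1].reshape(-1, T)
            d2 = np.sum((neighbours - series[i, j]) ** 2, axis=1) / T
            w = np.exp(-d2 / h**2)             # similarity of temporal vectors
            out[i, j] = w @ neighbours / w.sum()
    return out
```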
KEYWORDS: Magnetic resonance imaging, RGB color model, Image segmentation, Tissues, Medical imaging, Image processing, Current controlled current source, Neuroimaging, Brain
A new segmentation framework is presented that takes advantage of the multimodal image signature of the different brain tissues (healthy and/or pathological). This is achieved by merging three different gray-level MRI sequences into a single RGB-like MRI, hence creating a unique 3-dimensional signature for each tissue by utilising the complementary information of each MRI sequence.
Using the scale-space spectral gradient operator, we can obtain a spatial gradient robust to intensity inhomogeneity. Even though this operator is based on psycho-visual color theory, it can be applied very efficiently to the RGB-colored images. Moreover, it is not influenced by the channel assignment of each MRI sequence.
Its optimisation by the graph cuts paradigm provides a powerful and accurate tool to segment either healthy or pathological tissues in a short time (about ninety seconds on average for a brain-tissue classification).
As it is a semi-automatic method, we ran experiments to quantify the amount of seeds needed to perform a correct segmentation (Dice similarity score above 0.85). Depending on the set of MRI sequences used, this amount of seeds (expressed as a percentage of the number of voxels of the ground truth) is between 6% and 16%.
We tested this algorithm on BrainWeb data for validation purposes (healthy tissue classification and MS lesion segmentation) and also on clinical data for tumour and MS lesion detection and tissue classification.
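The fusion step is essentially a channel stacking after per-sequence intensity rescaling; a minimal sketch is given below. The choice of inputs and the min-max rescaling are assumptions, and the abstract notes that the spectral gradient is in any case insensitive to which sequence goes to which channel.

```python
# Build the RGB-like multispectral volume from three co-registered MRIs.
import numpy as np

def to_rgb(seq_r, seq_g, seq_b):
    """Stack three gray-level MRI volumes as R, G, B channels."""
    def rescale(v):
        v = v.astype(float)
        return (v - v.min()) / (v.max() - v.min() + 1e-12)
    return np.stack([rescale(seq_r), rescale(seq_g), rescale(seq_b)], axis=-1)
```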
Background: Reported error rates for initial clinical diagnosis in parkinsonian disorders can reach up to 35%. Reducing this initial error rate is an important research goal. The objective of this work is to evaluate the ability of an automated MR-based classification technique for the differential diagnosis of Parkinson's disease (PD), multiple system atrophy (MSA) and progressive supranuclear palsy (PSP).
Methods: A total of 172 subjects were included in this study: 152 healthy subjects, 10 probable PD patients and 10 age-matched patients with a diagnosis of either probable MSA or PSP. T1-weighted (T1w) MR images were acquired and subsequently corrected, scaled, resampled and aligned within a common reference space. Tissue transformation and deformation features were then automatically extracted. Classification of patients was performed using forward, stepwise linear discriminant analysis within a multidimensional transformation/deformation feature space built from the healthy subjects' data. Leave-one-out classification was used to avoid over-determination.
Findings: There was no age difference between groups. The highest accuracy (agreement with long-term clinical follow-up), 85%, was achieved using a single MR-based deformation feature.
Interpretation: These preliminary results demonstrate that a classification approach based on quantitative parameters of 3D brainstem morphology extracted automatically from T1w MRI has the potential to perform differential diagnosis of PD versus MSA/PSP with high accuracy.
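The evaluation protocol, a linear discriminant classifier scored with leave-one-out cross-validation over a deformation-feature space, can be outlined as below. The stepwise feature selection is omitted, and X, y and the function name are illustrative placeholders.

```python
# Leave-one-out evaluation of a linear discriminant classifier (sketch).
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

def loo_accuracy(X, y):
    """X: (n_patients, n_features) deformation features; y: diagnostic labels."""
    clf = LinearDiscriminantAnalysis()
    scores = cross_val_score(clf, X, y, cv=LeaveOneOut())  # one patient left out per fold
    return scores.mean()
```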
We propose to use a recently introduced optimisation method in the context of rigid registration of medical
images. This optimisation method, introduced by Powell and called NEWUOA, is compared with two other
widely used algorithms: Powell's direction set method and Nelder-Mead's downhill simplex method. This paper performs
a comparative evaluation of the performance of these algorithms in optimising different image similarity measures
for various mono- and multi-modal registration tasks. Images from the BrainWeb project are used as a gold standard
for validation purposes. The paper shows that the proposed optimisation algorithm is more robust, more
accurate and faster than the other two methods.
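For orientation, the two baseline optimizers are directly available in SciPy, and the comparison protocol reduces to minimizing a similarity cost over the six rigid-transform parameters, as in the sketch below. NEWUOA itself is not shipped with SciPy, and cost() is a placeholder for, e.g., a negated mutual information between the transformed moving image and the fixed image.

```python
# Compare Powell's direction-set and Nelder-Mead simplex optimizers on a
# rigid-registration cost (sketch; the cost function is an assumption).
import numpy as np
from scipy.optimize import minimize

def register(cost, x0=np.zeros(6)):
    """cost: callable over 6 rigid parameters (3 rotations, 3 translations)."""
    results = {m: minimize(cost, x0, method=m) for m in ("Powell", "Nelder-Mead")}
    best = min(results, key=lambda m: results[m].fun)
    return best, results[best]
```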
KEYWORDS: Wavelets, 3D image processing, Magnetic resonance imaging, 3D image restoration, Image processing, Medical imaging, Ultrasonography, Data modeling, Image restoration, Image segmentation
The multiplicity of sensors used in medical imaging leads to different types of noise. Non-informative noise can degrade the image interpretation process and the performance of automatic analysis. The method proposed in this paper compensates highly noisy image data for non-informative noise without sophisticated modeling of the noise statistics. This generic approach jointly uses a wavelet decomposition scheme and a non-isotropic Total Variation filtering of the transform coefficients. The framework benefits both from the hierarchical capabilities of the wavelet transform and from the well-posed regularization scheme of the Total Variation. The algorithm has been tested and validated on test-bed data as well as on various clinical MR and 3D ultrasound images, demonstrating the ability of the proposed method to cope with different noise models.
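One way to picture the joint scheme is sketched below: decompose the image with a wavelet transform, regularize the detail coefficients with a Total Variation filter, and reconstruct. The Daubechies wavelet, the decomposition level, and the use of scikit-image's isotropic Chambolle TV as a stand-in for the paper's non-isotropic scheme are all assumptions made for the example.

```python
# Wavelet decomposition + TV filtering of detail coefficients (sketch).
import pywt
from skimage.restoration import denoise_tv_chambolle

def wavelet_tv_denoise(image, wavelet="db4", level=3, weight=0.1):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    # TV-regularize each horizontal/vertical/diagonal detail subband.
    filtered = [tuple(denoise_tv_chambolle(band, weight=weight) for band in detail)
                for detail in details]
    return pywt.waverec2([approx] + filtered, wavelet)
```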
This paper presents a general statistical framework for modeling deformable objects. This model is intended for use in digital brain atlases. We first present a numerical modeling of brain sulci. We also present a method to characterize the high inter-individual variability of the basic cortical structures on which the description of the cerebral cortex is based. The intended applications use the numerical modeling of brain sulci to assist non-linear registration of human brains through inter-individual anatomical matching, or to better compare neuro-functional recordings performed on a series of individuals. The use of these methods is illustrated with a few examples.
Nowadays, neurosurgeons have access to 3D multimodal imaging when planning and performing surgical procedures. 3D multimodal registration algorithms are available to establish geometrical relationships between different modalities. For a given 3D point, most multimodal applications merely display a cursor on the corresponding point in the other modality. The surgeon needs tools allowing the visual fusion of these heterogeneous data not only in the same coordinate system but also in the same visual space, in order to facilitate comprehension of the data. This problem is particularly crucial when using these images in the operating room. The goal of this paper is to analyze different methods for obtaining this visual fusion between real and virtual images. We discuss the relevance of different solutions depending on (1) the type of information shared between these different modalities and (2) the hardware location of this visual fusion. Two new approaches are presented to illustrate our purposes: a neuro-navigational microscope which provides an augmented reality feature through the microscope, and a new technique for matching 2D real images with 3D virtual data sets. We introduce this second technique through the mapping of a 2D intra-operative photograph of the patient's anatomy onto 3D MRI images. Unlike other solutions which display virtual images in the real world, our method involves ray-traced texture mapping in order to display real images in a computed world.
All retrospective image registration methods have attached to them some intrinsic estimate of registration error. However, this estimate of accuracy may not always be a good indicator of the distance between actual and estimated positions of targets within the cranial cavity. This paper describes a project whose principal goal is to use a prospective method based on fiducial markers as a 'gold standard' to perform an objective, blinded evaluation of the accuracy of several retrospective image-to-image registration techniques. Image volumes of three modalities -- CT, MR, and PET -- were taken of patients undergoing neurosurgery at Vanderbilt University Medical Center. These volumes had all traces of the fiducial markers removed and were provided to project collaborators outside Vanderbilt, who then performed retrospective registrations on the volumes, calculating transformations from CT to MR and/or from PET to MR, and communicated their transformations to Vanderbilt, where the accuracy of each registration was evaluated. In this evaluation the accuracy is measured at multiple 'regions of interest', i.e. areas in the brain which would commonly be areas of neurological interest. A region is defined in the MR image and its centroid C is determined. Then the prospective registration is used to obtain the corresponding point C' in CT or PET. To this point the retrospective registration is then applied, producing C'' in MR. Statistics are gathered on the target registration error (TRE), which is the disparity between the original point C and its corresponding point C''. A second goal of the project is to evaluate the importance of correcting geometrical distortion in MR images, by comparing the retrospective TRE in the rectified images, i.e., those which have had the distortion correction applied, with that of the same images before rectification. This paper presents preliminary results of this study along with a brief description of each registration technique and an estimate of both the preparation and execution time needed to perform the registration.
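The TRE measurement described above amounts to a round trip of each region centroid through the gold-standard and the retrospective transforms; a small sketch follows. Representing the transforms as 4x4 homogeneous matrices is an assumption made for illustration.

```python
# Target registration error for one region of interest (sketch).
import numpy as np

def apply(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p (in mm)."""
    q = T @ np.append(p, 1.0)
    return q[:3] / q[3]

def target_registration_error(c_mr, T_gold_mr_to_ct, T_retro_ct_to_mr):
    c_ct = apply(T_gold_mr_to_ct, c_mr)        # gold-standard mapping MR -> CT (or PET)
    c_mr_back = apply(T_retro_ct_to_mr, c_ct)  # retrospective mapping back to MR
    return np.linalg.norm(c_mr_back - c_mr)    # TRE in mm
```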
KEYWORDS: Brain, 3D modeling, Magnetic resonance imaging, Cerebral cortex, Image segmentation, Visualization, Modeling, 3D visualizations, Data modeling, Feature extraction
We propose a method for the segmentation of cerebral sulci, representing them by surfaces. This method is based on the computation of the differential characteristics of MRI data. The computation of curvature information, using the Lvv operator, allows one to differentiate sulcal and gyral regions, resulting in a global detection of the cortical folding pattern. The analytical description of a particular sulcus is obtained by initializing an active model on its trace upon the brain surface. The result is a surface representing the buried part of the sulcus. The 'snake-spline' model allows one to define an algorithm which is simpler and more robust than the classical snake. This method of segmentation yields good results for the 3D segmentation and visualization of cortical sulci.
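The Lvv operator is the second derivative of the (Gaussian-smoothed) image in the local gradient direction, and its sign separates sulcal from gyral regions. A possible 2D sketch is given below; the smoothing scale sigma is an assumption, and the method itself works on 3D MRI data.

```python
# Lvv: second derivative along the gradient direction (2D sketch).
import numpy as np
from scipy.ndimage import gaussian_filter

def lvv(image, sigma=2.0):
    L = image.astype(float)
    Lx  = gaussian_filter(L, sigma, order=(0, 1))
    Ly  = gaussian_filter(L, sigma, order=(1, 0))
    Lxx = gaussian_filter(L, sigma, order=(0, 2))
    Lyy = gaussian_filter(L, sigma, order=(2, 0))
    Lxy = gaussian_filter(L, sigma, order=(1, 1))
    grad2 = Lx**2 + Ly**2
    return (Lx**2 * Lxx + 2 * Lx * Ly * Lxy + Ly**2 * Lyy) / (grad2 + 1e-12)
```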
The application of image matching to the problem of localizing structural anatomy in images of the human brain forms the specific aim of our work. The interpretation of such images is a difficult task for human observers because of the many ways in which the identity of a given structure can be obscured. Our approach is based on the assumption that a common topology underlies the anatomy of normal individuals. To the degree that this assumption holds, the localization problem can be solved by determining the mapping from the anatomy of a given individual to some referential atlas of cerebral anatomy. Previous such approaches have in many cases relied on a physical interpretation of this mapping. In this paper, we examine a more general Bayesian formulation of the image matching problem and demonstrate the approach on two-dimensional magnetic resonance images.
This paper reports first results obtained in a project aimed at developing a computerized system to manage knowledge about brain anatomy. The emphasis is put on the design of a knowledge base which includes a symbolic model of cerebral anatomical structures (grey nuclei, cortical structures such as gyri and sulci, ventricles, vessels, etc.) and hypermedia facilities allowing information associated with the objects (texts, drawings, images) to be retrieved and displayed. Atlas plates digitized from a stereotactic atlas are also used to provide a natural and effective means of communication between the user and the system.
The efficacy of using intensity edges, curvature of iso-intensity contours, and tissue-classified data for image matching is examined. The image matching problem is formulated in such a way that the different features are handled uniformly, allowing the same code to be used in each instance. The results using both simulated and real brain images indicate that each feature effected an improvement in the correspondence after matching with it.
KEYWORDS: Image registration, Magnetic resonance imaging, Angiography, Data fusion, Medical imaging, Magnetoencephalography, Data storage, Image processing, Computed tomography, Data acquisition
A computer software package named BDREC was designed and implemented to store and retrieve registration data and to provide a set of registration tools. The aim is to facilitate the development of multimodal applications by managing all geometrical issues.
KEYWORDS: Image processing, Data fusion, Image fusion, Brain, Magnetic resonance imaging, Computed tomography, Medical imaging, 3D image processing, Data modeling, Image segmentation
Among the studies concerning the segmentation and identification of anatomical structures from medical images, one of the major problems is the fusion of heterogeneous data for the recognition of these structures. In this domain, the fusion of inter-patient data, for instance for the constitution of anatomical models, is particularly critical, especially with regard to the identification of complex cerebral structures like the cortical gyri. The goal of this work is to find anatomical markers which can be used to characterize specific regions in brain images using either CT or MR images. We have focused this study on the definition of a geometrical operator based on the detection of local curvature extrema. The main issues addressed by this work concern the fusion of multimodal data from one patient (e.g. between CT and MRI) and, moreover, the fusion of inter-patient data as a first step toward the modelling of brain morphological deformations. Examples are shown on 2D MR and CT brain images.
The new magnetic resonance imaging (MRI) systems are able to perform a brain scan with fairly good three-dimensional resolution. In order to allow the physician, and especially the neuroanatomist, to deal with the prime information borne by the images, the relevant data have to be enhanced with regard to the medical objective. The aim of the work presented in this paper is to recognize and label head structures from MR images. This is done by computing probabilities for a pixel to belong to pre-specified head structures (i.e., skin, bone, CSF, ventricular system, grey and white matter, and brain). Several approaches are presented and discussed in this paper, including the computation of statistical properties such as 'Markov parameters' and 'fractal dimension.' From these statistical parameters, computed from a single MR image or a 3-D isotropic MR database, clustering and classification processes are used to produce fuzzy membership coefficients representing the probabilities for a pixel to belong to a particular structure. Improvements are proposed with regard to the expressed choices, and examples are presented.
KEYWORDS: Fuzzy logic, Magnetic resonance imaging, Brain, Image segmentation, 3D displays, Image processing, Binary data, 3D image processing, Surgery, 3D magnetic resonance imaging
The overall objective in neurosurgery is to localize and treat a target volume within the cerebral medium as well as to understand its environment. To fulfil this objective, the 3D display of multimodality information is required; among these, CT, MRI, angiography and atlases are particularly important. During the last decade, solutions have been proposed to improve the rendering of 3D CT data sets. Applied to MRI without preprocessing, these methods are not able to provide good display quality for the brain anatomy, for instance. This paper presents one year of experience in the 3D display of MRI volumes, oriented toward the preparation of neurosurgical procedures (e.g. biopsy, epilepsy surgery): the main issues concerning volume anisotropy, brain segmentation and volume rendering are explained. Emphasis is also given to the original way we propose to solve the brain segmentation problem by using automatic segmentation techniques (fuzzy masks and region valley following). The volume rendering technique is also presented and discussed (binary segmentation vs fuzzy segmentation). Finally, examples concerning the use of 3D MRI images are presented.
Francoise Fresne, G. Le Gall, Christian Barillot, Bernard Gibaud, Jean-Pierre Manens, Christine Toumoulin, Didier Lemoine, C. Chenal, Jean-Marie Scarabin
KEYWORDS: 3D displays, 3D acquisition, Radiotherapy, 3D image processing, Angiography, Head, Skin, Picture Archiving and Communication System, Computed tomography, 3D modeling
Multibeam radiation therapy is a non-invasive technique devoted to treating a lesion within the cerebral medium by focusing photon beams on the same target from a large number of entrance points. We present here a computer-assisted dosimetric planning procedure which includes: (1) an analysis module to define the target volume by using 2D and 3D displays, (2) a planning module to issue a treatment strategy including the dosimetric simulations, and (3) a treatment module setting up the parameters to drive the robotized treatment system (i.e. chair-framework, radiation unit machine). Another important feature of this system is its connection to the PACS system SIRENE installed in the University Hospital of Rennes, which makes possible the archiving and communication of the multimodal images (CT, MRI, Angiography) used by this application. The combined use of stereotactic methods and multimodality imaging ensures spatial coherence and makes the target definition and the understanding of the surrounding structures more accurate. The dosimetric planning, suited to the spatial reference (i.e. the stereotactic frame), guarantees an optimal distribution of the dose, computed by an original 3D volumetric algorithm. The robotic approach to the treatment stage consisted in designing a computer-driven chair-framework cluster to position the target volume at the isocenter of the radiation unit.
KEYWORDS: Magnetic resonance imaging, Data modeling, Image registration, 3D modeling, In vivo imaging, Picosecond phenomena, Angiography, 3D displays, 3D acquisition, Brain
The aim of this application is to interactively transfer information between CT, MRI or DSA data and a 3D stereotactic
atlas digitized on a C. Based on a 3D organization of the data, this system is devoted to assisting a neurosurgeon in surgical
planning by numerically cross-assigning information between heterogeneous data (in-vivo or atlas). All these images can be
retrieved in digital form from the PACS central archive (SIRENE PACS system).
The basic feature of this confrontation is Talairach's proportional squaring, which consists in dividing the 3D cerebral
space into independently deformable sub-parts. This 3D model is based on anatomical structures such as the AC-PC line and
its two associated vertical lines, VAC and VPC. Based on this proportional squaring, the atlas has been digitized so as to
obtain atlas plates along the three orthogonal directions of this geometrical reference (axial, coronal, sagittal).
The registration of in-vivo data to the proportional squaring is done by extracting either external framework landmarks or
anatomical reference structures (i.e. the AC and PC structures on the mid-sagittal MRI image). Geometrical
transformations and scalings are then recorded for each modality or acquisition according to the proportional squaring. These
transformations make it possible, for instance, to transfer a 3D point of an MRI examination to its 3D location within the
proportional squaring and, furthermore, to its 3D location within another data set (in-vivo or atlas). From that stage, the
application lets the neurosurgeon select any confrontation between input data (in-vivo images or atlas) and
output data (idem).
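As a hedged illustration of the proportional squaring principle, the sketch below rescales coordinates (already expressed in the AC-PC frame) piecewise-linearly along the antero-posterior axis so that the patient's posterior limit, PC, AC and anterior limit align with those of the atlas; the full Talairach grid applies the same idea along all three axes. The landmark dictionary and function name are illustrative.

```python
# Piecewise-linear proportional squaring along the antero-posterior axis (sketch).
import numpy as np

def proportional_squaring_y(y, patient_lm, atlas_lm):
    """patient_lm / atlas_lm: dicts with 'post', 'PC', 'AC', 'ant' y-coordinates,
    listed from posterior to anterior (increasing y)."""
    keys = ("post", "PC", "AC", "ant")
    xp = np.array([patient_lm[k] for k in keys])   # patient landmark positions
    fp = np.array([atlas_lm[k] for k in keys])     # corresponding atlas positions
    return np.interp(y, xp, fp)                    # linear within each sub-box
```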