Significance: The accurate correlation between optical measurements and pathology relies on precise image registration, which is often hindered by deformations in histology images. We investigate an automated multi-modal image registration method based on deep learning to align breast specimen images with corresponding histology images.
Aim: We aim to explore the effectiveness of an automated deep-learning-based image registration technique for aligning breast specimen images with histology images acquired through different modalities, addressing challenges posed by intensity variations and structural differences.
Approach: Unsupervised and supervised learning approaches, employing the VoxelMorph model, were examined using a dataset featuring manually registered images as ground truth.
Results: Evaluation metrics, including Dice scores and mutual information, demonstrate that the unsupervised model significantly outperforms the supervised (and manual) approaches, achieving superior image alignment. The findings highlight the efficacy of automated registration in enhancing the validation of optical technologies by reducing human errors associated with manual registration processes.
Conclusions: This automated registration technique offers promising potential to enhance the validation of optical technologies by minimizing human-induced errors and inconsistencies associated with manual image registration, thereby improving the accuracy of correlating optical measurements with pathology labels.
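The Dice score used to evaluate the registration above is a standard overlap metric between binary masks. A minimal sketch of its computation (the toy masks are illustrative, not data from the study):

```python
import numpy as np

def dice_score(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks (1 = perfect overlap)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Two toy 4x4 masks whose foreground rows overlap in 2 of 3 pixels each
a = np.zeros((4, 4)); a[0, :3] = 1
b = np.zeros((4, 4)); b[0, 1:4] = 1
print(round(dice_score(a, b), 3))  # 2*2/(3+3) = 0.667
```

A Dice score of 1 indicates perfectly aligned masks; values near 0 indicate no overlap, which is why it is a natural target for judging registration quality.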
Significance: During breast-conserving surgery, it is essential to evaluate the resection margins (the edges of the breast specimen) to determine whether the tumor has been removed completely. In current surgical practice, there are no methods available to aid in accurate real-time margin evaluation.
Aim: In this study, we investigated the diagnostic accuracy of diffuse reflectance spectroscopy (DRS) combined with tissue classification models in discriminating tumorous tissue from healthy tissue up to 2 mm in depth on the actual resection margin of in vivo breast tissue.
Approach: We collected an extensive dataset of DRS measurements on ex vivo and in vivo breast tissue, which we used to develop different models for tissue classification. These models were then used in vivo to evaluate the performance of DRS for tissue discrimination during breast-conserving surgery. We investigated which training strategy yielded the best-performing classification model.
Results: By training the optimum model on a combination of ex vivo and in vivo DRS data, we achieved a Matthews correlation coefficient of 0.76, a sensitivity of 96.7% (95% CI 95.6% to 98.2%), a specificity of 90.6% (95% CI 86.3% to 97.9%), and an area under the curve of 0.98.
Conclusions: DRS allows real-time margin assessment with high sensitivity and specificity during breast-conserving surgery.
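The sensitivity, specificity, and Matthews correlation coefficient reported above all derive from the binary confusion matrix. A minimal sketch with hypothetical counts (not the study's data):

```python
import math

def binary_metrics(tp: int, fp: int, tn: int, fn: int):
    """Sensitivity, specificity, and Matthews correlation coefficient (MCC)
    computed from binary confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sensitivity, specificity, mcc

# Illustrative counts only
sens, spec, mcc = binary_metrics(tp=96, fp=9, tn=91, fn=4)
print(f"sens={sens:.3f} spec={spec:.3f} mcc={mcc:.3f}")
```

Unlike accuracy, the MCC remains informative under class imbalance, which is one reason it is often preferred for margin-assessment datasets where tumor measurements are scarce.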
Significance: Accurately distinguishing tumor tissue from normal tissue is crucial for achieving complete resections during soft tissue sarcoma (STS) surgery while preserving critical structures. Incomplete tumor resections are associated with an increased risk of local recurrence and worse patient prognosis.
Aim: We evaluate the performance of diffuse reflectance spectroscopy (DRS) in distinguishing tumor tissue from healthy tissue in STSs.
Approach: DRS spectra were acquired from different tissue types at multiple locations in 20 freshly excised sarcoma specimens. A k-nearest neighbors classification model was trained to predict the tissue types of the measured locations, using binary and multiclass approaches.
Results: Tumor tissue could be distinguished from healthy tissue with a classification accuracy of 0.90, sensitivity of 0.88, and specificity of 0.93 when well-differentiated liposarcomas were included. Excluding this subtype, the classification performance increased to an accuracy of 0.93, sensitivity of 0.94, and specificity of 0.93. The developed model showed consistent performance across histological subtypes and tumor locations.
Conclusions: Automatic tissue discrimination using DRS enables real-time intra-operative guidance, contributing to more accurate STS resections.
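A k-nearest neighbors classifier of the kind described above can be sketched with scikit-learn. The synthetic "spectra" below are random stand-ins, not the study's DRS data, and the chosen k is illustrative:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, bins = 200, 100  # measurements per class, wavelength bins per spectrum
healthy = rng.normal(0.6, 0.05, (n, bins))  # synthetic "healthy" spectra
tumor = rng.normal(0.5, 0.05, (n, bins))    # synthetic "tumor" spectra
X = np.vstack([healthy, tumor])
y = np.array([0] * n + [1] * n)  # 0 = healthy, 1 = tumor

# Scale features before k-NN, since it is distance-based
clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
clf.fit(X, y)
print(clf.score(X, y))  # accuracy on the synthetic training data
```

In practice one would report cross-validated accuracy per tissue type rather than training accuracy; the pipeline object makes that a one-line change with `cross_val_score`.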
Optical technologies are widely used for tissue sensing; however, maneuvering conventional probe designs with flat-tipped fibers in narrow spaces, such as in pelvic colorectal cancer surgery, can be challenging. In this study, a compact side-firing fiber probe was developed for tissue discrimination during colorectal cancer surgery using diffuse reflectance spectroscopy. The light behavior was compared to that of flat-tipped fibers using Monte Carlo simulations, and the tissue classification performance was examined using freshly excised colorectal cancer specimens. Using the developed probe and classification algorithm, we achieved an accuracy of 0.92 for the discrimination of colorectal tumor tissue from healthy tissue.
Achieving adequate resection margins during breast-conserving surgery is crucial for minimizing the risk of tumor recurrence in patients with breast cancer but remains challenging due to the lack of intraoperative feedback. Here, we evaluated the use of hyperspectral imaging to discriminate healthy tissue from tumor tissue in lumpectomy specimens of 121 patients. A dataset of tissue slices was used to develop and evaluate three convolutional neural networks. Subsequently, these networks were fine-tuned with lumpectomy data to predict the tissue percentages on the lumpectomy resection surface. We achieved a Matthews correlation coefficient (MCC) of 0.92 on the tissue slices and a root-mean-square error (RMSE) of 9% on the lumpectomy resection surface.
Diffuse reflectance spectroscopy (DRS) has already been successfully used for tissue discrimination during colorectal cancer surgery. In clinical practice, however, tissue often consists of several layers. Therefore, a novel multi-output convolutional neural network (CNN) was designed to classify multiple layers of colorectal cancer tissue simultaneously. DRS data was acquired with an array of six fibers with different fiber distances to sample at multiple depths. After training a 2D CNN with the DRS data as input, the first, second, and third tissue layers could be classified with mean accuracies of 0.90, 0.71, and 0.62, respectively.
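A multi-output CNN of the kind described above can be sketched in PyTorch. The architecture below is a minimal illustration assuming inputs of shape (batch, 1, 6 fibers, n wavelengths) and one classification head per tissue layer; layer sizes and class counts are assumptions, not the paper's design:

```python
import torch
import torch.nn as nn

class MultiLayerCNN(nn.Module):
    """Shared 2D-convolutional backbone with three output heads,
    one per tissue layer to classify."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(3, 5), padding=(1, 2)), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 8)), nn.Flatten(),
        )
        # One independent linear head per tissue layer
        self.heads = nn.ModuleList([nn.Linear(8 * 8, n_classes) for _ in range(3)])

    def forward(self, x):
        z = self.backbone(x)
        return [head(z) for head in self.heads]

model = MultiLayerCNN()
x = torch.randn(4, 1, 6, 100)  # batch of 4 multi-fiber DRS "measurements"
outputs = model(x)
print([tuple(o.shape) for o in outputs])  # three (4, 3) logit tensors
```

Training such a model typically sums a cross-entropy loss over the three heads, so the shared backbone learns features useful for all layers at once.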
Establishing adequate resection margins during colorectal cancer surgery is challenging. Currently, in up to 30% of the cases the tumor is not completely removed, which emphasizes the lack of a real-time tissue discrimination tool that can assess resection margins up to multiple millimeters in depth. Therefore, we propose to combine spectral data from diffuse reflectance spectroscopy (DRS) with spatial information from ultrasound (US) imaging to evaluate multi-layered tissue structures. First, measurements on animal tissue phantoms were performed to evaluate the feasibility of the concept. The phantoms consisted of muscle and fat layers, with a top-layer thickness varying from 0 to 10 mm. DRS spectra of 250 locations were obtained and corresponding US images were acquired. DRS features were extracted using the wavelet transform; US features were extracted based on graph theory and the first-order gradient. Using a regression analysis on the combined DRS and US features, the top-layer thickness was estimated with an error of up to 0.48 mm. The tissue types of the first and second layers were classified with accuracies of 0.95 and 0.99, respectively, using a support vector machine model.
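The thickness-regression step on combined DRS and US features can be sketched as follows. The features here are synthetic stand-ins that merely correlate with a simulated thickness; the regressor choice (ridge regression) is an assumption, not the study's method:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 250  # measurement locations, as in the phantom experiment
thickness = rng.uniform(0, 10, n)  # simulated top-layer thickness in mm

# Hypothetical DRS- and US-derived features (stand-ins for wavelet /
# gradient features): noisy functions of the true thickness
drs_feat = np.column_stack([
    thickness + rng.normal(0, 0.5, n),
    np.exp(-thickness / 5) + rng.normal(0, 0.05, n),
])
us_feat = np.column_stack([0.9 * thickness + rng.normal(0, 0.7, n)])
X = np.hstack([drs_feat, us_feat])  # combine both modalities

X_tr, X_te, y_tr, y_te = train_test_split(X, thickness, random_state=0)
model = Ridge().fit(X_tr, y_tr)
err = np.abs(model.predict(X_te) - y_te)
print(f"mean absolute error: {err.mean():.2f} mm")
```

Concatenating the two feature sets before regression is the simplest fusion strategy; it lets the model weight whichever modality is more informative at a given depth.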
Significance: In breast-preserving tumor surgery, the inspection of the excised tissue boundaries for tumor residue is too slow to provide feedback during the surgery. The discovery of positive margins requires a second surgery, which is difficult and associated with a low success rate. If the re-excision could be performed immediately, the success rate is believed to improve considerably.
Aim: Our aim is a fast microscopic analysis that can be performed directly on the excised tissue in or near the operating theatre.
Approach: We demonstrate the combination of three nonlinear imaging techniques at selected wavelengths to delineate tumor boundaries. We use hyperspectral coherent anti-Stokes Raman scattering (CARS), second harmonic generation (SHG), and two-photon excited fluorescence (TPF) on excised patient tissue.
Results: We show the discriminatory power of each of the signals and demonstrate a sensitivity of 0.87 and a specificity of 0.95 using four CARS wavelengths in combination with SHG and TPF. We verify that the information is independent of sample treatment.
Conclusions: Nonlinear multispectral imaging can be used to accurately determine tumor boundaries. This demonstration using microscopy in the epi-direction directly on thick tissue slices brings this technology one step closer to clinical implementation.
A pipeline of unsupervised image analysis methods for extraction of geometrical features from retinal fundus images has previously been developed. Features related to vessel caliber, tortuosity, and bifurcations have been identified as potential biomarkers for a variety of diseases, including diabetes and Alzheimer’s disease. The current computationally expensive pipeline takes 24 minutes to process a single image, which impedes implementation in a screening setting. In this work, we approximate the pipeline with a convolutional neural network (CNN) that enables processing of a single image in a few seconds. As an additional benefit, the trained CNN is sensitive to key structures in the retina and can be used as a pretrained network for related disease classification tasks. Our model is based on the ResNet-50 architecture and outputs four biomarkers that describe global properties of the vascular tree in retinal fundus images. Intraclass correlation coefficients between the predictions of the CNN and the results of the pipeline showed strong agreement (0.86 to 0.91) for three of four biomarkers and moderate agreement (0.42) for one biomarker. Class activation maps were created to illustrate the attention of the network. The maps show qualitatively that the activations of the network overlap with the biomarkers of interest and that the network is able to distinguish venules from arterioles. Moreover, local high- and low-tortuosity regions are clearly identified, confirming that a CNN is sensitive to key structures in the retina.
Neoadjuvant radiotherapy, as part of the conventional treatment of rectal cancer, can induce fibrotic tissue formation around the tumor. This complicates the exact determination of the tumor borders during surgery, which might increase the chance of positive resection margins. In a previous ex vivo study, we distinguished tumor tissue from healthy rectal wall and fat with an accuracy of 0.95, using diffuse reflectance spectroscopy (DRS). Since this study did not include fibrosis, the aim of the current ex vivo study was to examine whether differentiation of tumor and fibrosis with DRS is possible.
DRS measurements were obtained from freshly resected specimens of 16 patients. In eight patients fibrosis was measured; in the other eight, tumor was measured. The measurements were performed using a DRS probe with a source-detector distance of 2 mm. The spectra were obtained in the wavelength range of 450 to 1600 nm. The measurements were classified using a support vector machine (SVM) and a set of features extracted from the spectra. The SVM was evaluated using eight-fold cross-validation, repeated ten times.
For all repetitions, the area under the ROC curve was greater than 0.85 (mean = 0.87, STD = 0.02). The mean sensitivity and specificity were 0.85 (STD = 0.03) and 0.88 (STD = 0.01), respectively. It can be concluded that tumor tissue can be distinguished from fibrosis based on spectral features from DRS measurements. The next step will be to conduct an in vivo study to verify these results during surgery.
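The repeated cross-validation scheme described above (eight folds, ten repetitions) maps directly onto scikit-learn's `RepeatedStratifiedKFold`. A minimal sketch on synthetic two-class "spectral" features, standing in for the tumor and fibrosis measurements:

```python
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Synthetic feature vectors for two classes (illustrative, not DRS data)
X = np.vstack([rng.normal(0, 1, (80, 10)), rng.normal(1, 1, (80, 10))])
y = np.array([0] * 80 + [1] * 80)

# Eight-fold cross-validation, repeated ten times -> 80 AUC scores
cv = RepeatedStratifiedKFold(n_splits=8, n_repeats=10, random_state=0)
scores = cross_val_score(make_pipeline(StandardScaler(), SVC()), X, y,
                         cv=cv, scoring="roc_auc")
print(f"mean AUC: {scores.mean():.2f} (STD {scores.std():.2f})")
```

Repeating the folds averages out the variance introduced by any single random partition, which is why the abstract can report both a mean and an STD for the AUC.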
This ex-vivo study evaluates the feasibility of diffuse reflectance spectroscopy (DRS) for discriminating tumor from healthy tissue, with the aim to develop a technology that can assess resection margins for the presence of tumor cells during oral cavity cancer surgery. Diffuse reflectance spectra were acquired on fresh surgical specimens from 28 patients with oral cavity squamous cell carcinoma. The spectra (400 to 1600 nm) were detected after illuminating tissue with a source fiber at 0.3-, 0.7-, 1.0-, and 2.0-mm distances from a detection fiber, obtaining spectral information from different sampling depths. The spectra were correlated with histopathology. A total of 76 spectra were obtained from tumor tissue and 110 spectra from healthy muscle tissue. The first- and second-order derivatives of the spectra were calculated and a classification algorithm was developed using fivefold cross validation with a linear support vector machine. The best results were obtained by the reflectance measured with a 1-mm source–detector distance (sensitivity, specificity, and accuracy are 89%, 82%, and 86%, respectively). DRS can accurately discriminate tumor from healthy tissue in an ex-vivo setting using a 1-mm source–detector distance. Accurate validation methods are warranted for larger sampling depths to allow for guidance during oral cavity cancer excision.
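The first- and second-order spectral derivatives used as inputs above are commonly computed with a Savitzky-Golay filter, which smooths while differentiating. A minimal sketch on a synthetic spectrum (the Gaussian shape and window settings are illustrative assumptions):

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic reflectance spectrum over 400-1600 nm (stand-in for a DRS measurement)
wavelengths = np.linspace(400, 1600, 601)
spectrum = np.exp(-((wavelengths - 1000) / 300) ** 2)

# Smoothed first- and second-order derivatives via Savitzky-Golay filtering
d1 = savgol_filter(spectrum, window_length=11, polyorder=3, deriv=1)
d2 = savgol_filter(spectrum, window_length=11, polyorder=3, deriv=2)

# Concatenate the derivatives into one feature vector for the SVM
features = np.hstack([d1, d2])
print(features.shape)  # (1202,)
```

Derivative spectra suppress slowly varying baseline offsets between measurements, which helps a linear SVM focus on absorption-band shape rather than overall intensity.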
The Arteriolar-to-Venular Ratio (AVR) is a popular dimensionless measure that allows the assessment of a patient's condition for the early diagnosis of different diseases, including hypertension and diabetic retinopathy. This paper presents two new approaches for AVR computation in retinal photographs, each comprising a sequence of automated processing steps: vessel segmentation, caliber measurement, optic disc segmentation, artery/vein classification, region-of-interest delineation, and AVR calculation. Both approaches have been tested on the INSPIRE-AVR dataset and compared with a ground truth provided by two medical specialists. The obtained results demonstrate the reliability of the fully automatic approach, which provides AVR ratios very similar to those of at least one of the observers. Furthermore, the semi-automatic approach, which includes manual modification of the artery/vein classification if needed, significantly reduces the error to a level below the human error.
The retinal vasculature is the only part of the blood circulation system that can be observed non-invasively using fundus cameras. Changes in the dynamic properties of retinal blood vessels are associated with many systemic and vascular diseases, such as hypertension, coronary heart disease, and diabetes. The assessment of the characteristics of the retinal vascular network provides important information for an early diagnosis and prognosis of many systemic and vascular diseases. The manual analysis of retinal vessels and measurement of quantitative biomarkers in large-scale screening programs is a tedious, time-consuming, and costly task. This paper describes a reliable, automated, and efficient retinal health information and notification system (acronym RHINO) that can extract a wealth of geometric biomarkers from large volumes of fundus images. The fully automated software presented in this paper includes vessel enhancement and segmentation, artery/vein classification, optic disc, fovea, and vessel junction detection, and bifurcation/crossing discrimination. Pipelining these tools allows the assessment of several quantitative vascular biomarkers: width, curvature, bifurcation geometry features, and fractal dimension. The brain-inspired algorithms outperform most of the state-of-the-art techniques. Moreover, several annotation tools are implemented in RHINO for the manual labeling of arteries and veins, marking the optic disc and fovea, and delineating vessel centerlines. The validation phase is ongoing, and the software is currently being used for the analysis of retinal images from the Maastricht Study (the Netherlands), which includes over 10,000 subjects (healthy and diabetic) with a broad spectrum of clinical measurements.