KEYWORDS: Artificial neural networks, Tissues, Scattering, Hyperspectral imaging, Blood, Oxygen, Diffuse reflectance spectroscopy, Skin, Monte Carlo methods, Education and training
Significance: Hyperspectral cameras capture spectral information at each pixel in an image. Acquired spectra can be analyzed to estimate quantities of absorbing and scattering components, but the use of traditional fitting algorithms over megapixel images can be computationally intensive. Deep learning algorithms can be trained to rapidly analyze spectral data and can potentially process hyperspectral camera data in real time. Aim: A hyperspectral camera was used to capture 1216×1936 pixel wide-field reflectance images of in vivo human tissue at 205 wavelength bands from 420 to 830 nm. Approach: The optical properties of oxyhemoglobin, deoxyhemoglobin, melanin, and scattering were used with multi-layer Monte Carlo models to generate simulated diffuse reflectance spectra for 24,000 random combinations of physiologically relevant tissue components. These spectra were then used to train an artificial neural network (ANN) to predict tissue component concentrations from an input reflectance spectrum. Results: The ANN achieved low root mean square errors in a test set of 6000 independent simulated diffuse reflectance spectra while calculating concentration values more than 4000× faster than a conventional iterative least squares approach. Conclusions: In vivo finger occlusion and gingival abrasion studies demonstrate the ability of this approach to rapidly generate high-resolution images of tissue component concentrations from a hyperspectral dataset acquired from human subjects.
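As a rough illustration of the training step described above, the sketch below sets up a small fully connected network that maps a 205-band reflectance spectrum to four tissue component values. Layer sizes, optimizer settings, and variable names are assumptions for illustration; the paper's actual architecture and Monte Carlo forward model are not reproduced here.

```python
# Minimal sketch of an ANN mapping a diffuse reflectance spectrum to tissue
# component concentrations. Layer sizes and training settings are assumptions
# for illustration, not the architecture used in the study above.
import torch
import torch.nn as nn

N_BANDS = 205    # spectral bands (420-830 nm in the study above)
N_OUTPUTS = 4    # e.g., oxyhemoglobin, deoxyhemoglobin, melanin, scattering

model = nn.Sequential(
    nn.Linear(N_BANDS, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, N_OUTPUTS),
)

def train(model, spectra, concentrations, epochs=200, lr=1e-3):
    """Fit the network to simulated (spectrum, concentration) pairs.

    spectra:        float tensor of shape (n_samples, N_BANDS)
    concentrations: float tensor of shape (n_samples, N_OUTPUTS)
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(spectra), concentrations)
        loss.backward()
        optimizer.step()
    return model

# Once trained, the network is applied per pixel: a 1216x1936 hyperspectral
# image is reshaped to (n_pixels, N_BANDS) and evaluated in a single batched
# forward pass, which is what makes this faster than iterative fitting.
```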
KEYWORDS: Teeth, Cameras, RGB color model, Image segmentation, Education and training, Neural networks, Diagnostics, Deep learning, Color, Data modeling
The development of a deep learning framework specifically designed for the analysis of intraoral soft and hard tissue conditions is presented in this paper, with a focus on remote healthcare and intraoral diagnostic applications. The framework, Faster R-CNN ResNet-50 FPN, was trained on a dataset comprising 4,173 anonymized images of teeth obtained from buccal, lingual, and occlusal surfaces of 7 subjects. Ground truth annotations were generated through manual labeling, encompassing tooth number and tooth segmentation. The deep learning framework was built using platforms and APIs within Amazon Web Services (AWS), including SageMaker, S3, and EC2, and leveraged AWS GPU instances to train and deploy the models. The framework demonstrated high accuracy in tooth identification and segmentation, achieving an accuracy exceeding 60% for tooth numbering. Another framework, for detecting tooth shades, was trained using 25,519 RGB and 25,519 LAB values from VITA Classical shades; it used a basic neural network and achieved 85% validation accuracy. By leveraging the power of Faster R-CNN and the scalability of AWS, the framework provides a robust solution for real-time analysis of intraoral images, facilitating timely detection and monitoring of oral health issues. The initial results provide accurate identification of tooth numbering and valuable insights into tooth shades. The results achieved by the deep learning framework demonstrate its potential as a tool for analyzing intraoral soft and hard tissue parameters such as tooth staining. This presents an opportunity to enhance accuracy and efficiency in connected health and intraoral diagnostic applications, ultimately advancing the field of oral health assessment.
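For readers unfamiliar with the detector named above, the following sketch shows how a Faster R-CNN ResNet-50 FPN model can be configured for tooth-numbering detection with torchvision. The class count and training details are illustrative assumptions; the AWS SageMaker training and deployment pipeline described in the abstract, and the segmentation labeling, are not shown.

```python
# Sketch: Faster R-CNN ResNet-50 FPN configured for tooth detection.
# Class count and fine-tuning details are illustrative assumptions; the
# AWS SageMaker training/deployment pipeline from the abstract is omitted.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 33  # assumption: 32 tooth numbers + background

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the classification head so it predicts tooth-number classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Training expects a list of image tensors and a list of target dicts with
# "boxes" (N x 4, xyxy) and "labels" (N,) per image, e.g.:
#   loss_dict = model(images, targets)  # returns component losses in train mode
#   loss = sum(loss_dict.values()); loss.backward()
```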
Tooth color is an important parameter in cosmetic dentistry, whether to measure staining, to assess the effects of whitening products, or to match the appearance of implants to neighboring teeth. The apparent color of teeth is affected by surface (extrinsic) and sub-surface (intrinsic) factors and is still assessed qualitatively by the dentist's visual impression. The recent commercial availability of snapshot color polarization cameras offers a new approach to rapidly quantify tissue color with depth selectivity. This study applied this technology to quantify tooth color, and we are currently investigating its use in the assessment of enamel demineralization.
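The depth selectivity mentioned above relies on polarization gating: surface reflections stay co-polarized with the illumination, while multiply scattered sub-surface light is depolarized. The sketch below illustrates this separation for a generic micro-polarizer mosaic sensor; the 2×2 polarizer layout and the assumption that the illumination is polarized along 0° are illustrative, not a description of the specific camera used in this study.

```python
# Sketch: depth-selective imaging from a snapshot polarization camera.
# Assumes a 2x2 micro-polarizer mosaic (90, 45 / 135, 0 degrees) and
# illumination polarized along 0 degrees; both are illustrative assumptions.
import numpy as np

def split_polarization_mosaic(raw):
    """Return the four polarization channels from a raw mosaic image."""
    i90  = raw[0::2, 0::2].astype(float)
    i45  = raw[0::2, 1::2].astype(float)
    i135 = raw[1::2, 0::2].astype(float)
    i0   = raw[1::2, 1::2].astype(float)
    return i0, i45, i90, i135

def surface_and_subsurface(i_co, i_cross):
    """Polarization gating: specular/surface light stays co-polarized,
    multiply scattered sub-surface light is depolarized."""
    subsurface = 2.0 * i_cross   # depolarized (sub-surface) component
    surface = i_co - i_cross     # residual polarized (surface) component
    return surface, subsurface

raw = np.random.randint(0, 4095, (1200, 1600), dtype=np.uint16)  # placeholder frame
i0, i45, i90, i135 = split_polarization_mosaic(raw)
surface, subsurface = surface_and_subsurface(i_co=i0, i_cross=i90)
```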
We present a novel technique for volumetric super-resolution imaging. Our technique, based on the principles of single-molecule localization microscopy, utilizes a mirror cavity with a series of pinholes on one of the mirrors, allowing simultaneous optical sectioning of different imaging planes. In addition, we employ a machine learning algorithm for 3D localization of events that occur between the imaging planes. Our technique enables high-resolution imaging of thicker volumes than is currently achievable with other single-molecule localization techniques.
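To convey the general idea of localizing an emitter axially between two sectioned planes, the toy sketch below estimates z from a calibration curve of PSF-width ratio versus axial position. This is a generic biplane-style illustration with placeholder numbers, not the machine learning localizer described in the abstract.

```python
# Toy sketch: axial localization from two imaging planes via a calibration
# curve (PSF-width ratio vs. z). A generic biplane-style illustration, not
# the machine-learned 3D localizer described in the abstract above.
import numpy as np

# Hypothetical calibration: ratio of fitted PSF widths (plane A / plane B)
# measured for a bead stepped through known z positions.
z_calib = np.linspace(-400, 400, 41)              # nm
ratio_calib = 1.0 + 0.8 * np.tanh(z_calib / 300)  # placeholder monotonic curve

def z_from_width_ratio(width_a, width_b):
    """Interpolate the calibration curve to estimate z for one localization."""
    ratio = width_a / width_b
    return np.interp(ratio, ratio_calib, z_calib)

print(z_from_width_ratio(width_a=1.5, width_b=1.2))  # estimated z in nm
```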
Hyperspectral imaging can capture light reflected from tissue with high spectral and spatial resolution. Fitting algorithms can be applied to the spectrum at each pixel to estimate tissue chromophore concentrations, including blood, melanin, water, and fat. Traditional fitting methods are computationally intensive and slow when applied over an entire image. This study developed an artificial neural network (ANN) to rapidly calculate tissue oxygenation, blood, and melanin content from hyperspectral images. Linearly polarized light from a halogen lamp was delivered through a ring illuminator placed 20 cm from the tissue surface. A 1024×1224 pixel hyperspectral camera captured diffusely reflected light through an orthogonal polarizer at 299 wavelengths between 400 and 1000 nm. To train an ANN, diffusion theory was used to generate reflectance spectra from 440 to 800 nm for a uniform tissue containing 24,000 random combinations of physiologically relevant concentrations of oxyhemoglobin, deoxyhemoglobin, melanin, and scattering. The ANN was then tested by generating another 6,000 reflectance spectra from diffusion theory using physiological values and comparing the chromophore concentrations output by the ANN to ground truth values. The ANN demonstrated a root-mean-square error less than 0.01 in predicting each chromophore concentration from reflectance spectra simulated by diffusion theory. An in vivo finger occlusion experiment demonstrated the ability of the system to quantify changes in oxygen saturation and blood volume. This work demonstrates a new deep learning approach to rapidly process hyperspectral image data and accurately quantify tissue components.
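The synthetic training spectra described above start from a forward model: random chromophore concentrations are converted to wavelength-dependent absorption and reduced scattering, which are then turned into reflectance. The sketch below shows that first step; the flat extinction-coefficient arrays, parameter ranges, and the diffusion-theory reflectance call are placeholders and assumptions, not the values used in the study.

```python
# Sketch of the synthetic training-data step: sample random chromophore
# concentrations and build mu_a(lambda) and mu_s'(lambda). Extinction spectra,
# parameter ranges, and the reflectance model are placeholders/assumptions.
import numpy as np

wavelengths = np.arange(440, 801)  # nm, as in the training range above

# Placeholder molar extinction spectra (cm^-1 / M); in practice these come
# from tabulated hemoglobin absorption data.
eps_hbo2 = np.ones_like(wavelengths, dtype=float)
eps_hb   = np.ones_like(wavelengths, dtype=float)

def random_tissue():
    """Draw one random parameter set (assumed, illustrative ranges)."""
    return {
        "hbo2_M":  np.random.uniform(0.0, 1e-4),   # molar concentration
        "hb_M":    np.random.uniform(0.0, 1e-4),
        "melanin": np.random.uniform(0.0, 0.1),    # volume fraction
        "a_scat":  np.random.uniform(10.0, 50.0),  # cm^-1 at 500 nm
        "b_scat":  np.random.uniform(0.5, 2.0),    # scattering power
    }

def optical_properties(p):
    """mu_a and mu_s' spectra (cm^-1) from one parameter set."""
    mu_a_blood = np.log(10) * (p["hbo2_M"] * eps_hbo2 + p["hb_M"] * eps_hb)
    mu_a_mel = p["melanin"] * 519.0 * (wavelengths / 500.0) ** -3.5  # common melanin model
    mu_a = mu_a_blood + mu_a_mel
    mu_sp = p["a_scat"] * (wavelengths / 500.0) ** -p["b_scat"]      # power-law scattering
    return mu_a, mu_sp

# Each (mu_a, mu_sp) pair would then be passed through a diffusion-theory or
# Monte Carlo reflectance model to produce one training spectrum for the ANN.
```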
In this study, we present an integrated stereoscopic and hyperspectral imaging system designed to overcome the limitations of traditional quantitative hyperspectral imaging, notably the dependency on precise camera-sample distance measurements. Our approach combines advanced depth-sensing technology with a compact hyperspectral camera, featuring integrated RGB sensors, to facilitate automated synchronization, system integration, and reconstruction through epipolar geometry and image co-registration. The system acquires hyperspectral data cubes along predefined camera trajectories, enabling full 3D hyperspectral representations via global alignment, a significant enhancement over conventional methods that lack depth resolution. This methodology has the potential to eliminate the need for strict camera-sample distance calibration and adds a morphological dimension to hyperspectral tissue analysis. The system's efficacy is demonstrated in vivo, focusing on non-contact human skin imaging. The integration of stereoscopic depth and hyperspectral data in our system marks a significant advancement in spectroscopic tissue analysis, with promising applications in telehealth, enhancing both the diagnostic capabilities and accessibility of advanced imaging technologies.
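One ingredient of the co-registration mentioned above is aligning an individual hyperspectral band with the RGB camera's view. The sketch below shows a feature-based version of that step using ORB keypoints and a RANSAC homography; these are illustrative choices, not necessarily the authors' pipeline, and the full epipolar/stereoscopic reconstruction is not shown.

```python
# Sketch: co-registering one hyperspectral band with an RGB frame using ORB
# features and a RANSAC homography (illustrative method choice, not
# necessarily the pipeline used in the abstract above).
import cv2
import numpy as np

def coregister(band_img, rgb_img):
    """Warp an 8-bit grayscale hyperspectral band into the RGB camera's frame."""
    gray_rgb = cv2.cvtColor(rgb_img, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(band_img, None)
    k2, d2 = orb.detectAndCompute(gray_rgb, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = gray_rgb.shape
    return cv2.warpPerspective(band_img, H, (w, h))
```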
KEYWORDS: Cameras, Education and training, Teeth, Deep learning, Color, Neural networks, Algorithm development, Data modeling, RGB color model, Image processing
To address an increasing demand for accessible and affordable tools for at-home oral health assessment, this paper presents the development of a low-cost intraoral camera integrated with a deep learning approach for image analysis. The camera captures and analyzes images of soft and hard oral tissues, enabling real-time feedback on potential tooth staining and empowering users to proactively manage their oral health. The system utilizes an Azdent intraoral USB camera with a Raspberry Pi 400 computer and an Intel® Neural Compute Stick for real-time image acquisition and processing. A neural network was trained on a dataset comprising 102,062 CIELAB and RGB values from the VITA classical shade guide. Ground truth annotations were generated through manual labeling, encompassing tooth number and stain levels. The deep learning approach demonstrated high accuracy in tooth stain identification, with a testing accuracy exceeding 0.6. This study demonstrates the capacity of low-cost camera hardware and deep learning algorithms to effectively categorize tooth stain levels with high accuracy. By bridging the gap between professional care and home-based oral health monitoring, the development of this low-cost platform holds promise for facilitating early detection and monitoring of oral health issues.
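To illustrate the kind of small classifier described above, the sketch below maps color values to VITA classical shade labels using scikit-learn as a stand-in. The synthetic data, feature layout, layer sizes, and the 16-shade label set are assumptions, not the training setup used in the study.

```python
# Sketch: small neural network mapping color values to VITA classical shades.
# scikit-learn is used as a stand-in; layer sizes and the synthetic data here
# are assumptions, not the training setup described in the abstract.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

VITA_SHADES = ["A1", "A2", "A3", "A3.5", "A4", "B1", "B2", "B3", "B4",
               "C1", "C2", "C3", "C4", "D2", "D3", "D4"]

# Placeholder feature matrix: one row per sample, columns = (L*, a*, b*, R, G, B).
X = np.random.rand(2000, 6)
y = np.random.choice(VITA_SHADES, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```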
Studying the concentrations of water and lipids in human tissue can give insights into biological processes and diseases. This study shows that shortwave-infrared (SWIR) light from light-emitting diodes (LEDs) can be used in spatial frequency domain imaging (SFDI) to quantify water and lipid concentrations in tissue. In contrast to near-infrared (NIR) wavelengths, the SWIR wavelength range offers deeper tissue penetration and coincides with strong absorption bands of water and lipids. The system developed in this work uses 970 nm, 1050 nm, and 1200 nm LEDs with a digital micromirror device for DC and AC illumination. An InGaAs camera and optics image the diffusely reflected light. A 10% Intralipid phantom was used to calibrate the system, allowing conversion of demodulated pixel values to diffuse reflectance. Sample lipid and water concentrations were measured for several known dilutions of Intralipid. Water content in biological tissue was measured using SWIR-SFDI in ex vivo porcine skin tissue samples and validated by measuring the change in mass due to water loss during desiccation, showing a mean error of 0.9% in predicting initial water content. SWIR-SFDI measurements were taken in human subjects before and after light exercise, showing distinct changes in tissue absorption and reduced scattering. These results show the potential of an LED-based SWIR-SFDI system for noninvasive quantification and mapping of important tissue chromophores.
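The calibration step mentioned above, converting demodulated pixel values to diffuse reflectance against a reference phantom, follows the standard three-phase SFDI demodulation. The sketch below shows that step; the image arrays and the reference reflectance value are placeholders.

```python
# Sketch: standard three-phase SFDI demodulation and phantom calibration.
# Inputs are three images of the same sinusoidal pattern shifted by 0, 120,
# and 240 degrees; the arrays and reference values are placeholders.
import numpy as np

def demodulate(i1, i2, i3):
    """Return AC and DC modulation amplitude images (three-phase demodulation)."""
    m_ac = (np.sqrt(2.0) / 3.0) * np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
    m_dc = (i1 + i2 + i3) / 3.0
    return m_ac, m_dc

def calibrate(m_ac_sample, m_ac_ref, r_d_ref_model):
    """Convert demodulated sample values to diffuse reflectance using a
    reference phantom with known (model-predicted) reflectance."""
    return (m_ac_sample / m_ac_ref) * r_d_ref_model

# Example with placeholder arrays (e.g., 10% Intralipid as the reference):
shape = (256, 320)
sample = [np.random.rand(*shape) for _ in range(3)]
ref = [np.random.rand(*shape) for _ in range(3)]
m_ac_s, _ = demodulate(*sample)
m_ac_r, _ = demodulate(*ref)
r_d = calibrate(m_ac_s, m_ac_r, r_d_ref_model=0.45)  # placeholder reference value
```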
We report on blood optical property alterations induced by lipids. Mie simulations were conducted to estimate the magnitude of changes in the reduced scattering coefficient (μs′) caused by changes in lipoprotein particles in blood after a meal. Longitudinal SFDI measurements were performed on the dorsal surface of volunteers' hands before and for 5 hours after a high-fat meal to monitor optical property changes within superficial vessels. The results show an increase in μs′ and a decrease in μa, with larger changes observed in the SFDI measurements than predicted by the Mie simulations, potentially due to hemodynamic alterations that occur after a meal.
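For context on the Mie estimate described above, the sketch below assembles a reduced scattering coefficient for a dilute suspension of spheres from Mie quantities, assuming the miepython package is available. The particle radius, refractive indices, and number density are placeholder values, not lipoprotein data from the study.

```python
# Sketch: reduced scattering coefficient of a dilute sphere suspension from
# Mie theory, assuming the miepython package. Particle size, refractive
# indices, and number density are placeholder values, not lipoprotein data.
import numpy as np
import miepython

wavelength_nm = 660.0
n_medium = 1.35                # plasma-like background index (assumption)
n_particle = 1.47              # lipid-like particle index (assumption)
radius_nm = 100.0              # placeholder particle radius
number_density_per_um3 = 1e-3  # placeholder particles per cubic micron

# Mie size parameter and relative refractive index.
x = 2.0 * np.pi * radius_nm * n_medium / wavelength_nm
m = n_particle / n_medium

qext, qsca, qback, g = miepython.mie(m, x)

sigma_s_um2 = qsca * np.pi * (radius_nm / 1000.0) ** 2       # scattering cross-section (um^2)
mu_s_per_mm = number_density_per_um3 * sigma_s_um2 * 1000.0  # mu_s in mm^-1
mu_s_prime = mu_s_per_mm * (1.0 - g)                         # reduced scattering, mm^-1
print(f"mu_s' = {mu_s_prime:.3f} mm^-1 (g = {g:.3f})")
```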