Automatic intraoral imaging-based assessment of oral conditions is important for clinical and consumer-level oral health monitoring, but publicly available intraoral datasets are scarce. To address this, we developed a StyleGAN2-based framework to generate synthetic 2D intraoral images. The StyleGAN2 network was trained on 3724 images and achieved a Fréchet Inception Distance of 12.10. Dental professionals evaluated image quality and judged whether each image was real or synthetic; approximately 83.75% of the generated images were deemed real. We then created a framework that uses pseudo-labeling to incorporate the StyleGAN2-synthesized 2D intraoral images into a tooth type classification model. Our experiments demonstrated that the StyleGAN2-synthesized images can effectively augment the training set and improve the performance of the tooth type classification model.
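As one illustration of the pseudo-labeling step described above, the following sketch assigns labels to StyleGAN2-synthesized images using a classifier pre-trained on real data and keeps only confident predictions; the paths, confidence threshold, and model file are illustrative assumptions rather than the paper's actual pipeline.

```python
# Hypothetical pseudo-labeling sketch (PyTorch); paths, threshold, and the
# pre-trained classifier are illustrative assumptions, not the paper's code.
import glob
import torch
from PIL import Image
from torchvision import transforms

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

model = torch.load("tooth_type_classifier.pt")   # classifier trained on real labeled images
model.eval()

CONF_THRESHOLD = 0.9                             # keep only confident pseudo-labels
pseudo_labeled = []                              # (image tensor, pseudo-label) pairs
with torch.no_grad():
    for path in glob.glob("stylegan2_synth/*.png"):
        x = tf(Image.open(path).convert("RGB")).unsqueeze(0)
        probs = torch.softmax(model(x), dim=1).squeeze(0)
        conf, pred = probs.max(dim=0)
        if conf.item() >= CONF_THRESHOLD:
            pseudo_labeled.append((x.squeeze(0), int(pred)))

# pseudo_labeled can then be concatenated with the real labeled set
# (e.g. via torch.utils.data.ConcatDataset) to retrain the classifier.
```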
KEYWORDS: Teeth, Cameras, RGB color model, Image segmentation, Education and training, Neural networks, Diagnostics, Deep learning, Color, Data modeling
This paper presents a deep learning framework designed for the analysis of intraoral soft and hard tissue conditions, with a focus on remote healthcare and intraoral diagnostic applications. The framework, a Faster R-CNN detector with a ResNet-50 FPN backbone, was trained on a dataset comprising 4,173 anonymized images of teeth captured from buccal, lingual, and occlusal surfaces of 7 subjects. Ground truth annotations were generated through manual labeling, covering tooth numbering and tooth segmentation. The framework was built on Amazon Web Services (AWS) platforms and APIs, including SageMaker, S3, and EC2, and leveraged AWS GPU instances to train and deploy the models. It performed tooth identification and segmentation, achieving an accuracy exceeding 60% for tooth numbering. A second framework for detecting tooth shades was trained using 25,519 RGB and 25,519 LAB values from VITA Classical shades; a simple neural network reached 85% validation accuracy. By leveraging the power of Faster R-CNN and the scalability of AWS, the framework provides a robust solution for real-time analysis of intraoral images, facilitating timely detection and monitoring of oral health issues. The initial results provide accurate identification of tooth numbering and valuable insights into tooth shades. These results demonstrate the framework's potential as a tool for analyzing intraoral soft and hard tissue parameters such as tooth staining, and present an opportunity to enhance accuracy and efficiency in connected health and intraoral diagnostic applications, ultimately advancing the field of oral health assessment.
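A detector of this kind can be set up with torchvision's built-in Faster R-CNN ResNet-50 FPN model; the sketch below shows one plausible fine-tuning configuration for a tooth-numbering task, with the class count and head replacement as assumptions rather than the authors' exact setup.

```python
# Illustrative fine-tuning setup for a Faster R-CNN ResNet-50 FPN detector
# (torchvision); the class count and head replacement are assumptions for a
# 32-tooth numbering task, not the authors' exact configuration.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 33  # 32 tooth numbers + background (assumed labeling scheme)

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Training loop (sketch): images is a list of CHW tensors, targets a list of
# dicts with "boxes" (N x 4) and "labels" (N) for each intraoral image:
#   model.train(); losses = model(images, targets); sum(losses.values()).backward()
```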
This paper highlights the importance of objective evaluation of perspiration and the limitations of current methods in studying sweat gland function and assessing antiperspirant efficacy. To overcome these limitations, the authors introduce infrared thermography (IRT) as a non-contact imaging modality. They demonstrate the feasibility of IRT through two approaches: high-resolution thermal imaging of sweat pores and pore activation, and quantitative mapping of sweat retention in clothing. IRT offers a non-invasive and versatile tool for studying the effectiveness of antiperspirants and understanding sweat pore behavior. It has the potential to enhance our knowledge of antiperspirant performance and aid in the development of improved formulations. With its detailed insights into sweat pore dynamics, IRT can advance research in the field of human perspiration and serve as a valuable tool for evaluating antiperspirant products.
We present a novel technique for volumetric super-resolution imaging. Our technique, based on the principles of single-molecule localization microscopy, utilizes a mirror cavity with a series of pinholes on one of the mirrors, allowing for simultaneous optical sectioning of different imaging planes. In addition, we employ a machine-learning-based algorithm for 3D localization of events that occur between imaging planes. Our technique enables high-resolution imaging of thicker volumes than is currently possible with other single-molecule localization techniques.
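The learned 3D localization step could, for example, take the form of a small regression network that maps per-plane measurements of an emitter to an axial position; the sketch below is only a plausible illustration, and its architecture, inputs, and plane count are assumptions rather than the authors' algorithm.

```python
# Hypothetical sketch of a learned axial (z) localization model: a small MLP
# mapping per-plane PSF intensity features of one emitter to a z position.
# Architecture, feature choice, and plane count are assumptions.
import torch
import torch.nn as nn

class ZLocalizer(nn.Module):
    def __init__(self, n_planes: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_planes, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),                 # predicted axial position
        )

    def forward(self, plane_intensities: torch.Tensor) -> torch.Tensor:
        return self.net(plane_intensities)

# Usage: features are normalized emitter intensities measured in each imaging plane
model = ZLocalizer(n_planes=4)
z_hat = model(torch.rand(8, 4))  # batch of 8 emitters -> 8 predicted z values
```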
In this study, we present an integrated stereoscopic and hyperspectral imaging system designed to overcome the limitations of traditional quantitative hyperspectral imaging, notably the dependency on precise camera-sample distance measurements. Our approach combines advanced depth-sensing technology with a compact hyperspectral camera, featuring integrated RGB sensors, to facilitate automated synchronization, system integration, and reconstruction through epipolar geometry and image co-registration. The system acquires hyperspectral data cubes along predefined camera trajectories, enabling full 3D hyperspectral representations via global alignment, a significant enhancement over conventional methods that lack depth resolution. This methodology has the potential to eliminate the need for strict camera-sample distance calibration and appends a morphological dimension to hyperspectral tissue analysis. The system's efficacy is demonstrated in vivo, focusing on non-contact human skin imaging. The integration of stereoscopic depth and hyperspectral data in our system marks a significant advancement in spectroscopic tissue analysis, with promising applications in telehealth, enhancing both the diagnostic capabilities and accessibility of advanced imaging technologies.
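The image co-registration step could be implemented with standard feature-based alignment; the OpenCV sketch below warps one hyperspectral band into the RGB sensor's frame via a homography, with the file names, detector choice, and parameters being illustrative assumptions rather than the system's actual processing chain.

```python
# Minimal feature-based co-registration sketch (OpenCV): aligns one
# hyperspectral band image to the integrated RGB sensor image via a
# homography. File names, detector, and parameters are illustrative.
import cv2
import numpy as np

rgb_gray = cv2.imread("rgb_view.png", cv2.IMREAD_GRAYSCALE)
hsi_band = cv2.imread("hsi_band_550nm.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(hsi_band, None)
kp2, des2 = orb.detectAndCompute(rgb_gray, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the band into the RGB camera frame so spectra and depth share one geometry
registered = cv2.warpPerspective(hsi_band, H, rgb_gray.shape[::-1])
```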
KEYWORDS: Cameras, Education and training, Teeth, Deep learning, Color, Neural networks, Algorithm development, Data modeling, RGB color model, Image processing
To address an increasing demand for accessible and affordable tools for at-home oral health assessment, this paper presents the development of a low-cost intraoral camera integrated with a deep learning approach for image analysis. The camera captures and analyzes images of soft and hard oral tissues, enabling real-time feedback on potential tooth staining and empowering users to proactively manage their oral health. The system utilizes an Azdent intraoral USB camera with a Raspberry Pi 400 computer and an Intel® Neural Compute Stick for real-time image acquisition and processing. A neural network was trained on a dataset comprising 102,062 CIELAB and RGB values from the VITA classical shade guide. Ground truth annotations were generated through manual labeling, encompassing tooth number and stain levels. The deep learning approach achieved a testing accuracy exceeding 0.6 for tooth stain identification. This study demonstrates the capacity of low-cost camera hardware and deep learning algorithms to effectively categorize tooth stain levels. By bridging the gap between professional care and home-based oral health monitoring, this low-cost platform holds promise for facilitating early detection and monitoring of oral health issues.
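A shade classifier of the kind described can be sketched as a small fully connected network over combined RGB and CIELAB inputs; the layer sizes, class count, and input layout below are assumptions for illustration, not the trained model reported here.

```python
# Illustrative shade-classification sketch: a small fully connected network
# mapping RGB + CIELAB values to VITA classical shade classes. Layer sizes
# and training details are assumptions; 16 is the VITA classical tab count.
import torch
import torch.nn as nn

NUM_SHADES = 16  # VITA classical guide has 16 shade tabs (A1-D4)

model = nn.Sequential(
    nn.Linear(6, 32), nn.ReLU(),      # input: [R, G, B, L*, a*, b*]
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, NUM_SHADES),        # logits over shade classes
)

x = torch.rand(4, 6)                  # batch of 4 color measurements
shade_logits = model(x)
predicted_shade = shade_logits.argmax(dim=1)
```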
Modern intraoral scanners are handheld devices that can produce point cloud-based representations of the human jaw. These scanners achieve 3-dimensional spatial resolution on the order of tens of micrometers by measuring light reflected from hard and soft intraoral tissue and applying advanced depth estimation techniques. In this work, a series of deep learning-based segmentation and registration methods for 3D intraoral data was developed for longitudinal monitoring of plaque accumulation and gingival inflammation. An intraoral scanner was used to acquire point cloud data from the upper and lower jaws of human subjects after an initial professional cleaning and then after multiple days abstaining from some oral hygiene. Individual teeth and gum regions within longitudinal datasets were identified using a deep learning algorithm for 3D instance segmentation. Next, automated spatial alignment of teeth and gum regions acquired over multi-day studies was achieved using a multiway registration method. The minimum distances between closest-correlated points were then calculated, allowing changes in tissue and plaque volume to be quantified. Differences in these measured quantities were found to correlate with the extent of plaque and inflammation assessed visually by a trained clinician. These methods provided precise measurements of morphological differences in patient tissue over longitudinal studies, allowing quantification of plaque accumulation and gingival inflammation. Integration of deep learning algorithms with commercial intraoral 3D scanning systems may provide a new approach for expanded screening of intraoral diseases.
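The distance computation described above can be illustrated with Open3D's closest-point distance between two aligned scans; the file names, and the assumption that the scans are already segmented and registered, are placeholders rather than the study's actual data or code.

```python
# Sketch of the distance step using Open3D: after segmentation and multiway
# registration, compute closest-point distances between two aligned scans to
# quantify surface change. File names are placeholders, not the study's data.
import numpy as np
import open3d as o3d

baseline = o3d.io.read_point_cloud("jaw_day0_aligned.ply")
followup = o3d.io.read_point_cloud("jaw_day7_aligned.ply")

# For every point in the follow-up scan, distance to its nearest baseline point
dists = np.asarray(followup.compute_point_cloud_distance(baseline))
print(f"mean surface change: {dists.mean():.4f}, max: {dists.max():.4f} (scan units)")
```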