KEYWORDS: 3D modeling, Thermal modeling, Thermography, 3D image processing, 3D acquisition, Infrared cameras, Temperature metrology, Cameras, Infrared imaging
Thermography is a highly beneficial non-invasive and non-contact tool that finds applications in various fields, such as building inspection, industrial equipment monitoring, quality control, and medical evaluations. Analyzing the surface temperature of an object at different points in time, and under varying conditions, can help detect defects, cracks, and anomalies in industrial components. In this study, we propose a framework for reproducible and quantitative measurement of surface temperature changes over time using thermal 3D models created with low-cost and portable devices. We present the application of this framework in two cases: to analyze temperature changes over time in a plastic container and to analyze temperature changes before and after medical treatment of a chronic wound. The results on a plastic container and on a chronic wound show that our approach for multi-temporal registration of thermal 3D models could be a cost-effective and practical solution for studying temperature changes in various applications.
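As an illustration of the multi-temporal registration idea described above, the following Python sketch rigidly aligns two thermal point clouds with a generic ICP step and reports per-point temperature differences. It is a minimal sketch, assuming per-point temperatures are already attached to the clouds; all names are ours, and Open3D's generic ICP stands in for the paper's registration method.

import numpy as np
import open3d as o3d

def temperature_change(pts_t0, temp_t0, pts_t1, temp_t1, voxel=0.005):
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts_t1))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts_t0))
    # Rigidly align the later acquisition onto the earlier one (generic ICP,
    # not necessarily the registration used in the paper).
    reg = o3d.pipelines.registration.registration_icp(
        src, dst, max_correspondence_distance=5 * voxel,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    src.transform(reg.transformation)
    # For every point at t0, look up the nearest aligned point at t1
    # and record the temperature difference.
    tree = o3d.geometry.KDTreeFlann(src)
    delta = np.empty(len(temp_t0))
    for i, p in enumerate(dst.points):
        _, idx, _ = tree.search_knn_vector_3d(p, 1)
        delta[i] = temp_t1[idx[0]] - temp_t0[i]
    return delta  # per-point surface temperature change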
Purpose: We present a markerless vision-based method for on-the-fly three-dimensional (3D) pose estimation of a fiberscope instrument to target pathologic areas in the endoscopic view during exploration.
Approach: A 2.5-mm-diameter fiberscope is inserted through the endoscope’s operating channel and connected to an additional camera to perform complementary observation of a targeted area, acting as a multimodal magnifier. The 3D pose of the fiberscope is estimated frame-by-frame by maximizing the similarity between its silhouette (automatically detected in the endoscopic view using a deep learning neural network) and a cylindrical shape bound to a kinematic model reduced to three degrees of freedom. An alignment of the cylinder axis, based on Plücker coordinates derived from the straight edges detected in the image, makes convergence faster and more reliable.
Results: The performance has been validated on simulations with a virtual trajectory mimicking endoscopic exploration and on real images of a chessboard pattern acquired with different endoscopic configurations. The experiments demonstrated good accuracy and robustness of the proposed algorithm, with errors of 0.33 ± 0.68 mm in position and 0.32 ± 0.11 deg in axis orientation for the 3D pose estimation, which reveals its superiority over previous approaches. This allows multimodal image registration with sufficient accuracy of <3 pixels.
Conclusion: Our pose estimation pipeline was evaluated on simulations and chessboard patterns; the results demonstrate the robustness of our method and the potential of image-based tracking of fiber-optical instruments for pose estimation and multimodal registration. It can be fully implemented in software and therefore easily integrated into a routine clinical environment.
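To make the Plücker-based axis initialization above concrete, here is a minimal Python sketch of how a cylinder axis can be represented in Plücker coordinates and compared against a reference line. The residual form and all names are illustrative assumptions, not the authors' exact formulation.

import numpy as np

def plucker_from_points(p, q):
    """Plücker coordinates (d, m) of the 3D line through points p and q."""
    d = (q - p) / np.linalg.norm(q - p)   # unit direction
    m = np.cross(p, d)                    # moment of the line about the origin
    return d, m

def axis_alignment_residual(line_a, line_b):
    """Zero exactly when the two lines coincide (unit directions,
    orientation made consistent); usable as an alignment cost."""
    (da, ma), (db, mb) = line_a, line_b
    if np.dot(da, db) < 0:                # resolve the sign ambiguity
        db, mb = -db, -mb
    return np.linalg.norm(da - db) + np.linalg.norm(ma - mb)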
Color, shape (size and volume), and temperature are important clinical features for chronic wound monitoring that can impact diagnosis and treatment. Noninvasive 3D measurements are more accurate than 2D ones, but expensive equipment and setup complexity prevent their use in hospitals. Therefore, affordable and lightweight devices with a straightforward image acquisition protocol are fundamental to provide a functional and useful evaluation of the wound. In this work, an automated methodology to generate color and thermal 3D models is presented, using portable devices: a commercial mobile device with a connected portable thermal camera. The 3D model of the wound surface is estimated from a series of color images using structure-from-motion (SfM), while thermal information is overlaid on the ulcer’s relief for multimodal wound evaluation. The proposed methodology contributes a proof of concept for multimodal wound monitoring in the hospital environment with a simple hand-held shooting protocol. The system was used efficiently with five patients on wounds of various sizes and types.
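A minimal sketch of the thermal overlay step, assuming the thermal camera intrinsics K and pose (R, t) relative to the SfM model are known from calibration (which the sketch does not cover): each mesh vertex is projected with a pinhole model and assigned the temperature sampled at that pixel. All names are assumptions.

import numpy as np

def sample_temperatures(vertices, thermal_img, K, R, t):
    cam = vertices @ R.T + t              # world -> thermal camera frame
    uv = cam @ K.T                        # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, thermal_img.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, thermal_img.shape[0] - 1)
    return thermal_img[v, u]              # per-vertex temperature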
A large corpus of ceramic sherds dating from the High Middle Ages was excavated in Saran (France). The sherds have an engraved frieze made by the potter with a carved wooden wheel. These relief patterns can be used to date the sherds in order to study the diffusion of ceramic production. The aim of the ARCADIA project was to develop an automatic classification of this archaeological heritage. The sherds were scanned using a three-dimensional (3-D) laser scanner. After projecting the 3-D point cloud onto a depth map, the local variance highlighted the shallow relief patterns. The saliency region focused on the motif was extracted by density-based spatial clustering of FAST points. Adaptive thresholding was then applied to the depth to obtain a binary pattern close to manual sampling. The five most representative motif types were classified by training an SVM model with a pyramid histogram of visual words descriptor. Compared with other state-of-the-art methods, the proposed approach succeeded in classifying up to 84% of the binary patterns on a dataset of 377 scanned sherds. The automatic method is extremely time-saving compared to manual stamping.
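The saliency step described above can be sketched as follows in Python: FAST corners detected on the 8-bit depth map are clustered with DBSCAN, and the largest cluster's bounding box is taken as the motif region. Parameter values and names are illustrative, not those of the ARCADIA pipeline.

import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def motif_region(depth_map_8u, eps=15, min_samples=10):
    kps = cv2.FastFeatureDetector_create().detect(depth_map_8u)
    pts = np.array([kp.pt for kp in kps])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(pts).labels_
    # Keep the largest cluster (label -1 is DBSCAN noise).
    best = max(set(labels) - {-1}, key=lambda l: np.sum(labels == l))
    cluster = pts[labels == best]
    x0, y0 = cluster.min(axis=0).astype(int)
    x1, y1 = cluster.max(axis=0).astype(int)
    return x0, y0, x1, y1                 # bounding box of the motif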
Computer-aided planning for orthodontic treatment requires knowing the occlusion of separately scanned dental casts. A visually guided registration is conducted, starting by extracting corresponding features in both photographs and 3D scans. To achieve this, the dental neck and the occlusion surface are first extracted by image segmentation and 3D curvature analysis. Then, an iterative registration process is conducted, during which feature positions are refined, guided by the previously found anatomic edges. The occlusal edge detection in the image is improved by an original algorithm that follows edges poorly detected by the Canny detector using a priori knowledge of tooth shapes. Finally, the influence of feature extraction and position optimization is evaluated in terms of the quality of the induced registration. The best combination of feature detection and optimization leads to an average positioning error of 1.10 mm and 2.03°.
Carbon blacks are widely used as fillers in industrial products to modify their mechanical, electrical, and optical properties. For rubber products, they are subject to a standard classification system based on their surface area, particle size, and structure. The electron microscope remains the most accurate means of measuring these characteristics, provided that the boundaries of aggregates and particles are correctly detected. In this paper, we propose an image processing chain enabling subsequent characterization for automatic grading of carbon black aggregates. Based on a literature review, 31 features are extracted from TEM images to obtain reliable information on the particle size, shape, and microstructure of the carbon black aggregates. They are then used to train several classifiers, whose results for automatic grading are compared. To obtain better results, we suggest using cluster-level identification of aggregates instead of characterizing aggregates individually.
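A minimal sketch of the grading stage, assuming the 31 per-aggregate features are already extracted into a matrix X with grade labels y; the classifier set and parameters are illustrative, not the exact ones compared in the paper.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def compare_classifiers(X, y):
    # Cross-validated accuracy for a few standard classifiers.
    models = {
        "svm": SVC(kernel="rbf", C=10.0),
        "knn": KNeighborsClassifier(n_neighbors=5),
        "forest": RandomForestClassifier(n_estimators=200),
    }
    return {name: cross_val_score(m, X, y, cv=5).mean()
            for name, m in models.items()}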
Chronic wounds are a major problem worldwide that mainly affects the geriatric population and patients with limited mobility. In tropical countries, cutaneous leishmaniasis (CL) is also a cause of chronic wounds, being endemic in 75% of Peru. The monitoring of these wounds therefore represents a big challenge due to the remote location of the patients. This paper aims to develop a low-cost, user-friendly technique to obtain a 3D reconstruction of chronic wounds oriented to clinical monitoring and assessment. The video is taken using a commercial hand-held video camera without the need for a rig. The algorithm has been specially designed for skin wounds, whose particular texture and undefined edges defeat the techniques used in regular SfM applications. In addition, the technique has been developed using open source libraries. The estimated 3D point cloud allows the computation of metrics such as volume, depth, and surface area, which have recently been used by CL specialists with good results in clinical assessment. Initial results on cork phantoms and CL wounds show an average distance error of less than 1 mm when compared against models obtained with an industrial 3D laser scanner.
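One plausible way to compute such metrics from the estimated point cloud is sketched below: a reference plane is fitted by least squares, the wound points are triangulated in that plane, and area, volume, and depth follow from the triangles. This is an assumption of ours, not necessarily the authors' exact computation.

import numpy as np
from scipy.spatial import Delaunay

def wound_metrics(points):
    # Least-squares reference plane through the points (via SVD).
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal, basis = vt[-1], vt[:2]
    depth = (points - centroid) @ normal       # signed height over the plane
    # Triangulate in the plane, then accumulate triangle areas and
    # approximate the volume with prisms over each triangle.
    uv = (points - centroid) @ basis.T
    tri = Delaunay(uv)
    area = volume = 0.0
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        tri_area = 0.5 * np.linalg.norm(np.cross(b - a, c - a))
        area += tri_area
        volume += tri_area * abs(depth[simplex].mean())
    return area, volume, depth.max() - depth.min()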
A significant recent breakthrough in medical imaging is the development of a new non-invasive modality based on multispectral and hyperspectral imaging that can be easily integrated in the operating room. This technology consists of collecting series of images at wavelength intervals of only a few nanometers, in which single pixels carry spectral information relevant to the scene under observation. Before becoming of practical interest for the clinician, such a system should meet important requirements. First, it should enable true reflectance measurements and high-quality images, providing valuable physical data after spatial and spectral calibration. Second, quick band-pass scanning and a smart interface are needed for intra-operative use. Finally, experimentation is required to develop expert knowledge for hyperspectral image interpretation and result display on RGB screens, to assist the surgeon with tissue detection and diagnostic capabilities during an intervention. This paper focuses mainly on the first two specifications of this methodology, applied to a liquid crystal tunable filter (LCTF) based visible and near-infrared spectral imaging system. The system consists of an illumination unit and a spectral imager that includes a monochrome camera, two LCTFs, and a fixed focal lens. It also involves a computer running the data acquisition software. The system can capture hyperspectral images in the spectral range of 400–1100 nm. Results of preclinical experiments indicated that anatomical tissues can be distinguished, especially in near-infrared bands. This suggests a great capability of hyperspectral imaging to bring efficient assistance to surgeons.
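The reflectance measurement requirement above typically reduces to a per-band flat-field correction with dark and white reference acquisitions, as in the following minimal sketch (array names are assumptions):

import numpy as np

def to_reflectance(raw_cube, dark_cube, white_cube, eps=1e-6):
    """raw/dark/white cubes share shape (height, width, n_bands);
    returns relative reflectance in [0, 1] per pixel and band."""
    reflectance = (raw_cube - dark_cube) / (white_cube - dark_cube + eps)
    return np.clip(reflectance, 0.0, 1.0)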
We propose a new system that makes it possible to monitor the evolution of scars after the excision of a tumorous dermatosis. The hardware part of this system is composed of an innovative new optical probe with which two types of images can be acquired simultaneously: an anatomic image acquired under white light and a functional one based on autofluorescence from the protoporphyrin within the cancer cells. For technical reasons related to the maximum size of the area covered by the probe, acquired images are too small to cover the whole scar. That is why a sequence of overlapping images is taken in order to cover the required area.
The main goal of this paper is to describe the creation of two panoramic images (anatomic and functional). Since fluorescence images do not carry enough salient information for matching, stitching algorithms are applied to each pair of successive white-light images to produce an anatomic panorama of the entire scar. The same transformations obtained from this step are used to register and stitch the functional images. Several experiments have been carried out using different stitching algorithms (SIFT, ASIFT, and SURF), with various transformation parameters (angles of rotation, projection, scaling, etc.) and different types of skin images. We present the results of these experiments and identify the best solution.
Thus, the clinician has two superimposed panoramic images usable for diagnostic support. A collaborative layer is added to the system to allow sharing panoramas among several practitioners in different places.
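The core of the dual-panorama construction can be sketched as follows: the homography estimated from SIFT matches between two successive white-light images is reused to warp the corresponding fluorescence image, so both panoramas remain registered. This is a minimal two-image sketch; blending and multi-image chaining are omitted, and names are ours.

import cv2
import numpy as np

def stitch_pair(white_a, white_b, fluo_a, fluo_b):
    """white_a/white_b: 8-bit grayscale white-light images;
    fluo_a/fluo_b: the fluorescence images acquired with them."""
    sift = cv2.SIFT_create()
    ka, da = sift.detectAndCompute(white_a, None)
    kb, db = sift.detectAndCompute(white_b, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(db, da)
    src = np.float32([kb[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([ka[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    size = (white_a.shape[1] + white_b.shape[1], white_a.shape[0])
    # The same transform maps both modalities into the anatomic frame.
    pano_white = cv2.warpPerspective(white_b, H, size)
    pano_fluo = cv2.warpPerspective(fluo_b, H, size)
    pano_white[:, :white_a.shape[1]] = white_a
    pano_fluo[:, :fluo_a.shape[1]] = fluo_a
    return pano_white, pano_fluo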
In orthodontics, a common practice used to diagnose and plan the treatment is the dental cast. After digitization by a CT scan or a laser scanner, the obtained 3D surface models can feed orthodontic numerical tools for computer-aided diagnosis and treatment planning. One of the critical pre-processing steps is the 3D registration of the dental arches to obtain the occlusion of these numerical models. For this task, we propose a vision-based method to automatically compute the registration from photos of the patient's mouth. From a set of matched singular points between two photos and the dental 3D models, the rigid transformation that brings the mandible into contact with the maxilla can be computed by minimizing the reprojection errors. In a previous study, we established the feasibility of this visual registration approach with a manual selection of singular points. This paper addresses the issue of automatic point detection. Based on a priori knowledge, histogram thresholding and edge detection are used to extract specific points in the 2D images. Concurrently, curvature information is used to detect the corresponding 3D points. To improve the quality of the final registration, we also introduce a combined optimization of the projection matrix with the 2D/3D point positions. These new developments are evaluated on real data by considering the reprojection errors and the deviation angles after registration with respect to the manual reference occlusion produced by a specialist.
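A minimal sketch of the reprojection-error minimization, assuming matched 3D points on the mandible and their 2D detections are available together with the 3x4 camera projection matrix P; the parameterization (rotation vector plus translation) is a common choice, not necessarily the paper's exact one, and all names are assumptions.

import numpy as np
import cv2
from scipy.optimize import least_squares

def refine_occlusion(points3d, points2d, P):
    def residuals(x):
        R = cv2.Rodrigues(x[:3])[0]                 # rotation vector -> matrix
        moved = points3d @ R.T + x[3:6]             # rigid motion of the mandible
        h = np.hstack([moved, np.ones((len(moved), 1))]) @ P.T
        proj = h[:, :2] / h[:, 2:3]
        return (proj - points2d).ravel()            # pixel reprojection error
    x = least_squares(residuals, np.zeros(6)).x
    return cv2.Rodrigues(x[:3])[0], x[3:6]          # refined R, t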
Accurate wound assessment is a critical task for patient care and health cost reduction at the hospital, and even more so in the context of clinical studies in the laboratory. This task, entrusted entirely to nurses, still relies on manual and tedious practices. Wound shape is measured with rulers, tracing paper, or rarely with alginate castings and serum injection. The proportion of wound tissues is estimated by a qualitative visual assessment based on the red-yellow-black code. Following our previous work on complete 3D wound assessment using a simple freehand digital camera, we explore here the adaptation of this tool to wounds artificially created for experimentation purposes. It turns out that tissue uniformity and flatness allow a simplified approach but require multispectral imaging for enhanced wound delineation. We demonstrate that, in this context, a simple active contour method can successfully replace more complex tools such as SVM supervised classification, as no training step is required and one shot is enough to deal with perspective projection errors. Moreover, involving the full spectral response of the tissue, and not only the RGB components, provides higher discrimination for separating healed epithelial tissue from granulation tissue. This research is part of a comparative preclinical study on healing wounds, which aims to compare the efficacy of specific medical honeys with classical pharmaceuticals for wound care. Results revealed that medical honey competes with more expensive pharmaceuticals.
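As a hedged illustration of the simplified approach, the sketch below runs a morphological Chan-Vese active contour (one active contour variant available in scikit-image) on a single discriminant channel built from the multispectral cube; the band combination and all names are assumptions, not the paper's exact method.

import numpy as np
from skimage.segmentation import morphological_chan_vese

def delineate_wound(spectral_cube, band_weights):
    # Collapse the cube (h, w, n_bands) to one discriminant channel.
    channel = np.tensordot(spectral_cube, band_weights, axes=([2], [0]))
    channel = (channel - channel.min()) / np.ptp(channel)
    # Evolve the contour from the default checkerboard initialization.
    mask = morphological_chan_vese(channel, 100)
    return mask.astype(bool)              # wound delineation mask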
In many machine vision applications, it is important that the recorded colors of a real-world scene remain constant, even under changes of illuminant and camera. Contrary to the human visual system, a machine vision system exhibits inadequate adaptability to varying lighting conditions. The automatic white balance control available in commercial cameras is not sufficient to provide reproducible color classification. We address this problem of color constancy on a large image database acquired with varying digital cameras and lighting conditions. A device-independent color representation can be obtained by applying a chromatic adaptation transform, computed from a calibrated color checker pattern included in the field of view. Instead of using the standard Macbeth color checker, we suggest selecting judicious colors to design a customized pattern from contextual information. A comparative study demonstrates that this approach ensures stronger constancy of the colors of interest before vision control, thus enabling a wide variety of applications.
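A minimal sketch of the correction step, assuming the patch colors of the customized checker have been measured in the image: a 3x3 matrix is fitted by least squares between measured and reference patch values and applied to every pixel. This linear fit is one common form of chromatic adaptation transform, not necessarily the exact transform used here.

import numpy as np

def fit_color_correction(measured_rgb, reference_rgb):
    """measured/reference are (n_patches, 3) arrays; returns a 3x3 matrix M
    such that measured @ M approximates the reference colors."""
    M, *_ = np.linalg.lstsq(measured_rgb, reference_rgb, rcond=None)
    return M

def correct_image(img, M):
    h, w, _ = img.shape
    out = img.reshape(-1, 3) @ M          # apply the transform per pixel
    return np.clip(out, 0, 255).reshape(h, w, 3)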
In telemedicine environments, a standardized and reproducible assessment of wounds, using a simple freehand digital camera, is an essential requirement. However, to ensure robust tissue classification, particular attention must be paid to the complete design of the color processing chain. We introduce the key steps, including color correction, merging of expert labelings, and segmentation-driven classification based on support vector machines. The tool thus developed ensures stability under lighting condition, viewpoint, and camera changes, achieving accurate and robust classification of skin tissues. Clinical tests demonstrate that such an advanced tool, which forms part of a complete 3-D and color wound assessment system, significantly improves the monitoring of the healing process. It achieves an overlap score of 79.3% against 69.1% for a single expert, after mapping onto the medical reference developed from image labeling by a college of experts.
KEYWORDS: Image segmentation, Tissues, RGB color model, 3D modeling, Databases, Image classification, 3D metrology, Light sources and illumination, Data modeling, Natural surfaces
This work is part of the ESCALE project dedicated to the design of a complete 3D and color wound assessment tool using a simple hand-held digital camera. The first part was concerned with the computation of a 3D model for wound measurements using uncalibrated vision techniques. This article presents the second part, which deals with color classification of wound tissues, a prior step before combining shape and color analysis in a single tool for real tissue surface measurements. We have adopted an original approach based on unsupervised segmentation prior to classification, to improve the robustness of the labelling stage. A database of different tissue types is first built; a simple but efficient color correction method is applied to reduce color shifts due to uncontrolled lighting conditions. A ground truth is provided by the fusion of several clinicians' manual labellings. Then, color and texture tissue descriptors are extracted from the tissue regions of the image database for the learning stage of an SVM region classifier, with the aid of this ground truth. The output of this classifier provides a prediction model, later used to label the segmented regions of the database. Finally, we apply unsupervised color region segmentation on wound images and classify the tissue regions. Compared to the ground truth, the result of automatic segmentation-driven classification provides an overlap score of tissue regions (66% to 88%) higher than that obtained by clinicians.
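A minimal sketch of the segmentation-driven classification and its evaluation, assuming region descriptors are precomputed: an SVM labels the segmented regions, and agreement with the ground truth is scored with an intersection-over-union overlap measure. Names and parameters are illustrative, not the paper's exact ones.

import numpy as np
from sklearn.svm import SVC

def classify_regions(train_desc, train_labels, region_desc):
    # Train on labeled region descriptors, then label new regions.
    clf = SVC(kernel="rbf", C=10.0).fit(train_desc, train_labels)
    return clf.predict(region_desc)        # one tissue label per region

def overlap_score(pred_mask, truth_mask):
    # Intersection over union between predicted and reference masks.
    inter = np.logical_and(pred_mask, truth_mask).sum()
    union = np.logical_or(pred_mask, truth_mask).sum()
    return inter / union                   # 1.0 means perfect agreement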