A visual stimulation-induced increase in the metabolic activity of retinal neurons leads to transient vasodilation of retinal blood vessels and an increase in retinal blood flow, a response often referred to as functional hyperemia. Neurodegenerative retinal diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy are known to cause progressive damage to retinal morphology, blood perfusion, and retinal blood flow, and eventually lead to blindness. In this study, we utilize a combined OCT+ERG system to investigate functional hyperemia in the human retina.
A novel method is proposed for correcting aberrations and diffraction-induced artifacts in optical coherence tomography (OCT) images. The method leverages light backpropagation models combined with region-based despeckling and sharpness optimization algorithms to improve overall OCT image quality across a variety of sample types. The algorithm was applied to data acquired using a Line-Field OCT (LF-OCT) system with high numerical aperture (NA) and short depth-of-focus (DOF). Significant improvements were achieved in images acquired at different depths within various sample types. Such improvement holds the promise of providing ultra-high resolution volumetric OCT data without the need for depth scanning.
The optical design of a second-generation Powell Lens-based Line-Field OCT system is presented. The new design offers improved FOV, DOF, and sensitivity, allowing for contactless, volumetric in-vivo imaging of the human cornea. Images acquired from healthy subjects reveal the cellular structure of the corneal epithelial and endothelial layers, as well as the sub-basal and stromal nerves. The high axial resolution allows for both visualization and morphometry of thin corneal layers such as the endothelium, Descemet's membrane and the pre-Descemet's (Dua) layer. Visualization of endothelial nuclei allows for fast and easy counting of endothelial cells.
This study presents a novel method for correcting aberrations and diffraction-induced artifacts in optical coherence tomography (OCT) images. The method takes advantage of light backpropagation models in combination with non-stationary despeckling and sharpness optimization algorithms to improve the overall quality of OCT images. Application of the algorithm to eye data acquired using a Powell Lens-based Line-Field OCT (PL-LF-OCT) system with a high numerical aperture (NA) and short depth-of-focus (DOF) resulted in significant enhancements in images captured at different depths. This promising improvement signifies the potential for providing ultra-high resolution volumetric OCT data without the need for depth scanning.
Transient vasodilation of retinal blood vessels induced by visually-evoked stimulation of retinal neurons is referred to as neurovascular coupling. Both systemic diseases such as diabetes and high blood pressure, as well as potentially blinding retinal diseases such as glaucoma, have been shown to damage the elasticity of the blood vessel walls. In this study, a research-grade, high-resolution OCT system is combined with a commercial electroretinography (ERG) system to investigate flicker-stimulus-induced transient vasodilation in retinal blood vessels around the optic nerve head and to study neurovascular coupling in the human retina.
The accurate and timely estimation of global chlorophyll-a (Chla) concentration from large volumes of remote sensing data is crucial for supporting various applications. The Moderate Resolution Imaging Spectroradiometer (MODIS) is one of the most widely used Earth observation data sources, offering global coverage, high spectral resolution, and a short revisit period, so fast and accurate estimation of global Chla concentration from MODIS imagery is significant. Nevertheless, estimating Chla concentration from MODIS using traditional machine learning approaches is challenging because of their limited capacity to model the complex relationship between MODIS spatial-spectral observations and Chla concentration, and because of their low computational efficiency when processing large MODIS datasets in a timely manner. We therefore explore the potential of deep convolutional neural networks (CNNs) for Chla concentration estimation from MODIS imagery. The Ocean Color Climate Change Initiative (OC-CCI) Chla concentration image is used as ground truth because it is a well-recognized Chla concentration product produced by assimilating different satellite data through complex data processing steps. A total of 12 monthly OC-CCI global Chla concentration maps and the associated MODIS images are used to investigate the CNN approach using cross-validation. A classical machine learning approach, support vector regression (SVR), is used for comparison with the proposed CNN approach. Compared with SVR, the CNN performs better, with a mean log root-mean-square error of 0.129 and an R2 of 0.901, indicating that, using the MODIS images alone, the CNN approach can achieve results close to the OC-CCI Chla concentration images. These results demonstrate that CNNs may provide reliable, stable, and timely Chla concentration images, and as such constitute a useful technique for operational Chla concentration estimation from large MODIS datasets.
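For illustration, a minimal sketch of how a small CNN could regress per-pixel log-scale Chla concentration from multi-band MODIS reflectance patches is given below, assuming PyTorch; the layer sizes, band count, and patch size are illustrative assumptions rather than the architecture used in the study.

```python
# Hypothetical sketch: a small CNN regressing per-pixel log10(Chla) from
# multi-band MODIS reflectance patches. Architecture details are assumptions.
import torch
import torch.nn as nn

class ChlaCNN(nn.Module):
    def __init__(self, n_bands=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=1),  # per-pixel log10(Chla) estimate
        )

    def forward(self, x):  # x: (batch, n_bands, H, W) reflectances
        return self.net(x)

model = ChlaCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # RMSE in log space is the error metric reported above

# One illustrative training step on random stand-in data.
x = torch.rand(8, 5, 64, 64)   # MODIS reflectance patches (stand-in)
y = torch.rand(8, 1, 64, 64)   # matching OC-CCI log10(Chla) patches (stand-in)
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```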
The standard practice employed by dermatologists for examining skin lesions is dermoscopy, in which an epiluminescence microscope (ELM) is used to examine skin chrominance and micro-structural characteristics for anomalies. Conventional ELM instruments are being replaced by digital ELM instruments that enable dermatologists and other health care practitioners to digitally capture, archive, and analyze skin lesions using computer-aided diagnosis (CAD) software. One of the limiting factors of digital ELM is the fundamental trade-off between spatial resolution and field-of-view (FOV): a larger FOV (needed to examine larger skin lesions in their entirety) can be achieved by reducing magnification at the cost of spatial resolution (leading to a loss of fine details that can be indicative of malignancy and disease). Here, we introduce deep computational optics (DCO) for resolution-enhanced digital ELM, to improve the balance between spatial resolution and FOV. More specifically, the multitude of parameters of a deep computational model for numerically magnifying digital ELM images are learned from a wealth of low-resolution and high-resolution digital ELM image pairs. The proposed DCO approach was experimentally validated, demonstrating a two-fold improvement in the spatial resolution of the resolution-enhanced digital ELM while maintaining FOV.
Brightfield microscopy is a standard method for the identification and enumeration of different micro-organisms, particularly for analyzing different types of algae and planktonic organisms in water samples. Typically, brightfield microscopy is performed in a broadband visible spectrum configuration; however, important distinguishing features of various micro-organisms are much better captured using a narrow-band multispectral configuration. One challenge with leveraging multispectral microscopy, particularly in low-cost field-portable instrument setups, is the presence of significant chromatic aberrations. Therefore, we introduce a multispectral Bayesian-based computational microscopy method for enhancing image quality by jointly correcting for chromatic aberrations, illumination inhomogeneities, and noise across multiple spectral wavelengths within a probabilistic framework. To test the efficacy of this method, calibration parameters associated with a field-portable multispectral microscopy instrument are measured by characterizing the point spread functions at spectral wavelengths ranging from 465 nm to 655 nm with a pinhole target. We demonstrate the effective optical resolution improvements of the microscopy instrument augmented with the proposed method using the 1951 USAF resolution test chart. Finally, we evaluate the qualitative performance of the instrument by imaging Anabaena flos-aquae, a toxin-producing cyanobacterium, as well as Ankistrodesmus falcatus, a type of green algae. The efficacy of the proposed framework shows the potential of an in-situ instrument for observing biological organisms at multiple narrow-band wavelengths, providing both additional spectral information and the ability for continuous detection and monitoring of micro-organisms.
The corneal sub-basal nerve plexus (SNP) is a network of thin, unmyelinated nerve fibers located between the basal epithelium and Bowman's membrane. Both corneal and systemic diseases, such as keratoconus and diabetes, can alter the nerve fiber density, thickness, and tortuosity. Recent developments in cellular-resolution OCT technology have allowed for in-vivo visualization and mapping of the corneal SNP. We have developed a fully automated algorithm for segmentation of corneal nerves. The performance of the algorithm was tested on a series of en-face UHR-OCT images acquired in-vivo from healthy human subjects. The proposed algorithm traces most of the sub-basal corneal nerves correctly. The achieved processing time and tracing quality are the major advantages of the proposed method. Results show the potential application of the proposed method for nerve analysis and morphometric quantification of human sub-basal corneal nerves, which is an important tool in the assessment of corneal-related diseases.
Lensfree on-chip microscopy, which harnesses holography principles to capture interferometric light-field encodings without the need for lenses, is an emerging microscopy modality with widespread interest given its large field-of-view (FOV) compared to lens-based microscopy systems. In particular, there is growing interest in the development of high-quality lensfree on-chip color microscopy. In this study, we propose a multi-laser spectral light-field fusion microscopy approach using deep computational optics for achieving lensfree on-chip color microscopy. We demonstrate that leveraging deep computational optics can enable imaging resolution beyond the diffraction limit without the use of any complex hardware-based super-resolution techniques, such as aperture scanning. The capabilities of the microscope are examined for whole-slide pathology. The superior imaging resolution of the instrument is demonstrated by imaging a series of biological specimens, showcasing the true color imaging capability and the large FOV of the instrument.
Obstructive sleep apnea (OSA) affects 20% of the adult population and is associated with cardiovascular and cognitive morbidities. However, it is estimated that up to 80% of treatable OSA cases remain undiagnosed. Current methods for diagnosing OSA are expensive, labor-intensive, and involve uncomfortable wearable sensors. This study explored the feasibility of non-contact biophotonic assessment of OSA cardiovascular biomarkers via photoplethysmography imaging (PPGI). In particular, PPGI was used to monitor the hemodynamic response to obstructive respiratory events. Sleep apnea onset was simulated using Müller's maneuver, in which breathing was obstructed by a respiratory clamp. A custom PPGI system, coded hemodynamic imaging (CHI), was positioned 1 m above the bed and illuminated the participant's head with 850 nm light, providing non-intrusive illumination for night-time monitoring. A video was recorded before, during, and following an apnea event at 60 fps, yielding 17 ms temporal resolution. Per-pixel absorbance signals were extracted using a Beer-Lambert derived light transport model and subsequently denoised. The extracted hemodynamic signal exhibited dynamic temporal modulation during and following the apnea event. In particular, the pulse wave amplitude (PWA) decreased during obstructed breathing, indicating vasoconstriction. Upon successful inhalation, the PWA gradually increased toward homeostasis following a temporal phase delay. This temporal vascular tone modulation provides insight into autonomic and vascular response, and may be used to assess sleep apnea using non-contact biophotonic imaging.
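As a rough illustration of the per-pixel absorbance extraction step, the following is a minimal sketch assuming a simple Beer-Lambert conversion against a per-pixel reference intensity and a generic smoothing filter; it is not the exact light transport model or denoising procedure used in the study.

```python
# Hedged sketch: per-pixel absorbance A = -log10(I / I0) from an 850 nm
# reflectance video, with a generic Savitzky-Golay temporal smoother as a
# stand-in for the denoising stage mentioned in the abstract.
import numpy as np
from scipy.signal import savgol_filter

def absorbance_signals(frames):
    """frames: (T, H, W) array of reflectance video frames."""
    frames = frames.astype(np.float64) + 1e-9        # avoid log(0)
    i0 = frames.mean(axis=0, keepdims=True)          # per-pixel reference intensity
    absorbance = -np.log10(frames / i0)              # Beer-Lambert absorbance
    return savgol_filter(absorbance, window_length=9, polyorder=3, axis=0)

signals = absorbance_signals(np.random.rand(300, 64, 64))  # e.g., 5 s at 60 fps
```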
Observing the circular dichroism (CD) caused by organic molecules in biological fluids can provide powerful indicators of patient health and diagnostic clues for treatment. Current methods for this kind of analysis involve tabletop devices that weigh tens of kilograms and cost on the order of tens of thousands of dollars, making them prohibitive in point-of-care diagnostic applications. In an effort to reduce the size, cost, and complexity of CD estimation systems for point-of-care diagnostics, we propose a novel method for CD estimation that leverages a vortex half-wave retarder placed between two linear polarizers, together with a two-dimensional photodetector array, to provide an overall reduction in system complexity. This enables the simultaneous measurement of polarization variations across multiple polarization states after they interact with a biological sample, without the need for mechanical actuation. We further discuss design considerations of this methodology in the context of practical applications to point-of-care diagnostics.
Dysphagia (swallowing difficulty) increases the risk of malnutrition and affects at least 15% of American older adults and 590 million people worldwide. Malnutrition is associated with increased mortality, increased morbidity, and decreased quality of life, and accounts for over $15 billion (USD) in health-care-related costs each year. While modified texture diets (e.g., puréed food) reduce the risk of choking, quality assurance is necessary for monitoring nutrient density to ensure food meets nutritional requirements. However, current methods are subjective and time consuming. The purpose of this study was to investigate the feasibility of optical techniques for the objective assessment of food nutrient density in puréed samples. Motivated by a theoretical optical dilution model, broadband spectral images of commercially prepared purée samples were acquired. Specifically, 13 flavors at five dilutions relative to the initial concentration, each with six replicates, were acquired for a total of 390 samples. Purée samples were prepared and loaded onto a white reflectance back plane to maximize the photon traversal path length through the purée. The sample was illuminated with a tungsten-halogen illumination source fitted with a front glass fabric diffuser for spatially homogeneous illumination. This broadband illuminant was chosen to observe as many food-light spectral absorbance interactions as possible. Flavor-stratified correlation analysis was performed on this food image dataset to investigate the relationship between nutritional information and color space transformations. A special case, blueberry, is presented, in which the effect of anthocyanins was quantitatively observed through normalized spectral trends in response to pH perturbations across dilutions.
Prostate cancer is a leading cause of cancer-related death among men. Multiparametric magnetic resonance imaging has become an essential part of the diagnostic evaluation of prostate cancer. The internationally accepted interpretation scheme (PI-RADS v2) has different algorithms for scoring the transition zone (TZ) and peripheral zone (PZ) of the prostate, as tumors can appear different in these zones. Computer-aided detection tools have shown different performances in the TZ and PZ, and separating these zones for training and detection is essential. TZ-PZ segmentation, which requires segmentation of the whole prostate gland and the TZ, is typically done manually. We present a fully automatic algorithm for delineation of the prostate gland and TZ in diffusion-weighted imaging (DWI) via a stack of fully convolutional neural networks. The proposed algorithm first detects the slices that contain a portion of the prostate gland within the three-dimensional DWI volume and then segments the prostate gland and TZ automatically. The segmentation stage of the algorithm was applied to DWI images of 104 patients, and median Dice similarity coefficients of 0.93 and 0.88 were achieved for the prostate gland and TZ, respectively. The detection of image slices with and without prostate gland had an average accuracy of 0.97.
An ideal laser is a useful tool for the analysis of biological systems. In particular, the polarization properties of laser light allow the concentration of important organic molecules in the human body, such as proteins, amino acids, lipids, and carbohydrates, to be estimated. However, lasers do not always behave as intended, and effects such as mode hopping and thermal drift can cause time-varying intensity fluctuations. These effects can originate from the operating environment, for example when an unstable current source is used or when the ambient temperature is not stable over time. Such intensity fluctuations can introduce bias and error into typical organic molecule concentration estimation techniques. In a low-resource setting where cost must be limited and where environmental factors, like unregulated power supplies and temperature, cannot be controlled, the hardware required to correct for these intensity fluctuations can be prohibitive. We propose a method for computational laser intensity stabilisation that uses Bayesian state estimation to correct for the time-varying intensity fluctuations caused by electrical and thermal instabilities without the use of additional hardware. This method allows for consistent intensities across all polarization measurements, enabling accurate estimates of organic molecule concentrations.
Photoplethysmographic imaging (PPGI) systems are relatively new non-contact biophotonic diffuse reflectance systems able to assess arterial pulsations through transient changes in light-tissue interaction. Many PPGI studies have focused on extracting heart rate from the face or hand. Though PPGI systems can be used for widefield imaging of any anatomical area, whole-body investigations are lacking. Here, using a novel PPGI system, coded hemodynamic imaging (CHI), we explored and analyzed the pulsatility at major arterial locations across the whole body, including the neck (carotid artery), arm/wrist (brachial, radial and ulnar arteries), and leg/feet (popliteal and tibial arteries). CHI was positioned 1.5 m from the participant, and diffuse reflectance from a broadband tungsten-halogen illuminant was filtered using an 850-1000 nm bandpass filter for deep tissue penetration. Images were acquired over a highly varying 24-participant sample (11/13 female/male, age 28.7±12.4 years, BMI 25.5±5.2 kg/m2), and a preliminary case study was performed. B-mode ultrasound images were acquired to validate observations through planar arterial characteristics.
A number of factors can degrade the resolution and contrast of OCT images, such as: (1) changes of the OCT point-spread function (PSF) resulting from wavelength-dependent scattering and absorption of light along the imaging depth, (2) speckle noise, and (3) motion artifacts. We propose a new Super Resolution OCT (SR OCT) imaging framework that takes advantage of a Stochastically Fully Connected Conditional Random Field (SF-CRF) model to generate a Super Resolved OCT (SR OCT) image of higher quality from a set of Low-Resolution OCT (LR OCT) images. The proposed SF-CRF SR OCT imaging is able to simultaneously compensate for all of the image-degrading factors mentioned above within a unified computational framework. The proposed SF-CRF SR OCT imaging framework was tested on a set of simulated LR human retinal OCT images generated from a high-resolution, high-contrast retinal image, and on a set of in-vivo, high-resolution, high-contrast rat retinal OCT images. The reconstructed SR OCT images show considerably higher spatial resolution, less speckle noise, and higher contrast compared to other tested methods. Visual assessment of the results demonstrated the usefulness of the proposed approach in better preserving fine details and structures of the imaged sample, retaining biological tissue boundaries while reducing speckle noise. Quantitative evaluation using both the Contrast-to-Noise Ratio (CNR) and the Edge Preservation (EP) parameter also showed superior performance of the proposed SF-CRF SR OCT approach compared to other image processing approaches.
Cardiovascular disease is a major contributor to morbidity in the US. Taking preventive action can greatly reduce or eliminate its impact on quality of life. However, many issues go undetected until the patient presents with a physical symptom. Non-intrusive continuous cardiovascular monitoring systems may make earlier detection and monitoring of abnormalities feasible. One candidate system is photoplethysmographic imaging (PPGI), which is able to assess arterial blood pulse characteristics in one or multiple individuals remotely, from a distance. In this case study, we show that PPGI can be used to detect cardiac arrhythmia that would otherwise require contact-based monitoring techniques. Using a novel system, coded hemodynamic imaging (CHI), strong temporal blood pulse waveform signals were extracted at a distance of 1.5 m from the participant using 850-1000 nm diffuse illumination for deep tissue penetration. Data were recorded at a sampling rate of 60 Hz, providing a temporal resolution of 17 ms. The strong fidelity of the signal allowed for both temporal and spectral assessment of abnormal blood pulse waveforms, ultimately to detect the onset of abnormal cardiac events. Data from a participant with arrhythmia were analyzed and compared against normal blood pulse waveform data to validate CHI's ability to assess cardiac arrhythmia. Results indicate that CHI can be used as a non-intrusive continuous cardiac monitoring system.
Photoplethysmographic imaging (PPGI) is a widefield noncontact biophotonic technology able to remotely monitor cardiovascular function over anatomical areas. Although spatial context can provide insight into physiologically relevant sampling locations, existing PPGI systems rely on coarse spatial averaging with no anatomical priors for assessing arterial pulsatility. Here, we developed a continuous probabilistic pulsatility model for importance-weighted blood pulse waveform extraction. Using a data-driven approach, the model was constructed using a 23 participant sample with a large demographic variability (11/12 female/male, age 11 to 60 years, BMI 16.4 to 35.1 kg·m−2). Using time-synchronized ground-truth blood pulse waveforms, spatial correlation priors were computed and projected into a coaligned importance-weighted Cartesian space. A modified Parzen–Rosenblatt kernel density estimation method was used to compute the continuous resolution-agnostic probabilistic pulsatility model. The model identified locations that consistently exhibited pulsatility across the sample. Blood pulse waveform signals extracted with the model exhibited significantly stronger temporal correlation (W=35,p<0.01) and spectral SNR (W=31,p<0.01) compared to uniform spatial averaging. Heart rate estimation was in strong agreement with true heart rate [r2=0.9619, error (μ,σ)=(0.52,1.69) bpm].
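A hedged sketch of the two core steps described above, the correlation-weighted Parzen-Rosenblatt density estimate and the importance-weighted waveform extraction, might look as follows; the Gaussian kernel, bandwidth, and array shapes are illustrative assumptions rather than the published formulation.

```python
# Hedged sketch: (1) a Parzen-Rosenblatt kernel density estimate over coaligned
# pixel coordinates, weighted by each training pixel's correlation prior, and
# (2) importance-weighted blood pulse waveform extraction from a new video.
import numpy as np

def pulsatility_model(coords, corr_weights, grid, bandwidth=0.05):
    """coords: (N, 2) normalized training pixel locations; corr_weights: (N,)
    correlation priors; grid: (M, 2) query locations."""
    d2 = ((grid[:, None, :] - coords[None, :, :]) ** 2).sum(axis=2)
    kernels = np.exp(-d2 / (2 * bandwidth ** 2))              # Gaussian kernels
    density = (kernels * corr_weights[None, :]).sum(axis=1)   # weighted Parzen sum
    return density / density.sum()                            # normalized pulsatility map

def weighted_pulse_waveform(video, weights):
    """video: (T, M) pixel time series; weights: (M,) pulsatility map."""
    return video @ weights                                    # importance-weighted average

# Illustrative usage on random stand-in data.
coords, w = np.random.rand(200, 2), np.random.rand(200)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32)), -1).reshape(-1, 2)
pmap = pulsatility_model(coords, w, grid)
pulse = weighted_pulse_waveform(np.random.rand(600, grid.shape[0]), pmap)
```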
Traditional photoplethysmographic imaging (PPGI) systems use the red, green, and blue (RGB) broadband measurements of a consumer digital camera to remotely estimate a patient's heart rate; however, these broadband RGB signals are often corrupted by ambient noise, making the extraction of the subtle fluctuations indicative of heart rate difficult. The use of narrow-band spectral measurements can therefore significantly improve accuracy. We propose a novel digital spectral demultiplexing (DSD) method to infer narrow-band spectral information from acquired broadband RGB measurements in order to estimate heart rate via the computation of motion-compensated skin erythema fluctuation. Using high-resolution video recordings of human participants, multiple measurement locations are automatically identified on the cheeks of an individual, and motion-compensated broadband reflectance measurements are acquired at each measurement location over time via measurement location tracking. The motion-compensated broadband reflectance measurements are spectrally demultiplexed using a non-linear inverse model based on the spectral sensitivity of the camera's detector. A PPG signal is then computed from the demultiplexed narrow-band spectral information via skin erythema fluctuation analysis, with improved signal-to-noise ratio allowing for reliable remote heart rate measurements. To assess the effectiveness of the proposed system, a set of experiments involving human motion in a front-facing position was performed under ambient lighting conditions. Experimental results indicate that the proposed system achieves robust and accurate heart rate measurements and can provide additional information about the participant beyond the capabilities of traditional PPGI methods.
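The demultiplexing idea can be sketched as an inverse of the camera's spectral sensitivities; note that the actual inverse model described above is non-linear, so the non-negative least-squares formulation below, with a random stand-in sensitivity matrix, is only an illustrative simplification.

```python
# Illustrative sketch: broadband RGB measurements modeled as the camera's
# spectral sensitivities applied to an unknown narrow-band reflectance
# spectrum, recovered here by a non-negative least-squares inverse.
import numpy as np
from scipy.optimize import nnls

n_bands = 10
S = np.random.rand(3, n_bands)        # stand-in camera sensitivities (R, G, B rows)
rgb = np.array([0.42, 0.55, 0.31])    # one motion-compensated RGB measurement

reflectance, _ = nnls(S, rgb)         # inferred narrow-band spectral reflectance
```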
Impact trauma may cause a hematoma, which is the leakage of venous blood into surrounding tissues. Large hematomas can be dangerous as they may inhibit local blood flow. Hematomas are often diagnosed visually, which may be problematic if the hematoma leaks deeper than the visible penetration depth. Furthermore, vascular wound healing is often monitored at home without the aid of a clinician. We therefore investigated the use of near infrared (NIR) reflectance photoplethysmographic imaging (PPGI) to assess vascular damage resulting from a hematoma, and monitor the healing process. In this case study, the participant experienced internal vascular damage in the form of a hematoma. Using a PPGI system with dual-mode temporally coded illumination for ambient-agnostic data acquisition and mounted optical elements, the tissue was illuminated with a spatially uniform irradiance pattern of 850 nm wavelength light for increased tissue penetration and high oxy-to-deoxyhemoglobin absorption ratio. Initial and follow-up PPGI data collection was performed to assess vascular damage and healing. The tissue PPGI sequences were spectrally analyzed, producing spectral maps of the tissue area. Experimental results show that spatial differences in spectral information can be observed around the damaged area. In particular, the damaged site exhibited lower pulsatility than the surrounding healthy tissue. This pulsatility was largely restored in the follow-up data, suggesting that the tissue had undergone vascular healing. These results indicate that hematomas can be assessed and monitored in a non-contact visual manner, and suggest that PPGI can be used for tissue health assessment, with potential extensions to peripheral vascular disease.
Continuous heart rate monitoring can provide important context for quantitative clinical assessment in scenarios such as long-term health monitoring and disability prevention. Photoplethysmographic imaging (PPGI) systems are particularly useful for such monitoring scenarios as contact-based devices pose problems related to comfort and mobility. Each pixel can be regarded as a virtual PPG sensor, thus enabling simultaneous measurements of multiple skin sites. Existing PPGI systems analyze temporal PPGI sensor fluctuations related to hemodynamic pulsations across a region of interest to extract the blood pulse signal. However, due to spatially varying optical properties of the skin, the blood pulse signal may not be consistent across all PPGI sensors, leading to inaccurate heart rate monitoring. To increase the hemodynamic signal-to-noise ratio (SNR), we propose a novel spectral PPGI sensor fusion method for enhanced estimation of the true blood pulse signal. Motivated by the observation that PPGI sensors with high hemodynamic SNR exhibit a spectral energy peak at the heart rate frequency, an entropy-based fusion model was formulated to combine PPGI sensors based on the sensors' spectral energy distribution. The optical PPGI device comprised a near infrared (NIR) sensitive camera and an 850 nm LED. Spatially uniform irradiance was achieved by placing optical elements along the LED beam, providing consistent illumination across the skin area. Dual-mode temporally coded illumination was used to negate the temporal effect of ambient illumination. Experimental results show that the spectrally weighted PPGI method can accurately and consistently extract heart rate information where traditional region-based averaging fails.
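A minimal sketch of the entropy-based fusion idea is given below, assuming each pixel is weighted by how concentrated its spectral energy is within a plausible heart rate band; the exact weighting function used in the published method may differ.

```python
# Hedged sketch: score each PPGI pixel by its spectral entropy within a heart
# rate band (low entropy suggesting a clean pulsatile peak) and fuse pixels
# with weights that favor low-entropy sensors.
import numpy as np

def entropy_weighted_fusion(signals, fs=60.0, band=(0.7, 4.0)):
    """signals: (M, T) per-pixel PPGI time series; returns fused 1-D pulse signal."""
    T = signals.shape[1]
    freqs = np.fft.rfftfreq(T, d=1.0 / fs)
    spectra = np.abs(np.fft.rfft(signals - signals.mean(axis=1, keepdims=True), axis=1)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])   # plausible heart rate band
    p = spectra[:, in_band]
    p = p / p.sum(axis=1, keepdims=True)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)      # spectral entropy per sensor
    weights = np.exp(-entropy)                          # favor low-entropy sensors
    weights = weights / weights.sum()
    return weights @ signals                            # fused blood pulse signal

fused = entropy_weighted_fusion(np.random.rand(100, 600))  # stand-in data
```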
Multispectral sensing is specifically designed to provide quantitative spectral information about various materials or scenes. Using spectral information, various properties of objects can be measured and analysed. Microscopy, the observation and imaging of objects at the micron- or nano-scale, is one application where multispectral sensing can be advantageous, as many fields of science and research that use microscopy would benefit from observing a specimen at multiple wavelengths. Multispectral microscopy is available, but often requires the operator of the device to switch filters, which is a labor-intensive process. Furthermore, the need for filter switching makes such systems particularly limiting in cases where the sample contains live species that are constantly moving or exhibit transient phenomena. Direct methods for capturing multispectral data of a live sample simultaneously can also be challenging for microscopy applications, as they require an elaborate optical system design that uses beamsplitters and a number of detectors proportional to the number of bands sought. Such devices can therefore be quite costly to build and difficult to maintain, particularly for microscopy. In this paper, we present the concept of virtual spectral demultiplexing imaging (VSDI) microscopy for low-cost in-situ multispectral microscopy of transient phenomena. In VSDI microscopy, the spectral response of a color detector in the microscope is characterized, and virtual spectral demultiplexing is performed on the simultaneously-acquired broadband detector measurements based on the developed spectral characterization model to produce microscopic imagery at multiple wavelengths. The proposed VSDI microscope was used to observe colorful nanowire arrays at various wavelengths simultaneously to illustrate its efficacy.
One method to acquire multispectral images is to sequentially capture a series of images, where each image contains information from a different bandwidth of light. Another method is to use a series of beamsplitters and dichroic filters to guide different bandwidths of light onto different cameras. However, these methods are time-consuming and expensive, and perform poorly in dynamic scenes or when observing transient phenomena. An alternative strategy for capturing multispectral data is to infer it from sparse spectral reflectance measurements captured using an imaging device with overlapping bandpass filters, such as a consumer digital camera with a Bayer filter pattern. Currently, the only method of inferring dense reflectance spectra is the Wiener adaptive filter, which makes Gaussian assumptions about the data; however, these assumptions may not always hold. We propose a new technique to infer dense reflectance spectra from sparse spectral measurements through the use of a non-linear regression model. The non-linear regression model used in this technique is the random forest model, an ensemble of decision trees, trained via spectral characterization of the optical imaging system and spectral data pair generation. This model is then evaluated by spectrally characterizing different patches on the Macbeth color chart, as well as by reconstructing inferred multispectral images. Results show that the proposed technique can produce inferred dense reflectance spectra that correlate well with the true dense reflectance spectra, which illustrates the merits of the technique.
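A minimal sketch of this regression step using scikit-learn's RandomForestRegressor is shown below; the training pairs are random stand-ins in place of the actual spectral characterization data, purely to keep the example self-contained.

```python
# Hedged sketch: a random forest mapping sparse RGB measurements to a dense
# reflectance spectrum, trained on simulated (RGB, spectrum) pairs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

n_train, n_bands = 500, 31                    # e.g., 31 bands, 400-700 nm at 10 nm
dense_spectra = np.random.rand(n_train, n_bands)
camera_sensitivity = np.random.rand(3, n_bands)
rgb = dense_spectra @ camera_sensitivity.T    # simulated sparse camera responses

model = RandomForestRegressor(n_estimators=100)
model.fit(rgb, dense_spectra)                 # multi-output regression: RGB -> spectrum

inferred = model.predict(rgb[:1])             # inferred dense reflectance spectrum
```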
The broadband spectrum contains more information than the human eye can detect. Spectral information from different wavelengths can provide unique information about the intrinsic properties of an object. Recently, compressed sensing imaging systems with low acquisition times have been introduced. To utilize compressed sensing strategies, strong reconstruction algorithms that can reconstruct a signal from sparse observations are required. This work proposes a cross-spectral multi-layered conditional random field (CS-MCRF) approach for sparse reconstruction of multi-spectral compressive sensing data in multi-spectral stereoscopic vision imaging systems. The CS-MCRF uses information shared between neighboring spectral bands to better utilize the available information for reconstruction. This method was evaluated using simulated compressed sensing multi-spectral imaging data. Results show improvement over existing techniques in preserving spectral fidelity while effectively inferring missing information from sparsely available observations.
Fluorescent imaging, often synonymous with microscopic imaging, is an imaging modality whereby various features of a target are observed based on the assignment of chemical labels. These labels are in most cases indirect tracers of specific structures or chemical compounds which cannot be otherwise identified. The tracers are excited by an illuminating source and in turn emit light at specific wavelengths. This light is then captured by an imaging device and represented as an indirect observation of the specific feature in the sample. The process of excitation and imaging of the emitted light is performed sequentially, and its duration scales with the number of tracers or fluorescent species present in the sample. We present an imaging system that can image fluorescent tracers in the visible and near-infrared simultaneously. This system is capable of illuminating the target with different excitation light sources and capturing the corresponding fluorescence images in one snapshot using a series of mirrors to capture different views of the sample. The simultaneously captured images are fused using a computational reconstruction process to present a coherent multispectral fluorescence image. The system is proposed for use in applications where the rapid enumeration of fluorescent species over a large field of view is paramount, as opposed to their microscopic imaging in a narrow field of view. The system was tested using a controlled cocktail solution of four different types of fluorescent microspheres and was able to enumerate the microspheres based on their different fluorescent signatures as captured by the system.
Safe drinking water is essential for human health, yet over a billion people worldwide do not have access to it. The presence and accumulation of biological contaminants in natural waters (e.g., pathogens and the neuro-, hepato-, and cytotoxins associated with algal blooms) remain a critical challenge in the provision of safe drinking water globally. It is neither financially feasible nor practical to monitor and quantify water quality frequently enough to identify the potential health risk due to contamination, especially in developing countries. We propose a low-cost, small-profile multispectral (MS) system based on Digital Holographic Microscopy (DHM) and investigate methods for rapidly capturing holographic data of natural water samples. We have developed a test-bed for an MSDHM instrument to produce and capture holographic data of the sample at different wavelengths in the visible and near-infrared spectral regions, allowing for resolution improvement in the reconstructed images. Additionally, we have developed high-speed statistical signal processing and analysis techniques to facilitate rapid reconstruction and assessment of the MS holographic data captured by the MSDHM instrument. The proposed system is used to examine cyanobacteria as well as Cryptosporidium parvum oocysts, which remain important and difficult-to-treat microbiological contaminants that must be addressed for the provision of safe drinking water globally.
Polarimetry is a common technique used in chemistry for solution characterization and analysis, giving insight into the molecular structure of a solution measured through the rotation of linearly polarized light. This rotation is characterized by Biot's law. Without large optical path lengths or high concentrations of solution, these optical rotations are typically very small, requiring elaborate and costly apparatuses. To ensure that the rotation measurements are accurate, these devices usually perform complex optical procedures or time-averaged point measurements to ensure that any intensity variation seen is a product of optical rotation and not of inherent noise sources in the system, such as sensor or shot noise. Time averaging is a lengthy process and rarely utilizes all of the information available on the sensor. To this end, we have developed a novel integrated, miniature, computational imaging system that enhances polarimetric measurements by taking advantage of the full spot size observed on an array detector. This computational imaging system is capable of using a single acquisition at unity gain to enhance the polarimetric measurements using a probabilistic framework, which accounts for inherent noise and optical characteristics in the acquisition process, to take advantage of spatial intensity relations. This approach is faster than time-averaging methods and can better account for measurement uncertainties. In preliminary experiments, this system produced measurements across multiple trials of the same chemical solution whose consistency is comparable to that of time-averaging techniques.
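For reference, Biot's law relates the observed rotation to the specific rotation, path length, and concentration; a small worked example with illustrative values (e.g., a sucrose-like specific rotation) follows.

```python
# Worked example of Biot's law, alpha = [alpha] * l * c: observed rotation
# (degrees) = specific rotation * path length (dm) * concentration (g/mL).
# The numbers below are illustrative only.
specific_rotation = 66.5     # deg·mL/(g·dm), e.g., sucrose at 589 nm
path_length_dm = 1.0         # 10 cm cell
concentration = 0.010        # g/mL

alpha = specific_rotation * path_length_dm * concentration
print(f"expected optical rotation: {alpha:.3f} degrees")   # 0.665 degrees
```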
Melanin is a pigment that is highly absorptive in the UV and visible electromagnetic spectra. It is responsible for perceived skin tone and protects against harmful UV effects. Abnormal melanin distribution is often an indicator of melanoma. We propose a novel approach for non-contact estimation of the two-dimensional melanin distribution via multispectral temporal illumination coding, based on melanin's absorptive characteristics. In the proposed system, a novel multispectral, cross-polarized, temporally-coded illumination sequence is synchronized with a camera to measure reflectance under both multispectral and ambient illumination. This allows us to eliminate the ambient illumination contribution from the acquired reflectance measurements and to determine the melanin distribution in an observed region based on the spectral properties of melanin using the Beer-Lambert law. Using this information, melanin distribution maps can be generated for objective, quantitative assessment of an individual's skin type. We show that the melanin distribution map correctly identifies areas with high melanin densities (e.g., nevi).
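A hedged sketch of how such a melanin map could be formed from ambient-corrected reflectance at two wavelengths via Beer-Lambert absorbance is shown below; the wavelengths and the simple two-band index are illustrative assumptions, not the exact model used in this work.

```python
# Hedged sketch: ambient-corrected multispectral reflectance converted to
# Beer-Lambert absorbance, with a two-band difference used as a melanin index.
import numpy as np

def melanin_map(active, ambient_only, eps=1e-9):
    """active, ambient_only: dicts of (H, W) frames keyed by wavelength (nm);
    subtracting the ambient-only frame isolates the controlled illuminant."""
    refl = {w: np.clip(active[w] - ambient_only[w], eps, None) for w in active}
    absorb = {w: -np.log10(refl[w]) for w in refl}    # Beer-Lambert absorbance
    # Melanin absorbs more strongly at the shorter wavelength, so the
    # absorbance difference rises with melanin density.
    return absorb[620] - absorb[880]

frame = lambda: np.random.rand(64, 64)                # stand-in frames
m = melanin_map({620: frame(), 880: frame()}, {620: frame() * 0.1, 880: frame() * 0.1})
```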
Non-contact camera-based imaging photoplethysmography (iPPG) is useful for measuring heart rate in conditions where contact devices are problematic due to issues such as mobility, comfort, and sanitation. Existing iPPG methods analyse the light-tissue interaction of either active or passive (ambient) illumination. Many active iPPG methods assume the incident ambient light is negligible relative to the active illumination, resulting in high power requirements, while many passive iPPG methods assume near-constant ambient conditions. These assumptions can only be met in environments with controlled illumination and thus constrain the use of such devices. To increase the number of possible applications of iPPG devices, we propose a dual-mode active iPPG system that is robust to ambient illumination variations. Our system uses a temporally-coded illumination sequence that is synchronized with the camera to measure both active and ambient illumination interaction for determining heart rate. By subtracting the ambient contribution, the remaining illumination data can be attributed to the controlled illuminant. Our device comprises a camera and an LED illuminant controlled by a microcontroller. The microcontroller drives the temporal code by synchronizing the frame captures and illumination timing at the hardware level. By simulating changes in ambient light conditions, experimental results show our device is able to assess heart rate accurately in challenging lighting conditions. By varying the temporal code, we demonstrate the trade-off between camera frame rate and ambient light compensation for optimal blood pulse detection.
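A minimal sketch of the ambient-subtraction step is given below, assuming the simplest possible temporal code of alternating LED-on and LED-off frames; the actual coded sequence can be more elaborate.

```python
# Hedged sketch: frames captured with the LED on contain active + ambient
# light, interleaved LED-off frames contain ambient light only, so subtracting
# adjacent off-frames isolates the active-illumination component.
import numpy as np

def active_component(frames, code):
    """frames: (T, H, W) synchronized video; code: (T,) boolean LED on/off pattern."""
    on = frames[code]               # active + ambient
    off = frames[~code]             # ambient only
    n = min(len(on), len(off))
    return on[:n] - off[:n]         # per-pixel active-illumination reflectance

T = 600
code = (np.arange(T) % 2 == 0)      # simple alternating on/off code
video = np.random.rand(T, 64, 64)   # stand-in video
pulse_frames = active_component(video, code)
```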
We present a novel non-contact photoplethysmographic (PPG) imaging system, based on high-resolution video recordings of the ambient reflectance of human bodies, that compensates for body motion and takes advantage of skin erythema fluctuations to improve measurement reliability for remote heart rate monitoring. A single measurement location for recording the ambient reflectance is automatically identified on an individual, and the motion of the location is tracked over time. Based on the determined motion information, motion-compensated reflectance measurements at different wavelengths can be acquired for the measurement location, thus providing more reliable measurements for the same location on the body over time. The reflectance measurement is used to determine skin erythema fluctuations over time, resulting in the capture of a PPG signal with a high signal-to-noise ratio. To test the efficacy of the proposed system, a set of experiments involving human motion in a front-facing position was performed under natural ambient light. The experimental results demonstrate that using skin erythema fluctuations achieves noticeably improved average accuracy in heart rate measurement when compared to previously proposed non-contact PPG imaging systems.
Block-transform lossy image compression is the most widely-used approach for compressing and storing images or video. A novel algorithm to restore highly compressed images with greater image quality is proposed. Since many block-transform coefficients are reduced to zero after quantization, the compressed image restoration problem can be treated as a sparse reconstruction problem where the original image is reconstructed based on sparse, degraded measurements in the form of highly quantized block-transform coefficients. The sparse reconstruction problem is solved by minimizing a homotopic regularized function, subject to data fidelity in the block-transform domain. Experimental results using compressed natural images at different levels of compression show improved performance by using the proposed algorithm compared to other methods.
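As a hedged sketch of this formulation (the exact regularizer and constraint form in the work may differ), the restored image $\hat{f}$ can be written as the solution of a homotopic-regularized problem with data fidelity enforced within the quantization intervals of the block-transform coefficients:

$$\hat{f} \;=\; \arg\min_{f}\; \sum_{i} \rho\!\left(|\nabla f|_i\right) \quad \text{subject to} \quad \left|\,[T(f)]_k - y_k\,\right| \le \tfrac{\Delta_k}{2} \;\; \forall k,$$

where $T(\cdot)$ is the block transform, $y_k$ are the quantized coefficients, $\Delta_k$ the quantization step sizes, and $\rho(\cdot)$ a homotopic approximation to the $\ell_0$ penalty that is gradually tightened during optimization.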
The lateral resolution of a Spectral Domain Optical Coherence Tomography (SD-OCT) image is limited by the focusing properties of the OCT imaging probe optics, the wavelength range at which the SD-OCT system operates, spherical and chromatic aberrations induced by the imaging optics, the optical properties of the imaged object, and, in the special case of in-vivo retinal imaging, by the optics of the eye. This limitation often results in challenges with resolving fine details and structures of the imaged sample outside of the Depth-Of-Focus (DOF) range. We propose a novel technique for generating Laterally Resolved OCT (LR-OCT) images using OCT measurements acquired with intentional imbrications. The proposed method is based on a Maximum A Posteriori (MAP) reconstruction framework which takes advantage of a Stochastic Fully Connected Conditional Random Field (SFCRF) model to compensate for artifacts and noise when reconstructing an LR-OCT image from imbricated OCT measurements. The proposed lateral resolution enhancement method was tested on synthetic OCT measurements as well as on a human cornea SD-OCT image to evaluate its usefulness for lateral resolution enhancement. Experimental results show that applying this method to OCT images noticeably improves the sharpness of morphological features in the lateral direction, demonstrating better delineation of fine dot-shaped details in the synthetic OCT test image, as well as better delineation of keratocyte cells in the human corneal OCT test image.
The axial resolution of Spectral Domain Optical Coherence Tomography (SD-OCT) images degrades with scanning depth due to the limited number of pixels and the pixel size of the camera, aberrations in the spectrometer optics, and wavelength-dependent scattering and absorption in the imaged object [1]. Here we propose a novel algorithm which compensates for the blurring effect of the depth-dependent axial Point Spread Function (PSF) caused by these factors in SD-OCT images. The proposed method is based on a Maximum A Posteriori (MAP) reconstruction framework which takes advantage of a Stochastic Fully Connected Conditional Random Field (SFCRF) model. The aim is to compensate for the depth-dependent axial blur in SD-OCT images and simultaneously suppress the speckle noise which is inherent to all OCT images. Applying the proposed depth-dependent axial resolution enhancement technique to an OCT image of a cucumber considerably improved the axial resolution of the image, especially at greater imaging depths, and allowed for better visualization of cellular membranes and nuclei. Comparing the result of our proposed method with the conventional Lucy-Richardson deconvolution algorithm clearly demonstrates the efficiency of our proposed technique in better visualization and preservation of fine details and structures in the imaged sample, as well as better speckle noise suppression. This illustrates the potential usefulness of our proposed technique as a suitable replacement for hardware approaches, which are often very costly and complicated.
Mobile robots that rely on vision for navigation and object detection use saliency approaches to identify a set of potential candidates to recognize. The state of the art in saliency detection for mobile robotics often relies upon visible-light imaging, using conventional camera setups, to distinguish an object against its surroundings based on factors such as feature compactness, heterogeneity and/or homogeneity. We demonstrate a novel multi-polarimetric saliency detection approach which uses multiple measured polarization states of a scene. We leverage the light-material interaction known as Fresnel reflection to extract rotationally invariant multi-polarimetric textural representations, which are then used to train a high-dimensional sparse texture model. The multi-polarimetric textural distinctiveness is characterized using a conditional probability framework based on the sparse texture model, which is then used to determine the saliency at each pixel of the scene. It was observed that, through the inclusion of additional polarization states in the saliency analysis, we were able to compute noticeably improved saliency maps in scenes where objects are difficult to distinguish from their background due to color intensity similarities between the object and its surroundings.
The prevalence of compressive sensing is continually growing in all facets of imaging science. Compressive sensing allows for the capture and reconstruction of an entire signal from a sparse (under-sampled), yet sufficient, set of measurements that is representative of the target being observed. This compressive sensing strategy reduces the duration of the data capture, the size of the acquired data, and the cost and complexity of the imaging hardware, while preserving the necessary underlying information. Compressive sensing systems require the accompaniment of advanced reconstruction algorithms to reconstruct complete signals from the sparse measurements made. Here, a new reconstruction algorithm is introduced specifically for the reconstruction of compressive multispectral (MS) sensing data that allows for high-quality reconstruction from acquisitions at sub-Nyquist rates. We propose a multilayered conditional random field (MCRF) model, which extends the CRF model by incorporating two joint layers of certainty and estimated states. The proposed algorithm treats the reconstruction of each spectral channel as an MCRF given the sparse MS measurements. Since the observations are incomplete, the MCRF incorporates an extra layer determining the certainty of the measurements. The proposed MCRF approach was evaluated using simulated compressive MS data acquisitions, and is shown to enable fast acquisition of MS sensing data with reduced imaging hardware cost and complexity.
Imaging time can be reduced using despeckled tomograms, which have image metrics similar to those obtained by averaging several low-speed tomograms or many high-speed tomograms. Quantitative analysis was used to compare the performance of two speckle denoising approaches, algorithmic despeckling and frame averaging, as applied to retinal OCT images. Human retinal tomograms were acquired from healthy subjects with a research-grade 1060 nm spectral domain UHROCT system with 5 μm axial resolution in the retina. Single cross-sectional retinal tomograms were processed with a novel speckle denoising algorithm and compared with frame-averaged retinal images acquired at the same location. Image quality metrics such as the image SNR and contrast-to-noise ratio (CNR) were evaluated for both cases.
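For reference, the image quality metrics mentioned above can be computed along the following lines; the region selection and the exact SNR/CNR definitions here follow common OCT conventions and may differ from those used in this work.

```python
# Illustrative SNR and contrast-to-noise ratio (CNR) computed from a
# homogeneous background (noise-only) region and a tissue region of interest.
import numpy as np

def snr_db(roi, background):
    return 10 * np.log10(roi.max() ** 2 / background.var())

def cnr(roi, background):
    return (roi.mean() - background.mean()) / np.sqrt(roi.var() + background.var())

tomogram = np.random.rand(512, 512)       # stand-in tomogram
background = tomogram[:50, :50]           # noise-only region
roi = tomogram[200:250, 200:250]          # tissue region of interest
print(snr_db(roi, background), cnr(roi, background))
```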