Khan M. Iftekharuddin,1 Abdul A. S. Awwal,2 Victor Hugo Diaz-Ramirez3
1Old Dominion Univ. (United States) 2Lawrence Livermore National Lab. (United States) 3Ctr. de Investigación y Desarrollo de Tecnología Digital (Mexico)
This PDF file contains the front matter associated with SPIE Proceedings Volume 12673, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
This paper presents an in-depth exploration of a neural network designed to recolor grayscale images with minimal input requirements. The training process is examined in detail, including the choice of fitness function and the construction of an effective adversarial network; several alternatives are considered and evaluated before a suitable approach is selected for further training. The implementation adopts random batch sampling when assembling each training batch, providing diverse and comprehensive training data. Techniques including batch normalization, Leaky ReLU, and label smoothing are employed to address generalization issues and to balance the interplay between the generator and discriminator. The experimental results are discussed in detail: the network attains a Structural Similarity Index (SSIM) of -0.5944 on the test set and -0.5922 on the training set. This paper contributes insights into image recoloring with neural networks and demonstrates the effectiveness of the proposed methodology.
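The SSIM metric used to score these results can be illustrated with a minimal pure-Python sketch. This is the single-window (global) form of the standard formula; practical implementations such as scikit-image average it over sliding windows, and the constants below assume 8-bit pixel values.

```python
def global_ssim(x, y, data_range=255.0):
    """Single-window (global) SSIM between two equal-length pixel
    lists; practical implementations average this over sliding
    windows. Values near 1 mean high structural similarity and
    values below 0 indicate anti-correlation."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    n = len(x)
    mu_x, mu_y = sum(x) / n, sum(y) / n
    var_x = sum((v - mu_x) ** 2 for v in x) / n
    var_y = sum((v - mu_y) ** 2 for v in y) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

a = [10.0, 200.0, 30.0, 120.0]
print(global_ssim(a, a))  # identical images score 1.0
```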
Semantic segmentation is a high-level computer vision task that associates each pixel of an image with a semantic (class) label. Fine semantic segmentation is a pixel-level task that provides the detailed information needed to identify the region of the object of interest. Hands are one of the main channels of communication, enhancing human-object and human-environment interaction; in egocentric videos they are ubiquitous and at the center of vision and activity, hence our interest in hand segmentation. Fine semantic segmentation of hands locates, identifies, and groups together the pixels associated with hands under a hand semantic label. We performed fine semantic segmentation of hands by improving the architecture of a state-of-the-art deep convolutional neural network (RefineNet). We achieve a finer and more accurate result by amending the process of obtaining and combining high- and low-level features, and the pixel grouping for pixel-level classification. We performed this task on a public egocentric video dataset (EgoHands). We evaluate the performance of our model (RefineNet-Pix) using an existing pixel-level metric, mean precision (mPrecision). Compared with the baseline reported in Urooj's work, we obtain accuracy higher than the benchmark of 87.9%. Our finer and more accurate semantic segmentation guarantees good performance under various lighting conditions and complex backgrounds, making it suitable for both indoor and outdoor environments. Fine hand semantic segmentation can be applied in image analysis, medical systems (understanding hand motion for prediction, diagnosis, and monitoring), hand gesture recognition (human-computer interaction and action understanding), and robotics (grasping and manipulation of objects).
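The mPrecision metric used for evaluation can be sketched under the assumption of flattened per-pixel label maps; the toy class layout below is illustrative, not taken from EgoHands.

```python
def mean_precision(pred, truth, classes):
    """Mean per-class precision over flattened pixel label maps:
    precision_c = TP_c / (TP_c + FP_c), averaged over the classes
    that actually appear in the prediction."""
    scores = []
    for c in classes:
        predicted_c = sum(1 for p in pred if p == c)
        if predicted_c == 0:
            continue                      # class never predicted
        tp = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        scores.append(tp / predicted_c)
    return sum(scores) / len(scores)

# Toy 6-pixel maps: 0 = background, 1 = hand
pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1]
print(round(mean_precision(pred, truth, [0, 1]), 3))  # 0.667
```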
The detection and localization of landmarks in human face images is an essential task in many computer vision applications. This task is challenging because face images can contain geometric modifications due to gesticulations and pose changes, and degradation caused by noise or nonuniform illumination. This work presents an exhaustive evaluation of several state-of-the-art facial landmark detection methods. The performance of each tested method is characterized in terms of reliability of landmark detection and accuracy of landmark localization. Computational results obtained in facial landmark recognition using images from well-known datasets are presented, discussed and compared in terms of objective measures.
The World Health Organization forecasts a population of 2 billion people over 60 years of age by 2050, with 7% of this population suffering from dementia, a disease of public health priority. Constant evaluation of older adults allows early detection of the disease and provides a better quality of life for the patient. Research and development of innovative technological systems for managing the growing number of patients with cognitive diseases has therefore increased in recent years. Such systems integrate data collection and automatic processing based on geriatric metrics, using artificial intelligence (AI) methods, so that the disease can be detected at an early stage and followed up, supporting the increase in patients expected in the clinical area in the coming years. This research presents an interactive web platform that allows users with an internet connection, from any mobile device, computer, laptop, or other device, to remotely perform an automated assessment based on the Montreal Cognitive Assessment (MoCA) test, which detects and assesses cognitive deterioration. We use AI and neural network methods for binary and multiclass classification to obtain assessment scores according to geriatric metrics. The test is subsequently validated remotely by a mental health specialist. The tests carried out show correct handling of the information, and the results agree with the reference data used for comparison. Our system provides an automated and easy-to-use digital evaluation metric.
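As a hypothetical illustration of the binary classification the platform automates, a hand-written screen using the commonly cited MoCA cutoff of 26 (with the standard one-point education correction) might look like the following; the paper's neural-network classifier replaces this kind of fixed rule.

```python
def moca_screen(raw_score, years_of_education):
    """Binary screen from a MoCA total score (0-30), using the
    commonly cited cutoff of 26 and the standard one-point correction
    for 12 or fewer years of education. Illustrative only: the
    platform's neural-network classifier replaces this fixed rule."""
    adjusted = min(raw_score + (1 if years_of_education <= 12 else 0), 30)
    return {"adjusted_score": adjusted,
            "impairment_suspected": adjusted < 26}

print(moca_screen(24, 10))  # adjusted 25 -> impairment suspected
print(moca_screen(26, 16))  # adjusted 26 -> within the normal range
```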
Traffic congestion has become one of the major issues in Bangladesh. Vehicle density on the road is slowly exceeding road capacity, resulting in difficult commutes. The resulting delays waste valuable time, which impacts the economic development of the country, and one of the main causes of this congestion is poor traffic management. This paper presents the implementation of an intelligent traffic control system using computer vision algorithms. We propose a smart traffic management system that measures the traffic density of the road through real-time detection and image processing. The vehicle detection system counts the number of vehicles approaching a traffic signal to determine the congestion on the road; the traffic controller then uses an algorithm to set the timings of the red, green, and yellow signals based on the number of vehicles present. Our system was developed by capturing real traffic video with a smartphone; the vehicle detection system was tested on a computer, and the traffic signal was implemented on Arduino hardware. Vehicle detection accuracy was increased by training on a more extensive dataset with Faster R-CNN (Region-based Convolutional Neural Network) and YOLOv5 (You Only Look Once, version 5) models.
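A minimal sketch of the density-based signal-timing idea (a stand-in, not the paper's exact algorithm) could allocate green time across approaches in proportion to detected vehicle counts; the cycle length and minimum green below are assumed values.

```python
def green_times(counts, cycle=120, min_green=10):
    """Split a fixed signal cycle's green time across approaches in
    proportion to detected vehicle counts, with a floor so that empty
    approaches still get a minimum green phase. A simple stand-in for
    the controller algorithm described in the paper."""
    n = len(counts)
    total = sum(counts)
    if total == 0:
        return [cycle // n] * n
    spare = cycle - min_green * n
    return [min_green + round(spare * c / total) for c in counts]

# Four approaches with 12, 3, 0 and 9 vehicles detected by the camera
print(green_times([12, 3, 0, 9]))  # [50, 20, 10, 40] seconds
```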
Convolutional Neural Networks are one of the pillars of the machine learning revolution of recent years. However, the convolution operation carries a high computational cost. Here, we present a compact, high-speed solution that performs convolution using on-chip optics, with the potential to exceed 350 TOPS/W.
Fourier Transform Profilometry (FTP) is a powerful 3D reconstruction method based on structured-light projection, well suited to dynamic shape measurements. A main feature of FTP is that it works with a single fringe pattern. However, the quality of the 3D reconstruction depends largely on the accuracy of first-order spectrum filtering. This work compares several representative spectrum filtering methods in different simulated situations, highlighting their advantages and drawbacks, and provides a reference for the practical implementation of an FTP system.
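The core FTP pipeline (transform, first-order band-pass, inverse transform, phase extraction) can be sketched in one dimension. The fringe parameters and the fixed filter band below are illustrative; choosing that band well is precisely what the compared filtering methods differ on.

```python
import cmath, math

def dft(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * math.pi * k * j / n)
                for j in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * j / n)
                for k in range(n)) / n for j in range(n)]

# One fringe line: s(x) = a + b*cos(2*pi*f0*x/N + phi)
N, f0, phi = 64, 8, 0.5
s = [2.0 + 1.0 * math.cos(2 * math.pi * f0 * x / N + phi)
     for x in range(N)]

# Keep only a band around the +f0 carrier (the first-order spectrum),
# discarding DC and the conjugate order -- the filtering step whose
# accuracy the comparison in the paper is about.
S = dft(s)
S_f = [S[k] if f0 - 3 <= k <= f0 + 3 else 0 for k in range(N)]
analytic = idft(S_f)

# Phase of the filtered signal minus the carrier recovers phi;
# at x = 0 the carrier term 2*pi*f0*x/N is zero.
recovered = cmath.phase(analytic[0])
print(round(recovered, 3))  # 0.5
```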
Metric three-dimensional reconstruction by fringe projection profilometry requires calibrating the employed camera and projector. However, the calibration process is more difficult for projectors than for cameras. This work presents a reconstruction method where the projector parameters are not required explicitly. For this, we assume the projector follows the pinhole model and single-axis fringe projection is employed. The theoretical principles are explained, and the proposed method is validated experimentally by a metric three-dimensional reconstruction. The results provide a theoretical framework for further generalization, including implicit camera calibration and lens distortion, while keeping the metric reconstruction capability.
The phase shift exhibited by liquid crystal on silicon (LCoS) devices depends on the applied voltage and the illumination wavelength. Most LCoS devices used in laboratories are digitally addressed using a binary pulse-width-modulated signal. Usually, these devices are characterized only for a small range of the available binary voltage values and for specific wavelengths. In this work, we consider a commercial parallel-aligned liquid crystal on silicon device (PA-LCoS) in which the binary voltages are accessible through the vendor's software. We perform a complete averaged Stokes polarimetric characterization of the device, obtaining the absolute unwrapped retardance values for a wide range of voltage parameters and across the visible spectrum. This provides a practical approach for evaluating the whole range of phase modulation possibilities and for analyzing issues related to the physics of the device.
Computing three-dimensional information about an object from captured images is an important task in computer vision. Binocular vision has been widely explored for this task for years. However, the accuracy of three-dimensional reconstruction using binocular vision is constrained to a specific field of view. This work presents a three-dimensional reconstruction method based on multi-ocular vision, which achieves higher accuracy than the conventional binocular approach. The performance of the proposed method is evaluated using images from an existing stereo dataset and in a real laboratory experiment with four cameras.
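The underlying triangulation step can be illustrated in the plane with two rays; a multi-ocular system solves the analogous least-squares problem over all camera rays, which is where the accuracy gain over the binocular case comes from. The camera placements here are made up for the example.

```python
def triangulate_2d(c1, d1, c2, d2):
    """Intersect two camera rays (centre + direction) in the plane by
    solving c1 + t1*d1 = c2 + t2*d2 with Cramer's rule. A multi-ocular
    system solves the analogous least-squares problem over all rays."""
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    rx, ry = c2[0] - c1[0], c2[1] - c1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (c1[0] + t1 * d1[0], c1[1] + t1 * d1[1])

# Two cameras on the baseline, both seeing the scene point (1, 2)
s = 5 ** 0.5
point = triangulate_2d((0, 0), (1 / s, 2 / s), (2, 0), (-1 / s, 2 / s))
print(point)  # recovers (1, 2) up to floating-point error
```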
This work describes face reconstruction using several imaging techniques. The main purpose is to explore multiple opto-electronic camera-capture configurations in order to obtain an accurate reconstruction. We used different camera technologies and lenses for face reconstruction, compared the analyzed techniques, and applied objective measures to determine the best camera configuration for reconstruction accuracy. Computer simulation results evaluate the proposed system in terms of reconstruction accuracy and computational efficiency.
Correlation filters have been widely used in several pattern recognition applications. These filters can reliably detect and accurately locate a target with good tolerance to geometrical modifications and the presence of additive and nonoverlapping noise in the scene. This work presents an exhaustive performance evaluation of several advanced correlation filters for the task of printed character recognition. Several printed character strings in the English alphabet containing geometrical modifications and nonuniform illumination conditions are recognized using different advanced correlation filters. The performance of each tested filter is characterized in terms of efficiency of character recognition and accuracy of character location estimation.
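The detect-and-locate principle behind correlation filtering can be sketched in one dimension. The advanced filters evaluated in the paper shape this correlation response in the frequency domain to gain noise and distortion tolerance; this sketch uses only raw correlation.

```python
def correlate_locate(scene, template):
    """Slide a template across a 1-D scene and return the offset with
    the highest raw correlation score, plus that score. The advanced
    filters compared in the paper shape this response in the frequency
    domain; this sketch shows only the detect-and-locate principle."""
    best_offset, best_score = 0, float("-inf")
    for off in range(len(scene) - len(template) + 1):
        score = sum(s * t for s, t in
                    zip(scene[off:off + len(template)], template))
        if score > best_score:
            best_offset, best_score = off, score
    return best_offset, best_score

template = [1, -1, 1]             # stand-in for one character's profile
scene = [0, 0, 1, -1, 1, 0, 0]    # that character embedded at offset 2
print(correlate_locate(scene, template))  # (2, 3)
```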
Today, the advancement of optical systems that can harness clean and renewable energy sources is a major focus for researchers and innovators worldwide; as we strive toward a sustainable future, this challenge has become increasingly critical. Fresnel lenses are widely used as traditional concentrators, but they have a small acceptance angle, and reflective elements need continuous maintenance of their surface reflectivity. Transmitting Holographic Optical Elements (HOEs) are an alternative to conventional lenses because they are more economical and versatile. Their material is usually a flexible photopolymer, so the optical element can be attached to different types of support depending on the handling required, and they tend to have low weight and volume as well as a simple manufacturing process. They also provide an extended focusing area, which helps protect solar cells from heating damage. We carried out a theoretical and experimental study of the shrinkage of multiplexed holographic lenses (MHL) recorded in a low-toxicity photopolymer, using a K-space tool. Furthermore, we performed an optimization analysis of the angular distance between peaks. To determine efficiency, we evaluated the short-circuit current under solar illumination for varying incident reconstruction angles.
This paper presents the implementation of localization algorithms for indoor autonomous mobile robots in known environments. The proposed implementation employs two sensors, an RGB-D camera and a 2D LiDAR, to perceive the environment and map an occupancy grid that allows the robot to perform autonomous or remote navigation throughout the environment while localizing itself. The implementation uses the data retrieved from the perception sensors and odometry to estimate the position of the robot with the Monte Carlo Localization algorithm, and employs the Robot Operating System (ROS) framework on an NVIDIA Jetson TX2 and the Turtlebot 2. Experimental results were obtained with a physical implementation of the mobile robot in an indoor environment.
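One predict-weight-resample cycle of Monte Carlo Localization can be sketched in a one-dimensional corridor. The corridor model, sensor, and noise values below are illustrative stand-ins for the occupancy grid, LiDAR, and odometry of the real system.

```python
import math, random

def mcl_step(particles, motion, measurement, world, noise=0.5):
    """One predict-weight-resample cycle of Monte Carlo Localization
    on a 1-D corridor."""
    # Predict: move every particle, adding motion noise
    moved = [p + motion + random.gauss(0, noise) for p in particles]
    # Weight: how well does each particle explain the range reading?
    weights = [math.exp(-(world(p) - measurement) ** 2 / (2 * noise ** 2))
               for p in moved]
    # Resample: draw particles in proportion to their weights
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(1)
L = 10.0                           # corridor length (illustrative)
world = lambda p: L - p            # expected range to the far wall
true_pos = 2.0
particles = [random.uniform(0, L) for _ in range(500)]
for _ in range(8):                 # the robot advances 0.5 m per step
    true_pos += 0.5
    particles = mcl_step(particles, 0.5, world(true_pos), world)

estimate = sum(particles) / len(particles)
print(round(estimate, 1))          # close to the true position of 6.0
```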
This paper presents the implementation of a lane detection and tracking algorithm for the autonomous navigation of an Ackermann-steering mobile robot. The proposed implementation employs an RGB camera mounted on the robot; the image information is processed by the lane detection and tracking algorithm to determine the robot's present and future position within the lane. This information is used to compute the wheel orientation required to steer the robot within the lane. The implementation employs a Raspberry Pi as the primary logic controller to process the images received from the RGB camera. The robot performs steering and navigation with a proportional-integral-derivative (PID) controller that manages the orientation of the steering. Experimental results on a physical implementation of the Ackermann-steering mobile robot validate the approach.
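A textbook PID steering loop of the kind described can be sketched as follows; the gains and the one-line vehicle response are illustrative assumptions, not the paper's tuned values or dynamics.

```python
class PID:
    """Textbook PID controller; gains and the vehicle response below
    are illustrative, not the paper's tuned values."""
    def __init__(self, kp, ki, kd, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=0.8, ki=0.02, kd=0.05)
offset = 1.0                       # metres from the lane centre
for _ in range(50):
    steer = pid.update(offset)     # steering command toward the centre
    offset -= 0.1 * steer          # deliberately simplistic vehicle model
print(round(offset, 3))            # small residual offset near zero
```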
Location and pose estimation are essential tasks for robot navigation. Conventional global positioning systems can perform poorly due to environmental or indoor interference. Alternatively, vision-based location and pose estimation systems may be more suitable for indoor and outdoor applications. However, vision-based systems still need to improve their robustness and operational performance in uncontrolled environments. In this work, a visual pose estimation method for robot navigation in an uncontrolled environment is proposed. The theoretical principles of pose estimation are reviewed, and the usefulness of the proposed approach is shown on a navigation sequence. The results show that the proposed method is feasible for robot navigation applications.
In an ensemble of nominally identical quantum emitters, each emitter can have its emission frequency shifted randomly by its specific environment, so that the emission spectrum of the overall system is inhomogeneously broadened over a large frequency range. This can make the system hard to probe and to utilize for a variety of applications. We show that it is possible, with realistic external control field protocols, to refocus the emission spectrum of the ensemble onto
Skin cancer is one of the most common and lethal diseases in America, and its early detection and treatment is the best way to protect the lives of those afflicted. Computer-aided diagnosis systems have been implemented for decades as evidence of intelligent methods applied in the medical field. In recent years, machine learning and deep learning have shown great potential in medical diagnosis, particularly in skin cancer classification, enabling the automated extraction of complex input features such as those in medical imagery. With the advent of quantum computing, it is now possible to perform complex computations with increased speed and efficiency. This study explores the application of quantum machine learning to skin cancer classification using ResNet50, a deep convolutional neural network for RGB images. We employ a quantum-enhanced version of ResNet50, in which the input skin lesion images pass through a quanvolutional layer, and compare its performance with the classical ResNet50. We present comparative experiments; the results indicate that further experiments are needed with more extensive datasets and different quantum deep learning architectures.
Efficiently achieving platform-nonspecific designs with multiple functional requirements, such as arbitrary splitting ratio, low insertion loss, broad bandwidth, and small footprint, poses a significant challenge in the inverse design of optical splitters. Traditional designs often fall short of meeting all the necessary criteria, while more successful nanophotonic inverse designs often demand substantial time and energy resources per device. Here, we present an efficient inverse design algorithm that provides universal splitter designs compliant with all the above constraints and offers significantly greater throughput than nanophotonic inverse design. To demonstrate the effectiveness of our method, we designed splitters with various splitting ratios and fabricated 1×N power splitters using direct laser writing in a borosilicate platform; these show zero loss within the margin of error, competitive imbalance of < 0.5 dB, and a broad bandwidth range of 20-60 nm around 640 nm. Notably, our designs can be easily tuned to achieve different splitting ratios. We also discuss the scalability of the splitter footprint.
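The imbalance figure quoted above is conventionally the deviation of the measured power split from equality, expressed in decibels; a quick sketch under that common convention (not necessarily the paper's exact definition):

```python
import math

def imbalance_db(p1, p2):
    """Imbalance between two output ports in dB, i.e. the deviation of
    the measured power split from a perfect 50:50 ratio. This is a
    common convention, not necessarily the paper's exact definition."""
    return abs(10 * math.log10(p1 / p2))

# A nominally 50:50 splitter measured at 52% / 48% of the output power
print(round(imbalance_db(0.52, 0.48), 2))  # 0.35 dB, within a < 0.5 dB spec
```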
This research presents the integration of ultra-low-loss (<1 dB/cm) silicon nitride waveguides with a superconducting single-photon detector with photon-number-resolving (PNR) capability, the Microwave Kinetic Inductance Detector (MKID), introducing a new photonic integrated circuit platform. Our approach is to integrate waveguides and MKIDs on a sapphire substrate, which minimizes two-level-system noise for the MKIDs and serves as a bottom cladding for the waveguides. Silicon nitride is used as the waveguide core, and either SiO2 or SiOxNy is used as the top cladding. We present findings on how the fabrication processes, including the materials used and the method of deposition, affect the quality of the MKIDs and waveguides. The integrated system would ultimately be used to make photon-number-resolving detectors for quantum information applications and high-resolution spectrometers for astrophysics and metrology applications.
We experimentally demonstrate modulation of a near-infrared beam at a wavelength of 1550 nm by near-visible light at a wavelength of 810 nm, with a mean number of photons below one per pulse, using an avalanche photodiode.
We theoretically study the emergence of the Akhmediev Breather (AB), which develops via modulation instability (MI) in an Ultra-Silicon-Rich Nitride (USRN) waveguide. The nonlinear parameter of the USRN waveguide is 10⁶ times as large as that of single-mode fiber, with exceptionally strong dispersion induced by the stopband in a cladding-modulated Bragg grating (CMBG). This significantly reduces the length scale and input power required for light-matter interaction to take place. We show that at small input powers the waveguide can trigger strong modulation instability close to the waveguide input, allowing a fully developed AB to form within the first 1-3 mm of a 6 mm waveguide. Realizing MI and ABs on an integrated chip offers the opportunity to study a variety of nonlinear phenomena, such as supercontinuum generation, Fermi-Pasta-Ulam (FPU) recurrence, and optical rogue waves, in highly compact, CMOS-compatible form factors.
The next generation of cellular communication networks aims to enable ubiquitous connectivity with limited inter-cell interference through cell-free massive multiple-input multiple-output (CF-MMIMO) technology. Deploying an intelligent reflecting surface (IRS) in the cell-free (CF) architecture can significantly enhance coverage area and increase network spectrum efficiency. The signals reflected from the IRS can be superposed coherently at the user by introducing a phase shift with passive reflecting elements (PREs) on the IRS. Non-terrestrial communications via unmanned aerial vehicles (UAVs) are critical to providing the seamless connectivity of next-generation cellular communication systems. In current terrestrial networks, the signal strength of aerial platforms such as UAVs is compromised because access points (APs) generally aim to serve ground user ends (GUEs). This challenge can be addressed by incorporating IRSs into the CF-MMIMO network. In this paper, we present an IRS-assisted CF-MMIMO network architecture that simultaneously provides coverage for conventional terrestrial and evolving non-terrestrial aerial user equipment (UEs). The terrestrial UEs experience Rayleigh fading in this architecture, while the non-terrestrial aerial UEs experience Rician fading. To assess the performance of the considered system, we derive closed-form expressions for both the downlink (DL) spectral efficiency (SE) and the overall system outage probability. These analytical tools provide valuable insights into the design of CF systems for next-generation communication systems. To further demonstrate the effectiveness of our analytical framework, we validate the theoretical tools with Monte Carlo computer simulations for various use cases of the CF-MMIMO network.
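The Monte Carlo validation mentioned can be illustrated for a single Rayleigh-fading link: averaging log2(1 + SNR·|h|²) over random channel draws is the kind of check run against closed-form SE expressions. The SNR value and trial count are arbitrary, and this sketch ignores the IRS and multi-antenna structure of the actual system.

```python
import math, random

def avg_se_rayleigh(snr_linear, trials=20000):
    """Monte Carlo estimate of E[log2(1 + SNR * |h|^2)] for a single
    Rayleigh-fading link; |h|^2 is exponentially distributed with unit
    mean when h is a unit-variance complex Gaussian."""
    random.seed(7)
    total = 0.0
    s = math.sqrt(0.5)
    for _ in range(trials):
        h2 = random.gauss(0, s) ** 2 + random.gauss(0, s) ** 2
        total += math.log2(1 + snr_linear * h2)
    return total / trials

print(round(avg_se_rayleigh(10.0), 1))  # roughly 2.9 bit/s/Hz at SNR = 10
```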
Over the last decades, new fabrication technologies have permitted increased resolution and reduced pixel size in liquid crystal on silicon (LCoS) microdisplays. However, the pixel-size reduction degrades microdisplay performance through several phenomena, such as cross-talk between neighbouring pixels, fringing fields, out-of-plane reorientation of the liquid crystal director, and diffraction effects due to the pixelated grid pattern of the microdisplay. In this work, a full 3D simulation model has been applied to predict the liquid crystal director orientation as a function of space and external voltage. The scheme considered here provides the complete vectorial information of the electromagnetic field distribution produced by a single pixel illuminated by circularly polarised plane waves. This analysis is carried out for several pixel and gap sizes at different external voltages. The research focuses on the S2 and S3 Stokes parameters and how their behaviour is affected by the cross-talk phenomena described above.
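As an illustration of the quantities studied, the Stokes parameters can be computed directly from the complex transverse field components that a vectorial simulation provides. This is a minimal sketch, not the paper's simulation code; note that the sign of S3 depends on the handedness convention adopted.

```python
import numpy as np

def stokes_parameters(Ex, Ey):
    """Stokes parameters from complex transverse field components Ex, Ey.

    S1 measures horizontal/vertical linear polarization, S2 the +/-45-degree
    linear components, and S3 the circular component (sign convention-dependent).
    """
    S0 = np.abs(Ex) ** 2 + np.abs(Ey) ** 2
    S1 = np.abs(Ex) ** 2 - np.abs(Ey) ** 2
    S2 = 2.0 * np.real(np.conj(Ex) * Ey)
    S3 = 2.0 * np.imag(np.conj(Ex) * Ey)
    return S0, S1, S2, S3
```

Applied pixel by pixel to the simulated field maps, S2 and S3 would reveal how cross-talk near pixel edges converts the incident circular polarization into other polarization states.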
Inline digital holography is useful for reconstructing focused images of microscopic objects. This configuration is less sensitive to mechanical vibrations and refractive index variations. However, a blurred conjugate image is formed over the focused image. To remove the conjugate image, a double-sideband (DSB) filter was proposed. The proposed arrangement that constitutes the filter is as follows: first, a collimated and linearly polarized wavefront illuminates the microscopic objects under study. A convergent lens is placed in the overlap between the diffracted wavefront and the illumination wavefront. At the focal plane of this lens, a liquid crystal spatial light modulator (SLM-LC) is positioned, followed by a linear polarizer. Finally, the resulting fringe patterns are recorded with a CCD. Under this scenario, two phase-retardation values (0 and π) are addressed to the two halves of the SLM-LC screen, so that one half of the spatial-frequency spectrum is blocked. Next, the phase-retardation values on the two halves of the SLM-LC screen are digitally exchanged, and the other half of the spatial-frequency spectrum is blocked. In the computer, both fringe patterns are processed to retrieve the complex amplitude, with magnitude and phase (the hologram), of the diffracted wavefront, and thus one of the conjugate images is removed. A diffraction integral equation is used to propagate the hologram digitally, and a sparsity metric is applied to determine the best-focused image. In this work, we provide a theoretical analysis of the longitudinal and transverse magnifications of the reconstructed images. We demonstrate that the transverse and longitudinal magnifications depend on the focal length of the lens as well as the distance from the lens to the CCD. If the object position changes, the reconstruction distance is proportional to the longitudinal magnification of the system, while the transverse magnification of the reconstructed image does not vary.
This is desirable for tracking the displacement of moving particles, or for reconstructing microscopic objects in different planes within a 3D volume. Finally, we present the experimental results obtained in the reconstruction of images of microscopic objects. We reconstruct the image of glass microspheres (diameter: 14.5 μm ± 1 μm), a micrometric reticle (100 μm), and a 1951 USAF resolution test chart to verify the longitudinal and transverse magnifications. The proposed study is useful for the study and tracking of quasi-transparent microscopic samples with optimized magnification.
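The digital propagation and autofocus steps described above can be sketched in a few lines. The abstract does not specify which diffraction integral or sparsity metric the authors use; the sketch below assumes the angular-spectrum method for propagation and the Tamura coefficient as one common choice of sparsity metric.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a 2D complex field over a distance z (angular-spectrum method).

    field is sampled on a grid of pitch dx; wavelength, dx, and z share the
    same length unit. Evanescent spectral components are suppressed.
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2.0 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0.0, np.exp(1j * kz * z), 0.0)  # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

def tamura_sparsity(img):
    """Tamura coefficient sqrt(std/mean) of the amplitude; higher means sparser."""
    a = np.abs(img)
    return float(np.sqrt(a.std() / a.mean()))

def autofocus(hologram, wavelength, dx, z_candidates):
    """Return the candidate distance whose reconstruction maximizes sparsity."""
    scores = [tamura_sparsity(angular_spectrum_propagate(hologram, wavelength, dx, z))
              for z in z_candidates]
    return z_candidates[int(np.argmax(scores))]
```

To trace a moving particle, `autofocus` would be re-run per frame over a range of candidate distances, with the recovered distance scaled by the longitudinal magnification discussed above.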
Cardiovascular disease (CVD) is a leading cause of death globally. Current CVD diagnostic tests fail to predict early cardiovascular events and to assess the risk of developing early CVD. Researchers are actively looking for biomarkers for CVD prediction, such as blood pressure, arterial stiffness, and pulse wave velocity (PWV). Several population-based clinical studies suggest that increased PWV is associated with increased CVD mortality. In this study, we propose using a high-speed camera to study PWV as a biomarker of CVD with remote photoplethysmography (rPPG). We selected a reference signal based on distinct features, including peak and modulation-depth variations, and used correlation to find the relationship between the local signals and the reference signal. The results revealed areas on the neck that correlate positively and negatively with the selected reference signals, possibly representing the distribution of the major neck vessels, the carotid artery and the jugular vein. This implies the feasibility of remote estimation of local PWV using a high-speed camera, thereby expanding the potential applications of rPPG for PWV estimation and assisting CVD diagnosis.
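The per-pixel correlation analysis described above can be sketched as follows. This is an illustrative implementation, not the authors' code: `video` stands for a stack of high-speed camera frames over the neck region, and `reference` for the chosen reference waveform.

```python
import numpy as np

def correlation_map(video, reference):
    """Pearson correlation of each pixel's time series with a reference signal.

    video: (T, H, W) array of intensities over time; reference: (T,) array.
    Returns an (H, W) map in [-1, 1]; near-constant pixels map to ~0.
    """
    v = video - video.mean(axis=0)        # remove per-pixel DC level
    r = reference - reference.mean()      # remove reference DC level
    num = np.einsum('thw,t->hw', v, r)    # per-pixel inner product with reference
    den = np.sqrt((v ** 2).sum(axis=0) * (r ** 2).sum())
    return num / np.maximum(den, 1e-12)   # guard against flat (zero-variance) pixels
```

In a map of this kind, strongly positive regions pulse in phase with the reference and strongly negative regions in anti-phase, matching the carotid-artery/jugular-vein interpretation given in the abstract.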