To enhance the measurement accuracy and robustness of 3D reconstruction systems in obtaining precise shape and position information of object surfaces, including resistance to texture interference, we propose an anti-interference structured light measurement system based on the scene surface. The system addresses limitations of stripe-encoded structured light 3D measurement, such as abrupt surface reflectance changes and camera defocusing, and investigates methods for phase retrieval and surface reflectance-based phase compensation. By adjusting the projection modulation intensity of the projector, we control the intensity of the light reflected from the measured surface, reducing the sensitivity of the camera defocusing blur coefficient to texture variations. We propose an adaptive modulation intensity adjustment method to minimize phase errors between neighboring pixels, enabling high-precision phase retrieval. In simulated experiments, the root mean square errors before and after anti-interference processing are 0.2124 rad and 0.0371 rad, respectively; in real-world scenarios, the errors of different measured surfaces decrease by approximately 60%. Comparative experiments validate the effectiveness and feasibility of the proposed method, indicating a significant improvement in measurement accuracy. The method enhances the performance of coded structured light in applications such as industrial measurement and precise sorting, exhibiting high robustness, accuracy, and anti-interference capability.
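As a point of reference for the phase retrieval step described above, the following is a minimal sketch of generic N-step phase-shifting retrieval; it shows only the standard wrapped-phase computation, not the paper's adaptive modulation or reflectance-based compensation. Function names and the synthetic fringe example are illustrative assumptions.

```python
# Minimal sketch of N-step phase-shifting wrapped-phase retrieval (generic
# profilometry step; the paper's adaptive modulation intensity adjustment and
# phase compensation are not reproduced here).
import numpy as np

def retrieve_wrapped_phase(images):
    """Return the wrapped phase map in (-pi, pi] from N phase-shifted fringe images."""
    n = len(images)
    shifts = 2.0 * np.pi * np.arange(n) / n
    num = sum(img * np.sin(s) for img, s in zip(images, shifts))
    den = sum(img * np.cos(s) for img, s in zip(images, shifts))
    return -np.arctan2(num, den)

# Synthetic four-step example on a flat surface (illustration only).
h, w, n = 64, 64, 4
true_phase = np.tile(np.linspace(0, 8 * np.pi, w), (h, 1))
fringes = [128 + 100 * np.cos(true_phase + 2 * np.pi * k / n) for k in range(n)]
wrapped = retrieve_wrapped_phase(fringes)
```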
A harvesting robot for Agaricus bisporus can significantly improve production efficiency. However, because of the limited growing space and complex environment, Agaricus bisporus usually grows in clusters with many adhesions and occlusions, which poses significant stitching and target recognition challenges in cultivation scenes. A high-precision scene stitching and recognition method based on depth sensing is proposed and evaluated for Agaricus bisporus harvesting. Because the complete scene cannot be captured in a single acquisition in the confined growing space, a rapid depth map stitching algorithm based on disparity correction is proposed. A hierarchical marker recognition strategy based on depth maps, called the "hierarchical watershed" algorithm, is proposed to overcome the cluster occlusion challenge in mushroom harvesting. The complete solution is deployed on a robot platform that integrates scene stitching, recognition, positioning, and target harvesting. The platform is equipped with three robotic arms to increase harvesting speed, and each arm carries a suction-cup gripper at its end to ensure harvest quality. The results show that the overall stitching error and center coordinate positioning error are less than 2 mm within a 200 mm × 400 mm region, and the picking success rate is 95.82%.
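To make the marker-based splitting idea concrete, here is a minimal sketch of marker-based watershed segmentation on a depth map, in the spirit of the "hierarchical watershed" strategy; the paper's actual hierarchy is not reproduced, and the threshold and peak-spacing parameters are assumptions.

```python
# Marker-based watershed on a depth map to split touching mushroom caps.
# Assumes caps are closer to the camera (larger depth values) than the bed.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_caps(depth, cap_threshold, min_distance=15):
    """Label individual caps by flooding the inverted depth from local peaks."""
    foreground = depth > cap_threshold          # caps rising above the bed
    blobs, _ = ndi.label(foreground)            # connected cap clusters
    peaks = peak_local_max(depth, min_distance=min_distance, labels=blobs)
    markers = np.zeros(depth.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-depth, markers, mask=foreground)
```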
Binocular optical tracking systems are an important component of 3D measurement and reconstruction, and the calibration of objects with reflective marker points attached, hereafter referred to as marker objects, is a key factor in overall measurement accuracy. In response to the challenge that traditional marker-object calibration relies on sophisticated and expensive instruments such as a coordinate measuring machine (CMM), this paper proposes a short-baseline, high-precision, fast, and low-cost marker object calibration method based on transformation constraints. A binocular calibration system with a short baseline and small field of view is designed to improve the initial spatial resolution accuracy. Multi-angle projections of the marker points on the marker object under known fixed transformation constraints are collected by controlling a precision servo rotary stage, and a global error optimization method based on Newton's iterative method is proposed to reduce the estimation error of the initial marker points. The marker object calibration system built in this paper achieves a spatial position resolution of 0.15 mm between marker points on the marker object, meeting the need for low-cost, fast, and high-precision calibration and enabling high-precision tracking with binocular optical tracking systems.
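The global refinement idea can be sketched as a constrained nonlinear least-squares problem: marker coordinates are adjusted so that, after applying the known stage rotations, they agree with all measured poses simultaneously. The sketch below uses Levenberg-Marquardt from SciPy rather than the paper's exact Newton iteration; all names and the data layout are assumptions.

```python
# Global refinement of marker coordinates under known rotation constraints.
# observations: list of (R_k, P_k) where R_k is the known 3x3 stage rotation
# and P_k is an (m, 3) array of triangulated marker positions at that pose.
import numpy as np
from scipy.optimize import least_squares

def refine_markers(observations, x0):
    """x0: (m, 3) initial marker coordinates in the stage frame."""
    m = x0.shape[0]

    def residuals(x):
        pts = x.reshape(m, 3)
        res = [(R @ pts.T).T - P for R, P in observations]  # model minus measurement
        return np.concatenate(res).ravel()

    sol = least_squares(residuals, x0.ravel(), method="lm")  # Gauss-Newton family
    return sol.x.reshape(m, 3)
```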
Static hand gesture recognition (HGR) has drawn increasing attention in computer vision and human-computer interaction (HCI) recently because of its great potential. However, HGR is a challenging problem due to the variations of gestures. In this paper, we present a new framework for static hand gesture recognition. Firstly, the key joints of the hand, including the palm center, the fingertips and finger roots, are located. Secondly, we propose novel and discriminative features called root-center-angles to alleviate the influence of the variations of gestures. Thirdly, we design a distance metric called finger length weighted Mahalanobis distance (FLWMD) to measure the dissimilarity of the hand gestures. Experiments demonstrate the accuracy, efficiency and robustness of our proposed HGR framework.
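The exact form of the finger length weighted Mahalanobis distance (FLWMD) is not given in the abstract; the following is one plausible reading, shown only to illustrate the general idea of weighting feature dimensions (e.g., by finger length) inside a Mahalanobis metric. The weight vector and covariance matrix are assumptions.

```python
# Illustrative weighted Mahalanobis distance between two gesture feature vectors.
import numpy as np

def weighted_mahalanobis(x, y, cov, weights):
    """Distance between feature vectors x and y with per-dimension weights."""
    d = weights * (x - y)                      # emphasise more reliable fingers
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))
```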
Photoplethysmography (PPG) technology is widely used in wearable heart rate monitoring. It can reveal potential risks to heart condition and cardiopulmonary function by detecting cardiac rhythm during physical exercise. However, the quality of the wrist photoelectric signal is very sensitive to motion artifact because of the thicker tissue and smaller number of capillaries at the wrist. Motion artifact is therefore the major factor that impedes heart rate measurement during high-intensity exercise. One accelerometer and three channels of light with different wavelengths are used in this research to analyze the coupled form of the motion artifact. A novel approach is proposed to separate the pulse signal from the motion artifact by exploiting their mixing ratios in different optical paths. Our method has four major steps: preprocessing, motion artifact estimation, adaptive filtering, and heart rate calculation. Five healthy young men participated in the experiment. The treadmill speed was set to 12 km/h, and each subject ran for 3-10 minutes while swinging the arms naturally. The results are compared with a chest-strap reference. The average mean square error (MSE) is less than 3 beats per minute (BPM). The proposed method performs well during intense physical exercise and shows strong robustness across individuals with different running styles and postures.
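For the adaptive filtering step, a standard accelerometer-referenced noise canceller gives a feel for the approach; the sketch below is a plain NLMS filter and does not reproduce the paper's multi-wavelength mixing-ratio estimation. Filter length and step size are illustrative.

```python
# NLMS adaptive noise canceller: remove the accelerometer-correlated component
# from a PPG channel, leaving the pulse signal in the error term.
import numpy as np

def nlms_cancel(ppg, accel, taps=32, mu=0.5, eps=1e-6):
    """Return the PPG signal with the motion-correlated component suppressed."""
    w = np.zeros(taps)
    clean = np.zeros(len(ppg))
    for n in range(taps, len(ppg)):
        x = accel[n - taps:n][::-1]        # reference window (most recent first)
        est = w @ x                        # estimated motion artifact
        e = ppg[n] - est                   # error = artifact-free pulse estimate
        w += mu * e * x / (x @ x + eps)    # normalized LMS weight update
        clean[n] = e
    return clean
```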
KEYWORDS: Image segmentation, Skin, RGB color model, Image processing algorithms and systems, Feature extraction, Data modeling, Medical imaging, Light sources and illumination, Lamps, Space reconnaissance
The automatic segmentation of psoriatic lesions has been widely researched in recent years. It is an important step in computer-aided methods of calculating PASI to estimate lesion severity. Existing algorithms can handle only single erythema or only scaling segmentation, whereas in practice scaling and erythema are often mixed together. To segment the whole lesion area, this paper proposes an algorithm based on random forests with color and texture features. The algorithm has three steps. In the first step, polarized light is applied during imaging, exploiting the skin's Tyndall effect to eliminate specular reflection, and the Lab color space is used to match human perception. In the second step, a sliding window and its sub-windows are used to extract texture and color features; an image roughness feature is defined so that scaling can easily be separated from normal skin. Finally, a random forest is used to ensure the generalization ability of the algorithm. The algorithm gives reliable segmentation results even under different lighting conditions and skin types. On the dataset provided by Union Hospital, more than 90% of the images are segmented accurately.
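A compact way to see the classification stage is a pixel-wise random forest on Lab color plus a local-roughness measure; the sketch below uses local standard deviation as a stand-in for the paper's roughness feature, and the window size, feature layout, and training mask are assumptions.

```python
# Pixel-wise lesion classification with a random forest on Lab colour and a
# simple local-roughness feature (illustrative stand-in for the paper's design).
import numpy as np
from scipy.ndimage import uniform_filter
from skimage import color
from sklearn.ensemble import RandomForestClassifier

def pixel_features(rgb, win=9):
    lab = color.rgb2lab(rgb)
    L = lab[..., 0]
    mean = uniform_filter(L, win)
    roughness = np.sqrt(np.maximum(uniform_filter(L**2, win) - mean**2, 0))
    feats = np.dstack([lab, roughness[..., None]])
    return feats.reshape(-1, feats.shape[-1])

def train_segmenter(rgb, labels):
    """labels: per-pixel classes (normal skin / erythema / scaling)."""
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    clf.fit(pixel_features(rgb), labels.ravel())
    return clf
```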
KEYWORDS: Skin, Feature selection, Feature extraction, Bismuth, Lithium, Image segmentation, Human vision and color perception, Diagnostics, Medical diagnostics, Machine learning
At present, the PASI scoring system is used to evaluate erythema severity, which helps doctors diagnose psoriasis [1-3]. The system relies on the subjective judgment of doctors, so accuracy and stability cannot be guaranteed [4]. This paper proposes a stable and precise algorithm for erythema severity estimation. Our contributions are twofold. On one hand, to extract the multi-scale redness of erythema, we design hierarchical features. Unlike traditional methods, we not only use color statistical features but also divide the detection window into small windows and extract hierarchical features. Furthermore, a feature re-ranking step is introduced, which guarantees that the extracted features are uncorrelated with each other. On the other hand, an adaptive boosting classifier is applied for further feature selection. During training, the classifier seeks out the most valuable features for evaluating erythema severity thanks to its strong learning ability. Experimental results demonstrate the high precision and robustness of our algorithm. The accuracy is 80.1% on a dataset comprising 116 patients' images with various kinds of erythema. Our system has been applied to erythema treatment efficacy evaluation at Union Hospital, China.
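As an illustration of the two contributions, the sketch below computes simple two-level color statistics per detection window and feeds them to an AdaBoost classifier; the concrete window split and the re-ranking step follow the spirit of the abstract, not its exact design, and all names are assumptions.

```python
# Hierarchical colour statistics + AdaBoost for severity grading (illustrative).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def hierarchical_features(window):
    """window: (H, W, 3) RGB patch; stats on the whole patch and a 2x2 split."""
    feats = [window.mean(axis=(0, 1)), window.std(axis=(0, 1))]
    h, w = window.shape[:2]
    for sub in (window[:h//2, :w//2], window[:h//2, w//2:],
                window[h//2:, :w//2], window[h//2:, w//2:]):
        feats.append(sub.mean(axis=(0, 1)))
    return np.concatenate(feats)

def train_grader(windows, severities):
    X = np.stack([hierarchical_features(w) for w in windows])
    clf = AdaBoostClassifier(n_estimators=200)   # boosting doubles as feature selection
    return clf.fit(X, severities)
```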
An active depth sensing approach using a laser speckle projection system is proposed. After capturing the speckle pattern with an infrared digital camera, we extract the pure speckle pattern using a direct-global separation method. The pure speckles are then represented by Census binary features. By evaluating the matching cost and uniqueness between the real-time image and the reference image, robust correspondences are selected as support points. We then build a disparity grid and propose a generative graphical model to compute disparities. An iterative approach is designed to propagate messages between blocks and update the model. Finally, a dense depth map is obtained by subpixel interpolation and transformation. Experimental evaluations demonstrate the effectiveness and efficiency of our approach.
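The Census binary feature and its Hamming matching cost are standard building blocks and can be sketched as follows; the support-point selection and graphical-model propagation of the paper are not reproduced, and the window size is illustrative.

```python
# Census transform and Hamming matching cost for speckle correspondence.
import numpy as np

def census_transform(img, win=7):
    """Encode each pixel as a bit string comparing its neighbours to the centre."""
    r = win // 2
    codes = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            codes = (codes << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return codes

def hamming_cost(code_a, code_b):
    """Matching cost between two Census codes (lower is more similar)."""
    return bin(int(code_a) ^ int(code_b)).count("1")
```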
This paper presents a machine vision system for automated label inspection, with the goal of reducing labor cost and ensuring consistent product quality. Firstly, the images captured by each single camera are distorted because the inspected object is approximately cylindrical; this paper therefore proposes an algorithm based on adverse cylinder projection, in which label images are rectified by distortion compensation. Secondly, to overcome the limited field of view of each single camera, our method combines the images of all cameras and builds a panorama for label inspection. Thirdly, considering production-line vibration and electronic signal errors, we design real-time image registration to calculate offsets between the template and the inspected images. Experimental results demonstrate that our system is accurate, runs in real time, and can be applied to numerous real-time inspections of approximately cylindrical objects.
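A generic cylinder-to-plane unwarping conveys the rectification idea; the sketch below stands in for the paper's cylinder-projection compensation, and the radius, image geometry, and camera alignment are assumptions.

```python
# Rectify a cylindrical label image by mapping arc length back to the
# foreshortened source column (generic inverse cylinder-to-plane mapping).
import numpy as np
import cv2

def unwrap_cylinder(img, radius_px):
    h, w = img.shape[:2]
    cx = w / 2.0
    out_w = int(np.pi * radius_px)                 # half circumference is visible
    s = np.linspace(-out_w / 2, out_w / 2, out_w)  # arc length on the surface
    map_x = (cx + radius_px * np.sin(s / radius_px)).astype(np.float32)
    map_x = np.tile(map_x, (h, 1))
    map_y = np.tile(np.arange(h, dtype=np.float32)[:, None], (1, out_w))
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
```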
By transferring prior knowledge from source domains and synthesizing new knowledge extracted from the target domain, learning performance can be improved when training data in the target domain are insufficient. In this paper we propose a new method to transfer a deformable part model (DPM) for object detection, using sharable filters from offline-trained auxiliary DPMs of similar categories together with new filters learnt from the target training samples to improve the performance of the target object detector. A DPM consists of a collection of root and part filters. The filters of the auxiliary detectors capture sharable appearance features and can be used as prior knowledge. The sharable filters are employed by the new detector with a coefficient reweighting algorithm to fit the target object better. Meanwhile, the target object still has distinct local appearance features that the part filters in the auxiliary filter pool cannot represent. Hence, new part filters are learnt from the training samples of the target object and added to the filter pool as a complement. The final learnt model is an assembly of transferred auxiliary filters and additional target filters. With a latent transfer learning algorithm, appropriate local features are extracted for the transfer of the auxiliary filters and the description of the distinct target filters. Our experiments demonstrate that the proposed strategy outperforms several state-of-the-art methods.
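The assembly of reweighted auxiliary filters and new target filters can be sketched, very loosely, as a weighted sum of filter responses; the latent transfer learning, HOG features, and DPM deformation terms are omitted, and every name below is illustrative.

```python
# Detector score assembled from reweighted auxiliary filter responses plus
# newly learnt target filters (simplified 2-D sketch, not the full DPM).
import numpy as np
from scipy.signal import correlate2d

def detector_score(feature_map, aux_filters, aux_weights, new_filters):
    """Sum of reweighted auxiliary responses and unit-weight target responses."""
    score = np.zeros(feature_map.shape)
    for f, a in zip(aux_filters, aux_weights):   # transferred, reweighted filters
        score += a * correlate2d(feature_map, f, mode="same")
    for f in new_filters:                        # filters learnt on target samples
        score += correlate2d(feature_map, f, mode="same")
    return score
```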