KEYWORDS: Skin, Heart, Cameras, Video, Signal to noise ratio, RGB color model, Chrominance, Blood circulation, Interference (communication), Linear filtering
Although the human visual system is not sensitive enough to perceive blood circulation, the blood flow driven by cardiac activity causes slight changes on the surface of human skin. With advances in imaging technology, it has become possible to capture these changes with digital cameras. However, it is difficult to obtain clear physiological signals from such changes because of their subtlety and because of noise factors such as motion artifacts and camera sensing disturbances. We propose a method for extracting physiological signals of improved quality from skin-color videos recorded with a remote RGB camera. The results showed that our skin color magnification method clearly reveals the hidden physiological components in the time-series signal. A Korea Food and Drug Administration-approved heart rate monitor was used to verify that the resulting signal was synchronized with the actual cardiac pulse, and comparisons of signal peaks showed correlation coefficients of almost 1.0. In particular, our method can serve as an effective preprocessing step before additional postfiltering techniques are applied to improve the accuracy of image-based physiological signal extraction.
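As a concrete illustration, the sketch below implements the generic image-based pulse-extraction pipeline that work of this kind builds on: spatially average the skin pixels of each frame, then band-pass filter the resulting trace to the heart-rate band. The channel choice, filter design, and frame rate are illustrative assumptions, not the authors' skin color magnification method itself.

```python
# A minimal sketch of image-based pulse extraction, assuming an RGB video of
# a skin region is already available as a NumPy array.
import numpy as np
from scipy.signal import butter, filtfilt

def extract_pulse(frames, fps=30.0, low_hz=0.7, high_hz=4.0):
    """frames: array of shape (T, H, W, 3) holding RGB video of a skin region."""
    # Spatial mean of the green channel per frame; green typically carries
    # the strongest blood-volume signal in RGB video (an assumed choice here).
    raw = frames[:, :, :, 1].reshape(len(frames), -1).mean(axis=1)
    # Zero-mean the trace, then band-pass to plausible heart rates (42-240 bpm).
    raw = raw - raw.mean()
    b, a = butter(3, [low_hz / (fps / 2), high_hz / (fps / 2)], btype="band")
    return filtfilt(b, a, raw)

def heart_rate_bpm(pulse, fps=30.0):
    # Heart rate from the dominant spectral peak of the filtered signal.
    spectrum = np.abs(np.fft.rfft(pulse))
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fps)
    return 60.0 * freqs[np.argmax(spectrum)]
```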
With the growth of biometric technology, spoofing attacks have emerged as a threat to the security of such systems. The main spoofing scenarios against face recognition systems include the printing attack, the replay attack, and the 3D mask attack. Techniques that evaluate the liveness of the biometric data can be considered a countermeasure to such attacks. In this paper, a novel face liveness detection method based on a cardiac signal extracted from the face is presented. The key point of the proposed method is that the cardiac characteristic is detected in live faces but not in non-live faces. Experimental results showed that the proposed method can be an effective way to detect printing attacks and 3D mask attacks.
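A minimal sketch of the liveness decision this abstract implies follows: a face is accepted as live only if the trace extracted from it shows a dominant spectral peak in the heart-rate band. The SNR definition and the threshold value are hypothetical and would need tuning on real data.

```python
# A toy live/spoof decision, assuming a pulse trace has already been
# extracted from the face video.
import numpy as np

def is_live(pulse, fps=30.0, band=(0.7, 4.0), snr_threshold=4.0):
    spectrum = np.abs(np.fft.rfft(pulse - np.mean(pulse))) ** 2
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    # Compare the strongest in-band component against the mean in-band power:
    # printed photos and 3D masks carry no cardiac rhythm, so no clear peak
    # stands out above the noise floor. The threshold here is a placeholder.
    peak = spectrum[in_band].max()
    floor = spectrum[in_band].mean()
    return peak / floor > snr_threshold
```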
The classification of eye openness and closure has been researched in various fields, e.g., driver drowsiness detection, physiological status analysis, and eye fatigue measurement. Classification with high accuracy requires accurate segmentation of the eye region. Most previous research segmented the eye by image binarization on the basis that the eyeball is darker than the skin, but the performance of this approach is frequently degraded by thick eyelashes or shadows around the eye. Thus, we propose a fuzzy-based method for classifying eye openness and closure. First, the proposed method uses I and K color information from the HSI and CMYK color spaces, respectively, for eye segmentation. Second, the eye region is binarized using a fuzzy logic system with I and K inputs, which is less affected by eyelashes and shadows around the eye; the combined image of I and K pixels is obtained through the fuzzy logic system. Third, in order to reflect the effect of all the inference values on the output score of the fuzzy system, we use a revised weighted average method in which all the rectangular regions formed by the inference values are considered when calculating the output score. Fourth, the proposed fuzzy-based method successfully classifies eye openness and closure even for low-resolution eye images captured in an environment where people watch TV at a distance. Because it uses a fuzzy logic system, our method does not require an additional training procedure, irrespective of the chosen database. Experimental results with two databases of eye images show that our method is superior to previous approaches.
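The sketch below illustrates the I/K feature extraction and a toy single-rule fuzzy combination for an eye-openness score. The membership functions, the min rule, and the decision threshold are illustrative stand-ins for the paper's full rule base and revised weighted average defuzzification.

```python
# A minimal sketch of combining I (of HSI) and K (of CMYK) with one fuzzy
# rule to score eye openness; all numeric constants are assumptions.
import numpy as np

def i_and_k(rgb):
    """rgb: (H, W, 3) float array in [0, 1]. Returns I of HSI and K of CMYK."""
    i = rgb.mean(axis=2)        # intensity component of HSI
    k = 1.0 - rgb.max(axis=2)   # black component of CMYK
    return i, k

def eyeball_score(rgb):
    i, k = i_and_k(rgb)
    # Fuzzy memberships for "dark eyeball pixel": low intensity, high K.
    mu_dark_i = np.clip((0.5 - i) / 0.5, 0.0, 1.0)
    mu_dark_k = np.clip((k - 0.5) / 0.5, 0.0, 1.0)
    # A single min-rule standing in for the paper's full rule base.
    mu = np.minimum(mu_dark_i, mu_dark_k)
    return mu.mean()            # fraction of eyeball-like evidence

def is_open(rgb, threshold=0.15):
    # An open eye exposes the dark iris/pupil region; a closed eye shows skin.
    return eyeball_score(rgb) > threshold
```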
This article [Opt. Eng. 52(7), 073104 (2013)] was originally published on 9 July 2013 with an error in Table 2. For subject number 19, the value in the last column (EC) should be 0.7314, not −0.7314. The corrected table is reprinted below.
Recently, it has become necessary to evaluate the performance of display devices in terms of human factors. To meet this requirement, several studies have measured the eyestrain of users watching display devices. However, these studies were limited in that they did not consider precise human visual information. Therefore, a new method is proposed for measuring the eyestrain of a user watching a liquid crystal display (LCD) that takes the user's gaze direction and visual field of view into account. Our study differs from previous work in the following four ways. First, the user's gaze position is estimated using an eyeglass-type eye-image capturing device. Second, we propose a new eye foveation model based on a wavelet transform that considers the user's gaze position and the gaze detection error. Third, three video adjustment factors, namely variance of hue (VH), edge, and motion information, are extracted from the displayed images to which the eye foveation model has been applied. Fourth, the relationship between eyestrain and the three video adjustment factors is investigated. Experimental results show that a decrease in the VH value of a display induces a decrease in eyestrain. In addition, increased edge and motion components induce a reduction in eyestrain.
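For concreteness, the sketch below computes the three video adjustment factors named above (variance of hue, edge, and motion) on plain frames with OpenCV; the gaze-dependent foveation model that the paper applies before these measurements is omitted, and the exact factor definitions here are assumptions.

```python
# A minimal sketch of the VH, edge, and motion factors on raw BGR frames.
import numpy as np
import cv2  # OpenCV, assumed available

def video_factors(prev_bgr, curr_bgr):
    hsv = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2HSV)
    vh = float(np.var(hsv[:, :, 0]))                 # variance of hue
    gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    edge = float(np.sqrt(gx ** 2 + gy ** 2).mean())  # mean gradient magnitude
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    motion = float(np.abs(gray.astype(np.int16) -
                          prev_gray.astype(np.int16)).mean())  # frame difference
    return vh, edge, motion
```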
Gaze-tracking technology is used to obtain the position of a user's viewpoint, and a new gaze-tracking method is proposed based on a wearable goggle-type device that includes an eye-tracking camera and a frontal viewing camera. The proposed method is novel in five ways compared to previous research. First, it can track the user's gaze position while allowing natural facial and eye movements, by combining the frontal viewing camera with the eye-tracking camera. Second, an eye gaze position is calculated using a geometric transform, based on the mapping function among three rectangular regions: the rectangular region defined by the four pupil centers detected when a user gazes at the four corners of a monitor, the distorted monitor region observed by the frontal viewing camera, and the actual monitor region. Third, a facial gaze position is estimated based on the geometric center and the four internal angles of the monitor region detected by the frontal viewing camera. Fourth, a final gaze position is obtained as the weighted summation of the eye and facial gaze positions. Fifth, since a simple 2-D method is used to obtain the gaze position instead of a complicated 3-D method, the proposed method can operate at real-time speeds. Experimental results show that the root mean square (rms) error of gaze estimation is less than 1 deg.
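A minimal sketch of the geometric-transform step follows: a homography mapping the rectangle of four pupil centers (recorded while the user gazed at the monitor corners) onto the monitor region. The corner coordinates are hypothetical placeholders, and the paper additionally blends this eye gaze with a facial gaze estimate.

```python
# A toy eye-to-monitor mapping via a perspective transform.
import numpy as np
import cv2  # OpenCV, assumed available

# Pupil centers measured while the user gazed at the four monitor corners
# (hypothetical example values, in eye-camera pixels).
pupil_corners = np.float32([[210, 150], [310, 155], [305, 230], [205, 225]])
# Target monitor region, e.g. a 1920x1080 screen.
monitor_corners = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])

H = cv2.getPerspectiveTransform(pupil_corners, monitor_corners)

def gaze_on_monitor(pupil_xy):
    """Map a detected pupil center to a monitor coordinate."""
    p = np.float32([[pupil_xy]])           # shape (1, 1, 2) as OpenCV expects
    return cv2.perspectiveTransform(p, H)[0, 0]

print(gaze_on_monitor((258, 190)))         # lands roughly mid-screen
```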
Gaze tracking technology is a convenient interfacing method for mobile devices. Most previous studies used a large desktop display or a head-mounted display. In this study, we propose a novel gaze tracking method for a mobile device using an active appearance model (AAM) and multiple support vector regression (SVR). Our research makes four main contributions. First, in calculating the gaze position, the amounts of facial rotation and translation, represented by four feature values, are computed from the facial feature points detected by the AAM. Second, the amount of eye rotation, represented by two feature values, is computed for measuring the eye gaze position. Third, to compensate for the AAM fitting error under facial rotation, we use an adaptive discrete Kalman filter (DKF), which applies a different velocity in the state transition matrix to the facial feature points. Fourth, we obtain the gaze position on the mobile device with multiple SVRs by separating the rotation and translation of the face from the rotation of the eye. Experimental results show that the root mean square (rms) gaze error is 36.94 pixels on the 4.5-in. screen of a mobile device with a screen resolution of 800×600 pixels.
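The sketch below shows the multiple-SVR idea in miniature: one regressor per screen coordinate, fed with the six feature values the abstract describes (four for facial rotation/translation, two for eye rotation). The kernel, hyperparameters, and synthetic training data are placeholder assumptions, not the paper's setup.

```python
# A toy multiple-SVR gaze regressor on synthetic features.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(200, 6))          # 6 face/eye feature values
# Hypothetical ground-truth gaze targets on an 800x600 screen.
y_train = np.column_stack([
    400 + 350 * X_train[:, 4] + 40 * X_train[:, 0],  # x driven mostly by eye
    300 + 260 * X_train[:, 5] + 30 * X_train[:, 1],  # y likewise
])

# Multiple SVR: one regressor per output coordinate.
svr_x = SVR(kernel="rbf", C=10.0).fit(X_train, y_train[:, 0])
svr_y = SVR(kernel="rbf", C=10.0).fit(X_train, y_train[:, 1])

def predict_gaze(features):
    f = np.asarray(features, dtype=float).reshape(1, -1)
    return float(svr_x.predict(f)[0]), float(svr_y.predict(f)[0])
```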
Until now, most research on iris recognition has focused on recognition algorithms and iris camera systems. There has been little research into fake iris detection, although its importance has recently been greatly emphasized. Fake iris detection refers to the process of detecting and defeating fake iris images. In this work, we propose a new method of defeating fake iris attacks using Purkinje images based on gaze position. Our research presents the following four improvements over previous works. First, we calculate the theoretical positions of, and distances between, the Purkinje images based on a 3-D human eye model. Second, by using these positions and distances, which change according to gaze position, we design a more robust way of detecting fake irises. Third, since it is not necessary to align the center of the user's eyeball with the optical axis of the camera, the proposed method can be used in practical iris systems. Fourth, by activating the illumination infrared light-emitting diode (IR-LED), the distance-measuring IR-LED, and the Purkinje IR-LED alternately, we obtain accurate positions for the Purkinje images according to the user's gaze position. Experimental results show that the false rejection rate (FRR) is 0.2% and the false acceptance rate (FAR) is 0.2%.
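The acceptance test implied by the abstract can be sketched as follows: compare the measured separation of the detected Purkinje reflections against the separation predicted by an eye model for the current gaze position. The model function and the tolerance below are illustrative assumptions standing in for the paper's 3-D eye-model computation.

```python
# A toy Purkinje-image consistency check for fake iris detection.
import math

def predicted_purkinje_distance(gaze_angle_deg, base_distance=1.0):
    # Placeholder for the paper's 3-D eye-model prediction, which varies
    # with where the user is looking; the cosine form is an assumption.
    return base_distance * math.cos(math.radians(gaze_angle_deg))

def is_live_iris(measured_distance, gaze_angle_deg, tolerance=0.15):
    expected = predicted_purkinje_distance(gaze_angle_deg)
    # Fake irises (prints, patterned lenses) fail to reproduce the pair of
    # corneal/lens reflections at the geometrically expected separation.
    return abs(measured_distance - expected) / expected < tolerance
```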