A person's signature is a significant biometric trait that can be used to identify a person, and it is often required in financial transactions and insurance-related activities. It is well known that during such transactions, forged handwritten signatures made on behalf of a genuine user can result in the theft of money or other assets safely kept in a bank or other institution. An automated signature verification system is therefore a vital safeguard against such malicious offenders. This paper reports a writer-independent offline signature verification system that makes use of genuine and forged signatures written in the Manipuri script. A combination of handcrafted geometric features and features extracted using a Convolutional Neural Network (CNN) is used, and the feature space of the combined features is then optimized using a Genetic Algorithm (GA). The system achieves a very high level of performance using an ensemble of four pattern classifiers: Support Vector Machine (SVM), k-Nearest Neighbours (KNN), Naive Bayes, and Decision Tree. The classifiers are ensembled using the logical OR rule and majority voting. Experiments are conducted on an original database consisting of Manipuri signatures of 81 individuals. Experimental results are compelling when the proposed offline signature verification system is compared with existing systems.
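The decision-fusion step described above can be sketched minimally as follows, assuming each of the four classifiers emits a binary accept/reject decision (1 = genuine, 0 = forged); the vote values shown are hypothetical, not from the paper's experiments:

```python
from collections import Counter

def majority_vote(decisions):
    """Fuse binary decisions from several classifiers by majority voting.
    With an even number of voters a tie is possible; Counter breaks ties
    by first occurrence, so a deterministic tie-break policy is assumed."""
    return Counter(decisions).most_common(1)[0][0]

def or_rule(decisions):
    """Logical-OR fusion: accept as genuine if any classifier accepts."""
    return int(any(decisions))

# Hypothetical decisions from SVM, KNN, Naive Bayes, and Decision Tree:
votes = [1, 1, 0, 1]
print(majority_vote(votes))       # 1
print(or_rule([0, 0, 0, 1]))      # 1
```

The OR rule favors low false rejection at the cost of more false acceptances, while majority voting balances the two; the paper evaluates both.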
In this paper, we investigate an application that integrates a holistic appearance-based method and a feature-based method for face recognition. The automatic face recognition system uses multiscale Kernel PCA (Principal Component Analysis) to characterize approximated face images and reduces the number of invariant SIFT (Scale Invariant Feature Transform) keypoints extracted from the projected face feature space. To achieve higher variance across inter-class face images, we compute principal components in a higher-dimensional feature space to project a face image onto approximated kernel eigenfaces. As long as the feature spaces retain their distinctive characteristics, a reduced number of SIFT keypoints is detected for a number of principal components; the keypoints are then fused using a user-dependent weighting scheme to form a feature vector. The proposed method is tested on the ORL face database, and the test results demonstrate the efficacy of the system.
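The kernel-eigenface projection step can be sketched with scikit-learn's `KernelPCA`; the data here is a random stand-in for vectorized face images, and the RBF kernel and its `gamma` are assumed parameters, not the paper's settings:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
# Hypothetical stand-in for 20 vectorized face images, 64 dimensions each.
faces = rng.standard_normal((20, 64))

# Project the faces onto approximated kernel eigenfaces, i.e. principal
# components computed in the kernel-induced higher-dimensional space.
kpca = KernelPCA(n_components=10, kernel="rbf", gamma=1e-2)
projected = kpca.fit_transform(faces)
print(projected.shape)  # (20, 10)
```

SIFT keypoints would then be detected on the reconstructed approximations rather than the raw images, shrinking the keypoint set while keeping the discriminative structure.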
In this paper, we present a hierarchical kernel associative memory (H-KAM) based computational model with a Finite Ridgelet Transform (FRIT) representation for multispectral palmprint recognition. The Finite Ridgelet Transform yields a very compact and distinctive representation of linear singularities while also capturing singularities along lines and edges. The proposed system uses this representation of the multispectral palmprint image, which is then modeled by kernel associative memories, and a Bayesian classifier is used for recognition. The recognition scheme is thoroughly tested on the benchmark CASIA multispectral palmprint database, and the experimental results demonstrate the robustness of the proposed system under different wavelengths of palm imaging.
This paper proposes a palmprint identification system using the Finite Ridgelet Transform (FRIT) and a Bayesian classifier. FRIT is applied to the ROI (region of interest) extracted from the palmprint image to obtain a set of distinctive features, which are then classified with a Bayesian classifier. The proposed system has been tested on the CASIA and IIT Kanpur palmprint databases, and the experimental results show better performance than well-known existing systems.
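The Bayesian classification step can be illustrated with a Gaussian naive Bayes model, a common concrete instance of a Bayesian classifier; the ridgelet-coefficient features below are synthetic placeholders, not real palmprint data:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(42)
# Hypothetical 16-dimensional ridgelet-coefficient features
# for two palm classes (30 training samples each).
X = np.vstack([rng.normal(0.0, 1.0, (30, 16)),
               rng.normal(3.0, 1.0, (30, 16))])
y = np.array([0] * 30 + [1] * 30)

# Fit class-conditional Gaussians and apply Bayes' rule at prediction time.
clf = GaussianNB().fit(X, y)
probe = rng.normal(3.0, 1.0, (1, 16))  # a probe drawn from class 1
print(clf.predict(probe))
```

Identification then amounts to assigning the probe palmprint to the class with the highest posterior probability.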
This paper presents a palmprint-based verification system using SIFT features and a Lagrangian network graph technique. SIFT is employed to extract invariant feature points from the region of interest (ROI), which is extracted from the wide palm texture at the preprocessing stage. Identity is then established by finding the permutation matrix for a pair of reference and probe palm graphs drawn on the extracted SIFT features; the permutation matrix is chosen to minimize the distance between the two graphs. The proposed system has been tested on the CASIA and IITK palmprint databases, and the experimental results reveal the effectiveness and robustness of the system.
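The permutation-matrix matching step can be sketched as an assignment problem; here `linear_sum_assignment` (the Hungarian algorithm) stands in for the paper's Lagrangian network solver, and the descriptors are random placeholders for SIFT features:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
# Hypothetical 8-dimensional descriptors at 5 keypoints of a reference palm
# graph; the probe graph is the same set, permuted and slightly perturbed.
ref = rng.standard_normal((5, 8))
probe = ref[[2, 0, 4, 1, 3]] + rng.normal(0, 0.01, (5, 8))

cost = cdist(ref, probe)                  # pairwise descriptor distances
rows, cols = linear_sum_assignment(cost)  # optimal node correspondence
perm = np.zeros_like(cost)
perm[rows, cols] = 1                      # the permutation matrix
score = cost[rows, cols].sum()            # graph distance after matching
print(cols, round(score, 3))
```

A small residual distance under the optimal permutation indicates the probe and reference graphs describe the same palm, so `score` can be thresholded for verification.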
In this paper, a fusion of Principal Component Analysis (PCA) and a generalization of Linear Discriminant Analysis (LDA) is proposed in the context of multiview face recognition. The generalization of LDA is extended to establish correlation between face classes in the transformed representation, which is called the canonical covariate. The proposed work uses a Gabor filter bank to extract facial features characterized by spatial frequency, spatial locality, and orientation, compensating for the variations in a face that occur due to changes in illumination, pose, and facial expression. Convolving the Gabor filter bank with face images produces Gabor face representations with high-dimensional feature vectors. PCA and the canonical covariate are then applied to the Gabor face representations to reduce the high-dimensional feature spaces into low-dimensional Gabor eigenfaces and Gabor canonical faces. The reduced eigenface vector and canonical face vector are fused using a weighted mean fusion rule. Finally, support vector machines are trained with the augmented fused feature set to perform the recognition task. The proposed system has been evaluated on the UMIST face database and achieves high recognition accuracy for multiview face images.
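A Gabor filter bank of the kind used above can be built directly in NumPy; the kernel size, orientations, frequencies, and bandwidth below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def gabor_kernel(size, theta, freq, sigma):
    """Real part of a Gabor filter: a Gaussian-windowed sinusoid tuned
    to one spatial frequency (freq) and one orientation (theta)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * freq * xr))

# A small bank: 4 orientations x 2 spatial frequencies.
bank = [gabor_kernel(15, t, f, sigma=3.0)
        for t in np.linspace(0, np.pi, 4, endpoint=False)
        for f in (0.1, 0.2)]
print(len(bank), bank[0].shape)  # 8 (15, 15)
```

Convolving each kernel in the bank with a face image and stacking the responses yields the high-dimensional Gabor face representation that PCA and the canonical covariate then reduce.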
Multibiometric systems offer more reliable and accurate performance by combining the benefits of multiple traits for user authentication. However, incompatible biometric characteristics such as unmatched image patterns, improper feature registration and feature space representation, image scaling, and infeasible fusion schemes often degrade the performance of multibiometric systems. This paper focuses on the benefits of feature-level and match-score-level fusion of face and ear biometrics using a scale invariant feature transform (SIFT) representation and probabilistic graphs. The proposed fusion techniques first detect and compute SIFT features from the face and ear images independently, and probabilistic graphs are then drawn on the extracted feature points. Using an iterative relaxation algorithm on both graphs, corresponding feature points are searched, and the matched points are paired and grouped into two independent sets. In feature-level fusion, the two feature sets are concatenated into an augmented group; the combined feature set is normalized using the 'min-max' normalization rule, and the concatenated feature vector is used for verification. In match-score-level fusion, independent verifications are performed using relaxation-based probabilistic graphs and a point pattern matching algorithm, and the matching scores generated from the face and ear biometrics are fused using the 'sum' rule. The reported experimental results show improved verification performance with both feature-level and score-level fusion.
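The min-max normalization and sum-rule score fusion mentioned above are simple enough to sketch directly; the raw match scores below are hypothetical values on deliberately different scales, as produced by independent face and ear matchers:

```python
import numpy as np

def min_max(scores):
    """'min-max' normalization: map raw match scores into [0, 1] so
    matchers with different score ranges become comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

# Hypothetical raw match scores for three probes from the two matchers.
face_scores = np.array([12.0, 30.0, 21.0])
ear_scores = np.array([0.2, 0.9, 0.5])

# 'sum' rule fusion on the normalized scores.
fused = min_max(face_scores) + min_max(ear_scores)
print(fused)
```

The fused score is then compared against a single decision threshold, replacing two separate unimodal accept/reject decisions.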