Paper
19 March 2009
Information-theoretic feature extraction and selection for robust classification
Chandra Shekhar Dhir, Soo Young Lee
Abstract
Classification performance of recognition tasks can be improved by selecting highly discriminative features from a low-dimensional linear representation of the data. High-dimensional multivariate data can be represented in lower dimensions by unsupervised feature extraction techniques, which attempt to remove redundancy in the data and/or resolve multivariate prediction problems. These extracted low-dimensional features of the raw data may not ensure good class discrimination; therefore, supervised feature selection methods motivated by information-theoretic approaches can improve recognition performance with fewer features. The proposed hybrid feature selection methods efficiently select features with higher class discrimination than selection by feature-class mutual information (MI), the Fisher criterion, or unsupervised selection by variance, resulting in much improved recognition performance. The feature-class MI criterion and the hybrid feature selection methods are computationally scalable and are optimal selectors for statistically independent features.
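The feature-class MI criterion mentioned in the abstract ranks each extracted feature by its mutual information with the class label. The paper's exact estimator and hybrid scheme are not given here, so the following is only a minimal illustrative sketch: a histogram plug-in estimate of I(feature; class) on a synthetic toy dataset, with all variable names and the discretization choice being assumptions of this example.

```python
import numpy as np

def feature_class_mi(x, y, bins=8):
    """Plug-in estimate of I(X;C) for one continuous feature x
    against discrete class labels y, via histogram discretization."""
    # Discretize the feature into `bins` intervals (interior edges only).
    x_d = np.digitize(x, np.histogram_bin_edges(x, bins=bins)[1:-1])
    mi = 0.0
    for xv in np.unique(x_d):
        px = np.mean(x_d == xv)                  # P(X = xv)
        for c in np.unique(y):
            pc = np.mean(y == c)                 # P(C = c)
            pxc = np.mean((x_d == xv) & (y == c))  # P(X = xv, C = c)
            if pxc > 0:
                mi += pxc * np.log2(pxc / (px * pc))
    return mi

# Toy data: only feature 0 carries class information.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)
X = rng.normal(size=(500, 3))
X[:, 0] += 2.0 * y  # class-dependent shift makes feature 0 discriminative

scores = [feature_class_mi(X[:, j], y) for j in range(X.shape[1])]
order = np.argsort(scores)[::-1]  # descending MI ranking
print(order)  # feature 0 should rank first
```

For statistically independent features (as ICA aims to produce), ranking by this per-feature score and keeping the top-k is what makes the criterion computationally scalable: no joint search over feature subsets is needed.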
© (2009) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Chandra Shekhar Dhir and Soo Young Lee "Information-theoretic feature extraction and selection for robust classification", Proc. SPIE 7343, Independent Component Analyses, Wavelets, Neural Networks, Biosystems, and Nanoengineering VII, 73430H (19 March 2009); https://doi.org/10.1117/12.822569
KEYWORDS
Feature selection
Independent component analysis
Feature extraction
Principal component analysis
Databases
Statistical analysis
Facial recognition systems