Feature extraction is a process that reduces data dimensionality using various transforms while preserving the
discriminant characteristics of the original data. It has long been an important issue in pattern recognition,
since it reduces computational complexity and yields a simpler classifier. Linear feature extraction, which
applies a linear transform to the original data to reduce its dimensionality, has been especially widely used.
The decision boundary feature extraction (DBFE) method retains only the directions that are informative for
discriminating among the classes. DBFE has been applied to various parametric and non-parametric classifiers,
including the Gaussian maximum likelihood (GML) classifier, the k-nearest neighbor classifier, support vector
machines (SVMs) and neural networks. In this paper, we apply DBFE to deep neural networks. Our algorithm is based
on the non-parametric version of DBFE, which was developed for neural networks. Experimental results on the UCI
database show improved classification accuracy with reduced dimensionality.
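The linear feature extraction step described above can be sketched as follows. Note the projection matrix here is obtained by plain PCA purely as a placeholder, since the abstract does not spell out the DBFE construction; DBFE would instead derive the transform from decision-boundary normal vectors.

```python
import numpy as np

def linear_feature_extraction(X, k):
    """Reduce X (n_samples x d) to k dimensions via a linear transform.

    PCA is used here only as a placeholder projection; DBFE would
    instead build the transform from decision-boundary normal vectors.
    """
    Xc = X - X.mean(axis=0)                      # center the data
    cov = np.cov(Xc, rowvar=False)               # d x d covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    # keep the k eigenvectors with the largest eigenvalues
    W = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
    return Xc @ W, W                             # projected data, d x k transform

# usage: project 10-dimensional data down to 3 dimensions
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
Y, W = linear_feature_extraction(X, 3)
```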
Among the many computer vision applications, automatic logo recognition has drawn great interest from industry as well as academic institutions. In this paper, we propose an angle-distance map, from which we develop a robust logo detection algorithm. The proposed angle-distance histogram is invariant to scale and rotation. The proposed method first uses shape information and color characteristics to find candidate regions and then applies the angle-distance histogram. Experiments show that the proposed method detects logos of various sizes and orientations.
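A scale- and rotation-invariant angle-distance histogram of the kind described can be sketched as below. The exact construction in the paper is not given, so this version makes two illustrative assumptions: distances from the centroid are normalized by their maximum (scale invariance), and angles are measured relative to the direction of the farthest point (a simple rotation normalization).

```python
import numpy as np

def angle_distance_histogram(points, n_angle=16, n_dist=8):
    """Sketch of an angle-distance histogram for 2-D contour points.

    Assumptions (not from the paper): distances are normalized by their
    maximum for scale invariance, and angles are taken relative to the
    angle of the farthest point for rotation normalization.
    """
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)                              # centroid
    d = np.linalg.norm(pts - c, axis=1)
    d = d / d.max()                                   # scale-invariant distances
    ang = np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0])
    ang = (ang - ang[np.argmax(d)]) % (2 * np.pi)     # rotation normalization
    hist, _, _ = np.histogram2d(ang, d, bins=[n_angle, n_dist],
                                range=[[0, 2 * np.pi], [0, 1]])
    return hist / hist.sum()                          # normalized 2-D histogram
```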
Most non-linear classification methods can be viewed as non-linear dimension expansion followed by a linear classifier. For example, the support vector machine (SVM) expands the dimensions of the original data using various kernels and classifies the data in the expanded space with a linear SVM. In the case of extreme learning machines or neural networks, the dimensions are expanded by the hidden neurons and the final layer performs the linear classification. In this paper, we analyze the discriminant powers of various non-linear classifiers. Analyses of the discriminating powers of non-linear dimension expansion methods are presented, along with a suggestion of how to improve separability in non-linear classifiers.
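The "expansion followed by a linear classifier" view can be made concrete with a minimal extreme-learning-machine-style sketch: random hidden weights expand the data non-linearly, and a ridge-regularized least-squares fit serves as the linear output layer. The hidden size, activation, and regularization strength here are illustrative choices, not values from the paper.

```python
import numpy as np

def elm_style_classifier(X, y, n_hidden=50, lam=1e-3, seed=0):
    """Non-linear dimension expansion + linear classifier, ELM-style.

    Random hidden weights expand the input; a regularized least-squares
    fit on one-hot targets gives the linear output layer. All
    hyperparameters are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))       # random expansion weights
    H = np.tanh(X @ W)                                # expanded feature space
    T = np.eye(y.max() + 1)[y]                        # one-hot targets
    # linear classifier: ridge-regularized least squares on H
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
    return lambda Xnew: np.argmax(np.tanh(Xnew @ W) @ beta, axis=1)
```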
In this paper, a weighted reduced multivariate polynomial for class imbalance learning is proposed. When there is a large variation in the number of available samples per class, the class distribution is said to be imbalanced. In such cases, conventional classifiers may assign most samples to the majority classes in order to maximize overall classification accuracy, which may not be desirable in some applications. Thus, for imbalanced data classification, an additional algorithm may be required to address the low representation of minority classes when the classification performance of those classes is important. We use weighted ridge regression for imbalanced data classification. Experimental results on the UCI database show improved classification of the minority classes.
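Weighted ridge regression of the kind mentioned can be sketched in closed form: per-sample weights enter the squared loss, so minority-class examples can be up-weighted. The weighting scheme (e.g. inverse class frequency) is an assumption here, not a detail from the paper.

```python
import numpy as np

def weighted_ridge(X, y, w, lam=1.0):
    """Weighted ridge regression sketch for imbalanced data.

    Solves min_b || W^(1/2) (X b - y) ||^2 + lam ||b||^2 in closed form,
    where W = diag(w). Up-weighting minority-class samples in w (e.g. by
    inverse class frequency) is an illustrative assumption.
    """
    Wd = np.diag(w)
    d = X.shape[1]
    # normal equations: (X^T W X + lam I) b = X^T W y
    return np.linalg.solve(X.T @ Wd @ X + lam * np.eye(d), X.T @ Wd @ y)
```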