Paper
4 August 2003 Measuring the generalization capabilities of arbitrary classifiers
Abstract
Given a classifier trained on two-class data, one wishes to determine how well it will perform on new, unseen data. Typically this is done by using the data to estimate the underlying distribution, generating new data from that estimated distribution, and testing the classifier on the generated data; hold-out methods, including cross-validation, are also used. We propose a new method that uses computational geometry techniques to produce a partial ordering on subsets of feature space and measure how well the classifier will perform on these subsets. The classifier must satisfy certain conditions for this measure to exist. We give the details of these conditions, present results concerning this special collection of classifiers, and derive the measure that quantifies the generalization capability of a classifier in this collection.
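As a point of comparison for the baseline the abstract mentions, the hold-out approach can be sketched as follows. This is a minimal illustration, not the authors' computational-geometry method: a hypothetical two-class 1-D data set, a trivial nearest-centroid classifier, and a k-fold cross-validation estimate of its accuracy on unseen data.

```python
# Minimal sketch of k-fold cross-validation for estimating generalization
# accuracy of a two-class classifier (illustrative; not the paper's method).
import random

random.seed(0)

# Hypothetical two-class data: 1-D features drawn around class means 0 and 2.
data = [(random.gauss(0.0, 0.5), 0) for _ in range(50)] + \
       [(random.gauss(2.0, 0.5), 1) for _ in range(50)]
random.shuffle(data)

def train_nearest_centroid(samples):
    """Fit a trivial classifier: store the per-class feature mean."""
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, y in samples:
        sums[y] += x
        counts[y] += 1
    centroids = {c: sums[c] / counts[c] for c in (0, 1)}
    # Predict the class whose centroid is nearest to the input feature.
    return lambda x: min(centroids, key=lambda c: abs(x - centroids[c]))

def cross_val_accuracy(samples, k=5):
    """Average held-out accuracy over k folds."""
    fold = len(samples) // k
    accs = []
    for i in range(k):
        test = samples[i * fold:(i + 1) * fold]
        train = samples[:i * fold] + samples[(i + 1) * fold:]
        clf = train_nearest_centroid(train)
        correct = sum(clf(x) == y for x, y in test)
        accs.append(correct / len(test))
    return sum(accs) / k

print(cross_val_accuracy(data))
```

The cross-validation average is itself only an estimate over one partition of the sample; the paper's motivation is to measure generalization on subsets of feature space directly rather than through resampling.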
© (2003) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Mark E. Oxley and Amy L. Magnus "Measuring the generalization capabilities of arbitrary classifiers", Proc. SPIE 5103, Intelligent Computing: Theory and Applications, (4 August 2003); https://doi.org/10.1117/12.487484
KEYWORDS: Logic, Fuzzy logic, Sensors, Data analysis, Defense and security, Lithium, Matrices