There is no universally accepted methodology to determine how much confidence one should have in a classifier
output. This research proposes a framework to determine the level of confidence in an indication from a classifier
system where the output is a measurement value. Two types of confidence are developed in this paper: the first is confidence in a classification system or classifier as a whole, denoted classifier confidence; the second is confidence in the output of a classification system or classifier. In this paradigm, we posit that the confidence in
the output of a classifier should be, on average, equal to the confidence in the classifier as a whole (i.e., classifier
confidence). The amount of confidence in a given classifier is estimated using multiattribute preference theory
and forms the foundation for a quadratic confidence function that is applied to posterior probability estimates.
Classifier confidence is currently determined from individual measurable value functions for classification accuracy, average entropy, and sample size, and the overall measurable value function takes a multilinear form under the assumption of weak difference independence. Using classifier confidence, a quadratic confidence function is trained that maps a posterior probability estimate to the confidence in the corresponding indication. In this paradigm, confidence is not equal to the posterior probability estimate but is related to it.
This confidence measure is a direct link between traditional decision analysis techniques and traditional pattern
recognition techniques. This methodology is applied to two real-world data sets, and the results show the behavior that would be expected of a rational confidence measure.
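The construction described above can be sketched in code. The sketch below is illustrative only: the single-attribute value functions, the scaling constants, and the reference sample size are hypothetical placeholders rather than the elicited quantities from the paper, and the quadratic confidence function is reduced to a single free coefficient chosen so that the average output confidence matches the classifier confidence, in the spirit of the constraint stated above.

    import numpy as np

    # Hypothetical single-attribute measurable value functions, each scaled to [0, 1].
    # The shapes below are simple monotone placeholders, not the elicited functions.

    def v_accuracy(acc):
        # Value of classification accuracy (linear above chance for a two-class problem).
        return float(np.clip((acc - 0.5) / 0.5, 0.0, 1.0))

    def v_entropy(avg_entropy, n_classes=2):
        # Value of average output entropy: lower (more decisive) entropy is better.
        return float(np.clip(1.0 - avg_entropy / np.log2(n_classes), 0.0, 1.0))

    def v_sample_size(n, n_ref=1000):
        # Value of sample size with diminishing returns; n_ref is an assumed scale.
        return 1.0 - float(np.exp(-n / n_ref))

    def multilinear_value(v1, v2, v3, k):
        # Multilinear aggregation of the three single-attribute values.
        # k = (k1, k2, k3, k12, k13, k23, k123) holds hypothetical scaling constants.
        k1, k2, k3, k12, k13, k23, k123 = k
        return (k1 * v1 + k2 * v2 + k3 * v3
                + k12 * v1 * v2 + k13 * v1 * v3 + k23 * v2 * v3
                + k123 * v1 * v2 * v3)

    def quadratic_confidence(posteriors, classifier_confidence):
        # Pick the coefficient of c(p) = a * p**2 so that the mean confidence over the
        # observed posteriors equals the classifier confidence.  The paper's quadratic
        # may carry additional constraints that are not modeled here.
        a = classifier_confidence / np.mean(np.asarray(posteriors) ** 2)
        return lambda p: float(np.clip(a * p ** 2, 0.0, 1.0))

    # Example with made-up numbers: accuracy 0.92, average entropy 0.40 bits, 500 samples.
    V = multilinear_value(v_accuracy(0.92), v_entropy(0.40), v_sample_size(500),
                          k=(0.4, 0.3, 0.2, 0.03, 0.03, 0.03, 0.01))
    confidence = quadratic_confidence(posteriors=[0.60, 0.80, 0.95, 0.70],
                                      classifier_confidence=V)
    print(confidence(0.90))  # confidence in an indication whose posterior estimate is 0.90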
Typically, when considering multiple classifiers, researchers assume that they are independent. Under this assumption, estimates for the performance of the fused classifiers are easier to obtain and to quantify mathematically. In practice, however, classifiers may be correlated, and the performance of the fused classifiers will then be over-estimated. This paper addresses the issue of dependence between the classifiers to be fused. Specifically, we assume a level of dependence between two classifiers for a given fusion rule and produce a formula to quantify the performance of the newly fused classifier. The performance of the fused classifiers is then evaluated via the Receiver Operating Characteristic (ROC) curve. A classifier typically relies on parameters that may vary over a given range, so the probabilities of true and false positives can be computed over this range of values; the graph of these probabilities then produces the ROC curve. The probabilities of true positives and false positives from the fused classifiers are developed according to various decision rules. Examples of dependent fused classifiers are given for various levels of dependency and multiple decision rules.
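As an illustration of how dependence can enter the fused performance computation, the sketch below models the two classifiers' decisions as correlated Bernoulli indicators, with separate assumed correlation levels under target-present and target-absent conditions, and evaluates the AND and OR fusion rules at matched operating points. The correlation model and the made-up ROC curves are assumptions for illustration, not the specific formulas derived in the paper.

    import numpy as np

    def joint_positive(p1, p2, rho):
        # P(both classifiers declare positive) for correlated Bernoulli indicators with
        # marginals p1, p2 and correlation rho, clipped to the feasible (Frechet) range.
        joint = p1 * p2 + rho * np.sqrt(p1 * (1 - p1) * p2 * (1 - p2))
        return np.clip(joint, np.maximum(0.0, p1 + p2 - 1.0), np.minimum(p1, p2))

    def fuse_roc(tpf1, fpf1, tpf2, fpf2, rho_signal, rho_noise, rule="OR"):
        # Fused true/false-positive fractions at matched operating points of two classifiers.
        # rho_signal / rho_noise are the assumed correlations when a target is present / absent.
        tpf1, fpf1, tpf2, fpf2 = map(np.asarray, (tpf1, fpf1, tpf2, fpf2))
        both_tp = joint_positive(tpf1, tpf2, rho_signal)
        both_fp = joint_positive(fpf1, fpf2, rho_noise)
        if rule == "AND":
            return both_tp, both_fp
        # OR rule: declare positive if either classifier declares positive.
        return tpf1 + tpf2 - both_tp, fpf1 + fpf2 - both_fp

    # Two hypothetical classifiers swept over a common false-positive grid.
    fpf = np.linspace(0.01, 0.99, 50)
    tpf_a = fpf ** 0.4   # made-up individual ROC curves
    tpf_b = fpf ** 0.3
    tpf_or, fpf_or = fuse_roc(tpf_a, fpf, tpf_b, fpf,
                              rho_signal=0.5, rho_noise=0.5, rule="OR")
    # For positively correlated classifiers the fused gain over the individual curves
    # shrinks, consistent with an independence assumption over-estimating performance.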