The classification process for pattern recognition uses sensors to read measurements from input examples. A feature function then reduces and quantizes these measurements into feature vectors (combinations of feature data). Finally, using the feature vectors, a decision function classifies the current example by comparing it to a statistical model of feature vector data. Which feature vectors are made available to the decision function has traditionally been fixed during the design phase by the person constructing the system, depending on the sensor hardware available. With an adaptive synthesis layer, however, a collective learning automaton learns which feature vectors contribute to correct classification and dynamically adjusts the decision function accordingly.

A weighted-average scheme combines multiple subhypotheses of the example's class (known as rank hypotheses) into a single output hypothesis (known as the super hypothesis). Updating the weights depends on two factors: an evaluation score and a feature vector compensation. The score is a collective measure of the weighted-average combination of rank hypotheses; the feature vector compensation is an individual measure of each feature vector's contribution to the overall decision, based on a history of detected patterns. This two-layer approach is among the most efficient methods in multi-objective programming, yet its application to machine learning as proposed in this dissertation is unique. In particular, a collective learning automaton is used to enhance the combination of a number of candidate class subhypotheses into a single, unique classification. This process is referred to as adaptive synthesis.
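The weighted-average combination and weight update described above can be illustrated with a minimal sketch. The function names, the specific update rule, and the learning-rate parameter below are illustrative assumptions, not the dissertation's exact formulation; the sketch only shows the general shape of combining rank hypotheses into a super hypothesis and rewarding feature vectors that support correct decisions.

```python
import numpy as np

def super_hypothesis(rank_hyps, weights):
    """Combine per-feature-vector rank hypotheses (class distributions)
    into a single super hypothesis by weighted average."""
    rank_hyps = np.asarray(rank_hyps, dtype=float)  # shape: (n_features, n_classes)
    w = np.asarray(weights, dtype=float)
    return w @ rank_hyps / w.sum()                  # weighted average over feature vectors

def update_weights(weights, rank_hyps, true_class, lr=0.1):
    """Adjust weights using two factors: a collective evaluation score and
    each feature vector's individual compensation (hypothetical rule)."""
    rank_hyps = np.asarray(rank_hyps, dtype=float)
    # Evaluation score: collective confidence of the combined decision.
    score = super_hypothesis(rank_hyps, weights)[true_class]
    # Compensation: each feature vector's individual support for the true class.
    compensation = rank_hyps[:, true_class]
    new_w = weights + lr * score * (compensation - weights.mean())
    return np.clip(new_w, 1e-6, None)               # keep weights positive

# Three feature vectors vote over two classes; the second is most reliable.
hyps = [[0.6, 0.4], [0.9, 0.1], [0.3, 0.7]]
w = np.ones(3)
for _ in range(50):
    w = update_weights(w, hyps, true_class=0)
print(super_hypothesis(hyps, w))
```

Over repeated updates, the weight of the feature vector that most strongly supports the correct class grows relative to the others, so the super hypothesis sharpens toward the true class.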
This approach has been applied to black-and-white character recognition and grey-scale block classification using the Adaptive Learning Image Analysis System (ALIAS) at the George Washington University and the Research Institute for Applied Knowledge Processing (FAW) in Ulm, Germany.