Appendix B. Feature Saliency
Abstract
When designing a classifier, the developer is always concerned with which features to use to solve a particular classification or mapping problem. In this appendix, we present one method by which the weights of a trained neural network can be used to discover which features are important. We begin by asking how, if you were the feedforward network, you could reduce the effect of a “bad” feature on the network. The logical answer is to drive the weights tied to the “bad” feature toward zero, so that it contributes nothing to the neurons in the layer above it. Similarly, you would increase the weights tied to a “good” feature so that its effect would be greater.
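The intuition above suggests a simple saliency measure: a feature whose first-layer weights have been driven toward zero is unimportant, while a feature with large-magnitude weights is salient. A minimal sketch of that idea follows; the weight matrix `W` here is a hypothetical example, not taken from the appendix, and summing absolute weights is just one of several saliency measures consistent with this reasoning.

```python
import numpy as np

# Hypothetical first-layer weight matrix of a trained feedforward network:
# rows = hidden neurons, columns = input features.
W = np.array([
    [ 0.9, -0.02,  0.7],
    [-1.1,  0.03,  0.5],
    [ 0.8, -0.01, -0.6],
])

# Saliency of each input feature: sum of the absolute weights leaving it.
# A "bad" feature whose weights were driven toward zero during training
# contributes little to the layer above, so its saliency is small.
saliency = np.abs(W).sum(axis=0)

# Rank features from most to least salient.
ranking = np.argsort(saliency)[::-1]
print(saliency)   # feature 1 has near-zero weights, hence low saliency
print(ranking)
```

In this toy example the middle feature's weights are all near zero, so it ranks last; in practice one would compare saliencies across features (or against a noise "probe" feature) to decide which inputs to drop.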
KEYWORDS: Neurons, Data modeling, Neural networks, Error analysis, Network architectures, Statistical modeling