Paper, 1 July 1992
Nonseparable data models for a single-layer perceptron
John J. Shynk, Neil J. Bershad
Abstract
This paper describes two nonseparable data models that can be used to study the convergence properties of perceptron learning algorithms. A system identification formulation generates the training signal, with an input that is a zero-mean Gaussian random vector. One model is based on a two-layer perceptron configuration, while the second model has only one layer but with a multiplicative output node. The analysis in this paper focuses on Rosenblatt's training procedure, although the approach can be applied to other learning algorithms. Some examples of the performance surfaces are presented to illustrate possible convergence points of the algorithm for both nonseparable data models.
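The abstract describes the setup only at a high level. The sketch below is a hedged illustration, not the authors' formulation, of how such a nonseparable training signal might arise in a system identification configuration: a fixed two-layer "teacher" perceptron produces the desired binary output from a zero-mean Gaussian input, and a single-layer "student" is trained with Rosenblatt's procedure. The dimensions, teacher weights (W_hidden, v_out), and step size mu are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgn(a):
    """Hard limiter: +1 for nonnegative arguments, -1 otherwise."""
    return np.where(a >= 0, 1.0, -1.0)

# Nonseparable data model (hypothetical instance of the paper's first model):
# a fixed two-layer "teacher" perceptron generates the desired binary signal
# d(k) from a zero-mean Gaussian input vector x(k), so the training data are
# generally not linearly separable for the single-layer "student".
N = 8                                   # input dimension (illustrative)
W_hidden = rng.standard_normal((3, N))  # teacher hidden-layer weights (assumed)
v_out = rng.standard_normal(3)          # teacher output-layer weights (assumed)

def teacher(x):
    """Two-layer reference model producing the binary training signal."""
    return sgn(v_out @ sgn(W_hidden @ x))

# Rosenblatt's training procedure for the single-layer perceptron.
w = np.zeros(N)   # student weight vector
mu = 0.05         # step size (illustrative)
for k in range(20000):
    x = rng.standard_normal(N)   # zero-mean Gaussian input vector
    d = teacher(x)               # desired output from the reference model
    y = sgn(w @ x)               # perceptron output
    w = w + mu * (d - y) * x     # Rosenblatt update; since the data are not
                                 # separable, the weights hover near a point on
                                 # the performance surface rather than settling
                                 # at a perfect separator

print("final student weights:", w)
```

The paper's second model (a single layer with a multiplicative output node) would replace the teacher above accordingly; the training procedure for the student is unchanged.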
© 1992 Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
John J. Shynk and Neil J. Bershad "Nonseparable data models for a single-layer perceptron", Proc. SPIE 1710, Science of Artificial Neural Networks, (1 July 1992); https://doi.org/10.1117/12.140095
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Data modeling
System identification
Artificial neural networks
Binary data
Signal generators
Evolutionary algorithms
Computer engineering
