A perceptron is a prototypical example of an Old Connectionist network, originally developed by Rosenblatt (1962). It consists of a layer of input units, a layer of modifiable connections, and a layer of one or more output units; it has no hidden units. It differs from a distributed associative memory in that its output units use a nonlinear activation function. Originally this was the step function, although other functions that are both nonlinear and continuous can be used instead (e.g., logistic, Gaussian; see Dawson, 2004). Perceptrons were originally important because they demonstrated how a system could learn to recognize or categorize patterns. They later became important because there are many complex pattern recognition problems that they cannot learn to solve, and this limitation motivated the development of New Connectionist networks that included layers of trainable hidden units. Nevertheless, perceptrons remain in use today, because this simple network can provide insight into basic animal learning phenomena (e.g., Dawson, 2008).
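A minimal sketch of the architecture described above, not taken from the entry itself: a single output unit with a step activation, trained with the classic perceptron learning rule on logical AND, which is linearly separable. The function names, learning rate, number of epochs, and weight initialization here are illustrative choices, not anything prescribed by the sources cited.

```python
import random

def step(net):
    """Step (threshold) activation: the perceptron's original nonlinearity."""
    return 1.0 if net >= 0.0 else 0.0

def train_perceptron(patterns, targets, epochs=100, lr=0.1, seed=0):
    """Train one output unit with the perceptron learning rule:
    w <- w + lr * (target - output) * input, and likewise for the bias."""
    rng = random.Random(seed)
    n_inputs = len(patterns[0])
    weights = [rng.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    bias = rng.uniform(-0.5, 0.5)
    for _ in range(epochs):
        errors = 0
        for x, t in zip(patterns, targets):
            net = sum(w * xi for w, xi in zip(weights, x)) + bias
            err = t - step(net)
            if err != 0.0:  # update weights only on misclassified patterns
                errors += 1
                weights = [w + lr * err * xi for w, xi in zip(weights, x)]
                bias += lr * err
        if errors == 0:  # converged: every pattern classified correctly
            break
    return weights, bias

# Logical AND is linearly separable, so the learning rule converges on it.
patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]
w, b = train_perceptron(patterns, targets)
for x, t in zip(patterns, targets):
    out = step(sum(wi * xi for wi, xi in zip(w, x)) + b)
    print(x, "->", out, "(target", t, ")")
```

By contrast, a problem such as exclusive-or is not linearly separable, so no setting of this single layer of weights classifies all four patterns correctly; that is the kind of limitation, noted above, that motivated networks with trainable hidden units.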
- Dawson, M. R. W. (2004). Minds and machines: Connectionism and psychological modeling. Malden, MA: Blackwell Publishing.
- Dawson, M. R. W. (2008). Connectionism and classical conditioning. Comparative Cognition and Behavior Reviews, 3 (Monograph), 1-115.
- Rosenblatt, F. (1962). Principles of neurodynamics. Washington, DC: Spartan Books.
(Added October 2009)