A recurrent network (Elman, 1990) is an elaboration of the multilayer perceptron. Like a multilayer perceptron, it has a set of input units, a set of output units, and a set of hidden units. In addition, it has a set of context units that serve as memories of the hidden unit activities: whenever the hidden units activate, a copy of their activities is stored in the context units. On the next stimulus presentation, these stored activities serve as supplemental inputs to the network. As a result, the activity of a recurrent network is a function of (a) the current input and (b) the previous state of the network as represented in the context units. This permits the network to deal with input pattern characteristics that are temporal in nature, and allows such networks to classify sequential stimuli, such as strings generated by artificial grammars (Kremer, 1995).
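The mechanism described above can be sketched in a few lines of NumPy. This is a minimal illustrative forward pass, not Elman's implementation: the layer sizes, weight scales, and function names are hypothetical, and training (backpropagation) is omitted. It shows the defining step of the architecture: after each input, the hidden activities are copied into the context units, which feed back into the hidden layer on the next step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: 3 input, 5 hidden, 2 output units.
n_in, n_hidden, n_out = 3, 5, 2

# Weight matrices: input->hidden, context->hidden, hidden->output.
W_ih = rng.normal(scale=0.5, size=(n_hidden, n_in))
W_ch = rng.normal(scale=0.5, size=(n_hidden, n_hidden))
W_ho = rng.normal(scale=0.5, size=(n_out, n_hidden))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def run_sequence(inputs):
    """Process a sequence of input vectors. Each output depends on
    the current input and on the previous hidden state, which the
    context units carry forward."""
    context = np.zeros(n_hidden)      # context units start empty
    outputs = []
    for x in inputs:
        # Hidden activity is driven by the input AND the context units.
        hidden = sigmoid(W_ih @ x + W_ch @ context)
        outputs.append(sigmoid(W_ho @ hidden))
        context = hidden              # copy hidden activities into memory
    return outputs

# Presenting the identical stimulus twice produces different outputs,
# because the second presentation sees a non-empty context.
seq = [np.array([1.0, 0.0, 0.0])] * 2
y = run_sequence(seq)
```

Note the contrast with a plain multilayer perceptron: with `W_ch` removed (or `context` held at zero), the two identical inputs would yield identical outputs, and the network would be blind to sequence order.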
References:
- Elman, J. (1990). Finding structure in time. Cognitive Science, 14, 179-211.
- Kremer, S. C. (1995). On the computational powers of Elman-style recurrent networks. IEEE Transactions on Neural Networks, 6, 1000-1004.
(Added January 2010)