This week we introduce the third and final building block of connectionist networks: the use of sequences of decisions. How might we train a multilayer perceptron, a network that includes at least one layer of nonlinear processors between its input and output layers? The answer to this question eluded connectionist researchers for many years, but was finally provided in the mid-1980s. As a result, researchers were able to train networks with the same in-principle power as a universal Turing machine. In short, this week we learn about the generalized delta rule as a core element of modern connectionism.
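To make the idea concrete, here is a minimal sketch of the generalized delta rule (backpropagation of error) applied to a small multilayer perceptron learning the XOR mapping. It is an illustration only, written with plain NumPy and logistic units; the hidden-layer size, learning rate, and number of epochs are illustrative assumptions, not the settings or methods of the Rumelhart software used in class.

```python
import numpy as np

# Minimal sketch: the generalized delta rule (backpropagation) training a
# tiny multilayer perceptron on XOR. Logistic activations throughout.
# Hidden-layer size, learning rate, and epochs are illustrative choices,
# not the Rumelhart software's defaults.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input patterns
T = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

n_hidden = 4
W1 = rng.normal(scale=0.5, size=(2, n_hidden))   # input -> hidden weights
b1 = np.zeros(n_hidden)                          # hidden biases
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))   # hidden -> output weights
b2 = np.zeros(1)                                 # output bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.5
for epoch in range(20000):
    # Forward pass: hidden and output activations.
    H = sigmoid(X @ W1 + b1)
    O = sigmoid(H @ W2 + b2)

    # Output error terms: (target - output) scaled by the logistic derivative.
    delta_out = (T - O) * O * (1 - O)
    # Hidden error terms: output deltas propagated back through W2,
    # again scaled by the logistic derivative.
    delta_hid = (delta_out @ W2.T) * H * (1 - H)

    # Weight changes follow the generalized delta rule.
    W2 += lr * H.T @ delta_out
    b2 += lr * delta_out.sum(axis=0)
    W1 += lr * X.T @ delta_hid
    b1 += lr * delta_hid.sum(axis=0)

# Outputs should approach [0, 1, 1, 0] after training.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 3))
```

The key step is the propagation of error from the output units back to the hidden units; the same step can be repeated layer by layer, which is why the rule extends to deeper networks.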
The supplementary reading on the right provides a more detailed account of such modern networks than the lecture will supply. Our in-class activity will involve installing and using our third program, the Rumelhart software, which trains multilayer perceptrons.