Hebbian learning is the simplest learning rule for storing an association between processing units in a modifiable connection weight. It is based on Donald Hebb's (1949) neurophysiological hypothesis that if two connected neurons fire at the same time, the synapse between them should be strengthened, so that later activity in one of the neurons will tend to make the other fire as well.
In connectionist networks, this idea is generalized by defining the weight change as the product of the activities of the two units, and by letting units take either positive or negative activation values. This builds inhibition in "for free": anticorrelated activity weakens a connection, something Hebb's original, strengthening-only rule could not express. A small numeric sketch follows.
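In code, the generalized rule amounts to adding the (scaled) outer product of the two activation vectors to the weight matrix. Below is a minimal numpy sketch; the function name, learning rate, and bipolar (+1/-1) activations are illustrative choices, not drawn from the sources above.

```python
import numpy as np

def hebb_update(w, pre, post, lr=0.25):
    """One Hebbian step: delta_w[i, j] = lr * post[i] * pre[j].

    Because units may be positive or negative, anticorrelated
    activity produces a negative (inhibitory) weight change.
    """
    return w + lr * np.outer(post, pre)

# Store one association between a 4-unit input and a 2-unit output.
pre = np.array([1.0, -1.0, 1.0, -1.0])
post = np.array([1.0, -1.0])
w = np.zeros((2, 4))
w = hebb_update(w, pre, post)

# Recall: presenting the stored input reproduces the stored output
# (up to a scale factor) through a single linear pass.
print(w @ pre)   # -> [ 1. -1.]
```

With several input patterns stored by summing such outer products, recall of any one pattern is clean as long as the inputs are mutually orthogonal, since the cross-talk terms from the other stored patterns then vanish.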
This rule is important because it is one of the few connectionist learning rules with direct biological support. It is, however, a fairly weak rule, and networks trained with it are quite limited: the Hebb rule works well as long as all the input patterns are mutually orthogonal (uncorrelated), but produces errors when that condition is violated. This weakness led to the development of more powerful error-correcting rules, such as the delta rule and its gradient-descent generalizations, which use the discrepancy between the desired and actual output of each output unit to change the weights feeding into it, as sketched below.
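For contrast, here is a minimal sketch of the delta rule under the same conventions as above: the error at each output unit, rather than raw coactivity, drives the weight change, so repeated presentations can pull apart correlated inputs that the Hebb rule would confuse. The toy patterns, learning rate, and epoch count are illustrative assumptions.

```python
import numpy as np

def delta_update(w, pre, target, lr=0.2):
    """One delta-rule step: delta_w[i, j] = lr * (target[i] - actual[i]) * pre[j]."""
    error = target - w @ pre          # per-output-unit discrepancy
    return w + lr * np.outer(error, pre)

# Two correlated (non-orthogonal) inputs: their dot product is 1, so a
# plain Hebbian store would suffer cross-talk between them.
patterns = [
    (np.array([1.0, 1.0, 0.0]), np.array([1.0])),
    (np.array([1.0, 0.0, 1.0]), np.array([-1.0])),
]
w = np.zeros((1, 3))
for _ in range(50):                   # iterate until the error shrinks away
    for pre, target in patterns:
        w = delta_update(w, pre, target)

for pre, target in patterns:
    print(target, (w @ pre).round(2))  # outputs converge to the targets
```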
References:
- Bechtel, W., & Abrahamsen, A. (1993). Connectionism and the mind: An introduction to parallel processing in networks. Oxford, UK: Blackwell.
- Hebb, D.O. (1949). The organization of behavior. New York: Wiley.
- Rumelhart, D. E., & McClelland, J. L. (1986). Parallel distributed processing: Explorations in the microstructure of cognition, vol. 1: Foundations. Cambridge, MA: MIT Press.