PDP Pattern Associator Model to Illustrate Learning and Graceful Degradation


Start with Association A. Use the buttons to select + - - + (the sight of a rose) as the INPUT pattern and - - + + (the smell of a rose) as the DESIRED OUTPUT pattern. Call the combination "Association A". Click the button to set the THRESHOLDS to 0, and type in a LEARNING RATE of 0.05.

Set RANDOM CONNECTION WEIGHTS by clicking the button. Propagate the activation. You probably will not get the desired output; if not, adjust the weights and propagate again. Then repeat by pressing "Cycle" over and over. Notice that the error decreases with each cycle. Keep cycling until the desired output is obtained, and cycle at least one more time to make sure the association is learned. If you cycle even more times, the association will be learned more thoroughly.
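If you are curious what each press of "Cycle" might be doing under the hood, here is a minimal sketch in Python. It assumes the model is a simple pattern associator trained with the delta (Widrow-Hoff) rule; the variable names and the 100-cycle loop are my own choices, not details taken from the applet.

```python
import numpy as np

rng = np.random.default_rng(0)

# Association A, with + and - encoded as +1 and -1
x_a = np.array([+1.0, -1.0, -1.0, +1.0])   # INPUT: the sight of a rose, + - - +
t_a = np.array([-1.0, -1.0, +1.0, +1.0])   # DESIRED OUTPUT: the smell of a rose, - - + +

lr = 0.05                                   # LEARNING RATE from the instructions
W = rng.uniform(-0.1, 0.1, size=(4, 4))     # RANDOM CONNECTION WEIGHTS

def propagate(W, x):
    """Net input to each output unit; with THRESHOLDS at 0,
    a positive net input reads as + and a negative one as -."""
    return W @ x

for cycle in range(100):                    # press "Cycle" over and over
    error = t_a - propagate(W, x_a)         # this error shrinks each cycle
    W += lr * np.outer(error, x_a)          # delta-rule weight adjustment

output = np.where(propagate(W, x_a) >= 0, +1, -1)
print(output)                               # the desired - - + + pattern: [-1 -1  1  1]
```

With a learning rate of 0.05 and four input units, each cycle removes a fixed fraction of the remaining error, which is why the error you watch in the applet shrinks steadily rather than vanishing in one step.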

Scroll down to Association B and follow the same process, except this time select - + - + (the sight of a steak) as the INPUT pattern and - + + - (the smell of a steak) as the DESIRED OUTPUT pattern. Call the combination "Association B". Use the same process as above to make the model learn this association.

Then scroll down to Association A and B. First click the button marked "Add Weights A and B". Then set the THRESHOLDS and the LEARNING RATE as before. Now click the button for "Association A" and propagate the activation. If the association was sufficiently learned, you will get the desired output. Next, click the button for "Association B" and, without changing the weights, propagate the activation again. Even with the same connection weights, if the association was sufficiently learned, you should get the correct desired output for this association as well. If either or both of the associations were not sufficiently learned, you can go back and cycle a few more times. Then add the weights again, and see whether the new set of added weights gives you the correct output for both associations.
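The step above can be sketched in Python as well. One simplification here that is not in the instructions: each association is trained from zero rather than random weights, so that the added matrices combine cleanly and the result is easy to check.

```python
import numpy as np

def train(x, t, lr=0.05, cycles=100):
    """Delta-rule training for one association. Starting from zero
    weights is a simplification of this sketch; the applet starts
    from random weights."""
    W = np.zeros((4, 4))
    for _ in range(cycles):
        W += lr * np.outer(t - W @ x, x)
    return W

# Association A: sight of a rose -> smell of a rose
x_a = np.array([+1.0, -1.0, -1.0, +1.0]); t_a = np.array([-1.0, -1.0, +1.0, +1.0])
# Association B: sight of a steak -> smell of a steak
x_b = np.array([-1.0, +1.0, -1.0, +1.0]); t_b = np.array([-1.0, +1.0, +1.0, -1.0])

W_sum = train(x_a, t_a) + train(x_b, t_b)   # "Add Weights A and B"

def recall(W, x):
    return np.where(W @ x >= 0, +1, -1)     # THRESHOLDS at 0

print(recall(W_sum, x_a))                   # [-1 -1  1  1]: Association A intact
print(recall(W_sum, x_b))                   # [-1  1  1 -1]: Association B intact
```

It is worth noticing that the two input patterns happen to be orthogonal (their dot product is 0), which is why the weights learned for one association do not disturb the other when the two matrices are added.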

Now, try changing one of the connection weights in the third model just slightly. Check whether the model still knows both associations. Also, try changing one of the input values just slightly and check whether the model still knows the association. As long as the changes are not too great, the model will still know the associations. In a larger model, if you continued trying greater and greater changes, the output would become less and less correct, but the model would not fail entirely.

Changing the connection weights is called "degrading" the model (it is like causing brain damage). Changing the input values is called "degrading" the input (it is like trying to recognize a picture of a familiar object when the picture is somewhat out of focus).

If you run a conventional program on a computer and give it even slightly degraded input, or even slightly damage the hardware, you can expect the program to fail completely to give the correct output. But when humans receive degraded input, they can still give correct output as long as the degradation is not too great. Similarly, a human who suffers brain damage does not necessarily fail to function completely; how much their functioning is impaired depends on the degree of the damage. In other words, conventional computers suffer catastrophic failure with the slightest degradation of input or hardware, while human performance degrades gradually, depending on HOW MUCH the input, or the brain, is degraded. This characteristic in humans is called "graceful degradation" of performance. Connectionist models do not suffer the catastrophic failure that computers suffer. Instead, they exhibit "graceful degradation", just like humans.
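Graceful degradation can also be seen in a sketch like the ones above. Here the same simplified delta-rule associator (zero-weight start, my own noise levels) is "damaged" by nudging every weight a little, and its input is "blurred" with small noise; in both cases the thresholded output still comes out right.

```python
import numpy as np

rng = np.random.default_rng(1)

def train(x, t, lr=0.05, cycles=100):
    W = np.zeros((4, 4))                     # zero start is a simplification
    for _ in range(cycles):
        W += lr * np.outer(t - W @ x, x)     # delta-rule weight adjustment
    return W

x_a = np.array([+1.0, -1.0, -1.0, +1.0]); t_a = np.array([-1.0, -1.0, +1.0, +1.0])
x_b = np.array([-1.0, +1.0, -1.0, +1.0]); t_b = np.array([-1.0, +1.0, +1.0, -1.0])
W_sum = train(x_a, t_a) + train(x_b, t_b)    # combined weights for A and B

def recall(W, x):
    return np.where(W @ x >= 0, +1, -1)      # THRESHOLDS at 0

# "Degrade" the model: nudge every connection weight ("brain damage")
W_damaged = W_sum + 0.05 * rng.standard_normal((4, 4))
print(recall(W_damaged, x_a))                # still [-1 -1  1  1]
print(recall(W_damaged, x_b))                # still [-1  1  1 -1]

# "Degrade" the input: blur the sight of the rose ("out of focus")
x_blurry = x_a + 0.1 * rng.standard_normal(4)
print(recall(W_sum, x_blurry))               # still [-1 -1  1  1]
```

Because the correct answer is carried by many small weights at once, a little noise in any one place shifts the net inputs only slightly, and the 0 threshold still reads them the same way; only much larger noise would start flipping output units.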