NEWTON, Ask A Scientist!
Computational neurobiology
Name: jcolombe
Status: N/A
Age: N/A
Location: N/A
Country: N/A
Date: Around 1993


Question:
I was wondering if anyone at Argonne is studying the brain, or learning processes, in which chaos (or tortuosity of state space) plays a role in the "trial" part of trial-and-error learning. I have wondered whether the state space of activity in the brain might tend to be more tortuous (and thus variable) in regions that are unfamiliar, where experience has not yet taken the state of the system. If experience is the tracking of the system's state through this space, and if Hebbian associative learning "converges" experienced paths through the space (making these paths themselves into attractors), then learning might "dechaoticize" the areas around these experienced paths, making them repeatable and reliable. So "trial" would only be necessary (and possible) in unfamiliar circumstances, and learning would "iron out" the trial in favor of giving the system the ability to converge on (or be attracted to) familiar states in familiar situations.

That might account for classical or perceptual conditioning, but what about operant or behavioral conditioning? I think that if a central reward signal were available to the system, the amount of association taking place should be proportional to reward. The system would then learn more, and retain more function, from favorable situations than from unfavorable ones.
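The last point, association proportional to reward, can be written as a reward-gated Hebbian rule. The sketch below is a minimal illustration in NumPy, not anything from the archive: the network size, learning rate, and normalization are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: n presynaptic and n postsynaptic rate units.
n = 8
W = rng.normal(scale=0.01, size=(n, n))   # weak, near-random initial weights

def hebbian_step(W, pre, post, reward, lr=0.05):
    """One reward-gated Hebbian update: dW ~ reward * post * pre^T.

    With reward = 1 this is plain Hebbian association; with reward near 0
    almost nothing is stored, so favorable episodes are retained preferentially.
    """
    W = W + lr * reward * np.outer(post, pre)
    return W / max(1.0, np.linalg.norm(W))   # crude normalization to keep weights bounded

# Two episodes sharing the same cue: one rewarded, one not.
pre = rng.random(n)
post_good = rng.random(n)
post_bad = rng.random(n)

W = hebbian_step(W, pre, post_good, reward=1.0)   # strongly associated
W = hebbian_step(W, pre, post_bad, reward=0.1)    # weakly associated

# The rewarded response is now recalled more strongly from the same cue.
print(np.dot(W @ pre, post_good), np.dot(W @ pre, post_bad))
```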



Replies:
I am not at Argonne, but... It is an interesting idea (if I understand it correctly). In an abstract way, I suppose that is how artificial neural networks are supposed to work: they go from a completely random state, with random (chaotic) output for a given input, to more specific output once learning has taken place. I have seen this described in terms of energy minima in the network, and given the present sophistication of neural nets, that behavior is most like classical conditioning. In behavioral conditioning, the brain learns from the reward or punishment that follows a previous response. Some people are looking at models that incorporate state-dependent responses (responses contingent on previous stimuli as well as the present one), but I have not seen this used to let the net program itself (enhance its own learning). My guess is that the reason is the much greater computational power required to adjust neural nets in their current form. As some of my colleagues have noted, this is one of the weaknesses of neural networks as they stand: the brain does not appear to need excessive processing capacity in order to adjust its net to newly learned information.

psych
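The "energy minima" picture mentioned in the reply is essentially a Hopfield-style attractor network, which also gives one way to read the questioner's idea of "dechaoticized" paths. The following NumPy sketch is illustrative only (the pattern count, network size, and update schedule are assumptions): Hebbian storage turns a few patterns into energy minima, and a noisy starting state settles into the nearest one.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny Hopfield-style network: binary patterns are stored with a Hebbian
# outer-product rule, and recall is a descent into an energy minimum.
n = 64
patterns = rng.choice([-1, 1], size=(3, n))          # "learned" states
W = sum(np.outer(p, p) for p in patterns) / n        # Hebbian storage
np.fill_diagonal(W, 0)

def energy(state):
    # Hopfield energy: E = -1/2 * s^T W s; stored patterns sit at minima.
    return -0.5 * state @ W @ state

def recall(cue, steps=200):
    state = cue.copy()
    for _ in range(steps):
        i = rng.integers(n)                           # asynchronous update
        state[i] = 1 if (W[i] @ state) >= 0 else -1   # never raises the energy
    return state

# Start from a noisy version of the first stored pattern.
cue = patterns[0].copy()
flip = rng.choice(n, size=12, replace=False)
cue[flip] *= -1

out = recall(cue)
print("energy before:", energy(cue), "after:", energy(out))
print("overlap with stored pattern:", int(out @ patterns[0]), "of", n)
```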




NEWTON is an electronic community for Science, Math, and Computer Science K-12 Educators, sponsored and operated by Argonne National Laboratory's Educational Programs, Andrew Skipor, Ph.D., Head of Educational Programs.

For assistance with NEWTON, contact a System Operator (help@newton.dep.anl.gov) or Argonne's Educational Programs.

NEWTON AND ASK A SCIENTIST
Educational Programs
Building 360
9700 S. Cass Ave.
Argonne, Illinois
60439-4845, USA
Update: June 2012