AR-P learning applied to a network model of cortical area 7a

Pietro Mazzoni, Richard A. Andersen, Michael I. Jordan

Research output: Contribution to conference › Paper › peer-review

3 Scopus citations

Abstract

A neural network is described that learns to transform retinotopic coordinates of visual stimuli into a head-centered reference frame by combining retinal stimuli with eye position. Area 7a of the primate cortex is thought to perform a similar transformation. The neurons involved have unique response properties (planar modulation of visual response by eye position and large, complex receptive fields) and appear to represent head-centered space in a distributed fashion. The network's architecture is similar to that of the backpropagation model of area 7a of R. A. Andersen and D. Zipser (1988, 1989) but is trained with a gradient-descent algorithm that is more biologically plausible than backpropagation. This algorithm is a variant of the associative reward-penalty (AR-P) learning rule and uses a global reinforcement signal to adjust the connection strengths. The network learns to perform the task to any desired accuracy and generalizes appropriately, and the hidden units develop response properties very similar to those of area 7a neurons. These results show that a learning network does not require backpropagation to acquire biologically interesting properties. Such properties may arise naturally from the network's layered architecture and from the supervised learning paradigm.
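The abstract names the associative reward-penalty (AR-P) rule, which trains stochastic binary units from a single global reward/penalty signal rather than backpropagated errors. The sketch below is a minimal single-unit illustration in the standard Barto-style formulation; the toy task (logical OR), the learning rates, and all variable names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy task (hypothetical): a single stochastic unit learns logical OR
# purely from a global scalar reinforcement signal.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
targets = np.array([-1., 1., 1., 1.])           # desired outputs in {-1, +1}

w = np.zeros(2)
b = 0.0
rho, lam = 0.5, 0.05                            # reward and penalty learning rates (assumed)

for epoch in range(2000):
    for x, t in zip(X, targets):
        p = sigmoid(w @ x + b)                  # Pr(y = +1)
        y = 1.0 if rng.random() < p else -1.0   # stochastic binary output
        e_y = 2.0 * p - 1.0                     # expected output E[y]
        r = 1.0 if y == t else -1.0             # global reward/penalty signal
        if r > 0:                               # reward: move toward the emitted action
            dw = rho * (y - e_y)
        else:                                   # penalty: move toward the opposite action
            dw = rho * lam * (-y - e_y)
        w += dw * x
        b += dw

# Deterministic readout after training: the unit should implement OR.
preds = [1.0 if sigmoid(w @ x + b) > 0.5 else -1.0 for x in X]
print(preds)
```

Note that the weight update depends only on the unit's own input, output, and the scalar signal `r`, which is the locality property that makes AR-P more biologically plausible than backpropagation; the penalty term is scaled down by `lam` so exploration after errors does not destabilize learning.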

Original language: English
Pages: 373-379
Number of pages: 7
State: Published - 1990
Event: 1990 International Joint Conference on Neural Networks - IJCNN 90 Part 3 (of 3) - San Diego, CA, USA
Duration: Jun 17 1990 - Jun 21 1990

Conference

Conference: 1990 International Joint Conference on Neural Networks - IJCNN 90 Part 3 (of 3)
City: San Diego, CA, USA
Period: 06/17/90 - 06/21/90
