Abstract
A neural network is described that learns to transform retinotopic coordinates of visual stimuli into a head-centered reference frame by combining retinal stimuli with eye position. Area 7a of the primate cortex is thought to perform a similar transformation. The neurons involved have unique response properties (planar modulation of visual response by eye position and large complex receptive fields) and appear to represent head-centered space in a distributed fashion. The network's architecture is similar to that of the backpropagation model of area 7a of R. A. Andersen and D. Zipser (1988, 1989) but is trained with a gradient-descent algorithm that is more biologically plausible than backpropagation. This algorithm is a variant of the associative reward-penalty (AR-P) learning rule and uses a global reinforcement signal to adjust the connection strengths. The network learns to perform the task to arbitrary accuracy and generalizes appropriately, and the hidden units develop response properties very similar to those of area 7a neurons. These results show that a learning network does not require backpropagation to acquire biologically interesting properties. Such properties may arise naturally from the network's layered architecture and from the supervised learning paradigm.
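The abstract's key algorithmic claim is that a global scalar reinforcement signal, rather than backpropagated error, suffices to train the hidden layer. A minimal sketch of the AR-P rule (after Barto and Anandan) on a single stochastic binary unit is shown below; this is an illustrative toy reconstruction, not the paper's actual network, and the task (logical AND), learning rates, and trial counts are all assumptions chosen for the demo:

```python
import random
import math

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def arp_update(w, x, y, p, r, rho=0.5, lam=0.05):
    """Associative reward-penalty (AR-P) update for one stochastic unit.

    On reward (r=1) the firing probability p is pushed toward the action
    y that was actually emitted; on penalty (r=0) it is pushed toward the
    opposite action, scaled down by lam. Only the global scalar r carries
    error information -- no per-unit error is propagated.
    """
    for j in range(len(w)):
        if r == 1:
            w[j] += rho * (y - p) * x[j]
        else:
            w[j] += lam * rho * (1 - y - p) * x[j]

def train_and(trials=5000):
    # Toy task: learn logical AND; third input is a constant bias of 1.
    data = [([0, 0, 1], 0), ([0, 1, 1], 0), ([1, 0, 1], 0), ([1, 1, 1], 1)]
    w = [0.0, 0.0, 0.0]
    for _ in range(trials):
        x, target = random.choice(data)
        p = sigmoid(sum(wj * xj for wj, xj in zip(w, x)))
        y = 1 if random.random() < p else 0   # stochastic binary output
        r = 1 if y == target else 0           # global reinforcement signal
        arp_update(w, x, y, p, r)
    return w

def predict(w, x):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, x))) > 0.5 else 0
```

The stochastic output is what lets a purely global reward signal perform gradient-like credit assignment: each unit's own deviation (y - p) correlates its exploration with the reinforcement it receives.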
| Original language | English |
| --- | --- |
| Pages | 373-379 |
| Number of pages | 7 |
| State | Published - 1990 |
| Event | 1990 International Joint Conference on Neural Networks - IJCNN 90 Part 3 (of 3) - San Diego, CA, USA. Duration: Jun 17 1990 → Jun 21 1990 |
Conference
| Conference | 1990 International Joint Conference on Neural Networks - IJCNN 90 Part 3 (of 3) |
| --- | --- |
| City | San Diego, CA, USA |
| Period | 06/17/90 → 06/21/90 |