Strong Anti-Hebbian Plasticity Alters the Convexity of Network Attractor Landscapes

Research output: Contribution to journal › Article › peer-review


Abstract

In this brief, we study recurrent neural networks in the presence of pairwise learning rules. We are specifically interested in how the attractor landscapes of such networks are altered as a function of the strength and nature (Hebbian versus anti-Hebbian) of learning, which may have a bearing on the ability of such rules to mediate large-scale optimization problems. Through formal mathematical analysis, we show that a transition from Hebbian to anti-Hebbian learning brings about a pitchfork bifurcation that destroys convexity in the network attractor landscape. In larger-scale settings, this implies that anti-Hebbian plasticity will bring about multiple stable equilibria, and such effects may be outsized at interconnection or 'choke' points. Furthermore, attractor landscapes are more sensitive to slower learning rates than to faster ones. These results provide insight into the types of objective functions that can be encoded via different pairwise plasticity rules.
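
The pitchfork mechanism described in the abstract can be illustrated numerically. The sketch below is a minimal toy model of our own construction, not the paper's system: two neurons with reciprocal weight w, a shared constant input b = 0.5, and dynamics dx_i/dt = -x_i + tanh(w·x_j + b), where w is treated as the quasi-static outcome of pairwise learning (Hebbian learning drives w positive, anti-Hebbian learning drives it negative). Counting the distinct equilibria reached from random initial states shows a unique attractor on the Hebbian side and a split into two stable equilibria once the anti-Hebbian weight is strong enough.

```python
# Hypothetical toy model illustrating the abstract's pitchfork claim;
# this is an illustration under assumed dynamics, not the paper's model.
import numpy as np

def simulate(w, b=0.5, dt=0.05, steps=4000, rng=None):
    """Euler-integrate dx_i/dt = -x_i + tanh(w * x_j + b) to a fixed point."""
    if rng is None:
        rng = np.random.default_rng()
    x = rng.uniform(-1.0, 1.0, size=2)   # random initial state
    for _ in range(steps):
        # x[::-1] swaps the two neurons, so each neuron sees the other's state
        x = x + dt * (-x + np.tanh(w * x[::-1] + b))
    return x

def count_attractors(w, n_trials=50, decimals=2, seed=0):
    """Count distinct stable equilibria reached from random initial states."""
    rng = np.random.default_rng(seed)
    finals = {tuple(simulate(w, rng=rng).round(decimals))
              for _ in range(n_trials)}
    return len(finals)

# Sweep the learned weight from Hebbian (w > 0) to anti-Hebbian (w < 0).
for w in [1.5, 0.5, -0.5, -1.5, -2.0]:
    print(f"w = {w:+.1f}: {count_attractors(w)} attractor(s)")
```

In this toy model the symmetric equilibrium x* loses stability when w·sech²(w·x* + b) crosses -1 (near w ≈ -1 for b = 0.5), at which point two asymmetric attractors appear along the (1, -1) direction, consistent with the pitchfork and loss of convexity described in the abstract.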

Original language: English
Pages (from-to): 17491-17498
Number of pages: 8
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 36
Issue number: 9
DOIs
State: Published - 2025

Keywords

  • Anti-Hebbian learning
  • Hebbian learning
  • attractors
  • recurrent neural networks
