TY - JOUR
T1 - Recurrent information optimization with local, metaplastic synaptic dynamics
AU - Liu, Sensen
AU - Ching, ShiNung
N1 - Funding Information:
S. C. holds a Career Award at the Scientific Interface from the Burroughs Wellcome Fund. This work was partially supported by grant 15RT0189 from the U.S. Air Force Office of Scientific Research (AFOSR) and by grants ECCS 1509342 and CMMI 1537015 from the U.S. National Science Foundation (NSF).
Publisher Copyright:
© 2017 Massachusetts Institute of Technology.
PY - 2017/9/1
Y1 - 2017/9/1
AB - We consider the problem of optimizing information-theoretic quantities in recurrent networks via synaptic learning. In contrast to feedforward networks, recurrent networks present a key challenge: an optimal learning rule must aggregate the joint distribution of the whole network, which makes a local policy (i.e., one that depends only on pairwise interactions) difficult to obtain. Here, we report a local metaplastic learning rule that performs approximate optimization by estimating whole-network statistics through several slow, nested dynamical variables. These dynamics give the rule both anti-Hebbian and Hebbian components, allowing for decorrelating and correlating learning regimes, whichever is favorable for optimality. We demonstrate the performance of the synthesized rule in comparison to classical BCM dynamics and use the learned networks to perform history-dependent tasks that highlight the advantages of recurrence. Finally, we show that the resultant networks are consistent with notions of criticality, including balanced ratios of excitation and inhibition.
UR - http://www.scopus.com/inward/record.url?scp=85047186617&partnerID=8YFLogxK
U2 - 10.1162/NECO_a_00993
DO - 10.1162/NECO_a_00993
M3 - Letter
C2 - 28599115
AN - SCOPUS:85047186617
SN - 0899-7667
VL - 29
SP - 2528
EP - 2552
JO - Neural Computation
JF - Neural Computation
IS - 9
ER -