TY - JOUR
T1 - Creating functionally favorable neural dynamics by maximizing information capacity
AU - Ghazizadeh, Elham
AU - Ching, Shi Nung
N1 - Funding Information:
ShiNung Ching is an Associate Professor in the Department of Electrical and Systems Engineering at Washington University in St. Louis (St. Louis, USA). Dr. Ching completed his B.Eng. (Hons.) and M.A.Sc. degrees in Electrical and Computer Engineering at McGill University, Canada, and the University of Toronto, Canada. He earned his Ph.D. in Electrical Engineering from the University of Michigan in 2009. His research interests lie at the intersection of control theory and systems neuroscience, particularly in using systems- and control-theoretic concepts to study the link between dynamics and function in neuronal networks. Dr. Ching has received the CAREER Award from the US National Science Foundation, the Young Investigator Program award from the US AFOSR, and a Career Award at the Scientific Interface from the Burroughs Wellcome Fund.
Funding Information:
ShiNung Ching holds a Career Award at the Scientific Interface from the Burroughs Wellcome Fund. This work was partially supported by grants AFOSR 15RT0189, NSF ECCS 1509342, and NSF CMMI 1653589 from the US Air Force Office of Scientific Research and the US National Science Foundation, respectively.
Publisher Copyright:
© 2020 Elsevier B.V.
PY - 2020/8/4
Y1 - 2020/8/4
N2 - A ubiquitous problem in optimization and machine learning pertains to the design of systems that enact a desired behavior in dynamical environments; a classical example is a control system that stabilizes an inverted pendulum. In this paper, we consider a complementary and less well-studied problem: the design of the environment itself. That is, can we create a dynamical system that, in a general but mathematically rigorous way, is readily ‘usable’ by an unknown agent? We are especially interested in the synthesis of neuronal dynamics that are maximally labile with respect to afferent inputs. That is, can we create neural dynamics that propagate information well? To do so, we blend ideas from control and information theory, turning specifically to the notion of empowerment, i.e., the information capacity of a dynamical system in an input-to-state sense. We devise a strategy to optimize the dynamics of a system using its empowerment over the state space as an objective function. This results in dynamics that are generically conducive to information propagation. For example, the optimized environment would be expected to perform well as an encoder of afferent input distributions. We outline the key technical innovations needed to perform the optimization and, by means of example, discuss emergent dynamical characteristics of systems optimized according to this principle.
AB - A ubiquitous problem in optimization and machine learning pertains to the design of systems that enact a desired behavior in dynamical environments; a classical example is a control system that stabilizes an inverted pendulum. In this paper, we consider a complementary and less well-studied problem: the design of the environment itself. That is, can we create a dynamical system that, in a general but mathematically rigorous way, is readily ‘usable’ by an unknown agent? We are especially interested in the synthesis of neuronal dynamics that are maximally labile with respect to afferent inputs. That is, can we create neural dynamics that propagate information well? To do so, we blend ideas from control and information theory, turning specifically to the notion of empowerment, i.e., the information capacity of a dynamical system in an input-to-state sense. We devise a strategy to optimize the dynamics of a system using its empowerment over the state space as an objective function. This results in dynamics that are generically conducive to information propagation. For example, the optimized environment would be expected to perform well as an encoder of afferent input distributions. We outline the key technical innovations needed to perform the optimization and, by means of example, discuss emergent dynamical characteristics of systems optimized according to this principle.
KW - Empowerment
KW - Information capacity
KW - Neural dynamics
UR - http://www.scopus.com/inward/record.url?scp=85082496483&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2020.03.008
DO - 10.1016/j.neucom.2020.03.008
M3 - Article
AN - SCOPUS:85082496483
SN - 0925-2312
VL - 400
SP - 285
EP - 293
JO - Neurocomputing
JF - Neurocomputing
ER -