TY - GEN
T1 - A learning framework for controlling spiking neural networks
AU - Narayanan, Vignesh
AU - Ritt, Jason T.
AU - Li, Jr-Shin
AU - Ching, ShiNung
N1 - Funding Information:
This work was partially supported by NSF grants 1653589 and 1509342, and NIH grant R21EY027590-02.
Publisher Copyright:
© 2019 American Automatic Control Council.
PY - 2019/7
Y1 - 2019/7
N2 - Controlling a population of interconnected neurons using extrinsic stimulation is a challenging problem. The challenges stem from the inherently nonlinear neuronal dynamics, the highly complex structure of the underlying neuronal networks, the underactuated nature of the control problem, and, in addition, the binary nature of the observation/feedback. To meet these challenges, adaptive, learning-based approaches using deep neural networks and reinforcement learning are potentially useful strategies. In this paper, we propose an approximation-based learning framework in which a model for approximating the input-output relationship of a spiking neuron is developed. We then present a reinforcement learning scheme to approximate the solution of the Bellman equation and to design the control sequence that achieves a desired spike pattern. The proposed strategy, by integrating reinforcement learning and system-theoretic approaches, provides a tractable framework for designing a learning control network and for selecting the hyperparameters in deep learning architectures. We demonstrate the feasibility of the proposed approach using numerical simulations.
AB - Controlling a population of interconnected neurons using extrinsic stimulation is a challenging problem. The challenges stem from the inherently nonlinear neuronal dynamics, the highly complex structure of the underlying neuronal networks, the underactuated nature of the control problem, and, in addition, the binary nature of the observation/feedback. To meet these challenges, adaptive, learning-based approaches using deep neural networks and reinforcement learning are potentially useful strategies. In this paper, we propose an approximation-based learning framework in which a model for approximating the input-output relationship of a spiking neuron is developed. We then present a reinforcement learning scheme to approximate the solution of the Bellman equation and to design the control sequence that achieves a desired spike pattern. The proposed strategy, by integrating reinforcement learning and system-theoretic approaches, provides a tractable framework for designing a learning control network and for selecting the hyperparameters in deep learning architectures. We demonstrate the feasibility of the proposed approach using numerical simulations.
UR - http://www.scopus.com/inward/record.url?scp=85072288213&partnerID=8YFLogxK
U2 - 10.23919/acc.2019.8815197
DO - 10.23919/acc.2019.8815197
M3 - Conference contribution
AN - SCOPUS:85072288213
T3 - Proceedings of the American Control Conference
SP - 211
EP - 216
BT - 2019 American Control Conference, ACC 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 American Control Conference, ACC 2019
Y2 - 10 July 2019 through 12 July 2019
ER -