TY - GEN
T1 - Endpoint-based discriminability of minimum energy inputs
AU - Menolascino, Delsin
AU - Ching, Shinung
N1 - Funding Information:
ShiNung Ching holds a Career Award at the Scientific Interface from the Burroughs Wellcome Fund. This work was partially supported by AFOSR 15RT0189, NSF ECCS 1509342, and NSF CMMI 1537015, from the US Air Force Office of Scientific Research and the US National Science Foundation, respectively.
Publisher Copyright:
© 2016 American Automatic Control Council (AACC).
PY - 2016/7/28
Y1 - 2016/7/28
N2 - Complex neural networks, such as those found in the human brain, can discriminate and classify external stimuli with great accuracy. Some of their topological and computational properties have been extracted and used to great effect by the artificial intelligence community. However, even our best simulated neural networks are pale abstractions of reality, partly because (in general) they fail to account for the temporal dynamics and recurrence inherent in natural neural networks, and instead employ feed-forward architectures and discrete, simultaneous activity. In this paper we begin to develop an intuitive, geometric framework to explore the ways in which different inputs could be discriminated in recurrent linear dynamical networks, with the eventual goal of facilitating a transition to more realistic and effective artificial networks. We first establish a useful, closed-form measure on the space of minimum-energy inputs to a linear system, which allows an elucidation of how discrepancies between inputs impact output trajectories in the state space. We characterize, to an extent, the relationship between input and output difference as it relates to system dynamics as manifest in the geometry of the reachable output space. We draw from this characterization principles which may be employed in the design of dynamic, recurrent artificial networks for input discrimination.
AB - Complex neural networks, such as those found in the human brain, can discriminate and classify external stimuli with great accuracy. Some of their topological and computational properties have been extracted and used to great effect by the artificial intelligence community. However, even our best simulated neural networks are pale abstractions of reality, partly because (in general) they fail to account for the temporal dynamics and recurrence inherent in natural neural networks, and instead employ feed-forward architectures and discrete, simultaneous activity. In this paper we begin to develop an intuitive, geometric framework to explore the ways in which different inputs could be discriminated in recurrent linear dynamical networks, with the eventual goal of facilitating a transition to more realistic and effective artificial networks. We first establish a useful, closed-form measure on the space of minimum-energy inputs to a linear system, which allows an elucidation of how discrepancies between inputs impact output trajectories in the state space. We characterize, to an extent, the relationship between input and output difference as it relates to system dynamics as manifest in the geometry of the reachable output space. We draw from this characterization principles which may be employed in the design of dynamic, recurrent artificial networks for input discrimination.
UR - http://www.scopus.com/inward/record.url?scp=84992124418&partnerID=8YFLogxK
U2 - 10.1109/ACC.2016.7525382
DO - 10.1109/ACC.2016.7525382
M3 - Conference contribution
AN - SCOPUS:84992124418
T3 - Proceedings of the American Control Conference
SP - 3038
EP - 3043
BT - 2016 American Control Conference, ACC 2016
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2016 American Control Conference, ACC 2016
Y2 - 6 July 2016 through 8 July 2016
ER -