Visual prototypes in the ventral stream are attuned to complexity and gaze behavior

Olivia Rose, James Johnson, Binxu Wang, Carlos R. Ponce

Abstract

Early theories of efficient coding suggested the visual system could compress the world by learning to represent features where information was concentrated, such as contours. This view was validated by the discovery that neurons in posterior visual cortex respond to edges and curvature. Still, it remains unclear what other information-rich features are encoded by neurons in more anterior cortical regions (e.g., inferotemporal cortex). Here, we use a generative deep neural network to synthesize images guided by neuronal responses from across the visuocortical hierarchy, using floating microelectrode arrays in areas V1, V4 and inferotemporal cortex of two macaque monkeys. We hypothesize these images (“prototypes”) represent such predicted information-rich features. Prototypes vary across areas, show moderate complexity, and resemble salient visual attributes and semantic content of natural images, as indicated by the animals’ gaze behavior. This suggests the code for object recognition represents compressed features of behavioral relevance, an underexplored aspect of efficient coding.
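As a rough illustration of the closed-loop synthesis described in the abstract, the sketch below evolves latent codes of an image generator so that the resulting images maximize a neuronal response. Everything here is a placeholder assumption: the generator, latent dimensionality, and scoring function are stand-ins, not the authors' actual models or recordings, and in the real experiment the score would come from firing rates measured on the microelectrode arrays.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 4096      # assumed latent dimensionality of the generator
POP_SIZE = 40          # images presented per generation (assumption)
N_GENERATIONS = 100

def generate_image(latent):
    """Placeholder for a pretrained image generator (e.g., a GAN-style network).
    Here it simply maps the latent code to a toy 64x64 'image' array."""
    return np.tanh(latent).reshape(64, 64)

def neuronal_response(image):
    """Placeholder for the recorded firing rate of a neuron to an image.
    For illustration, score similarity to an arbitrary fixed template."""
    template = np.outer(np.hanning(64), np.hanning(64))
    return float((image * template).sum())

# Closed-loop evolution: keep and perturb latents whose images drive the
# (simulated) neuron most strongly, as in neuron-guided image synthesis.
population = rng.standard_normal((POP_SIZE, LATENT_DIM))
for gen in range(N_GENERATIONS):
    scores = np.array([neuronal_response(generate_image(z)) for z in population])
    elite = population[np.argsort(scores)[-POP_SIZE // 4:]]            # top 25%
    children = elite[rng.integers(len(elite), size=POP_SIZE)]          # resample
    population = children + 0.1 * rng.standard_normal(children.shape)  # mutate

best_scores = [neuronal_response(generate_image(z)) for z in population]
prototype = generate_image(population[int(np.argmax(best_scores))])
print("final simulated response:", max(best_scores))
```

The surviving image at the end of the run plays the role of a "prototype": the stimulus the optimization converges on when guided only by the neuron's responses.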

Original language: English
Article number: 6723
Journal: Nature Communications
Volume: 12
Issue number: 1
State: Published - Dec 2021
