Evolving Images for Visual Neurons Using a Deep Generative Network Reveals Coding Principles and Neuronal Preferences

Carlos R. Ponce, Will Xiao, Peter F. Schade, Till S. Hartmann, Gabriel Kreiman, Margaret S. Livingstone

Research output: Contribution to journal › Article › peer-review


Abstract

What specific features should visual neurons encode, given the infinity of real-world images and the limited number of neurons available to represent them? We investigated neuronal selectivity in monkey inferotemporal cortex via the vast hypothesis space of a generative deep neural network, avoiding assumptions about features or semantic categories. A genetic algorithm searched this space for stimuli that maximized neuronal firing. This led to the evolution of rich synthetic images of objects with complex combinations of shapes, colors, and textures, sometimes resembling animals or familiar people, other times revealing novel patterns that did not map to any clear semantic category. These results expand our conception of the dictionary of features encoded in the cortex, and the approach can potentially reveal the internal representations of any system whose input can be captured by a generative model. Neurons guided the evolution of their own best stimuli with a generative deep neural network.
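The closed-loop search the abstract describes — a genetic algorithm evolving latent codes of a generative network, with recorded firing rates as fitness — can be sketched in miniature. This is an illustrative toy, not the paper's implementation: the latent dimensionality, population size, and operators are placeholders, and a synthetic `firing_rate` function stands in for the actual pipeline of rendering a code through the generator and recording a neuron's response.

```python
import random

LATENT_DIM = 8     # illustrative; the paper's generator uses far larger codes
POP_SIZE = 20
GENERATIONS = 50

# Hypothetical stand-in for generator + neuron: in the experiment, a latent
# code is rendered to an image and fitness is the neuron's firing rate.
# Here we fake it as closeness to a hidden "preferred" code the GA never sees.
random.seed(0)
preferred = [random.uniform(-1, 1) for _ in range(LATENT_DIM)]

def firing_rate(code):
    # Higher when the code is nearer the hidden preferred stimulus.
    return -sum((c - p) ** 2 for c, p in zip(code, preferred))

def mutate(code, sigma=0.1):
    return [c + random.gauss(0, sigma) for c in code]

def recombine(a, b):
    # Uniform crossover: each gene drawn from one of the two parents.
    return [random.choice(pair) for pair in zip(a, b)]

population = [[random.uniform(-1, 1) for _ in range(LATENT_DIM)]
              for _ in range(POP_SIZE)]

history = []  # best fitness per generation
for gen in range(GENERATIONS):
    ranked = sorted(population, key=firing_rate, reverse=True)
    history.append(firing_rate(ranked[0]))
    parents = ranked[: POP_SIZE // 2]  # elitism: keep the top half
    children = [mutate(recombine(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(history[0], history[-1])  # best fitness should improve over generations
```

Because the top half of each generation survives unchanged, the best fitness is monotonically non-decreasing; in the experiment the same loop drives synthetic images toward whatever features the recorded neuron responds to most strongly.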

Original language: English
Pages (from-to): 999-1009.e10
Journal: Cell
Volume: 177
Issue number: 4
DOIs
State: Published - May 2 2019

Keywords

  • generative adversarial network
  • inferotemporal cortex
  • neural networks

