Multivariate sensitivity to voice during auditory categorization

Yune Sang Lee, Jonathan E. Peelle, David Kraemer, Samuel Lloyd, Richard Granger

Research output: Contribution to journal › Article › peer-review


Abstract

Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex.
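The abstract's signal-detection framing — scoring how well a multivariate pattern classifier separates voice from nonvoice trials — can be illustrated with the standard sensitivity index d′. This is a generic sketch of that idea, not the study's actual analysis pipeline, and the hit and false-alarm rates below are made-up numbers.

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical example: a classifier flags 90% of voice trials as
# 'voice' (hits) and 20% of nonvoice trials as 'voice' (false alarms).
sensitivity = d_prime(hit_rate=0.9, fa_rate=0.2)  # about 2.12
```

A d′ of 0 corresponds to chance-level separation; larger values indicate a more distinct neural representation of the voice category relative to the other auditory object categories.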

Original language: English
Pages (from-to): 1819-1826
Number of pages: 8
Journal: Journal of Neurophysiology
Volume: 114
Issue number: 3
State: Published - Aug 5 2015

Keywords

  • Animate
  • Auditory
  • Categorization
  • Category specific
  • Conspecific
  • Human
  • Living
  • Multivariate pattern-based analysis
  • Temporal voice area
  • Voice
