Reading your own lips: Common-coding theory and visual speech perception

Nancy Tye-Murray, Brent P. Spehar, Joel Myerson, Sandra Hale, Mitchell S. Sommers

Research output: Contribution to journal › Article › peer-review

Abstract

Common-coding theory posits that (1) perceiving an action activates the same representations of motor plans that are activated by actually performing that action, and (2) because of individual differences in the ways that actions are performed, observing recordings of one's own previous behavior activates motor plans to an even greater degree than does observing someone else's behavior. We hypothesized that if observing oneself activates motor plans to a greater degree than does observing others, and if these activated plans contribute to perception, then people should be able to lipread silent video clips of their own previous utterances more accurately than they can lipread video clips of other talkers. As predicted, two groups of participants were able to lipread video clips of themselves, recorded more than two weeks earlier, significantly more accurately than video clips of others. These results suggest that visual input activates speech motor activity that links to word representations in the mental lexicon.

Original language: English
Pages (from-to): 115-119
Number of pages: 5
Journal: Psychonomic Bulletin and Review
Volume: 20
Issue number: 1
DOIs
State: Published - 2013

Keywords

  • Models of visual word recognition and priming
  • Motor control
  • Motor planning/programming
  • Visual word recognition
