The self-advantage in visual speech processing enhances audiovisual speech recognition in noise

Nancy Tye-Murray, Brent P. Spehar, Joel Myerson, Sandra Hale, Mitchell S. Sommers

Research output: Contribution to journal › Article › peer-review

Abstract

Individuals lip read themselves more accurately than they lip read others when only the visual speech signal is available (Tye-Murray et al., Psychonomic Bulletin & Review, 20, 115–119, 2013). This self-advantage for vision-only speech recognition is consistent with the common-coding hypothesis (Prinz, European Journal of Cognitive Psychology, 9, 129–154, 1997), which posits (1) that observing an action activates the same motor plan representation as actually performing that action and (2) that observing one’s own actions activates motor plan representations more than observing others’ actions does, because of greater congruity between percepts and the corresponding motor plans. The present study extends this line of research to audiovisual speech recognition by examining whether there is a self-advantage when the visual signal is added to the auditory signal under poor listening conditions. Participants were assigned to subgroups for round-robin testing in which each participant was paired with every member of their subgroup, including themselves, serving as both talker and listener/observer. On average, the benefit participants obtained from the visual signal when they were the talker was greater than when the talker was someone else, and also was greater than the benefit others obtained from observing as well as listening to them. Moreover, the self-advantage in audiovisual speech recognition was significant after statistically controlling for individual differences in both participants’ ability to benefit from a visual speech signal and the extent to which their own visual speech signal benefited others. These findings are consistent with our previous finding of a self-advantage in lip reading and with the hypothesis of a common code for action perception and motor plan representation.

Original language: English
Pages (from-to): 1048-1053
Number of pages: 6
Journal: Psychonomic Bulletin and Review
Volume: 22
Issue number: 4
DOIs
State: Published - Aug 27 2015

Keywords

  • Audiovisual speech recognition
  • Common coding hypothesis
  • Lip reading
  • Self-advantage
  • Visual speech benefit
