TY - JOUR
T1 - Reading your own lips: Common-coding theory and visual speech perception
AU - Tye-Murray, Nancy
AU - Spehar, Brent P.
AU - Myerson, Joel
AU - Hale, Sandra
AU - Sommers, Mitchell S.
N1 - Funding Information:
This research was supported by National Institutes of Health Grant No. AG018029.
PY - 2013
AB - Common-coding theory posits that (1) perceiving an action activates the same representations of motor plans that are activated by actually performing that action, and (2) because of individual differences in the ways that actions are performed, observing recordings of one's own previous behavior activates motor plans to an even greater degree than does observing someone else's behavior. We hypothesized that if observing oneself activates motor plans to a greater degree than does observing others, and if these activated plans contribute to perception, then people should be able to lipread silent video clips of their own previous utterances more accurately than they can lipread video clips of other talkers. As predicted, two groups of participants were able to lipread video clips of themselves, recorded more than two weeks earlier, significantly more accurately than video clips of others. These results suggest that visual input activates speech motor activity that links to word representations in the mental lexicon.
KW - Models of visual word recognition and priming
KW - Motor control
KW - Motor planning/programming
KW - Visual word recognition
UR - http://www.scopus.com/inward/record.url?scp=84873139611&partnerID=8YFLogxK
DO - 10.3758/s13423-012-0328-5
M3 - Article
C2 - 23132604
AN - SCOPUS:84873139611
SN - 1069-9384
VL - 20
SP - 115
EP - 119
JF - Psychonomic Bulletin & Review
IS - 1
ER -