TY - JOUR
T1 - Auditory and Visual Lexical Neighborhoods in Audiovisual Speech Perception
AU - Tye-Murray, Nancy
AU - Sommers, Mitchell
AU - Spehar, Brent
N1 - Funding Information:
This work was supported by grant R01 AG-180291 from the US National Institutes of Health, National Institute on Aging, Bethesda, Maryland.
PY - 2007/12
Y1 - 2007/12
AB - Much evidence suggests that the mental lexicon is organized into auditory neighborhoods, with words that are phonologically similar belonging to the same neighborhood. In this investigation, we considered the existence of visual neighborhoods. When a receiver watches someone speak a word, a neighborhood of homophenes (i.e., words that look alike on the face, such as pat and bat) is activated. The simultaneous activation of a word's auditory and visual neighborhoods may, in part, account for why individuals recognize speech better in an auditory–visual condition than would be predicted by their performance in audition-only and vision-only conditions. A word test was administered to 3 groups of participants in audition-only, vision-only, and auditory–visual conditions, in the presence of 6-talker babble. Test words with sparse visual neighborhoods were recognized more accurately than words with dense visual neighborhoods in the vision-only condition. The densities of both the auditory and visual neighborhoods, as well as the degree of overlap between them, were predictive of how well the test words were recognized in the auditory–visual condition. These results suggest that visual neighborhoods exist and that they affect auditory–visual speech perception. One implication is that in the presence of dual sensory impairment, the boundaries of both auditory and visual neighborhoods may shift, adversely affecting speech recognition.
KW - Lexical neighborhood
KW - auditory–visual speech perception
KW - integration
UR - http://www.scopus.com/inward/record.url?scp=36248950233&partnerID=8YFLogxK
DO - 10.1177/1084713807307409
M3 - Article
C2 - 18003867
AN - SCOPUS:36248950233
SN - 1084-7138
VL - 11
SP - 233
EP - 241
JO - Trends in Amplification
JF - Trends in Amplification
IS - 4
ER -