TY - JOUR
T1 - Increased Connectivity among Sensory and Motor Regions during Visual and Audiovisual Speech Perception
AU - Peelle, Jonathan E.
AU - Spehar, Brent
AU - Jones, Michael S.
AU - McConkey, Sarah
AU - Myerson, Joel
AU - Hale, Sandra
AU - Sommers, Mitchell S.
AU - Tye-Murray, Nancy
N1 - Publisher Copyright:
© 2022 the authors
PY - 2022/1/19
Y1 - 2022/1/19
AB - In everyday conversation, we usually process the talker’s face as well as the sound of the talker’s voice. Access to visual speech information is particularly useful when the auditory signal is degraded. Here, we used fMRI to monitor brain activity while adult humans (n = 60) were presented with visual-only, auditory-only, and audiovisual words. The audiovisual words were presented in quiet and in several signal-to-noise ratios. As expected, audiovisual speech perception recruited both auditory and visual cortex, with some evidence for increased recruitment of premotor cortex in some conditions (including in substantial background noise). We then investigated neural connectivity using psychophysiological interaction analysis with seed regions in both primary auditory cortex and primary visual cortex. Connectivity between auditory and visual cortices was stronger in audiovisual conditions than in unimodal conditions, including a wide network of regions in posterior temporal cortex and prefrontal cortex. In addition to whole-brain analyses, we also conducted a region-of-interest analysis on the left posterior superior temporal sulcus (pSTS), implicated in many previous studies of audiovisual speech perception. We found evidence for both activity and effective connectivity in pSTS for visual-only and audiovisual speech, although these were not significant in whole-brain analyses. Together, our results suggest a prominent role for cross-region synchronization in understanding both visual-only and audiovisual speech that complements activity in integrative brain regions like pSTS.
KW - audiovisual integration
KW - language
KW - lipreading
KW - speech
KW - speechreading
UR - http://www.scopus.com/inward/record.url?scp=85123812840&partnerID=8YFLogxK
U2 - 10.1523/JNEUROSCI.0114-21.2021
DO - 10.1523/JNEUROSCI.0114-21.2021
M3 - Article
C2 - 34815317
AN - SCOPUS:85123812840
SN - 0270-6474
VL - 42
SP - 435
EP - 442
JO - Journal of Neuroscience
JF - Journal of Neuroscience
IS - 3
ER -