Talking points: A modulating circle reduces listening effort without improving speech recognition

Julia F. Strand, Violet A. Brown, Dennis L. Barbour

Research output: Contribution to journal › Article › peer-review



Speech recognition is improved when the acoustic input is accompanied by visual cues provided by a talking face (Erber in Journal of Speech and Hearing Research, 12(2), 423–425, 1969; Sumby & Pollack in The Journal of the Acoustical Society of America, 26(2), 212–215, 1954). One way that the visual signal facilitates speech recognition is by providing the listener with information about fine phonetic detail that complements information from the auditory signal. However, given that degraded face stimuli can still improve speech recognition accuracy (Munhall et al. in Perception & Psychophysics, 66(4), 574–583, 2004), and static or moving shapes can improve speech detection accuracy (Bernstein et al. in Speech Communication, 44(1–4), 5–18, 2004), aspects of the visual signal other than fine phonetic detail may also contribute to the perception of speech. In two experiments, we show that a modulating circle providing information about the onset, offset, and acoustic amplitude envelope of the speech does not improve recognition of spoken sentences (Experiment 1) or words (Experiment 2), but does reduce the effort necessary to recognize speech. These results suggest that although fine phonetic detail may be required for the visual signal to benefit speech recognition, low-level features of the visual signal may function to reduce the cognitive effort associated with processing speech.

Original language: English
Pages (from-to): 291-297
Number of pages: 7
Journal: Psychonomic Bulletin and Review
Issue number: 1
State: Published - Feb 15 2019


Keywords:
  • Cross-modal attention
  • Speech perception
  • Spoken word recognition


