

Audiovisual onset differences are used to determine the identity of ambiguous syllables


Content and temporal cues have been shown to interact during audiovisual (AV) speech identification. Typically, the most reliable unimodal cue is used to identify specific speech features; however, visual cues are only used if the audiovisual stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together and should be integrated. It is unknown whether temporal cues also provide information about speech content. Since spoken syllables have naturally varying audiovisual onset asynchronies, we hypothesized that for suboptimal AV cues presented within the TWI, these natural AV onset differences can aid speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals and varied the stimulus onset asynchrony (SOA) of the audiovisual pair while participants identified the syllables. We found that the most reliable cues of the audiovisual input were used to identify specific speech features (e.g., voicing). Additionally, we showed that the TWI was wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explain by the use of natural onset differences between audiovisual speech signals. This indicates that temporal cues not only determine whether different inputs belong together, but also convey identity information about audiovisual pairs. These results provide a detailed behavioral basis for further neuroimaging and stimulation studies to unravel the neurofunctional mechanisms of the audiovisual–temporal interplay in speech perception.

Affiliations: 1. Department of Cognitive Neuroscience, Maastricht University, The Netherlands; 2. Maastricht University, The Netherlands
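As a rough illustration of the kind of analysis the abstract implies (identification responses tabulated across a range of audiovisual stimulus onset asynchronies), the sketch below, in Python, computes the proportion of correct syllable identifications per SOA. All SOA values, syllable labels, and trial records are hypothetical placeholders, not the study's stimuli or data.

    # Illustrative sketch (not the authors' analysis code): proportion of correct
    # syllable identifications at each audiovisual SOA.
    from collections import defaultdict

    # Hypothetical trial records: (soa_ms, presented_syllable, response_syllable).
    # Negative SOA = audio leads video; values are placeholders.
    trials = [
        (-80, "ba", "ba"), (-80, "ba", "da"), (0, "ba", "ba"),
        (80, "da", "da"), (160, "da", "ba"), (240, "da", "da"),
    ]

    # soa -> [correct, total]
    counts = defaultdict(lambda: [0, 0])
    for soa, presented, response in trials:
        counts[soa][1] += 1
        counts[soa][0] += int(response == presented)

    # The resulting response pattern across the SOA range is what the abstract
    # describes examining for ambiguous syllables.
    for soa in sorted(counts):
        correct, total = counts[soa]
        print(f"SOA {soa:+4d} ms: {correct / total:.2f} correct ({total} trials)")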

