
Hearing voices then seeing lips: Fragmentation and renormalisation of subjective timing in the McGurk illusion


Journal: Seeing and Perceiving

Due to physical and neural delays, the sight and sound of a person speaking cause a cacophony of asynchronous events in the brain. How can we still perceive them as simultaneous? Our converging evidence suggests that, in fact, we do not. Patient PH, with midbrain and auditory brainstem lesions, experiences voices leading lip movements by approximately 200 ms. In temporal order judgements (TOJ) he experiences simultaneity only when voices physically lag lips. In contrast, he requires the opposite, a visual lag (again of about 200 ms), to experience the classic McGurk illusion (e.g., hearing 'da' when listening to /ba/ and watching lips say [ga]), consistent with pathological auditory slowing. These delays seem to be specific to speech stimuli. Is PH just an anomaly? Surprisingly, in neurotypical individuals the temporal tunings of McGurk integration and of TOJ are negatively correlated. Thus some people require a small auditory lead for optimal McGurk integration but an auditory lag for subjective simultaneity (like PH, but less extreme), while others show the opposite pattern. Evidently, any individual can concurrently experience the same external events as happening at different times. These dissociations confirm that distinct mechanisms for audiovisual synchronization versus integration are each subject to different neural delays. To explain the apparent repulsion between their respective timings, we propose that multimodal synchronization is achieved by discounting the average neural event time within each modality. Lesions or individual differences that slow the propagation of neural signals will then attract the average, so that relatively undelayed neural signals are experienced as occurring relatively early.
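The proposed renormalisation account can be illustrated with a minimal numerical sketch. All delay values below are hypothetical and chosen for illustration only (they are not measurements reported in the abstract); the sketch shows only the arithmetic of the proposal, namely that subjective timing is taken relative to the average neural event time within a modality, so slowing one pathway drags the average later and makes unaffected signals appear relatively early.

```python
def subjective_times(neural_times):
    """Renormalise raw neural event times (ms) by discounting their within-
    modality mean, as the proposed account suggests: each event's subjective
    timing is its neural arrival time relative to the modality average."""
    mean = sum(neural_times) / len(neural_times)
    return [t - mean for t in neural_times]

# Two auditory events (e.g., a speech and a non-speech signal), both arriving
# with a 100 ms neural delay in a healthy observer: neither appears offset.
healthy = subjective_times([100, 100])    # -> [0.0, 0.0]

# Pathological slowing of the speech pathway only (+200 ms): the modality
# average is attracted later, so the undelayed non-speech signal is now
# experienced as occurring relatively early.
lesioned = subjective_times([300, 100])   # -> [100.0, -100.0]
```

The negative value in the second case captures the key prediction: a signal whose propagation is unchanged nevertheless shifts in subjective time, because the renormalising average has moved.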

Affiliations: 1: City University London, GB

