Early auditory sensory processing is facilitated by visual mechanisms

There is compelling evidence that low-level sensory areas are sensitive to more than one modality. For example, auditory cortices respond to visual-only stimuli (Calvert et al., 1997; Meyer et al., 2010; Pekkola et al., 2005) and, conversely, visual sensory areas respond to sound sources even in auditory-only conditions (Poirier et al., 2005; von Kriegstein et al., 2008; von Kriegstein and Giraud, 2006). It is currently unknown what makes the brain activate modality-specific sensory areas solely in response to input of a different modality. One possibility is that such activations are instrumental for early sensory processing of the input modality, a hypothesis that runs contrary to current textbook knowledge. Here we test this hypothesis by harnessing a method with high temporal resolution, magnetoencephalography (MEG), to identify the temporal response profile of visual regions during auditory-only voice recognition. Participants (n = 19) briefly learned a set of voices audio-visually, i.e., together with a talking face, in an ecologically valid situation resembling daily life. Once subjects were able to recognize these now-familiar voices, we measured their brain responses using MEG. The results revealed two key mechanisms that characterize the sensory processing of familiar speakers’ voices: (i) activation of the visual, face-sensitive fusiform gyrus at very early auditory processing stages, i.e., only 100 ms after auditory onset, and (ii) a temporal facilitation of auditory processing (M200) that was directly associated with improved recognition performance. These findings suggest that visual areas are instrumental even during very early auditory-only processing stages, and indicate that the brain uses visual mechanisms to optimize sensory processing and recognition of auditory stimuli.

Affiliations: 1. Max Planck Institute for Human Cognitive and Brain Sciences, Germany

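The sketch below, written against the open-source MNE-Python library, illustrates the kind of evoked-response analysis the abstract describes: epoching MEG data around auditory onset and reading out peak responses in early time windows such as the M100 (around 100 ms) and the M200 (around 200 ms). The file name, trigger channel, event codes, and time windows are illustrative assumptions, not the authors' actual pipeline.

# Illustrative only: file name, trigger channel, event codes and time
# windows below are assumptions, not taken from the study.
import mne

raw = mne.io.read_raw_fif("voice_recognition_raw.fif", preload=True)  # hypothetical recording
raw.filter(l_freq=1.0, h_freq=40.0)  # band-pass typical for evoked-field analyses

events = mne.find_events(raw, stim_channel="STI 101")     # assumed trigger channel
event_id = {"familiar_voice": 1, "unfamiliar_voice": 2}   # assumed event codes

# Epoch around auditory onset and baseline-correct with the pre-stimulus interval.
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=0.5, baseline=(-0.2, 0.0), preload=True)

# Average trials to obtain the evoked field for familiar voices.
evoked = epochs["familiar_voice"].average()

# Peak latency and amplitude on magnetometers in the M100 and M200 windows.
for name, (tmin, tmax) in {"M100": (0.08, 0.13), "M200": (0.15, 0.25)}.items():
    ch, lat, amp = evoked.get_peak(ch_type="mag", tmin=tmin, tmax=tmax,
                                   return_amplitude=True)
    print(f"{name}: peak at {lat * 1000:.0f} ms on {ch} ({amp:.2e} T)")

Comparing M200 peak latencies between voices learned with and without a talking face would be one way to quantify the temporal facilitation reported in the abstract.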

References

1. Calvert G. A., Bullmore E. T., Brammer M. J., Campbell R., Williams S. C., et al. (1997). "Activation of auditory cortex during silent lipreading", Science 276, 593–596. http://dx.doi.org/10.1126/science.276.5312.593
2. Meyer K., Kaplan J. T., Essex R., Webber C., Damasio H., et al. (2010). "Predicting visual stimuli on the basis of activity in auditory cortices", Nat. Neurosci. 13, 667–668. http://dx.doi.org/10.1038/nn.2533
3. Pekkola J., Ojanen V., Autti T., Jääskeläinen I. P., Möttönen R., et al. (2005). "Primary auditory cortex activation by visual speech: an fMRI study at 3 T", Neuroreport 16, 125–128. http://dx.doi.org/10.1097/00001756-200502080-00010
4. Poirier C., Collignon O., Devolder A. G., Renier L., Vanlierde A., et al. (2005). "Specific activation of the V5 brain area by auditory motion processing: an fMRI study", Brain Res. Cogn. Brain Res. 25, 650–658. http://dx.doi.org/10.1016/j.cogbrainres.2005.08.015
5. von Kriegstein K., Dogan O., Grüter M., Giraud A. L., Kell C. A., et al. (2008). "Simulation of talking faces in the human brain improves auditory speech recognition", Proc. Natl. Acad. Sci. USA 105, 6747–6752. http://dx.doi.org/10.1073/pnas.0710826105
6. von Kriegstein K., Giraud A. L. (2006). "Implicit multisensory associations influence voice recognition", PLoS Biol. 4, e326.