
Supramodal processing in visual learning and plasticity



Multisensory Research

Multisensory interactions are ubiquitous in cortex, raising the question of whether sensory cortices can be distinctively supramodal, i.e., capable of functional selectivity irrespective of the sensory modality of inputs (Pascual-Leone and Hamilton, 2001; Ricciardi and Pietrini, 2011; Voss and Zatorre, 2012). This suggests that visual perceptual learning could benefit from supramodal processing via reverse hierarchy (Ahissar and Hochstein, 2004; Proulx et al., 2012). To test this, novel stimuli were developed consisting of acoustic textures sharing the temporal statistics of visual random dot kinematograms (RDKs). Two groups of participants were trained in a difficult visual coherence discrimination task, with or without sounds, while being recorded with magnetoencephalography (MEG). Participants trained in audiovisual conditions (AV) significantly outperformed visual trainees (V), although they were unaware of their progress. When contrasting post- vs. pre-training MEG data, significant differences between the two groups were observed in both the dynamic pattern and the cortical regions responsive to visual RDKs. Specifically, neural activity in multisensory cortices (mSTS) correlated with post-training performance, and the visual motion area (hMT+) selectively responded to trained coherence levels, but only in the AV trainees. The latencies of these effects suggest selective feedback from mSTS to hMT+, possibly mediated by posterior temporal cortices (pSTS). Altogether, we interpret our results in the context of the Reverse Hierarchy Theory of learning (Ahissar and Hochstein, 2004), in which supramodal processing optimizes visual perceptual learning by capitalizing on sensory-invariant representations, namely global coherence levels across sensory modalities.


2013-05-16
2016-12-07
