
Learning regularities from different sensory modalities: Evidence for an a-modal learning mechanism



Multisensory Research

How do we implicitly learn regularities from different modalities? Implicit learning refers to people’s ability to learn regularities in the environment automatically, unintentionally, and without awareness. One central question is whether learning occurs separately within modalities (e.g., vision, audition) or whether it is a-modal. Previous studies largely support separable, modality-specific learning mechanisms. We explored this question using two prominent learning paradigms: statistical learning (SL), the learning of statistical regularities between elements, such as transitional probabilities, and artificial grammar learning (AGL), the learning of the underlying grammar of a set of exemplars. In Experiment 1, participants were familiarized with a structured stream of elements composed of recurring triplets and were subsequently tested on their familiarity with these triplets. Learning occurred for visual, auditory, and audiovisual triplets; moreover, learning of the audiovisual triplets was significantly greater than learning of the unimodal triplets. Similarly, in Experiment 2, using an AGL task, participants were familiarized with a set of sequences all adhering to a grammar. They could subsequently classify novel sequences as grammatical or not for both multimodal and unimodal sequences, with learning again superior for multimodal sequences. In Experiment 3 we used a modified AGL task that offers an alternative explanation for previous studies indicating modality-specific learning mechanisms. Two main findings emerge: (a) implicit learning is a-modal, picking up both unimodal and multimodal regularities, and (b) multimodal learning is superior to unimodal learning. Together, these findings support the existence of an a-modal learning mechanism sensitive to multimodal information.
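The triplet structure used in statistical-learning familiarization streams can be sketched in code. This is a hypothetical illustration, not the authors' actual stimuli: elements within a triplet always follow one another (transitional probability of 1.0), while triplet order is randomized, so transitions across triplet boundaries are far less predictable — the contrast learners are assumed to pick up on.

```python
import random

# Hypothetical triplets; in the actual experiments these would be
# visual shapes, auditory syllables, or audiovisual pairings.
TRIPLETS = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I")]

def make_stream(n_triplets=300, seed=0):
    """Concatenate randomly ordered triplets into one familiarization stream."""
    rng = random.Random(seed)
    stream, prev = [], None
    for _ in range(n_triplets):
        # Avoid immediate repetition of the same triplet, as is typical in SL designs.
        t = rng.choice([t for t in TRIPLETS if t != prev])
        stream.extend(t)
        prev = t
    return stream

def transitional_probability(stream, first, second):
    """P(second | first): how often `first` is immediately followed by `second`."""
    followers = [b for a, b in zip(stream, stream[1:]) if a == first]
    return followers.count(second) / len(followers) if followers else 0.0

stream = make_stream()
# Within-triplet transitions are fully predictable...
print(transitional_probability(stream, "A", "B"))  # 1.0
# ...while cross-boundary transitions are not (here, roughly 0.5).
print(transitional_probability(stream, "C", "D"))
```

In the test phase, familiarity judgments would then contrast true triplets (high internal transitional probabilities) with foil triplets spanning these low-probability boundaries.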

Affiliations: 1: Department of Psychology, The Hebrew University of Jerusalem, Israel


2013-05-16
2016-12-06
