
Rotating straight ahead or translating in circles: How we learn to integrate contradictory multisensory self-motion cue pairings



Humans integrate multisensory information to reduce perceptual uncertainty when perceiving the world (Hillis et al., 2002, 2004) and the self (Butler et al., 2010; Prsa et al., 2012), and it has been shown that two multisensory cues are combined into a single percept only if they are attributed to the same causal event (Koerding et al., 2007; Parise et al., 2012; Shams and Beierholm, 2010). A growing body of literature studies the limits of such integration for bodily self-consciousness and the perception of self-location under normal and pathological conditions (Ionta et al., 2011). We extend this research by investigating whether human subjects can learn to integrate two arbitrary visual and vestibular self-motion cues on the basis of their temporal co-occurrence. We conducted two experiments (N = 8 each) in which whole-body rotations were used as the vestibular stimulus and optic flow as the visual stimulus. The vestibular stimulus provided a yaw self-rotation cue; the visual stimulus provided a roll (experiment 1) or pitch (experiment 2) rotation cue. Subjects made a relative size comparison between a standard rotation size and a variable test rotation size. Their discrimination performance was fit with a psychometric function, and perceptual discrimination thresholds were extracted. We compared the experimentally measured thresholds in the bimodal condition with theoretical predictions derived from the single-cue thresholds. Our results show that human subjects can learn to combine and optimally integrate vestibular and visual information, each signaling self-motion around a different rotation axis (yaw versus roll, as well as pitch). This finding suggests that experiencing two temporally co-occurring but spatially unrelated self-motion cues leads to inferring a common cause for these two initially unrelated sources of information about self-motion.
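The comparison described above (measured bimodal thresholds versus predictions from single-cue thresholds) follows the standard maximum-likelihood model of optimal cue integration, under which the predicted bimodal variance is sigma_bi^2 = sigma_vis^2 * sigma_vest^2 / (sigma_vis^2 + sigma_vest^2). The sketch below is a minimal illustration of that analysis pipeline, not the authors' code: the psychometric model, the data arrays, and all names are hypothetical.

```python
# Minimal sketch (hypothetical data, not the authors' analysis): fit a
# cumulative-Gaussian psychometric function per condition, take sigma as
# the discrimination threshold, and compute the maximum-likelihood
# prediction for the bimodal threshold from the two single-cue thresholds.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    """Cumulative Gaussian: P(test judged larger than standard)."""
    return norm.cdf(x, loc=mu, scale=sigma)

def fit_threshold(test_sizes, p_larger):
    """Fit the psychometric function; sigma serves as the threshold."""
    (mu, sigma), _ = curve_fit(psychometric, test_sizes, p_larger,
                               p0=[np.median(test_sizes), 1.0])
    return sigma

# Hypothetical per-condition data: test rotation sizes (deg) and the
# proportion of 'test > standard' responses at each size.
test_sizes   = np.array([14., 16., 18., 20., 22., 24., 26.])
p_vestibular = np.array([0.05, 0.15, 0.35, 0.50, 0.70, 0.85, 0.95])
p_visual     = np.array([0.10, 0.20, 0.40, 0.50, 0.65, 0.80, 0.90])

sigma_vest = fit_threshold(test_sizes, p_vestibular)
sigma_vis  = fit_threshold(test_sizes, p_visual)

# Optimal integration predicts a bimodal threshold below either
# single-cue threshold: sigma_bi = sqrt(s_vis^2 s_vest^2 / (s_vis^2 + s_vest^2)).
sigma_bi_pred = np.sqrt((sigma_vis**2 * sigma_vest**2) /
                        (sigma_vis**2 + sigma_vest**2))
print(f"vestibular: {sigma_vest:.2f} deg, visual: {sigma_vis:.2f} deg, "
      f"predicted bimodal: {sigma_bi_pred:.2f} deg")
```

Optimal integration is then diagnosed by testing whether the empirically measured bimodal threshold matches this prediction rather than the better single cue alone.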

Affiliations: 1. Laboratory of Cognitive Neuroscience, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; 2. Université de Genève, Neural Circuits and Behavior, Switzerland


