
Touching bouba, hearing kiki. Image resolution and sound symbolism in visual-to-auditory sensory substitution


Multisensory Research

The Bouba/Kiki effect involves non-arbitrary mapping between visual shape and speech sounds (Ramachandran and Hubbard, 2001). We presented two groups of participants with tactile ‘bouba’ and ‘kiki’ objects and their sonified soundscapes. One group was trained in the use of a visual-to-auditory sensory substitution device (SSD); the other was a control group that received no training. Participants in the trained condition learned to make associations between visual and tactile objects and their soundscapes using the SSD. In the test phases the objects were categorized by image resolution and temporal duration. Even though higher-resolution images were possible (maximum theoretical resolution of 176 × 144 pixels), object discrimination remained significant down to 8 × 8 pixels for both the tactile-to-auditory and visual-to-auditory conditions. For duration, a highly significant difference favouring the ‘short’ duration category was found in the tactile-to-auditory condition. Participants were then tested with the Bouba/Kiki stimuli, beginning with the tactile shape test. Participants in each condition primarily chose the expected tactile shape (88% trained; 83% control), consistent with the shape chosen visually. Each group then listened to SSD soundscape versions of the stimuli. Only 41% of the naïve control participants chose the expected soundscape, whereas 76.5% of the trained participants selected the expected soundscape consistent with the shape chosen visually (all selected the expected visual–verbal association). The results show that, amongst naïve users of SSDs, shape discrimination can be made using very basic object features, and that non-arbitrary cross-modal mappings are apparent after just a basic training regime.

Affiliations: 1: Queen Mary University of London, UK; 2: University of Bath, UK
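The abstract does not describe the SSD's conversion scheme, but devices of this kind typically use a vOICe-style mapping: the image is scanned column by column from left to right, each pixel's row sets the frequency of a sinusoid (top of the image = high pitch) and its brightness sets that sinusoid's amplitude. The Python sketch below is an illustrative assumption of such a mapping, not the authors' implementation; all names, frequency ranges, and the random placeholder image are hypothetical, and image resolution and scan duration are exposed as the two parameters the study manipulates (e.g., 8 × 8 pixels, ‘short’ vs. ‘long’ durations).

# Minimal sketch of a vOICe-style visual-to-auditory mapping (assumed, for illustration only).
import numpy as np

def sonify(image, duration_s=1.0, sample_rate=44100, f_min=500.0, f_max=5000.0):
    """Convert a 2-D grayscale array (rows x cols, values 0..1) into a mono soundscape."""
    rows, cols = image.shape
    samples_per_col = int(duration_s * sample_rate / cols)
    # Top rows get the highest frequencies (log-spaced pitch axis).
    freqs = np.geomspace(f_max, f_min, rows)
    t = np.arange(samples_per_col) / sample_rate
    out = []
    for c in range(cols):                      # left-to-right scan over the image
        column = np.zeros_like(t)
        for r in range(rows):                  # brighter pixels -> louder partials
            column += image[r, c] * np.sin(2 * np.pi * freqs[r] * t)
        out.append(column / rows)
    return np.concatenate(out)

# Usage: downsampled 8 x 8 input, mirroring the lowest-resolution condition in the abstract.
rng = np.random.default_rng(0)
shape_8x8 = rng.random((8, 8))                 # placeholder for a bouba/kiki image
soundscape = sonify(shape_8x8, duration_s=1.0)

The key design choice in such schemes is that shape information survives aggressive downsampling: even at 8 × 8 pixels the coarse spatial envelope of a rounded versus spiky object still produces distinguishable pitch-loudness contours.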


DOI: 10.1163/22134808-000s0044
