
Sources of variance in the audiovisual perception of speech in noise

Seeing and Perceiving

The sight of a talker’s face dramatically influences the perception of auditory speech. This effect is most commonly observed when subjects are presented with audiovisual (AV) stimuli in the presence of acoustic noise. However, the magnitude of the gain in perception that vision adds varies considerably in published work. Here we report data from an ongoing study of individual differences in AV speech perception when English words are presented in an acoustically noisy background. A large set of monosyllabic nouns was presented at seven signal-to-noise ratios (pink noise) in both AV and auditory-only (AO) presentation modes. The stimuli were divided into 14 blocks of 25 words, and each block was equated for spoken frequency using the SUBTLEXus database (Brysbaert and New, 2009). The presentation of the stimulus blocks was counterbalanced across subjects for noise level and presentation mode. In agreement with Sumby and Pollack (1954), the accuracy of both AO and AV perception increased monotonically with signal strength, with the greatest visual gain occurring when the auditory signal was weakest. These average results mask considerable variability due to subject factors (individual differences in auditory and visual perception), stimulus factors (lexical type, token articulation) and presentation factors (signal and noise attributes). We will discuss how these sources of variance impede comparisons between studies.
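The frequency-equating step described in the abstract can be sketched in Python. This is a toy illustration under stated assumptions: the word list, the random log frequencies, and the snake-dealing heuristic are all hypothetical stand-ins, not the authors' actual procedure for matching blocks on SUBTLEXus frequency.

```python
import random
import statistics

def make_equated_blocks(word_freqs, n_blocks):
    """Deal words into n_blocks so block mean frequencies are similar.

    Words are sorted by frequency and dealt in snake (boustrophedon)
    order: positions 0..n-1 on odd passes, n-1..0 on even passes, so
    every block receives a comparable mix of high- and low-frequency
    items.
    """
    ordered = sorted(word_freqs, key=word_freqs.get, reverse=True)
    blocks = [[] for _ in range(n_blocks)]
    for i, word in enumerate(ordered):
        pass_idx, pos = divmod(i, n_blocks)
        target = pos if pass_idx % 2 == 0 else n_blocks - 1 - pos
        blocks[target].append(word)
    return blocks

random.seed(1)
# 350 hypothetical words with random log10 frequencies, mirroring the
# study's design of 14 blocks of 25 words each.
freqs = {f"word{i}": random.uniform(0.5, 4.5) for i in range(350)}
blocks = make_equated_blocks(freqs, 14)
means = [statistics.mean(freqs[w] for w in blk) for blk in blocks]
print([len(b) for b in blocks])           # 14 blocks of 25 words
print(round(max(means) - min(means), 3))  # block means nearly equal
```

With 350 items and 14 blocks the dealing completes 25 full passes, so every block holds exactly 25 words, and the spread between block mean frequencies stays small because adjacent snake passes cancel each other's bias.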

Affiliations: 1. Centre for Neuroscience Studies, Queen’s University, Canada; 2. Department of Psychology, Queen’s University, Canada; 3. Department of Biomedical and Molecular Sciences, Queen’s University, Canada



References

1. Brysbaert, M., & New, B. (2009). Moving beyond Kučera and Francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English. Behavior Research Methods, 41, 977–990. http://dx.doi.org/10.3758/BRM.41.4.977
2. Sumby, W. H., & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America, 26, 212–215. http://dx.doi.org/10.1121/1.1907309
Published: 2012-01-01; accessed: 2016-12-05
