A computational analysis of separating motion signals in transparent random dot kinematograms

Spatial Vision

When multiple motion directions are presented simultaneously within the same region of the visual field, human observers perceive motion transparency. This phenomenon requires the visual system to separate different motion signal distributions, which are characterised by distinct means corresponding to the different dot directions and by variances determined by signal and processing noise. Averaging of local motion signals can reduce noise components, but such pooling could at the same time average different directional signal components arising from spatially adjacent dots moving in different directions, which would reduce the visibility of transparent motion. To study the theoretical limitations of encoding transparent motion with a biologically plausible motion detector network, the distributions of motion directions signalled by a motion detector model (2DMD) were analysed here for random dot kinematograms (RDKs). In sparse-dot RDKs with two randomly interleaved motion directions, the angular separation at which the two directions can still be resolved is limited by the internal noise of the system; under the present conditions, direction differences down to 30 deg could be separated. Correspondingly, in a transparent motion stimulus containing multiple motion directions, more than eight directions could be separated. When this computational analysis is compared with published psychophysical data, it appears that the experimental results do not reach the predicted limits. Whereas the computer simulations demonstrate that even an unsophisticated motion detector network can represent a considerable number of motion directions simultaneously within the same region, human observers are usually restricted to seeing no more than two or three directions under comparable conditions. This raises the question of why human observers do not make full use of information that could easily be extracted from the representation of motion signals at early stages of the visual system.
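The separability argument above can be illustrated with a small simulation. This is not the 2DMD model itself, only a minimal sketch of the underlying idea: noisy local direction estimates are pooled into a direction histogram, and transparent directions remain visible as long as the histogram shows distinct peaks. The function names, the noise level, and the peak criterion are illustrative assumptions, not parameters from the study.

```python
import numpy as np

def simulate_direction_signals(directions_deg, n_per_dir=2000, noise_sd=8.0, seed=0):
    """Draw noisy local direction estimates (degrees, wrapped to [0, 360))
    for a transparent RDK with the given dot directions. noise_sd stands in
    for the combined signal and processing noise (an assumed value)."""
    rng = np.random.default_rng(seed)
    samples = np.concatenate([
        rng.normal(d, noise_sd, n_per_dir) for d in directions_deg
    ])
    return samples % 360.0

def count_direction_peaks(samples, bin_width=5.0, min_fraction=0.5):
    """Pool the samples into a circular direction histogram and count local
    maxima that reach at least min_fraction of the tallest bin."""
    n_bins = int(360 / bin_width)
    hist, _ = np.histogram(samples, bins=n_bins, range=(0.0, 360.0))
    thresh = min_fraction * hist.max()
    peaks = 0
    for i in range(n_bins):
        left = hist[(i - 1) % n_bins]   # circular neighbours: 0 deg wraps to 360 deg
        right = hist[(i + 1) % n_bins]
        if hist[i] >= thresh and hist[i] > left and hist[i] >= right:
            peaks += 1
    return peaks

# Two directions 30 deg apart: the pooled histogram keeps two distinct peaks.
print(count_direction_peaks(simulate_direction_signals([0.0, 30.0])))
# At only 10 deg separation the two distributions merge into a single peak.
print(count_direction_peaks(simulate_direction_signals([0.0, 10.0])))
```

With this noise level the 30 deg case stays bimodal while the 10 deg case collapses, mirroring the noise-limited separation threshold described in the abstract; shrinking `noise_sd` pushes the resolvable separation below 30 deg, and widely spaced directions (e.g. four at 90 deg spacing) all survive pooling.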

