Acquiring object affordances through touch, vision, and language

We often use tactile input to recognize familiar objects and to acquire information about unfamiliar ones. We also use our hands to manipulate objects and to employ them as tools. However, research on object affordances has focused mainly on visual input, limiting the level of detail that can be obtained about object features and uses. Beyond this restricted sensory input, data on object affordances have also been constrained by limited participant input (e.g., naming tasks). To address these limitations, we aimed to identify a new methodology for obtaining undirected, rich information about people’s perception of a given object and the uses it can afford, without the object necessarily being viewed. Specifically, 40 participants were video-recorded in a three-block experiment. During the experiment, participants were exposed to pictures of objects, pictures of someone holding the objects, and the actual objects, and they were allowed to give unconstrained verbal responses describing the stimuli presented and their possible uses. The stimuli were lithic tools, chosen for their novelty, man-made design, design for a specific use/action, and absence of functional knowledge and movement associations. The experiment yielded a large linguistic database, which was analyzed linguistically following a response-based specification. Analysis of the data revealed a significant contribution of visual and tactile input to the naming and definition of object attributes (color/condition/shape/size/texture/weight), while no significant tactile information was obtained for the object features of material, visual pattern, and volume. Overall, this new approach highlights the importance of multisensory input in the study of object affordances.

Affiliations: 1: Cognitive Systems Research Institute (CSRI), Greece
