About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Compatibility effects evoked by pictures of graspable objects

van Noordenne, Maria H.J. 31 August 2017
It has been claimed that action representations can be evoked by the image of a handled object (Tucker & Ellis, 1998). Contrary to this view, it may instead be the location of the object’s handle in visual space that generates a spatial code that in turn interacts with selection of response location. For example, an object with its handle extending into right visual space may bias attention to the right, resulting in a faster right- versus left-sided response (Cho & Proctor, 2010). In the current experiments I present evidence that under certain task conditions, images of objects evoke their corresponding action representations. When subjects engaged in laterality judgments to images of hands presented after or in conjunction with an image of a handled object, motor representations associated with that object were evoked. Although the location of the handle was irrelevant to the task, subjects were faster at responding when the depicted handle location and hand of response were aligned (i.e., right-handed key press to a right-handled frying pan) rather than misaligned. The effect of alignment remained constant across the response time distribution. When subjects made a crossed-hand response, the alignment effect was driven by a correspondence between the location of the object’s handle and the response hand, not the response location. These results contrast with what was found when observers responded to directional arrow cues in place of pictures of hands. With arrow cues, the observed alignment effect appeared to be driven by spatial correspondence between the location of the object’s body and the location of the response button. Moreover, in this case the alignment effect decreased across the response time distribution, in keeping with other cases of spatial compatibility effects (Proctor, Miles, & Baroni, 2011). I conclude that attention to an image of a hand can induce observers to activate motor affordances associated with pictured objects.
2

Form Follows Function: The Time Course of Action Representations Evoked by Handled Objects

Kumar, Ragav 21 August 2015
To investigate the role of action representations in the identification of upright and rotated objects, we examined the time course of their evocation. Across five experiments, subjects made vertically or horizontally oriented reach and grasp actions primed by images of handled objects that were depicted in upright or rotated orientations, at various Stimulus Onset Asynchronies: -250 ms (action cue preceded the prime), 0 ms, and +250 ms. Congruency effects between action and object orientation were driven by the object's canonical (upright) orientation at the 0 ms SOA, but by its depicted orientation at the +250 ms SOA. Alignment effects between response hand and the object's handle appeared only at the +250 ms SOA, and were driven by the depicted orientation. Surprisingly, an attempt to replicate this finding with improved stimuli (Experiment 3) did not show significant congruency effects at the 0 ms SOA; a further examination of the 0 ms SOA in Experiments 4 and 5 also failed to reach significance. However, a meta-analysis of the latter three experiments showed evidence for the congruency effect, suggesting that the experiments might just have been underpowered. We conclude that subjects initially evoke a conceptually-driven motor representation of the object, and that only after some time can the depicted form become prominent enough to influence the elicited action representation.
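The abstract's pooling of three individually non-significant experiments can be sketched numerically. A minimal illustration, assuming one-sided p-values and Stouffer's unweighted Z method; the thesis does not specify which meta-analytic procedure was used, and the p-values below are invented placeholders, not the reported data:

```python
# Sketch only: pooling evidence from several underpowered studies.
# Stouffer's method converts each one-sided p-value to a z-score,
# averages with equal weights, and converts back to a pooled p-value.
from math import sqrt
from statistics import NormalDist

def stouffer(p_values):
    """Combine one-sided p-values into a single pooled p-value."""
    nd = NormalDist()
    zs = [nd.inv_cdf(1.0 - p) for p in p_values]  # p -> z per study
    z_pooled = sum(zs) / sqrt(len(zs))            # equal-weight combination
    return 1.0 - nd.cdf(z_pooled)

# Three hypothetical results, none significant on its own...
p_values = [0.08, 0.11, 0.09]
pooled = stouffer(p_values)
print(f"pooled one-sided p = {pooled:.4f}")  # jointly below .05
```

The point the sketch makes is the one the abstract draws: consistent trends across underpowered experiments can jointly reach significance even when no single experiment does.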
3

Learning action representations using kernel perceptrons

Mourao, Kira Margaret Thom January 2012
Action representation is fundamental to many aspects of cognition, including language. Theories of situated cognition suggest that the form of such representation is distinctively determined by grounding in the real world. This thesis tackles the question of how to ground action representations, and proposes an approach for learning action models in noisy, partially observable domains, using deictic representations and kernel perceptrons. Agents operating in real-world settings often require domain models to support planning and decision-making. To operate effectively in the world, an agent must be able to accurately predict when its actions will be successful, and what the effects of its actions will be. Only when a reliable action model is acquired can the agent usefully combine sequences of actions into plans, in order to achieve wider goals. However, learning the dynamics of a domain can be a challenging problem: agents’ observations may be noisy, or incomplete; actions may be non-deterministic; the world itself may be noisy; or the world may contain many objects and relations which are irrelevant. In this thesis, I first show that voted perceptrons, equipped with the DNF family of kernels, easily learn action models in STRIPS domains, even when subject to noise and partial observability. Key to the learning process is, firstly, the implicit exploration of the space of conjunctions of possible fluents (the space of potential action preconditions) enabled by the DNF kernels; secondly, the identification of objects playing similar roles in different states, enabled by a simple deictic representation; and lastly, the use of an attribute-value representation for world states. Next, I extend the model to more complex domains by generalising both the kernel and the deictic representation to a relational setting, where world states are represented as graphs. Finally, I propose a method to extract STRIPS-like rules from the learnt models. 
I give preliminary results for STRIPS domains and discuss how the method can be extended to more complex domains. The model is thus appropriate both for learning from data generated by robot exploration and for use by automated planning systems. This combination is essential for the development of autonomous agents that can learn action models from their environment and use them to generate successful plans.
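The core learning machinery named above (voted perceptrons equipped with a DNF-family kernel) can be sketched compactly. A minimal illustration, assuming boolean state vectors and the monotone-DNF kernel K(x, z) = 2^<x, z> - 1; the class name, the toy fluents, and the averaged-style vote counting are my simplifications, not the thesis implementation:

```python
# Sketch: a kernel perceptron with a monotone-DNF kernel over boolean
# state vectors. The kernel implicitly scores every conjunction of
# fluents true in both states, which is what lets the learner explore
# the space of candidate action preconditions without enumerating it.

def dnf_kernel(x, z):
    """Monotone-DNF kernel: 2^(number of shared true fluents) - 1."""
    shared = sum(a and b for a, b in zip(x, z))
    return 2 ** shared - 1

class VotedKernelPerceptron:
    """Averaged-style simplification of the voted perceptron."""

    def __init__(self, kernel=dnf_kernel, epochs=5):
        self.kernel, self.epochs = kernel, epochs
        self.support = []   # (state, label) pairs stored on mistakes
        self.weights = []   # survival count ("votes") per stored pair

    def _score(self, x):
        return sum(c * ly * self.kernel(lx, x)
                   for (lx, ly), c in zip(self.support, self.weights))

    def fit(self, states, labels):
        for _ in range(self.epochs):
            for x, y in zip(states, labels):
                if y * self._score(x) <= 0:   # mistake: store this example
                    self.support.append((x, y))
                    self.weights.append(1)
                elif self.weights:            # correct: current vector survives
                    self.weights[-1] += 1

    def predict(self, x):
        return 1 if self._score(x) > 0 else -1

# Toy domain: an action "succeeds" (+1) iff fluents 0 and 1 both hold.
X = [(1, 1, 0), (1, 1, 1), (0, 1, 1), (1, 0, 0), (0, 0, 1)]
y = [1, 1, -1, -1, -1]
clf = VotedKernelPerceptron()
clf.fit(X, y)
preds = [clf.predict(x) for x in X]
```

The deictic-representation and relational-graph extensions described in the abstract are not modelled here; the sketch only shows the attribute-value base case.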
4

Getting a Handle on Meaning: Planned Hand Actions' Influence on the Identification of Handled Objects

Moise, Noah 03 October 2022
We confirm that under certain conditions, constituents of motor actions afforded by handled objects play a role in their identification. Subjects held in working memory action plans specifying both the laterality of the hand to be used (left or right) and a wrist orientation (vertical or horizontal). Speeded object identification was impaired when a pictured object matched the action on only one of these two categorical dimensions (e.g., a frying pan with its handle facing left, paired with an action plan involving the right hand and a horizontal wrist orientation), relative to when the object matched the action on both dimensions or on neither. This phenomenon occurred only for a semantic task (i.e., naming) and was significantly weakened when the handled object was named after a non-handled object. These results imply that, when the features of planned actions are maintained in working memory, identifying the object creates conflict between components of the action plan and features of the grasp action afforded by the depicted object. When bound to a matching feature, the discrepant feature cannot be easily disregarded, and its conflict with the features of the target object delays identification. Naming a non-handled object first weakens the pragmatic processing generated by attending to the features of the action plans, resulting in less conflict when only one feature matches between the action plan and the action afforded by the handled object.
5

Body representations in action : development and plasticity in the sensory guidance of prehension

Martel, Marie 06 December 2016
To prepare and perform movements efficiently, accurate action representations are necessary; computational accounts formalize these as internal models of motor control. Action representations do not rely exclusively on properties of objects and the environment: information about the body, and in particular about the effector, such as its posture and dimensions, is also crucial and must be updated frequently. Surprisingly, body representations do not play a prominent role in current models of motor control. Updating of both action and body representations is presumably established during childhood, yet little is known about their developmental course. First, I investigated the maturation of action representations in children from 5 to 10 years of age, both in typical development and in Developmental Coordination Disorder (DCD). Through kinematic analyses, I sought to understand how children develop the ability to anticipate and adapt their movements. Second, using a tool that functionally extends arm length, I examined the sensory inputs (vision, proprioception) required for the plasticity of body representations in adults. Third, I probed these plasticity mechanisms during growth, when body dimensions change progressively, by studying tool-induced plasticity of the upper-limb representation in typically developing children and adolescents. Finally, I discuss the relationship between body representations and motor control, two closely related notions that are indispensable to motor cognition yet have often been studied separately.
