The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

The effects of allocentric cue presence on eye-hand coordination: disappearing targets in motion

Langridge, Ryan. 12 September 2016.
Participants executed right-handed reach-to-grasp movements toward horizontally translating targets. Visual feedback of the target during reaching, as well as the presence of additional cues placed close (Experiment 1) or far (Experiment 2) above and below the target’s path, was manipulated. The presence of additional cues appeared to impair participants’ ability to extrapolate the motion of the disappeared target and caused grasps for occluded targets to be less accurate. Final gaze and grasp positions were more accurate when reaching for leftward-moving targets, suggesting that individuals use different grasp strategies when reaching for targets travelling away from the reaching hand. Comparison of average fixations at reach onset and at the time of the grasp suggested that participants accurately extrapolated the occluded target’s motion prior to reach onset, but not after, resulting in inaccurate grasps. New information is provided about the eye-hand strategies used when reaching for moving targets under unpredictable visual conditions.
2

Perceptuomotor incoordination during manually-assisted search

Solman, Grayden J. F. January 2012.
The thesis introduces a novel search paradigm and explores a previously unreported behavioural error detectable in this paradigm. In particular, the ‘Unpacking Task’ is introduced: a search task in which participants use a computer mouse to sort through random heaps of items in order to locate a unique target. The task differs from traditional search paradigms by including an active motor component in addition to purely perceptual inspection. While completing this task, participants are often found to select and move the unique target item without recognizing it, at times continuing to make many additional moves before correcting the error. This ‘unpacking error’ is explored with perceptual, memory-load, and instructional manipulations, evaluating eye movements and motor characteristics in addition to traditional response-time and error-rate metrics. It is concluded that the unpacking error arises because perceptual and motor systems fail to adequately coordinate during completion of the task. In particular, the motor system is found to ‘process’ items (i.e., to select and discard them) more quickly than the perceptual system is able to reliably identify those same items. On those occasions where the motor system selects and rejects the target item before the perceptual system has had time to resolve its identity, the unpacking error results. These findings have important implications for naturalistic search, where motor interaction is common, and provide further insights into the conditions under which perceptual and motor systems will interact in a coordinated or an uncoordinated fashion.
3

The things you do: implicit person models guide online action observation

Schenke, Kimberley Caroline. January 2017.
Social perception is dynamic and ambiguous. Whilst previous research favoured bottom-up views in which observed actions are matched to higher-level (or motor) representations, recent accounts suggest top-down processes in which prior knowledge guides perception of others’ actions in a predictive manner. This thesis investigated how person-specific models of others’ typical behaviour in different situations are reactivated when those people are re-encountered and are used to predict their actions, using strictly controlled computer-based action identification tasks, event-related potentials (ERPs), and recordings of participants’ actions via motion tracking (using the Microsoft Kinect sensor). The findings provided evidence that knowledge about a seen actor’s typical behaviour is used during action observation. It was found, first, that actions are identified faster when performed by an actor who typically performed those actions than when performed by another actor who performed them only rarely (Chapters Two and Three). These effects were specific to meaningful actions with objects, not withdrawals from them, and were accompanied by action-related ERP responses (the oERN, or observer error-related negativity). Moreover, they occurred despite the current actor’s identity not being relevant to the task, and were largely independent of participants’ ability to report the individual’s behaviour. Second, the findings suggested that these predictive person models are embodied, such that they influenced the observer’s own motor system even when the relevant actors were not seen acting (Chapter Four). Finally, evidence for these person models was found when naturalistic responding was required and participants had to use their feet to ‘block’ an incoming ball (measured by the Microsoft Kinect sensor): they made earlier and more pronounced movements when the observed actor behaved according to their usual action patterns (Chapter Five). The findings are discussed with respect to recent predictive coding theories of social perception, and a new model is proposed that integrates the findings.
4

Neural bases of goal-directed behaviours: a study of the correlates of prefrontal and hippocampal single-unit activity in a navigation task

Hok, Vincent; Poucet, Bruno. January 2008.
Reproduction of: doctoral thesis in Neurosciences, Toulouse 3, 2007. Title taken from the title screen. Bibliography pp. 169-188.
5

The influence of response discriminability and stimulus centring on object-based alignment effects

MacRae, Connor. 30 April 2018.
The present study determined how object-based alignment effects are influenced by the arrangement of the stimuli and response options. It is well established that the magnitude of these effects differs depending on the mode of responding. This finding has often been used to support claims that viewing photographic images of graspable objects can automatically trigger motor representations, regardless of the intentions of the observer. Our findings instead suggest that the distinction between response modes is primarily a difference in response discriminability. More importantly, it was found that this influence of response discriminability works in completely opposite ways depending on the technique used to centre the frying-pan stimuli. Pixel-centred stimuli produced a handle-based alignment effect that was enhanced under conditions of high response discriminability. Object-centred stimuli produced a body-based alignment effect that was diminished under conditions of high response discriminability. These findings provide overwhelming evidence that qualitatively different principles govern the alignment effects found with pixel-centred and object-centred stimuli. Crucially, these findings also provide strong evidence against the notion that motor representations are triggered by images of graspable objects in the absence of an intention to act.
6

Coordinative Dynamics: Joint Action Synergies During a Cooperative Puzzle Task

Hassebrock, Justin A. 24 April 2015.
No description available.
7

Peripersonal space in the humanoid robot iCub

Ramírez Contla, Salomón. January 2014.
Developing behaviours for interacting with objects close to the body is a primary goal for any organism that must survive in the world. Being able to develop such behaviours will be an essential feature of autonomous humanoid robots and will improve their integration into human environments. Adaptable spatial abilities will make robots safer and improve their social skills and their human-robot and robot-robot collaboration abilities. This work investigated how a humanoid robot can explore and create action-based representations of its peripersonal space, the region immediately surrounding the body where reaching is possible without displacing the whole body. It presents three empirical studies based on peripersonal-space findings from psychology, neuroscience and robotics. The experiments used a visual perception system based on active vision and biologically inspired neural networks. The first study investigated the contribution of binocular vision to a reaching task. Results indicated that the vergence signal is a useful embodied depth-estimation cue within the peripersonal space of humanoid robots. The second study explored the influence of morphology and postural experience on confidence levels in reaching assessment. Results showed a decrease in confidence when assessing targets located farther from the body, possibly reflecting larger errors in vergence-based depth estimation at longer distances. Additionally, it was found that a proprioceptive arm-length signal extends the robot’s peripersonal space. The last experiment modelled development of the reaching skill by implementing motor synergies that progressively unlock degrees of freedom in the arm. The model was advantageous when compared to one that included no developmental stages. This work contributes by extending research on biologically inspired methods for building robots, presenting new ways to further investigate the robotic properties involved in dynamical adaptation to body and sensing characteristics, vision-based action, morphology, and confidence levels in reaching assessment.
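The abstract's claim that vergence provides an embodied depth cue can be made concrete with a small geometric sketch: with both cameras verged symmetrically on a point, distance follows from the camera baseline and the vergence angle. The function name, baseline value, and pinhole-geometry assumptions below are illustrative only and are not taken from the thesis.

```python
import math

def depth_from_vergence(vergence_angle_rad: float, baseline_m: float) -> float:
    """Estimate the distance to a binocularly fixated point from the total
    vergence angle of two cameras separated by baseline_m, assuming symmetric
    fixation on the midline and simple pinhole geometry (an illustrative model,
    not the thesis's actual estimator)."""
    half_angle = vergence_angle_rad / 2.0
    if half_angle <= 0.0:
        return float("inf")  # parallel gaze: the target is effectively at infinity
    return (baseline_m / 2.0) / math.tan(half_angle)

# Example with an assumed iCub-like eye separation of about 0.07 m:
# 10 degrees of vergence places the fixation point roughly 0.4 m away,
# i.e. well within typical peripersonal space.
print(depth_from_vergence(math.radians(10.0), 0.07))
```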
8

Human movement sonification for motor skill learning

Dyer, John. January 2017.
Transforming human movement into live sound can be used as a method to enhance motor skill learning via the provision of augmented perceptual feedback. A small but growing number of studies hint at the substantial efficacy of this approach, termed 'movement sonification'. However, there has been sparse discussion in Psychology about how movement should be mapped onto sound to best facilitate learning. The current thesis draws on contemporary research conducted in Psychology and on theoretical debates in other disciplines more directly concerned with sonic interaction, including Auditory Display and Electronic Music-Making, to propose an embodied account of sonification as feedback. The empirical portion of the thesis both informs and tests some of the assumptions of this approach using a custom bimanual coordination paradigm. Four motor skill learning studies were conducted with the use of optical motion capture. Findings support the general assumption that effective mappings aid learning by making task-intrinsic perceptual information more readily available and meaningful, and that the relationship between task demands and sonic information structure (that is, between action and perception) should be complementary. Both the theoretical and empirical treatments of sonification for skill learning in this thesis suggest the value of an approach that addresses learner experience of sonified interaction while grounding discussion in the links between perception and action.
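For readers unfamiliar with parameter-mapping sonification, the sketch below renders a normalized one-dimensional movement trace as a pitch glide. The position-to-pitch mapping, function names, and parameter values are illustrative assumptions and are not taken from the thesis, which argues that the choice of mapping should complement the task's intrinsic information.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

def sonify(positions, duration_s=2.0, f_low=220.0, f_high=880.0, path="movement.wav"):
    """Render a movement trace (values in [0, 1]) as a continuous pitch glide."""
    n_samples = int(SAMPLE_RATE * duration_s)
    phase = 0.0
    frames = bytearray()
    for i in range(n_samples):
        # Sample the trace with a simple zero-order hold (nearest earlier value).
        idx = min(int(i / n_samples * len(positions)), len(positions) - 1)
        freq = f_low + positions[idx] * (f_high - f_low)  # linear position-to-pitch map
        phase += 2.0 * math.pi * freq / SAMPLE_RATE       # integrate phase for a smooth glide
        frames += struct.pack("<h", int(32767 * 0.3 * math.sin(phase)))
    with wave.open(path, "wb") as f:
        f.setnchannels(1)        # mono
        f.setsampwidth(2)        # 16-bit samples
        f.setframerate(SAMPLE_RATE)
        f.writeframes(bytes(frames))

# Example: sonify a slow up-and-down hand movement.
trace = [0.5 * (1.0 - math.cos(2.0 * math.pi * t / 99)) for t in range(100)]
sonify(trace)
```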
9

Embodied Cognition as Internal Simulation of Perception and Action: Towards a cognitive robotics

Svensson, Henrik. January 2002.
This dissertation discusses the view that embodied cognition is essentially internal simulation (or emulation) of perception and action, and that the same (neural) mechanisms underlie both real and simulated perception and action. More specifically, it surveys evidence supporting the simulation view from different areas of cognitive science (neuroscience, perception, psychology, social cognition, theory of mind). This is integrated with related research in situated robotics, and directions for future work on internal simulation of perception and action in robots are outlined. In sum, the ideas discussed here provide an alternative view of representation, opposed to the traditional correspondence notions of representation that presuppose objectivism and functionalism. Moreover, this view is suggested as a viable route for situated robotics, which due to its rejection of traditional notions of representation has so far mostly dealt with more or less reactive behavior, to scale up to a cognitive robotics, and thus to further contribute to cognitive science and the understanding of higher-level cognition.
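As a loose illustration of the internal-simulation idea, the sketch below runs a forward model "offline": instead of executing motor commands, the agent feeds the model's own sensory predictions back in as percepts, covertly rehearsing a movement toward a goal. The one-dimensional plant, policy, and function names are invented for illustration and are not drawn from the dissertation.

```python
def forward_model(state: float, action: float) -> float:
    """Predict the next sensed position from the current one and a motor command
    (an assumed linear plant standing in for learned sensorimotor mappings)."""
    return state + 0.1 * action

def policy(state: float, goal: float) -> float:
    """Choose a motor command that nudges the sensed position toward the goal."""
    return max(-1.0, min(1.0, goal - state))

def simulate(start: float, goal: float, steps: int = 20) -> list:
    """Covert rehearsal: no command is executed; each prediction becomes the next percept."""
    state, trajectory = start, [start]
    for _ in range(steps):
        action = policy(state, goal)
        state = forward_model(state, action)  # simulated perception of the action's outcome
        trajectory.append(state)
    return trajectory

# An imagined (never enacted) approach from position 0.0 toward a goal at 1.0.
print(simulate(start=0.0, goal=1.0))
```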
