11

Cerebellum inspired robotic gaze control

Lenz, Alexander January 2011
The primary aims of this research were to gain insight into control architectures of the mammalian brain and to explore how such architectures can be transferred into real-world robotic systems. Specifically, the work presented in this thesis focuses on the cerebellum, a part of the brain implicated in motor learning. Based on biologically grounded assumptions of uniformity of the cerebellar structure, one specific (but representative) example of cerebellar motor control was investigated: the mammalian vestibulo-ocular reflex (VOR). During movement, animals are faced with disturbances with respect to their vision system. The VOR compensates for head motion by driving the eyes in the opposite direction to the head, thereby stabilising the image on the retina. Due to severe delays in the visual feedback signal, the VOR is required to operate as an open-loop controller, which uses proprioceptive information about head motion to instigate eye movements. As a feed-forward control system, it requires calibration to gradually learn the required motor commands. This is achieved by the cerebellum through the utilisation of the delayed visual information encoding image slip. In order to explore the suitability of a recurrent cerebellar model to achieve similar performance in a robotic context, engineering equivalents of the biological sub-systems were developed and integrated as a distributed embedded computing infrastructure. These included systems for rotation sensing, vision, actuation, stimulation and monitoring. Real-time implementations of cerebellar models were developed and then tested on two custom-designed robotic eyes: one actuated with electrical motors and the other operated by pneumatic artificial muscles. It is argued that the successful transfer of cerebellar models into robotic systems implicitly validates these models by providing an existence proof in terms of structure, robust learning under noisy real-world conditions, and the functional role of the cerebellum. In addition, the insights gained from this research may be exploitable for the control of novel actuators in the emerging field of soft robotics. Finally, the presented architectures, including hardware and software infrastructures, provide a platform with which to explore other advanced models of brain-mediated sensory-motor control interfaces.
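The feed-forward structure described in this abstract can be pictured with a minimal simulation: a fixed gain maps head velocity to an opposing eye command, and the gain is adapted from delayed retinal slip, standing in for cerebellar learning. This is only a sketch of the general idea, not the recurrent cerebellar model of the thesis; the delay, learning rate and stimulus values are illustrative assumptions.

```python
import numpy as np

# Minimal feed-forward VOR sketch: eye command = -gain * head velocity, with the
# gain adapted from delayed retinal slip (the visual error). All constants are
# illustrative assumptions.
dt, T = 0.01, 20.0                      # time step [s], simulated duration [s]
delay = int(0.1 / dt)                   # 100 ms visual feedback delay (assumed)
eta = 0.5                               # learning rate (assumed)
gain = 0.2                              # initially mis-calibrated VOR gain

t = np.arange(0.0, T, dt)
head_vel = np.sin(2 * np.pi * 0.5 * t)  # sinusoidal head rotation [rad/s]
slip = np.zeros(len(t))                 # retinal slip = head velocity + eye velocity

for k in range(len(t)):
    eye_vel = -gain * head_vel[k]       # open-loop (feed-forward) command
    slip[k] = head_vel[k] + eye_vel
    if k >= delay:
        # Correlate the *delayed* slip with the correspondingly delayed head
        # signal and nudge the gain so that future slip is driven towards zero.
        gain += eta * slip[k - delay] * head_vel[k - delay] * dt

print(f"learned gain = {gain:.3f} (ideal compensation is 1.0)")
print(f"mean |slip| over final 2 s = {np.abs(slip[-int(2 / dt):]).mean():.4f}")
```

Running the sketch shows the gain converging towards unity while the residual slip shrinks, which is the calibration behaviour the abstract describes at a much coarser level of detail.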
12

Novel approaches for the safety of human-robot interaction

Woodman, Roger January 2013
In recent years there has been a concerted effort to address many of the safety issues associated with physical human-robot interaction (pHRI). However, a number of challenges remain. For personal robots, and those intended to operate in unstructured environments, the problem of safety is compounded. We believe that the safety issue is a primary factor in the wide-scale adoption of personal robots, and until these issues are addressed, commercial enterprises will be unlikely to invest heavily in their development. In this thesis we argue that traditional system design techniques fail to capture the complexities associated with dynamic environments. This is based on a careful analysis of current design processes, which looks at how effectively they identify hazards that may arise in the typical environments in which a personal robot may be required to operate. Based on this investigation, we show how the adoption of a hazard checklist that highlights particular hazardous areas can be used to improve current hazard analysis techniques. A novel safety-driven control system architecture is presented, which attempts to address many of the weaknesses identified in the present designs found in the literature. The new architecture design centres on safety, and the concept of a 'safety policy' is introduced. These safety policies are shown to be an effective way of describing safety systems as a set of rules that dictate how the system should behave in potentially hazardous situations. A safety analysis methodology is introduced, which integrates both our hazard analysis technique and the implementation of the safety layer of our control system. This methodology builds on traditional functional hazard analysis, with the addition of processes aimed at improving the safety of personal robots. This is achieved with the use of a safety system, developed during the hazard analysis stage. This safety system, called the safety protection system, is initially used to verify that safety constraints, identified during hazard analysis, have been implemented appropriately. Subsequently it serves as a high-level safety enforcer, by governing the actions of the robot and preventing the control layer from performing unsafe operations. To demonstrate the effectiveness of the design, a series of experiments have been conducted using both simulation environments and physical hardware. These experiments demonstrate the effectiveness of the safety-driven control system for performing tasks safely, while maintaining a high level of availability.
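The notion of a safety policy as a rule set that vets the control layer's commands can be sketched roughly as follows. The command format, sensor fields and thresholds are illustrative assumptions, not the policies or safety protection system defined in the thesis.

```python
from dataclasses import dataclass

@dataclass
class Command:
    linear_speed: float   # m/s requested by the control layer
    gripper_force: float  # N requested at the end effector

@dataclass
class Sensors:
    nearest_human_distance: float  # m, from proximity sensing
    contact_detected: bool         # True if unexpected contact is sensed

def safety_policy(cmd: Command, sensors: Sensors) -> Command:
    """High-level safety enforcer: return a (possibly modified) safe command.

    Each rule mirrors the idea of a safety policy: a hazardous situation plus
    the behaviour required in it. Thresholds are illustrative assumptions.
    """
    safe = Command(cmd.linear_speed, cmd.gripper_force)

    # Rule 1: unexpected contact -> stop all motion immediately.
    if sensors.contact_detected:
        safe.linear_speed = 0.0
    # Rule 2: human closer than 0.5 m -> cap speed to a creep velocity.
    elif sensors.nearest_human_distance < 0.5:
        safe.linear_speed = min(safe.linear_speed, 0.05)

    # Rule 3: never exceed a force limit, regardless of the situation.
    safe.gripper_force = min(safe.gripper_force, 15.0)
    return safe

# Example: the control layer requests a fast move while a person stands nearby.
requested = Command(linear_speed=0.8, gripper_force=25.0)
allowed = safety_policy(requested, Sensors(nearest_human_distance=0.3,
                                           contact_detected=False))
print(allowed)   # Command(linear_speed=0.05, gripper_force=15.0)
```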
13

Multi-modal task instructions to robots by naive users

Wolf, Joerg Christian January 2008
This thesis presents a theoretical framework for the design of user-programmable robots. The objective of the work is to investigate multi-modal unconstrained natural instructions given to robots in order to design a learning robot. A corpus-centred approach is used to design an agent that can reason, learn and interact with a human in a natural, unconstrained way. The corpus-centred design approach is formalised and developed in detail. It requires the developer to record a human during interaction and to analyse the recordings to find instruction primitives, which are then implemented in a robot. The focus of this work has been on how to combine speech and gesture using rules extracted from the analysis of a corpus. A multi-modal integration algorithm is presented that can use timing and semantics to group, match and unify gesture and language. The algorithm always achieves correct pairings on the corpus and initiates questions to the user in cases of ambiguity or missing information. The domain of card games was investigated because of its variety of games, which are rich in rules and contain sequences. A further focus of the work is on the translation of rule-based instructions; most multi-modal interfaces to date have only considered sequential instructions. A combination of frame-based reasoning, a knowledge base organised as an ontology and a problem-solver engine is used to store these rules. Understanding rule instructions, which contain conditional and imaginary situations, requires an agent with complex reasoning capabilities. A test system for the agent implementation is also described, and tests that confirm the implementation by playing back the corpus are presented. Furthermore, deployment test results with the implemented agent and human subjects are presented and discussed. The tests showed that the rate of errors caused by sentences not being covered by the grammar does not decrease at an acceptable rate when new grammar is introduced; this was particularly the case for complex verbal rule instructions, which can be expressed in a large variety of ways.
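One way to picture timing-and-semantics pairing of the kind described above is a simple matcher that links each deictic speech token to the temporally closest gesture whose referent type is semantically compatible, and raises a clarification question when no pairing exists. The data structures, time window and card-game labels below are illustrative assumptions, not the corpus-derived rules of the thesis.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeechToken:
    word: str          # e.g. a deictic such as "this" or "there"
    time: float        # utterance time [s]
    expects: str       # semantic type expected: "object" or "location"

@dataclass
class Gesture:
    referent_type: str # what the pointing gesture resolves to
    time: float        # gesture stroke time [s]
    target: str        # label of what was pointed at

def pair(tokens: list[SpeechToken], gestures: list[Gesture],
         max_gap: float = 1.5) -> list[tuple[SpeechToken, Optional[Gesture]]]:
    """Greedily pair tokens and gestures by temporal proximity and type match."""
    unused = list(gestures)
    result = []
    for tok in tokens:
        candidates = [g for g in unused
                      if g.referent_type == tok.expects
                      and abs(g.time - tok.time) <= max_gap]
        if not candidates:
            result.append((tok, None))          # triggers a clarification question
            continue
        best = min(candidates, key=lambda g: abs(g.time - tok.time))
        unused.remove(best)
        result.append((tok, best))
    return result

tokens = [SpeechToken("this", 0.4, "object"), SpeechToken("there", 1.9, "location")]
gestures = [Gesture("object", 0.6, "red card"), Gesture("location", 2.1, "discard pile")]
for tok, g in pair(tokens, gestures):
    if g is None:
        print(f"Which {tok.expects} do you mean by '{tok.word}'?")  # ask the user
    else:
        print(f"'{tok.word}' -> {g.target}")
```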
14

Development of cognitive capabilities in humanoid robots

Tikhanoff, Vadim January 2009
Building intelligent systems with a human level of competence is the ultimate grand challenge for science and technology in general, and especially for the computational intelligence community. Recent theories in autonomous cognitive systems have focused on the close integration (grounding) of communication with perception, categorisation and action. Cognitive systems are essential for integrated multi-platform systems that are capable of sensing and communicating. This thesis presents a cognitive system for a humanoid robot that integrates abilities such as object detection and recognition, which are merged with natural language understanding and refined motor controls. The work includes three studies: (1) the use of the NMFT algorithm for generic manipulation of objects, successfully testing its extension to the control of robot behaviour; (2) a study of the development of a robotic simulator; and (3) robotic simulation experiments showing that a humanoid robot is able to acquire complex behavioural, cognitive and linguistic skills through individual and social learning. The robot is able to learn to handle and manipulate objects autonomously, to cooperate with human users, and to adapt its abilities to changes in internal and environmental conditions. The model and the experimental results reported in this thesis emphasise the importance of embodied cognition, i.e. the physical interaction between the humanoid robot's body and the environment.
15

Navigational assistance for disabled wheelchair users

Goodwin, Michael John January 1999
Previous low-cost systems of navigational assistance for disabled wheelchair users have provided little more than simple obstacle and collision avoidance, or have followed a pre-defined fixed route marked by a white line or a buried wire. Other research has used complex, high-cost multi-sensor mode systems closely resembling industrial, military or space exploration applications. These systems used natural features or artificial beacons to produce accurate maps of the operating environments. The progress of the vehicle is monitored and corrected using multi-sensor techniques such as vision cameras, odometry and triangulation from beacons located in the environment. Such systems have required modification of the operating environment and have resulted in fully autonomous vehicles providing little or no overall control by the user. Whilst proving technical feasibility, their cost and complexity have not resulted in practical and affordable solutions for the wheelchair user. The purpose of the present study was to bridge the gap between these two previous areas of research and to provide navigational assistance at an affordable cost. Low-cost ultrasonic sensors enabled a wheelchair to operate in an unknown (i.e. previously unmapped) environment whilst leaving the user in overall control. Hardware modifications to a commercial powered wheelchair enabled data from ultrasonic arrays and the user's joystick to be interrogated and mixed by a computer to provide appropriate signals for the wheelchair drive motors. A simulation program was created to interpret the sensor signals that would be generated under the various conditions likely to be encountered by a wheelchair and to develop the various control strategies. The simulation was able to differentiate between the various environmental conditions and to select the appropriate action using the newly created control algorithms. The sensor data interpretation modules, together with the control algorithms from the simulation, were incorporated into a practical system for controlling the wheelchair. In tests, data from the sensors were used to detect and evaluate localised changes in the environment and to determine appropriate signals for the drive wheel motors. It was found that the wheelchair controller and the geometry of the wheelchair resulted in a degradation of the expected wheelchair response. This was overcome in two ways: firstly by modifying the control algorithm and secondly by changing the wheelchair geometry.
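The "mixing" of joystick demand and ultrasonic data described above can be sketched as a shared-control rule: the user's forward demand is scaled down as a frontal obstacle approaches, and the turn command is biased away from the nearer side obstacle. The sensor layout and thresholds are illustrative assumptions, not values from the thesis.

```python
def mix_commands(joy_forward: float, joy_turn: float,
                 left_range_m: float, front_range_m: float, right_range_m: float):
    """Blend user joystick demand with ultrasonic ranges (shared-control sketch).

    joy_forward and joy_turn are in [-1, 1]; ranges are metres to the nearest
    obstacle. Returns (forward, turn) for the drive motors. Thresholds are
    illustrative assumptions.
    """
    STOP_DIST, SLOW_DIST = 0.3, 1.0

    # Scale the forward demand down as the frontal obstacle gets closer.
    if front_range_m <= STOP_DIST:
        forward = 0.0
    elif front_range_m < SLOW_DIST:
        forward = joy_forward * (front_range_m - STOP_DIST) / (SLOW_DIST - STOP_DIST)
    else:
        forward = joy_forward

    # Bias the turn command away from the closer side obstacle.
    avoidance = 0.0
    if left_range_m < SLOW_DIST:
        avoidance += (SLOW_DIST - left_range_m)   # push to the right
    if right_range_m < SLOW_DIST:
        avoidance -= (SLOW_DIST - right_range_m)  # push to the left
    turn = max(-1.0, min(1.0, joy_turn + 0.5 * avoidance))

    return forward, turn

# Example: the user pushes straight ahead while a wall closes in on the left.
print(mix_commands(joy_forward=1.0, joy_turn=0.0,
                   left_range_m=0.5, front_range_m=2.0, right_range_m=3.0))
```

The key design point, as in the thesis, is that the user remains in overall control: the sensor data only modulates the joystick demand rather than replacing it.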
16

Efficient monocular SLAM by using a structure-driven mapping

Carranza, Jose Martinez January 2012
Important progress has been achieved in recent years with regards to the monocular SLAM problem, which consists of estimating the 6-D pose of a single camera, whilst building a 3-D map representation of scene structure observed by the camera. Nowadays, there exist various monocular SLAM systems capable of outputting camera and map estimates at camera frame rates over long trajectories and for indoor and outdoor scenarios. These systems are attractive due to their low cost - a consequence of using a conventional camera - and have been widely utilised in different applications such as in augmented and virtual reality. However, the main utility of the built map has been reduced to work as an effective reference system for robust and fast camera localisation. In order to produce more useful maps, different works have proposed the use of higher-level structures such as lines, planes and even meshes. Planar structure is one of the most popular structures to be incorporated into the map, given that they are abundant in man-made scenes, and because a plane by itself provides implicit semantic cues about the scene structure. Nevertheless, very often planar structure detection is carried out by ad-hoc auxiliary methods delivering a delayed detection and therefore a delayed mapping, which becomes a problem when rapid planar mapping is demanded. This thesis addresses the problem of planar structure detection and mapping by proposing a novel mapping mechanism called structure-driven mapping. This new approach aims at enabling a monocular SLAM system to perform planar or point mapping according to scene structure observed by the camera. In order to achieve this, we propose to incorporate the plane detection task into the SLAM process. For this purpose, we have developed a novel framework that unifies planar and point mapping under a common parameterisation. This enables map components to evolve according to the incremental visual observations of the scene structure, thus providing undelayed planar mapping. Moreover, the plane detection task stops as soon as the camera explores a non-planar structure scenario, which avoids wasting unnecessary processing time, starting again as soon as planar structure gets into view. We present a thorough evaluation of this novel approach through simulation experiments and results obtained with real data. We also present a visual odometry application which takes advantage of the efficient way in which the scene structure is mapped by using the novel mapping mechanism presented in this work. Therefore, the results suggest the feasibility of performing simultaneous planar structure detection, localisation and mapping within the same coherent estimation framework.
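One geometric ingredient of planar mapping is deciding whether a set of estimated map points is coplanar. The sketch below fits a plane to candidate points by a total-least-squares SVD fit and accepts it if the residuals are small; the point count and residual threshold are assumed values, and this illustrates only the coplanarity test, not the unified point/plane parameterisation or undelayed mapping of the thesis.

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Least-squares plane fit to an (N, 3) array of 3-D map points.

    Returns (centroid, unit normal, RMS distance of the points to the plane).
    """
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value of the
    # centred points is the plane normal (total least squares).
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    rms = np.sqrt(np.mean(((points - centroid) @ normal) ** 2))
    return centroid, normal, rms

def looks_planar(points: np.ndarray, rms_threshold: float = 0.02) -> bool:
    """Accept the candidate as planar structure if the fit residual is small.

    The 2 cm threshold and minimum of 4 points are illustrative assumptions.
    """
    return len(points) >= 4 and fit_plane(points)[2] < rms_threshold

# Example: noisy points sampled from the plane z = 0.1 x + 0.2 y + 1.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(50, 2))
z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + 1.0 + rng.normal(0, 0.005, 50)
pts = np.column_stack([xy, z])
print(looks_planar(pts))                        # True: map these points as a plane
print(looks_planar(rng.uniform(size=(50, 3))))  # False: keep them as points
```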
17

Linguistic decision tree cloning of optimised trajectories for real-time obstacle avoidance

Turnbull, Oliver January 2008
Linguistic Decision Trees (LDTs) are used to clone the behaviour of a Model Predictive Control (MPC) algorithm for obstacle avoidance. The resulting controller benefits from the optimised trajectories of the MPC and the rapid computation of the decision tree to provide decisions that are suitable for use in a real-time dynamic environment. The LDT represents a set of linguistic decision rules that ensure a high degree of controller transparency. A method to predict discontinuous functions, such as UAV heading deviation required to avoid an obstacle, is proposed and shown to significantly improve the controller's performance.
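The cloning idea above can be sketched with an ordinary regression tree standing in for the linguistic decision tree: sample states, query the expensive optimising controller offline, fit the tree, and evaluate the cheap tree online. In the sketch, the "MPC" is a placeholder avoidance rule with a discontinuity (no deviation unless an obstacle is close and near the path), and scikit-learn's DecisionTreeRegressor is an assumed stand-in rather than the LDT framework itself.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def mpc_heading_deviation(bearing_to_obstacle: float, range_to_obstacle: float) -> float:
    """Placeholder for the expensive optimiser: deviate away from the obstacle,
    more strongly when it is close and near the flight path (assumed rule)."""
    if range_to_obstacle > 5.0 or abs(bearing_to_obstacle) > 1.0:
        return 0.0                               # no avoidance needed
    strength = (5.0 - range_to_obstacle) / 5.0
    return -np.sign(bearing_to_obstacle) * strength * (1.0 - abs(bearing_to_obstacle))

# Offline: sample the state space and record the optimiser's decisions.
rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(-np.pi, np.pi, 5000),   # bearing [rad]
                     rng.uniform(0.0, 10.0, 5000)])      # range [km]
y = np.array([mpc_heading_deviation(b, r) for b, r in X])

# Fit the surrogate tree (stand-in for a linguistic decision tree).
clone = DecisionTreeRegressor(max_depth=8).fit(X, y)

# Online: the tree answers almost instantly instead of re-running the optimiser.
print("optimiser :", mpc_heading_deviation(0.2, 2.0))
print("tree clone:", clone.predict(np.array([[0.2, 2.0]]))[0])
```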
18

An active touch approach to object detection in urban search and rescue

Pele Odiase, Oziegbe-orhuwa January 2008
One of the fundamental challenges in robotics is object detection in unstructured environments, which include urban search and rescue (USAR). In these environments, objects' properties are not known a priori, vision is partially or totally impaired, and sensing is susceptible to errors. Furthermore, non-contact sensors such as sonar sensors, video cameras and infrared sensors used for object detection have limitations that make them inadequate for the successful completion of object detection tasks in USAR. As a solution, an active touch approach to object detection is proposed.
19

Design of a novel robotic gripper for dexterous assembly of compliant elements

Widhiada, I. Wayan January 2012
This thesis describes an investigation into devising a high-speed single gripper which can perform all the gripping tasks required to assemble a gas regulator. This is novel in that the creation of a single gripper that can be precisely controlled to grasp a range of engineering components of widely different handling characteristics has not, to the author's knowledge, been previously studied. A novel feature of the investigated high-speed gripper is the inclusion of a prismatic sliding element at the end of each finger to facilitate the handling of large and small compliant components typically found in assembled products such as a domestic gas regulator. In addition, a unique vacuum system on the gripper's fingertips was created to pick up quite small engineering components such as a plastic support component and 'O' rings. The fingers are to be controlled in a manner which mimics the kinematics and dynamics of the thumb, middle finger and index finger of a human hand. This mimicry is required to design the correct motions and tactile forces necessary to grasp delicate and non-delicate engineering components. The design of the dexterous gripper finger is validated through simulation results, as proof that this finger design can achieve the best performance. The grasping control method was extended to control of the gripper trajectory, the prismatic sliding elements and the vacuum system, so that the methods can be used in future research. Multi-closed-loop PID control is applied to control the kinematic and dynamic motion of the three-fingered gripper system.
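The multi-closed-loop PID arrangement mentioned above amounts to running one independent PID position loop per finger. The sketch below shows that structure for three fingers driving a crude first-order joint model; the gains, plant and target angles are illustrative assumptions, not the tuned values from the thesis.

```python
class PID:
    """Textbook discrete PID controller for one joint position loop."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 0.001
# One loop per finger: thumb, index, middle (gains are assumed, not tuned values).
fingers = {name: PID(kp=10.0, ki=25.0, kd=0.05, dt=dt)
           for name in ("thumb", "index", "middle")}
targets = {"thumb": 0.6, "index": 0.4, "middle": 0.4}   # joint angles [rad]
angles = {name: 0.0 for name in fingers}                # current joint angles

for _ in range(3000):                                   # 3 s of simulated closing
    for name, pid in fingers.items():
        effort = pid.update(targets[name], angles[name])
        # Crude first-order joint model: the angle integrates the commanded effort.
        angles[name] += effort * dt

print({name: round(a, 3) for name, a in angles.items()})  # each near its target
```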
20

Optimal path planning for redundant manipulators

McAvoy, Brendan January 2005
Although a lot of research effort has been expended in the last decade on the problem of motion planning for redundant robotic manipulators, none of the approaches reported previously offers a readily implementable solution. A redundant manipulator has degrees of freedom in excess of its task space dimension, and therefore has a non-unique solution to the inverse position kinematics problem. For a specified end-effector move, efficient use of the excess degrees of freedom implies the choice of a motion plan which also achieves desired secondary objectives, such as optimization of a specified objective function.
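The use of the excess degrees of freedom for a secondary objective is commonly written as the pseudoinverse/gradient-projection resolution q_dot = J_pinv * x_dot + (I - J_pinv * J) * grad(H), where the second term moves the joints only in the null space of the task Jacobian. The sketch below applies this standard formulation to a planar 3-link arm with a 2-D task and an assumed joint-centering objective; it illustrates the general technique, not the optimal path-planning method developed in the thesis.

```python
import numpy as np

L = np.array([0.4, 0.3, 0.2])           # link lengths of a planar 3R arm [m]

def jacobian(q):
    """2x3 position Jacobian of the planar arm's end effector."""
    c = np.cumsum(q)                    # absolute link angles
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(c[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(c[i:]))
    return J

def redundancy_resolution(q, x_dot, k0=1.0):
    """q_dot = J_pinv x_dot + (I - J_pinv J) k0 grad(H), with H a joint-centering
    secondary objective; the gain and objective are assumptions."""
    J = jacobian(q)
    J_pinv = np.linalg.pinv(J)
    grad_H = -q / len(q)                # push joints towards their mid-range (assumed 0)
    null_proj = np.eye(3) - J_pinv @ J  # projector onto the Jacobian's null space
    return J_pinv @ x_dot + null_proj @ (k0 * grad_H)

q = np.array([0.5, 0.8, -0.4])
x_dot = np.array([0.05, 0.0])           # desired end-effector velocity [m/s]
q_dot = redundancy_resolution(q, x_dot)
print("joint rates      :", np.round(q_dot, 4))
print("task check J q_dot:", np.round(jacobian(q) @ q_dot, 4))  # equals x_dot
```

The final print confirms that the null-space term does not disturb the end-effector task, which is precisely what frees the extra degrees of freedom for the secondary objective.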
