21. Building safety maps using vision for safe local mobile robot navigation
Murarka, Aniket (18 March 2011)
In this work we focus on building local maps that enable wheeled mobile robots to navigate safely and autonomously in urban environments. Urban environments present a variety of hazards that mobile robots have to detect and represent in their maps to navigate safely. Examples of hazards include obstacles such as furniture, drop-offs such as at downward stairs, and inclined surfaces such as wheelchair ramps. We address two shortcomings perceived in the literature on mapping. The first is the extensive use of expensive laser-based sensors for mapping, and the second is the focus on detecting only obstacles, when clearly other hazards such as drop-offs need to be detected to ensure safety. Therefore, in this work we develop algorithms for building maps using only relatively inexpensive stereo cameras that allow safe local navigation by detecting and modeling hazards such as overhangs, drop-offs, and ramps in addition to static obstacles. The hazards are represented using 2D annotated grid maps called local safety maps. Each cell in the map is annotated with one of several labels: Level, Inclined, Non-ground, or Unknown. Level cells are safe for travel, whereas Inclined cells require caution. Non-ground cells are unsafe for travel and represent obstacles, overhangs, or regions lower than safe ground. Level and Inclined cells can be further annotated as being Drop-off Edges. The process of building safety maps consists of three main steps: (i) computing a stereo depth map; (ii) building a 3D model using the stereo depths; and (iii) analyzing the 3D model for safety to construct the safety map. We make significant contributions to each of the three steps: we develop global stereo methods for computing disparity maps that use edge and color information; we introduce a probabilistic data association method for building 3D models using stereo range points; and we devise a novel method for segmenting and fitting planes to 3D models, allowing for a precise safety analysis.
In addition, we develop a stand-alone method for detecting drop-offs in front of the robot that uses motion and occlusion cues and relies only on monocular images. We introduce a framework for evaluating (and comparing) our algorithms on real-world data sets collected by driving a robot through various environments. Accuracy is measured by comparing the constructed safety maps against ground-truth safety maps and computing error rates. The ground-truth maps are obtained by manually annotating maps built using laser data. As part of the framework we also estimate the latencies introduced by our algorithms and the accuracy of the plane-fitting process. We believe this framework can be used to compare the performance of a variety of vision-based mapping systems, and for this purpose we make our datasets, ground-truth maps, and evaluation code publicly available. We also implement a real-time version of one of the safety map algorithms on a wheelchair robot and demonstrate it working in various environments. The constructed safety maps allow safe local motion planning and also support the extraction of local topological structures that can be used to build global maps.
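The annotation scheme of the local safety map described above can be sketched as a small grid data structure. This is a hypothetical illustration, not the author's implementation; the class and method names are invented for clarity:

```python
from enum import Enum

class CellLabel(Enum):
    LEVEL = "level"            # safe for travel
    INCLINED = "inclined"      # traversable with caution (e.g. ramps)
    NON_GROUND = "non-ground"  # obstacles, overhangs, regions below safe ground
    UNKNOWN = "unknown"        # not yet observed

class SafetyMap:
    """2D annotated grid: each cell holds a label plus a drop-off-edge flag."""
    def __init__(self, width, height):
        self.labels = [[CellLabel.UNKNOWN] * width for _ in range(height)]
        self.drop_off_edge = [[False] * width for _ in range(height)]

    def set_cell(self, x, y, label, drop_off_edge=False):
        # Only Level and Inclined cells may carry the Drop-off Edge annotation.
        if drop_off_edge and label not in (CellLabel.LEVEL, CellLabel.INCLINED):
            raise ValueError("drop-off edges annotate Level/Inclined cells only")
        self.labels[y][x] = label
        self.drop_off_edge[y][x] = drop_off_edge

    def is_safe(self, x, y):
        # Safe only on known level ground that is not a drop-off edge.
        return self.labels[y][x] is CellLabel.LEVEL and not self.drop_off_edge[y][x]

m = SafetyMap(4, 4)
m.set_cell(1, 1, CellLabel.LEVEL)
m.set_cell(2, 1, CellLabel.LEVEL, drop_off_edge=True)
m.set_cell(3, 1, CellLabel.NON_GROUND)
```

A local motion planner would then query `is_safe` over candidate paths, treating Inclined cells and drop-off edges as requiring caution or avoidance.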
22. Fast upper body pose estimation for human-robot interaction
Burke, Michael Glen (January 2015)
This work describes an upper body pose tracker that finds a 3D pose estimate using video sequences obtained from a monocular camera, with applications in human-robot interaction in mind. A novel mixture of Ornstein-Uhlenbeck processes model, trained in a reduced-dimensional subspace and designed for analytical tractability, is introduced. This model acts as a collection of mean-reverting random walks that pull towards more commonly observed poses. Pose tracking using this model can be Rao-Blackwellised, allowing for computational efficiency while still incorporating bio-mechanical properties of the upper body. The model is used within a recursive Bayesian framework to provide reliable estimates of upper body pose when only a subset of body joints can be detected. Model training data can be extended through a retargeting process, and better pose coverage obtained through the use of Poisson disk sampling in the model training stage. Results on a number of test datasets show that the proposed approach provides pose estimation accuracy comparable with the state of the art in real time (30 fps) and can be extended to the multiple-user case. As a motivating example, this work also introduces a pantomimic gesture recognition interface. Traditional approaches to gesture recognition for robot control make use of predefined codebooks of gestures, which are mapped directly to the robot behaviours they are intended to elicit. These gesture codewords are typically recognised using algorithms trained on multiple recordings of people performing the predefined gestures. Obtaining these recordings can be expensive and time-consuming, and the codebook of gestures may not be particularly intuitive.
This thesis presents arguments that pantomimic gestures, which mimic the intended robot behaviours directly, are potentially more intuitive, and proposes a transfer learning approach to recognition, where human hand gestures are mapped to recordings of robot behaviour by extracting temporal and spatial features that are inherently present in both pantomimed actions and robot behaviours. A Bayesian bias compensation scheme is introduced to compensate for potential classification bias in features. Results from a quadrotor behaviour selection problem show that good classification accuracy can be obtained when human hand gestures are recognised using behaviour recordings, and that classification using these behaviour recordings is more robust than using human hand recordings when users are allowed complete freedom over their choice of input gestures.
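The mean-reverting behaviour of the Ornstein-Uhlenbeck processes described above can be illustrated in one dimension. The thesis's model is a mixture of such processes in a reduced-dimensional pose subspace; this sketch, with illustrative parameter values, shows only the basic pull toward a commonly observed pose `mu` via the Euler-Maruyama discretisation of dx = theta*(mu - x)*dt + sigma*dW:

```python
import random

def ou_step(x, mu, theta, sigma, dt, rng):
    """One Euler-Maruyama step of the OU SDE dx = theta*(mu - x)*dt + sigma*dW."""
    return x + theta * (mu - x) * dt + sigma * (dt ** 0.5) * rng.gauss(0, 1)

rng = random.Random(0)
x = 5.0    # start far from the commonly observed pose coordinate
mu = 0.0   # mean pose coordinate in the reduced subspace (illustrative)
for _ in range(1000):
    x = ou_step(x, mu, theta=2.0, sigma=0.3, dt=0.01, rng=rng)
# after 10 time units x has been pulled toward mu and fluctuates near it
```

The mean-reversion rate `theta` controls how strongly the walk is pulled back toward `mu`; in the stationary regime the process fluctuates around `mu` with standard deviation sigma / sqrt(2 * theta).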
23. Implementace řídicích členů pro mobilní kráčivý robot / Implementation of the controllers of a mobile walking robot
Krajíček, Lukáš (January 2012)
This diploma thesis deals with the design and implementation of controllers for a mobile walking robot. The advantage of these controllers is their kinematics- and geometry-independent representation, which allows them to be used for different robot types and tasks. A contact controller is designed that minimizes residual forces and torques at the robot's center of gravity, thereby stabilizing the robot's body. The thesis then deals with a posture controller, which maximizes a heuristic posture measure to optimize the posture of the robot's body; this optimization moves the legs away from their joint limits, giving them more working space for the next move. The chosen solution is implemented on a MATLAB mathematical model of the robot. The controllers are composed into a control basis, which allows general control tasks to be solved by a simultaneous combination of the contained controllers. An algorithm was created for this simultaneous activation, and its operation is explained using flow charts.
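The control-basis idea above, combining several controllers simultaneously, can be sketched as summing the joint commands of the active controllers. This is an illustrative toy, not the thesis's actual control laws: the state fields, gains, and the per-joint residual term are all invented for the example:

```python
def contact_command(state, gain=0.5):
    # Hypothetical contact controller: joint corrections that reduce the
    # residual force at the center of gravity, here modeled as a
    # precomputed per-joint residual term driven toward zero.
    return [-gain * r for r in state["residuals"]]

def posture_command(state, gain=0.1):
    # Hypothetical posture controller: push each joint toward the midpoint
    # of its range, away from its limits.
    return [gain * (mid - q) for q, mid in zip(state["joints"], state["midpoints"])]

def control_basis_step(state, controllers):
    """Simultaneous activation: sum the joint commands of all active controllers."""
    commands = [c(state) for c in controllers]
    return [sum(parts) for parts in zip(*commands)]

state = {"joints": [0.9, -0.2], "midpoints": [0.0, 0.0], "residuals": [0.4, -0.1]}
dq = control_basis_step(state, [contact_command, posture_command])
```

Because each controller is expressed over the same joint command space, adding or removing a controller from the basis changes the behaviour without rewriting the others.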
24. Use of Vocal Prosody to Express Emotions in Robotic Speech
Crumpton, Joe (14 August 2015)
Vocal prosody (pitch, timing, loudness, etc.) and its use to convey emotions are essential components of speech communication between humans. The objective of this dissertation research was to determine the efficacy of using varying vocal prosody in robotic speech to convey emotion. Two pilot studies and two experiments were performed to address the shortcomings of previous HRI research in this area. The pilot studies were used to determine a set of vocal prosody modification values for a female voice model using the MARY speech synthesizer to convey the emotions anger, fear, happiness, and sadness. Experiment 1 validated that participants perceived these emotions, along with a neutral vocal prosody, at rates significantly higher than chance. Four of the vocal prosodies (anger, fear, neutral, and sadness) were recognized at rates approaching the recognition rate (60%) of emotions in person-to-person speech. During Experiment 2, the robot led participants through a creativity test while making statements using one of the validated emotional vocal prosodies. The ratings of the robot's positive qualities and the creativity scores of the participant group that heard the nonnegative vocal prosodies (happiness, neutral) did not differ significantly from those of the participant group that heard the negative vocal prosodies (anger, fear, sadness). Experiment 2 therefore failed to show that the use of emotional vocal prosody in a robot's speech influenced the participants' appraisal of the robot or their performance on this specific task. At this time, robot designers and programmers should not expect vocal prosody alone to have a significant impact on the acceptability or quality of human-robot interactions. Further research is required to show that multi-modal expressions of emotion by robots (vocal prosody along with facial expressions, body language, or linguistic content) will be effective at improving human-robot interactions.
25. The Perception and Measurement of Human-Robot Trust
Schaefer, Kristin (01 January 2013)
As robots penetrate further into everyday environments, trust in these robots becomes a crucial issue. The purpose of this work was to create and validate a reliable scale that could measure changes in an individual's trust in a robot. Assessment of current trust theory identified measurable antecedents specific to the human, the robot, and the environment. Six experiments supported the development of the 40-item trust scale. Scale development began with the creation of a 172-item pool. Two experiments identified the robot features and perceived functional characteristics that were related to the classification of a machine as a robot for this item pool. Item-pool reduction techniques and subject matter expert (SME) content validation were used to reduce the scale to 40 items. The two final experiments were then conducted to validate the scale. The finalized 40-item pre-/post-interaction trust scale was designed to measure trust perceptions specific to HRI. The scale measures trust on a 0-100% rating scale and provides a percentage trust score. A 14-item subscale of this final version, recommended by SMEs, may be sufficient for some HRI tasks, and the implications of this proposition are discussed.
26. Design of a Novel Tripedal Locomotion Robot and Simulation of a Dynamic Gait for a Single Step
Heaston, Jeremy Rex (02 October 2006)
Bipedal robotic locomotion based on passive dynamics is a field that has been extensively researched. By exploiting the natural dynamics of the system, these bipedal robots consume less energy and require minimal control to take a step. Yet the design of most of these bipedal machines is inherently unstable and difficult to control since there is a tendency for the machine to fall once it stops walking.
This thesis presents the design and analysis of a novel three-legged walking robot for a single step. The STriDER (Self-excited Tripedal Dynamic Experimental Robot) incorporates aspects of passive dynamic walking into a stable tripedal platform. During a step, two legs act as stance legs while the other acts as a swing leg. A stance plane, formed by the hip and two ground contact points of the stance legs, acts as a single effective stance leg. When viewed in the sagittal plane, the machine can be modeled as a planar four link pendulum. To initiate a step, the legs are oriented to push the center of gravity outside of the stance legs. As the body of the robot falls forward, the swing leg naturally swings in between the two stance legs and catches the STriDER. Once all three legs are in contact with the ground, the robot regains its stability and the posture of the robot is then reset in preparation for the next step.
To guide the design of the machine, a MATLAB simulation was written to allow tuning of several design parameters, including the mass, mass distribution, and link lengths. Further development of the code also allowed the design parameters to be optimized to create an ideal gait for the robot. A self-excited method of actuation, which seeks to drive a stable system toward instability, was used to control the robot. This method of actuation was found to be robust across a wide range of design parameters and relatively insensitive to controller gains. (Master of Science)
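The passive falling phase of the step described above can be illustrated with a much-simplified model. The thesis models STriDER as a planar four-link pendulum in the sagittal plane; the single-link inverted pendulum below, with illustrative parameters, shows only the basic dynamics of the body tipping forward under gravity once the center of gravity is pushed past the stance legs:

```python
import math

def simulate_fall(theta0, length=1.0, g=9.81, dt=1e-3, theta_stop=math.pi / 6):
    """Integrate an inverted pendulum (theta measured from vertical, starting
    at rest) until it has fallen through theta_stop; return the elapsed time.
    Semi-implicit Euler integration."""
    theta, omega, t = theta0, 0.0, 0.0
    while theta < theta_stop:
        alpha = (g / length) * math.sin(theta)  # gravity accelerates the fall
        omega += alpha * dt
        theta += omega * dt
        t += dt
    return t

# Starting 2 degrees past vertical, the fall through 30 degrees takes on the
# order of a second for a 1 m pendulum.
t_fall = simulate_fall(theta0=math.radians(2))
```

In the actual gait, this falling motion is what swings the free leg between the two stance legs so it can catch the robot, rather than being driven directly by actuators.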
27. Dynamics and control of robots
Taha, Z. (January 1987)
No description available.
28. On application of vision and manipulator with redundancy to automatic locating and handling of objects
Yu, Wing-hong, William (余永康) (January 1989)
Doctor of Philosophy thesis, Electrical and Electronic Engineering.
29. Control of automatically guided vehicles
Bouguechal, Nour-Eddine (January 1989)
No description available.
30. An investigation into the development and potential of a computer based robot selection aid
Ioannou, A. (January 1983)
No description available.