21

Robot positioning error analysis and correction

Tang, Stanley C. 12 April 2010 (has links)
The applicability and productivity of industrial robots have been limited partly by positioning errors. In this thesis, two causes of positioning errors are identified: link deflection and gear transmission errors. Positioning errors due to link deflection and gear transmission errors are discussed, and an iterative scheme is introduced to compute the deflected end-effector position. Validity of the scheme is verified by applying Castigliano's method to a two-link hypothetical robot, and numerical integration of the integral equations of deflections is compared with analytical integration. A correction method is then proposed and an error corrector is constructed. Effectiveness of the error corrector is demonstrated with a hypothetical RRRS robot, a 3-link robot with revolute joints and a spherical hand. / Master of Science
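The flavor of the iterative deflection computation and error correction described above can be illustrated with a short Python sketch. This is not the thesis's scheme: the link lengths, bending stiffnesses, payload, and the simple cantilever droop model are assumed stand-ins for the integral equations of deflection verified against Castigliano's method.

import math

L1, L2 = 0.5, 0.4          # link lengths [m] (assumed)
EI1, EI2 = 900.0, 600.0    # bending stiffness of each link [N*m^2] (assumed)
PAYLOAD = 2.0 * 9.81       # vertical tip load [N], an assumed 2 kg payload

def rigid_tip(theta1, theta2):
    """Forward kinematics of the undeflected two-link arm."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def deflected_tip(theta1, theta2, iters=20, tol=1e-9):
    """Fixed-point iteration: re-estimate the tip droop until it settles."""
    x0, y0 = rigid_tip(theta1, theta2)
    dy = 0.0
    for _ in range(iters):
        # moment arm of the payload, re-evaluated at the current droop estimate;
        # a crude stand-in for recomputing link loads at the deflected pose
        arm = math.hypot(x0, y0 + dy)
        dy_new = -PAYLOAD * arm**3 / (3.0 * (EI1 + EI2))
        if abs(dy_new - dy) < tol:
            break
        dy = dy_new
    return x0, y0 + dy

def corrected_command(theta1, theta2):
    """Error corrector: aim at the mirror of the predicted deflection error."""
    xr, yr = rigid_tip(theta1, theta2)
    xd, yd = deflected_tip(theta1, theta2)
    return xr + (xr - xd), yr + (yr - yd)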
22

Development of an interactive graphical simulator for the IBM 7545 robot

Mohandas, Velluva P. 12 March 2013 (has links)
In this work, an enhanced graphical simulator was developed for the IBM 7545 robot, which is programmed in the AML/E language. The simulator provides two views, with the facility to choose either one as the major view and to pan and zoom within that view. It lets the user define equipment and workcell setups, accepting data in the Initial Graphics Exchange Specification (IGES) Version 3.0 format. The user can interactively simulate the complete program, a partial program, or any subroutine. The system was integrated to include facilities to edit, compile, generate cross references, set the system configuration, and simulate and run the robot either continuously or interactively. The system was developed on an IBM Personal Computer using two monitors, one text and one enhanced graphics, to maximize the display surface available for graphics. The programs developed for this work can be broadly classified into menu programs, definition programs that define and display the equipment and workcells, and graphics programs that drive the display and alter the views. The limitations and assumptions made in developing this system, along with the scope for further work, are presented. / Master of Science
23

An improved controller for the Rhino robot arm

Hopkins, Mark A. January 1984 (has links)
The study of robotics cannot be satisfactorily pursued without access to working robots. The inexpensive Rhino robot arm is one that academic institutions can easily obtain for educational purposes. This thesis presents a new controller that replaces the original Rhino controller, which in many ways was ill-suited, or too limited, for experimentation. A comparison of the old and new controllers is given, but the primary purpose of this thesis is to provide complete details of the new controller and its use. The conclusion discusses the performance of the new controller and areas of experimentation to which it might be applied. / Master of Science
24

Mobile manipulation in unstructured environments with haptic sensing and compliant joints

Jain, Advait 22 August 2012 (has links)
We make two main contributions in this thesis. First, we present our approach to robot manipulation, which emphasizes the benefits of making contact with the world across all the surfaces of a manipulator with whole-arm tactile sensing and compliant actuation at the joints. In contrast, many current approaches to mobile manipulation assume most contact is a failure of the system, restrict contact to well-modeled end effectors, and use stiff, precise control to avoid contact. We develop a controller that enables robots with whole-arm tactile sensing and compliant joint actuation to reach locations in high clutter while regulating contact forces. We assume that low contact forces are benign, and our controller places no penalty on contact forces below a threshold. Our controller requires only haptic sensing, handles multiple contacts across the surface of the manipulator, and does not need an explicit model of the environment prior to contact. It uses model predictive control with a time horizon of length one and a linear quasi-static mechanical model that it constructs at each time step. We show that our controller enables both real and simulated robots to reach goal locations in high clutter with low contact forces. While doing so, the robots bend, compress, slide, and pivot around objects. To enable experiments on real robots, we also developed an inexpensive, flexible, and stretchable tactile sensor and covered large surfaces of two robot arms with these sensors. With an informal experiment, we show that our controller and sensor have the potential to enable robots to manipulate in close proximity to, and in contact with, humans while keeping contact forces low. Second, we present an approach to give robots common sense about everyday forces in the form of probabilistic, data-driven, object-centric models of haptic interactions. These models can be shared by different robots for improved manipulation performance. We use pulling open doors, an important task for service robots, as an example to demonstrate our approach. Specifically, we capture and model the statistics of forces while pulling open doors and drawers. Using a portable custom force and motion capture system, we create a database of forces as human operators pull open doors and drawers in six homes and one office. We then build data-driven models of the expected forces while opening a mechanism, given knowledge of either its class (e.g., refrigerator) or the mechanism's identity (e.g., a particular cabinet in Advait's kitchen). We demonstrate that these models enable robots to detect anomalous conditions, such as a locked door or a collision between the door and the environment, faster and with lower excess force applied to the door than methods that do not use a database of forces.
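The one-step model-predictive idea described above can be illustrated with a short sketch. The following Python fragment is a deliberately simplified stand-in, not the thesis's controller: it takes a small joint step toward the goal, predicts each sensed contact's force change with a linear quasi-static model, and shrinks the step whenever a contact would be pushed past the benign-force threshold. The threshold value, the contact Jacobians, and the stiffness model are assumptions for illustration.

import numpy as np

FORCE_THRESHOLD = 5.0   # [N] contact forces below this are treated as benign (assumed)

def one_step_mpc(q, x_goal, fk, J_ee, contacts, step=0.01):
    """One-step greedy reach with a linear quasi-static contact prediction.

    q        : current joint angles, shape (n,)
    x_goal   : goal position of the end effector, shape (3,)
    fk       : function mapping q to the end-effector position, shape (3,)
    J_ee     : end-effector Jacobian at q, shape (3, n)
    contacts : list of (force_magnitude, J_c, stiffness) for each sensed
               contact, where J_c (3, n) maps joint motion to contact motion
    """
    # small joint step toward the goal, capped in norm
    err = x_goal - fk(q)
    dq = np.linalg.pinv(J_ee) @ err
    norm = np.linalg.norm(dq)
    if norm > step:
        dq *= step / norm

    # predict each contact-force change and shrink the step if a contact
    # already near the threshold would be pushed harder
    scale = 1.0
    for force, J_c, stiffness in contacts:
        df = stiffness * np.linalg.norm(J_c @ dq)   # crude worst-case increase
        if df > 0.0 and force + df > FORCE_THRESHOLD:
            scale = min(scale, max(0.0, (FORCE_THRESHOLD - force) / df))
    return q + scale * dq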
25

How to teach a new robot new tricks - an interactive learning framework applied to service robotics

Remy, Sekou 16 November 2009 (has links)
The applications of robotics are changing. Just as computers evolved from research tools and extreme novelties into essential components of modern life, robotics is making a similar transition. With the change in applications comes a change in the user base of robotics. These users will span a broad range of society, but some key properties can be used to characterize them. First, more often than not, they will not be the designers of the robots. Second, they will not have robot control as their primary task while operating the robot. Third, they will not have the resources or the desire to provide all the training that the robot will require, yet they will need to fine-tune robot performance to their specific needs. Fourth, they will want to use multiple modes of interaction to make the robot accomplish the primary task. Fifth, they will expect and demand that the robot remain safe at all times (safe around humans, pets, and personal property) and expect the robot to be a readily replaceable, inexpensive appliance. Sixth, they will expect the robot to be intelligent, at least within the confines of the task at hand. These are some of the key properties of the new user base. To address some of the needs that arise from these properties, we propose work that enables behavior transfer from a teacher to a robotic student, facilitated through observation and interaction. Many users in the projected user base will not have exposure to the technologies that enable robotic operation. These users will, however, have some understanding of how they would like the robot to assist in accomplishing the task. The goal of this work is specifically to enable the user to transfer this understanding to the robot, and to have the robot acquire it via interactive learning. To make such learning possible through interaction, we believe the robot will have to perform some degree of self-regulation. Further, since it is assumed that the user will not have access to the robot's internal workings, the robot will also have to properly manage the knowledge it acquires over time and to verify and validate its understanding periodically. Scaffolding, a method in which teachers provide support while the student learns to master portions of a task, is likely to be the primary method for facilitating this process. This research will undertake a study of coherence and its relevance to learning by observation. It will also implement the components that would enable a robot to learn to perform a small set of tasks and demonstrate them in various settings. For this work, a robot is defined as a hardware platform upon which a software agent operates. It is our desire that this software agent be equipped to operate on any platform and learn any task that a human could perform with the same resources.
26

Leveraging distribution and heterogeneity in robot systems architecture

O'Hara, Keith Joseph 03 August 2011 (has links)
Like computer architects, robot designers must address multiple, possibly competing requirements by balancing trade-offs in processing, memory, communication, and energy to satisfy design objectives. However, robot architects currently lack the design guidelines, organizing principles, rules of thumb, and tools that computer architects rely upon. This thesis takes a step in that direction by analyzing the roles of heterogeneity and distribution in robot systems architecture. It takes a systems-architecture approach to the design of robot systems and, in particular, investigates the use of distributed, heterogeneous platforms to exploit locality in robot systems design. We show how multiple, distributed heterogeneous platforms can serve as general-purpose robot systems for three distinct domains with different design objectives: increasing availability in a search-and-rescue mission, increasing flexibility and ease of use for a personal educational robot, and decreasing the computation and sensing resources necessary for navigation and foraging tasks.
27

Control of reconfigurability and navigation of a wheel-legged robot based on active vision

Brooks, Douglas Antwonne 31 July 2008 (has links)
The ability of robotic units to navigate various terrains is critical to advancing robotic operation in real-world environments. Next-generation robots will need to adapt to their environment in order to accomplish tasks that are too hazardous, too time-consuming, or physically impossible for human beings. Such tasks may include accurate and rapid exploration of other planets or of potentially dangerous areas on Earth. This research investigates a navigation control methodology for a wheel-legged robot based on active vision. The method presented is designed to control the reconfigurability of the robot (i.e., the usage of the wheels and legs), depending upon the obstacle or terrain, based on perception. Surface estimation for robot reconfigurability is implemented using a region-growing method together with a characterization and traversability assessment generated from camera data. A mathematical approach that directs the necessary navigation behavior is then implemented to control robot mobility. The hybrid wheel-legged rover possesses a four-legged or six-legged walking system as well as a four-wheeled mobility system.
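A rough illustration of the perception step described above, region growing followed by a per-region traversability label that could drive the wheels-versus-legs decision, is sketched below in Python. The height-map representation, the thresholds, and the roughness measure are assumptions for illustration, not the thesis's implementation.

import numpy as np
from collections import deque

HEIGHT_STEP = 0.03    # max height difference between neighbors in a region [m] (assumed)
ROUGHNESS_MAX = 0.02  # regions rougher than this get the "legs" label [m] (assumed)

def grow_regions(height_map):
    """Segment a 2D height map into regions of locally similar height."""
    h, w = height_map.shape
    labels = -np.ones((h, w), dtype=int)
    next_label = 0
    for seed in zip(*np.where(labels < 0)):
        if labels[seed] >= 0:
            continue
        queue = deque([seed])
        labels[seed] = next_label
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < h and 0 <= nc < w and labels[nr, nc] < 0 \
                   and abs(height_map[nr, nc] - height_map[r, c]) < HEIGHT_STEP:
                    labels[nr, nc] = next_label
                    queue.append((nr, nc))
        next_label += 1
    return labels, next_label

def traversability(height_map, labels, n_regions):
    """Label each region 'wheels' if smooth enough, otherwise 'legs'."""
    modes = {}
    for k in range(n_regions):
        roughness = height_map[labels == k].std()
        modes[k] = "wheels" if roughness < ROUGHNESS_MAX else "legs"
    return modes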
28

Motion planning algorithms for autonomous robots in static and dynamic environments

Mkhize, Zanele G. N. 01 August 2012 (has links)
M.Ing. / The objective of this research is to present motion planning methods for an autonomous robot. Motion planning is one of the most important issues in robotics: the goal is to find a path from a starting position to a goal position while avoiding obstacles in the environment, which can be static or dynamic. Motion planning problems can be addressed using either classical approaches or obstacle-avoidance approaches. The classical approaches discussed in this work are the Voronoi, visibility graph, cell decomposition, and potential field methods. The obstacle-avoidance approaches discussed are the neural network, Bug algorithms, Dynamic Window Approach, vector field histogram, bubble band technique, and curvature velocity techniques. Both simulation and experimental results are presented. In the simulation, motion planning is addressed using points extracted from a map, with the Voronoi algorithm, a Hopfield neural network, the potential field method, and the A* search algorithm. The simulation results show that these approaches are effective and can be applied to real robots to solve motion planning problems. In the experiment, the Dynamic Window Approach (DWA) is used for obstacle avoidance while a Pioneer robot explores the environment using the open-source Robot Operating System (ROS). The experiment demonstrated that DWA can be used to avoid obstacles in real time. Keywords: motion planning, autonomous robot, optimal path problems, environment, search algorithm, classical approaches, obstacle-avoidance approaches, exploration.
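Of the planners named above, grid-based A* is the easiest to sketch compactly. The Python fragment below is an illustrative example only, with an assumed occupancy-grid input; the dissertation's own work also covers Voronoi, potential-field, Hopfield-network, and DWA planners, none of which are reproduced here.

import heapq

def a_star(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col) tuples."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(heuristic(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue                      # already expanded with a better cost
        came_from[node] = parent
        if node == goal:                  # reconstruct the path back to start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols \
               and grid[nxt[0]][nxt[1]] == 0 \
               and g + 1 < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + heuristic(nxt), g + 1, nxt, node))
    return None  # no path found

# example usage on a small assumed grid:
# path = a_star([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0))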
29

Off-line robot vision system programming using a computer aided design system

Sridaran, S. January 1985 (has links)
Robots with vision capability have been taught to recognize unknown objects by comparing their shape features with those of known objects, which are stored in the vision system as a knowledge base. Traditionally, this knowledge base is created by showing the robot the set of objects it is likely to come across; this must be done online, with the same vision system that will later be used. In this research, an approach was developed to teach the robot in an off-line mode by integrating the robot vision system with an off-line graphic system. Instead of showing the robot the objects it is likely to come across, graphic models of the objects were created in an off-line graphic system, and a FORTRAN program was developed that processes the models to extract their shape parameters. These shape parameters were passed to the vision system. A program was developed to process an unknown object placed in front of the vision system and extract its shape parameters, along with a program that compares the parameters of the unknown object with those of the known models. The vision system was calibrated to convert pixel dimensions to inches. The shape parameters of the objects were found to vary with orientation; the range of variation for each parameter was established and taken into consideration in the parameter comparison program. / Master of Science
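The parameter-extraction and range-based matching described above can be illustrated with a small sketch. The Python fragment below is not the thesis's FORTRAN or vision-system code: the shape parameters (area, perimeter, compactness) and the per-parameter (min, max) ranges are assumed stand-ins for the knowledge base derived from the graphic models.

import numpy as np

def shape_parameters(mask):
    """Area, perimeter, and compactness of a binary object mask (2D bool array)."""
    mask = mask.astype(bool)
    area = float(mask.sum())
    # edge pixels: object pixels with at least one background 4-neighbour
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = float((mask & ~interior).sum())
    compactness = perimeter ** 2 / area if area else 0.0
    return {"area": area, "perimeter": perimeter, "compactness": compactness}

def classify(mask, knowledge_base):
    """Return the first known model whose stored parameter ranges contain the blob."""
    params = shape_parameters(mask)
    for name, ranges in knowledge_base.items():
        if all(lo <= params[key] <= hi for key, (lo, hi) in ranges.items()):
            return name
    return None

# example knowledge base: per-model (min, max) ranges established over orientations
# kb = {"bracket": {"area": (900, 1100), "compactness": (14, 20)}}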
30

Adaptation of task-aware, communicative variance for motion control in social humanoid robotic applications

Gielniak, Michael Joseph 17 January 2012 (has links)
An algorithm for generating communicative, human-like motion for social humanoid robots was developed. Anticipation, exaggeration, and secondary motion were demonstrated as examples of communication. Spatiotemporal correspondence was presented as a metric for human-like motion, and the metric was used both to synthesize and to evaluate motion. An algorithm for generating an infinite number of variants from a single exemplar was established to avoid repetitive motion. The algorithm was made task-aware by including the ability to satisfy constraints. User studies of the algorithm were performed with human participants. Results showed that communicative, human-like motion can be harnessed to direct partner attention and communicate state information. Furthermore, communicative, human-like motion for social robots produced by the algorithm allows human partners to feel more engaged in the interaction, recognize motion earlier, label intent sooner, and remember interaction details more accurately.
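A toy illustration of the variant-generation idea, producing unlimited variants of one exemplar while keeping simple task constraints satisfied, is sketched below in Python. It is only a stand-in for the thesis's spatiotemporal-correspondence-based, constraint-aware algorithm; the noise model, amplitude, and the choice of start and end poses as the constraints are assumptions.

import numpy as np

def make_variant(exemplar, amplitude=0.05, seed=None):
    """exemplar: array of joint angles, shape (T, n_joints), sampled over time."""
    rng = np.random.default_rng(seed)
    T, n = exemplar.shape
    t = np.linspace(0.0, 1.0, T)[:, None]
    # smooth per-joint offsets built from a few random sine components;
    # sin(pi*k*t) vanishes at t = 0 and t = 1, so the exemplar's start and end
    # poses (treated here as the task constraints) are preserved exactly
    noise = sum(rng.normal(0.0, amplitude, n) * np.sin(np.pi * k * t)
                for k in range(1, 4))
    return exemplar + noise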
