41 |
Automation and modelling of robotic polishing. Hives, Paul, University of Western Sydney, Nepean, School of Mechatronic, Computer and Electrical Engineering, January 2000 (has links)
This research effort highlights emerging areas in the field of robotic polishing and includes an extensive literature survey conducted by the author. The survey shows that the areas in need of further investigation for achieving automated polishing are surface measurement, CAD/CAM integration and polishing mechanics. The work conducted has been based on the use of an available robot end-effector for polishing unknown three-dimensional surfaces. A model for determining the mass of material removed during the polishing process is developed from hardness testing, surface grinding and milling theory; the material removal predicted by this model is compared with results from practical experiments (an illustrative removal-model sketch follows this abstract). Polishing trajectories for the robot end-effector to follow have been produced from CAD files in Initial Graphics Exchange Specification (IGES) format. Using these files and two types of polishing patterns, the surface roughness of polished surfaces has been compared for simple planar polygonal surfaces. / Master of Engineering (Hons)
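The abstract does not reproduce the removal model itself. As a minimal illustrative sketch, Preston's equation, a standard material-removal model from the polishing literature, can stand in for the thesis's hardness- and grinding-based model; it predicts removed depth as proportional to contact pressure, relative velocity and dwell time. All parameter values below are assumptions for illustration, not figures from the thesis.

# A minimal sketch of a material-removal estimate for polishing, using
# Preston's equation as a stand-in for the thesis's own model. All
# parameter values are illustrative assumptions.

def mass_removed(pressure_pa, velocity_mps, dwell_s, area_m2,
                 preston_k=1e-12, density_kg_m3=7800.0):
    """Estimate mass removed (kg) over a polished patch.

    Preston's equation: dh/dt = k_p * P * V, where h is the removed
    depth, P the contact pressure and V the relative tool velocity.
    """
    depth_m = preston_k * pressure_pa * velocity_mps * dwell_s
    return density_kg_m3 * area_m2 * depth_m

# Example: 50 kPa contact pressure, 0.5 m/s tool speed, 10 s dwell
# over a 1 cm^2 patch of steel.
print(mass_removed(50e3, 0.5, 10.0, 1e-4))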
|
42 |
Machine Vision as the Primary Sensory Input for Mobile, Autonomous Robots. Lovell, Nathan, N/A January 2006 (has links)
Image analysis, and its application to sensory input (computer vision), is a fairly mature field, so it is surprising that its techniques are not extensively used in robotic applications. The reason for this is that, traditionally, robots have been used in controlled environments where sophisticated computer vision was not necessary, for example in car manufacturing. As the field of robotics has moved toward providing general-purpose robots that must function in the real world, it has become necessary to provide robots with robust sensors capable of understanding the complex world around them. However, when researchers apply techniques previously studied in the image analysis literature to the field of robotics, several difficult problems emerge. In this thesis we examine four areas in which improvement is needed before work in image analysis can be applied directly to real-time, general-purpose computer vision applications: the computational complexity of image analysis algorithms, robustness to dynamic and unpredictable visual conditions, independence from domain-specific knowledge in object recognition, and the development of debugging facilities. This thesis makes several innovative contributions in each of these areas. We argue that, although each area is distinct, improvement must be made in all four before vision will be utilised as the primary sensory input for mobile, autonomous robotic applications.
In the first area, the computational complexity of image analysis algorithms, we note the dependence of a large number of high-level processing routines on a small number of low-level algorithms; improvement to a small set of highly utilised algorithms will therefore yield benefits in a large number of applications. In this thesis we examine the common tasks of image segmentation, edge and straight-line detection, and vectorisation. In the second area, robustness to dynamic and unpredictable conditions, we examine how vision systems can be made more tolerant to changes of illumination in the visual scene. We examine the classical image segmentation task and present a method for illumination independence that builds on our work from the first area.
The third area is the reliance on domain-specific knowledge in object recognition. Many current systems depend on a large amount of hard-coded domain-specific knowledge to understand the world around them. This makes a system hard to modify, even for slight changes in the environment, and very difficult to apply in a different context entirely. We present an XML-based language, the XML Object Definition (XOD) language, as a solution to this problem. The language is largely descriptive instead of imperative: instead of describing how to locate objects within each image, the developer simply describes the properties of the objects (a hypothetical sketch of this style follows the abstract).
The final area is the development of support tools. Vision system programming is extremely difficult because large amounts of data are handled at a very fast rate, and if the system is running on an embedded device (such as a robot) then locating defects in the code is a time-consuming and frustrating task. While development-support tools exist for specific applications, we present a general-purpose development-support tool for embedded, real-time vision systems.
The primary case study for this research is robotic soccer, in the international RoboCup Four-Legged League. We utilise all of the research of this thesis to provide the first illumination-independent object recognition system for RoboCup. Furthermore, we illustrate the flexibility of our system by applying it to several other tasks and to marked changes in the visual environment for RoboCup itself.
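The abstract names the XOD language without showing its syntax. The following hypothetical sketch illustrates the descriptive style it describes: objects are declared by their properties, and a generic matcher tests segmented image blobs against the declarations. The element names, attributes and blob representation are assumptions, not actual XOD syntax.

# A hypothetical sketch of a descriptive, XOD-style object definition:
# the developer states object properties; a generic matcher does the
# rest. Element/attribute names below are assumptions, not XOD syntax.
import xml.etree.ElementTree as ET

DEFS = """
<objects>
  <object name="ball">
    <color hue-min="10" hue-max="30"/>
    <shape kind="circle" min-radius="5"/>
  </object>
</objects>
"""

def matches(blob, obj):
    """Check one segmented image blob against one declarative definition."""
    color = obj.find("color")
    shape = obj.find("shape")
    ok = float(color.get("hue-min")) <= blob["hue"] <= float(color.get("hue-max"))
    ok = ok and blob["shape"] == shape.get("kind")
    return ok and blob["radius"] >= float(shape.get("min-radius"))

root = ET.fromstring(DEFS)
blob = {"hue": 22.0, "shape": "circle", "radius": 9.0}  # from segmentation
for obj in root.iter("object"):
    if matches(blob, obj):
        print("recognised:", obj.get("name"))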
|
43 |
Vision-based navigation and decentralized control of mobile robots. Low, May Peng Emily, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2007 (has links)
The first part of this thesis documents an experimental investigation into the use of vision for wheeled-robot navigation problems; specifically, the use of a video camera as a source of feedback to control a wheeled robot toward a static or a moving object in its environment in real time. The wheeled-robot control algorithms depend on information from a vision system and an estimator. The vision system consists of a pan video camera and a visual gaze algorithm which attempts to locate and continuously maintain an object of interest within the camera's limited field of view. Several vision-based algorithms are presented to recognize simple objects of interest in an environment and to calculate the relevant parameters required by the control algorithms. An estimator is designed for state estimation of the motion of an object using visual measurements: noisy measurements of the relative bearing to the object and of the object's size on the image plane formed by perspective projection, both of which can be obtained from the vision system (the projection relation is sketched after this abstract). The algorithms have been designed and experimentally investigated using a pan video camera and two wheeled robots in real time in a laboratory setting. Experimental results and discussion are presented on the performance of the vision-based control algorithms, in which a wheeled robot successfully approached objects undergoing various motions. The second part of this thesis investigates the coordination problem of flocking in a multi-robot system using concepts from graph theory. New control laws are presented for the flocking motion of groups of mobile robots based on several leaders. Simulation results are provided to illustrate the control laws and their applications.
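A minimal sketch of the perspective-projection relation the estimator builds on: under a pinhole camera model, an object of known physical width D that appears s pixels wide at focal length f (in pixels) lies at range Z = f·D/s, and the measured bearing then fixes its position in the camera frame. The numeric values below are illustrative assumptions.

# A minimal sketch of range-from-apparent-size under a pinhole camera
# model, combined with a measured bearing to give a 2-D position fix.
import math

def object_position(bearing_rad, width_px, true_width_m, focal_px):
    """Estimate (x, y) of an object in the camera frame."""
    z = focal_px * true_width_m / width_px          # range from apparent size
    return (z * math.sin(bearing_rad),              # lateral offset
            z * math.cos(bearing_rad))              # forward distance

# Example: a 0.2 m-wide target seen 40 px wide at 600 px focal length,
# 5 degrees left of the optical axis.
x, y = object_position(math.radians(-5), 40, 0.2, 600)
print(f"x={x:.2f} m, y={y:.2f} m")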
|
44 |
Fault Detection in Autonomous Robots. Christensen, Anders L., 27 June 2008 (has links)
In this dissertation, we study two new approaches to fault detection for autonomous robots. The first approach involves the synthesis of software components that give a robot the capacity to detect faults which occur in itself. Our hypothesis is that hardware faults change the flow of sensory data and the actions performed by the control program. By detecting these changes, the presence of faults can be inferred. In order to test our hypothesis, we collect data in three different tasks performed by real robots. During a number of training runs, we record sensory data from the robots both while they are operating normally and after a fault has been injected. We use back-propagation neural networks to synthesize fault detection components based on the data collected in the training runs. We evaluate the performance of the trained fault detectors in terms of the number of false positives and the time it takes to detect a fault.
The results show that good fault detectors can be obtained. We extend the set of possible faults and go on to show that a single fault detector can be trained to detect several faults in both a robot's sensors and actuators. We show that fault detectors can be synthesized that are robust to variations in the task. Finally, we show how a fault detector can be trained to allow one robot to detect faults that occur in another robot.
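A minimal sketch of this training setup, assuming fixed windows of recorded sensor values labelled normal or faulty after fault injection. Scikit-learn's back-propagation-trained multilayer perceptron stands in for the thesis's own network, and the synthetic data, feature layout and network size are illustrative assumptions.

# A minimal sketch of synthesizing a fault detector from labelled runs:
# windows of sensor readings are classified as normal (0) or faulty (1).
# The data here is synthetic; a faulty run has a dead sensor channel.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
WINDOW = 10                      # consecutive sensor readings per sample

normal = rng.normal(1.0, 0.2, size=(500, WINDOW))
faulty = np.hstack([rng.normal(1.0, 0.2, size=(500, WINDOW // 2)),
                    np.zeros((500, WINDOW // 2))])
X = np.vstack([normal, faulty])
y = np.array([0] * 500 + [1] * 500)

detector = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                         random_state=0).fit(X, y)

# At run time, classify each incoming window of sensory data.
print(detector.predict(rng.normal(1.0, 0.2, size=(1, WINDOW))))  # -> [0]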
The second approach involves the use of firefly-inspired synchronization to allow the presence of faulty robots to be determined by the other, non-faulty robots in a swarm robotic system. We take inspiration from the synchronized flashing behavior observed in some species of fireflies. Each robot flashes by lighting up its on-board red LEDs, and neighboring robots are driven to flash in synchrony. The robots always interpret the absence of flashing by a particular robot as an indication that it has a fault. A faulty robot can stop flashing periodically for one of two reasons: the fault itself can render the robot unable to flash, or the faulty robot might detect the fault itself through endogenous fault detection and decide to stop flashing. Thus, catastrophic faults in a robot can be detected directly by its peers, while the presence of less serious faults can be detected by the faulty robot itself and actively communicated to neighboring robots. We explore the performance of the proposed algorithm both on a real-world swarm robotic system and in simulation. We show that failed robots are detected correctly and in a timely manner, and that a system composed of robots with simulated self-repair capabilities can survive relatively high failure rates.
We conclude that i) fault injection and learning can give robots the capacity to detect faults that occur in themselves, and that ii) firefly-inspired synchronization can enable robots in a swarm robotic system to detect and communicate faults.
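A minimal simulation sketch of the firefly-inspired scheme, assuming a simple pulse-coupled oscillator per robot: each robot's phase triggers a periodic flash, observed flashes pull neighbours' phases forward, and a robot silent past a timeout is flagged as faulty. The coupling rule, constants and timeout are illustrative assumptions.

# A minimal sketch of firefly-inspired fault detection: flash on phase
# wrap, couple to observed flashes, and suspect robots that stay silent.
import random

N, DT, COUPLING = 5, 0.01, 0.1
FAULT_TIMEOUT = 2.5              # seconds without a flash -> suspect fault

phase = [random.random() for _ in range(N)]
last_flash = [0.0] * N
alive = [True] * N
alive[3] = False                 # robot 3 fails silently

t = 0.0
while t < 10.0:
    t += DT
    flashed = []
    for i in range(N):
        if not alive[i]:
            continue
        phase[i] += DT           # free-running oscillator
        if phase[i] >= 1.0:      # flash and reset
            phase[i] = 0.0
            last_flash[i] = t
            flashed.append(i)
    for i in range(N):           # observed flashes pull phases forward
        if alive[i] and any(j != i for j in flashed):
            phase[i] = min(1.0, phase[i] + COUPLING * phase[i])

suspects = [i for i in range(N) if t - last_flash[i] > FAULT_TIMEOUT]
print("suspected faulty robots:", suspects)   # -> [3]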
|
45 |
Autonomous ground vehicle terrain classification using internal sensors. Sadhukhan, Debangshu. Moore, Carl A. January 2004 (has links)
Thesis (M.S.)--Florida State University, 2004. / Advisor: Dr. Carl A. Moore, Florida State University, College of Engineering, Dept. of Mechanical Engineering. Title and description from dissertation home page (viewed 6/21/04). Includes bibliographical references.
|
46 |
Planned perception within concurrent mapping and localization / Slavik, Michael P. January 1900 (has links)
Thesis (M.S. in Electrical Engineering and Computer Science)--Massachusetts Institute of Technology. / Includes bibliographical references (p. [127]-132). Also available online.
|
47 |
Design of an autonomous mobile robot for service applications. De Villiers, Mark. January 2011 (has links)
This research project proposes the development of an autonomous, omnidirectional vehicle that will be used for general indoor service applications. A suggested trial application for this service robot is to deliver printouts to various network users in their offices. The robot will serve as a technology demonstrator and could later also be used for other tasks in an office, medical or industrial environment. The robot will use Mecanum wheels (also known as Swedish 45° or Ilon wheels) to achieve omnidirectionality. This will be especially useful in the often cramped target environments, because the vehicle effectively has a zero-radius turning circle and is able to change its direction of motion without changing its pose (a kinematics sketch follows this abstract). Part of the research is to investigate a novel propulsion system based on the Mecanum wheel. The robot will form part of a portfolio of service robots that the Mechatronics and Micro Manufacturing (MMM) group at the CSIR is developing. Service robots are typically used to perform Dull, Dangerous or Dirty work, where human presence is not essential if the robot can perform the task reliably and successfully. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2011.
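A minimal sketch of the inverse kinematics behind a four-wheel Mecanum platform's omnidirectionality (assuming the usual 45° roller arrangement): any commanded body velocity (vx, vy, ωz) maps to four independent wheel speeds, so the vehicle can translate in any direction without rotating. The wheel radius and chassis geometry are illustrative assumptions, and sign conventions vary with roller orientation.

# A minimal sketch of Mecanum inverse kinematics: body velocity in,
# four wheel angular velocities out. Geometry values are assumptions.

def mecanum_wheel_speeds(vx, vy, wz, r=0.05, lx=0.20, ly=0.15):
    """Wheel angular velocities (rad/s) for front-left, front-right,
    rear-left and rear-right wheels, 45-degree rollers assumed."""
    k = lx + ly                    # half wheelbase + half track
    return ((vx - vy - k * wz) / r,   # front-left
            (vx + vy + k * wz) / r,   # front-right
            (vx + vy - k * wz) / r,   # rear-left
            (vx - vy + k * wz) / r)   # rear-right

# Pure sideways motion at 0.3 m/s: wheels counter-rotate in pairs and
# the chassis translates without turning.
print(mecanum_wheel_speeds(0.0, 0.3, 0.0))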
|
48 |
Design and construction of Meercat: an autonomous indoor and outdoor courier service robot. Bosscha, Peter Antoon. January 2011 (has links)
This project details the construction and development of, and experimentation with, a mobile courier service robot named Meercat. The robot has been built from the ground up using parts sourced from various places. The application for this service robot is the delivery of internal mail parcels between the buildings situated on the campus of the Council for Scientific and Industrial Research (CSIR) in Pretoria. To achieve this, the robot has to be able to localise and navigate through indoor office and laboratory environments and over the outdoor tarred roads which interconnect the various buildings.
Not many robots are intended for operation in both indoor and outdoor environments; to achieve this, multiple sensing systems are implemented on the platform, where the correct selection of sensing inputs is a key aspect (a sketch of such selection follows this abstract). Further testing and experiments will take place with algorithms for localisation and navigation. As a limited budget was available for the development of this robot, cost-effective solutions had to be found for its mechanical, sensing and computation needs.
The Mechatronics group from the Mechatronics and Micro Manufacturing (MMM) competency area at the CSIR is involved with the development of various autonomous mobile robots. The robot developed in this project will be an addition to the CSIR's current fleet of robots and will be used as a stepping stone for experimentation with new sensors and electronics, and for the development of further positioning and navigation algorithms. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2011.
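A minimal sketch of the kind of sensing-input selection the abstract calls a key aspect of mixed indoor/outdoor operation, assuming each pose source reports a confidence so that GPS dominates outdoors and drops out indoors in favour of odometry and laser scan matching. The sources, confidence values and weighting scheme are assumptions, not Meercat's actual design.

# A minimal sketch of confidence-weighted pose fusion across sensing
# sources; a source with no usable reading reports confidence 0.

def fuse_pose(estimates):
    """Confidence-weighted average of (x, y, confidence) pose estimates."""
    total = sum(c for _, _, c in estimates)
    if total == 0:
        raise ValueError("no usable pose source")
    x = sum(px * c for px, _, c in estimates) / total
    y = sum(py * c for _, py, c in estimates) / total
    return x, y

# Outdoors: the GPS fix is good and dominates.
print(fuse_pose([(10.2, 5.1, 0.9),    # GPS
                 (10.0, 5.0, 0.3)]))  # wheel odometry
# Indoors: GPS drops out, laser scan matching and odometry remain.
print(fuse_pose([(10.1, 5.2, 0.0),    # GPS (no fix)
                 (10.0, 5.0, 0.4),    # wheel odometry
                 (10.3, 5.1, 0.7)]))  # laser scan matching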
|
50 |
Investigating the cognitive processing of experience for decision making in robots: accounting for internal states and appraisals / Gordon, Stephen Michael, January 2009 (has links)
Thesis (Ph. D. in Electrical Engineering)--Vanderbilt University, May 2009. / Title from title screen. Includes bibliographical references.
|