71. Vision-based navigation and decentralized control of mobile robots. Low, May Peng Emily, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW, January 2007.
The first part of this thesis documents an experimental investigation into the use of vision for wheeled-robot navigation problems; specifically, a video camera is used as a source of real-time feedback to control a wheeled robot toward a static or a moving object in its environment. The wheeled-robot control algorithms depend on information from a vision system and an estimator. The vision system consists of a pan video camera and a visual gaze algorithm which searches for an object of interest and attempts to keep it continuously within the camera's limited field of view. Several vision-based algorithms are presented to recognize simple objects of interest in an environment and to calculate the parameters required by the control algorithms. An estimator is designed for state estimation of the motion of an object from visual measurements; it uses noisy measurements of the relative bearing to an object and of the object's size on the image plane formed by perspective projection, both of which can be obtained from the vision system. A set of algorithms has been designed and experimentally investigated using a pan video camera and two wheeled robots in real time in a laboratory setting. Experimental results and discussion are presented on the performance of the vision-based control algorithms, in which a wheeled robot successfully approached an object undergoing various motions. The second part of this thesis investigates the coordination problem of flocking in a multi-robot system using concepts from graph theory. New control laws are presented for the flocking motion of groups of mobile robots based on several leaders. Simulation results are provided to illustrate the control laws and their applications.
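As a hedged illustration of the measurement model the abstract describes (not the thesis's actual estimator), the sketch below recovers an approximate range from an object's apparent width under perspective projection and a proportional turn-rate command from the relative bearing. The focal length, object width and gain are assumed placeholders.

```python
import math

F_PX = 600.0         # assumed camera focal length in pixels (placeholder)
OBJ_WIDTH_M = 0.20   # assumed known physical width of the object (placeholder)

def range_and_bearing(u_px: float, cx_px: float, w_px: float):
    """Range (m) from apparent width; bearing (rad) from horizontal image offset."""
    z = F_PX * OBJ_WIDTH_M / w_px             # pinhole model: w_px ~ f * W / Z
    bearing = math.atan2(u_px - cx_px, F_PX)  # positive when object is right of centre
    return z, bearing

def turn_rate_command(bearing: float, k: float = 1.5) -> float:
    """Proportional steering that drives the relative bearing to zero."""
    return -k * bearing
```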
72. Visual guidance of robot motion. Gu, Lifang, January 1996.
Future robots are expected to cooperate with humans in daily activities. Efficient cooperation requires new techniques for transferring human skills to robots. This thesis presents an approach by which a robot can extract and replicate a motion by observing how a human instructor performs it. In this way, the robot can be taught without any explicit instructions, and the human instructor needs no expertise in robot programming. A system has been implemented which consists of two main parts. The first part is data acquisition and motion extraction. Vision is the most important sense through which a human interacts with the surrounding world, so two cameras are used to capture image sequences of a moving rigid object. In order to compress the incoming images from the cameras and extract the 3D motion information of the rigid object, feature detection and tracking are applied to the images. Corners are chosen as the main features because they are more stable under perspective projection and during motion. A reliable corner detector is implemented, and a new corner tracking algorithm is proposed based on smooth motion constraints. With both spatial and temporal constraints, 3D trajectories of a set of points on the object can be obtained, and the 3D motion parameters of the object can be reliably calculated by the algorithm proposed in this thesis. Once the 3D motion parameters are available through the vision system, the robot should be programmed to replicate this motion. Since we are interested in smooth motion and the similarity between two motions, the task of the second part of the system is to extract motion characteristics and transfer them to the robot. It can be proven that the characteristics of a parametric cubic B-spline curve are completely determined by its control points, which can be obtained by least-squares fitting given some data points on the curve. Therefore a parametric cubic B-spline curve is fitted to the motion data and its control points are calculated. Given the robot configuration, the obtained control points can be scaled, translated, and rotated so that a motion trajectory can be generated for the robot to replicate the given motion in its own workspace with the required smoothness and similarity, although the absolute motion trajectories of the robot and the instructor can differ. All the above modules have been integrated, and results of an experiment with the whole system show that the proposed approach can extract motion characteristics and transfer them to a robot: a robot arm has successfully replicated a human arm movement with similar shape characteristics. In conclusion, such a system collects human skills and intelligence through vision and transfers them to the robot, so a robot equipped with it can interact with its environment and learn by observation.
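The B-spline step lends itself to a short sketch. The following is a minimal illustration (not the thesis's implementation) of least-squares fitting of a parametric cubic B-spline to 3D trajectory points with SciPy and resampling a smooth curve for trajectory generation; the sample data and smoothing factor are stand-ins.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Stand-in for 3D points recovered by the vision system (a helix segment).
t = np.linspace(0.0, 1.0, 50)
pts = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t), t])

# Least-squares cubic (k=3) parametric B-spline fit; tck holds the knots and
# control points that completely determine the curve.
tck, u = splprep(pts, k=3, s=1e-3)

# Resample the smooth curve densely to generate a robot trajectory; the control
# points could first be scaled/translated/rotated into the robot's workspace.
trajectory = np.array(splev(np.linspace(0.0, 1.0, 200), tck))
print(trajectory.shape)  # (3, 200)
```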
73. Real-time visual servo control of a planar robot. Wanichnukhrox, Nakrob, January 2003.
Thesis (M.S.), Ohio University, March 2003. Title from PDF title page. Includes bibliographical references (leaves 98-100).
74. Visual robot guidance in time-varying environment using quadtree data structure and parallel processing. Bohora, Anil R., January 1989.
Thesis (M.S.), Ohio University, June 1989. Title from PDF title page.
75. Reconnaissance visuelle pour un robot-cueilleur de tomates [Visual recognition for a tomato-picking robot]. Brassard, Louis, January 1990.
Master's thesis (M.Sc.A.), Université du Québec à Chicoutimi, 1990. Electronic document also available in PDF format.
76. Robust real-time perception for mobile robots. Kwok, Chung Tin, January 2004.
Thesis (Ph.D.), University of Washington, 2004. Vita. Includes bibliographical references (p. 188-204).
77. Biologically inspired vision and control for an autonomous flying vehicle. Garratt, Matthew A., January 2007.
Thesis (Ph.D.), Australian National University, 2007.
78. An evaluation of the lighting conditions for robot vision. Ackermann, Dirk Wouter, 1987.
Thesis (MEng), Stellenbosch University, 1987.
English abstract: A vision robot with characteristics comparable to those of robots currently in use was designed and built. The robot's response is evaluated in terms of the lighting conditions to which it is subjected, treated as a transfer function with a visual display as input and a decision as output. The sensitivity to luminance, contrast and detail of the display is given. Certain displays were successfully classified. The limitations of each part of the robot are evaluated, and the effect of these limitations on the robot's total response is pointed out.
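As a hedged aside, not taken from the thesis, display properties such as luminance, contrast and detail can be quantified from a captured grayscale image; the measures below (mean intensity, Michelson contrast, density of strong gradients) are illustrative assumptions.

```python
import numpy as np

def display_metrics(gray: np.ndarray) -> dict:
    """Simple luminance, contrast and detail measures for a grayscale image."""
    g = gray.astype(float)
    luminance = g.mean()                       # mean intensity as luminance proxy
    lo, hi = g.min(), g.max()
    contrast = (hi - lo) / (hi + lo + 1e-9)    # Michelson contrast
    gy, gx = np.gradient(g)
    detail = float((np.hypot(gx, gy) > 10.0).mean())  # density of strong edges
    return {"luminance": luminance, "contrast": contrast, "detail": detail}

if __name__ == "__main__":
    img = (np.random.rand(120, 160) * 255).astype(np.uint8)  # stand-in image
    print(display_metrics(img))
```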
79. Development of an automated robot vision component handling system. Jansen van Nieuwenhuizen, Rudolph Johannes, January 2013.
Thesis (M. Tech. (Engineering: Electrical)), Central University of Technology, Free State, 2013.
In industry, automation is used to optimize production, improve product quality and increase profitability. Properly implemented automation systems also minimize the risk of injury to workers.
Robots are used in many low-level tasks to perform repetitive, undesirable or dangerous work, and they can perform a task with higher precision and accuracy, reducing errors and wasted material.
Machine Vision makes use of cameras, lighting and software to carry out visual inspections that a human would normally do. Machine Vision is useful in applications where repeatability, high speed and accuracy are important.
This study concentrates on the development of a dedicated robot vision system to automatically place components exiting a conveyor system onto Automatic Guided Vehicles (AGVs).
A personal computer (PC) controls the automated system. Software modules were developed to perform the image processing for the Machine Vision system and to control a Cartesian robot, and these modules were integrated into a real-time system.
The vision system is used to determine each part's position and orientation. The orientation data are used to rotate a gripper, and the position data are used by the Cartesian robot to position the gripper over the part.
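A minimal sketch, assuming OpenCV and a bright part against a dark background, of how a part's position and orientation might be extracted from an image (the thesis's actual image-processing modules are not detailed in the abstract):

```python
import cv2
import numpy as np

def locate_part(frame: np.ndarray):
    """Return (cx, cy, angle_deg) of the largest bright blob, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    part = max(contours, key=cv2.contourArea)
    (cx, cy), _, angle = cv2.minAreaRect(part)  # centre and rotation of bounding box
    return cx, cy, angle  # pixel frame; map to the robot frame via calibration
```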
Hardware for the control of the gripper, pneumatics and safety systems was developed. The automated system's hardware was integrated through different communication protocols, namely DeviceNet (Cartesian robot), RS-232 (gripper) and FireWire (camera).
80. Design and Development of an Autonomous Soccer-Playing Robot. Olson, Steven A. R.; Dawson, Chad S.; Jacobson, Jared, October 2002.
International Telemetering Conference Proceedings, October 21, 2002, Town & Country Hotel and Conference Center, San Diego, California. This paper describes the construction of an autonomous soccer-playing robot as part of a senior design project at Brigham Young University. Each participating team designed and built a robot to compete in an annual tournament. To accomplish this, each team had access to images received from a camera placed above the soccer field. Image processing and artificial intelligence software had to be created so that the robot could perform against other robots in one-on-one competition. Each participating team was given resources to accomplish this project. This paper summarizes the experiences gained by team members and describes the key components created for the robot, named Prometheus, to compete in and win the annual tournament.
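As a hedged illustration of the overhead-camera processing such a competition requires (not the team's actual software), the sketch below locates a colour-marked robot by HSV thresholding and image moments; the colour range is an assumed placeholder.

```python
import cv2
import numpy as np

def find_marker(frame: np.ndarray,
                lo=(100, 120, 80), hi=(130, 255, 255)):  # assumed HSV range
    """Return the (x, y) pixel centroid of the marker colour, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]
```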