61. Visual-based decision for iterative quality enhancement in robot drawing. January 2005.
Kwok, Ka Wai. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (leaves 113-116). / Abstracts in English and Chinese. / ABSTRACT --- p.i / Chapter 1. --- INTRODUCTION --- p.1 / Chapter 1.1 --- Artistic robot in western art --- p.1 / Chapter 1.2 --- Chinese calligraphy robot --- p.2 / Chapter 1.3 --- Our robot drawing system --- p.3 / Chapter 1.4 --- Thesis outline --- p.3 / Chapter 2. --- ROBOT DRAWING SYSTEM --- p.5 / Chapter 2.1 --- Robot drawing manipulation --- p.5 / Chapter 2.2 --- Input modes --- p.6 / Chapter 2.3 --- Visual-feedback system --- p.8 / Chapter 2.4 --- Footprint study setup --- p.8 / Chapter 2.5 --- Chapter summary --- p.10 / Chapter 3. --- LINE STROKE EXTRACTION AND ORDER ASSIGNMENT --- p.11 / Chapter 3.1 --- Skeleton-based line trajectory generation --- p.12 / Chapter 3.2 --- Line stroke vectorization --- p.15 / Chapter 3.3 --- Skeleton tangential slope evaluation using MIC --- p.16 / Chapter 3.4 --- Skeleton-based vectorization using Bezier curve interpolation --- p.21 / Chapter 3.5 --- Line stroke extraction --- p.25 / Chapter 3.6 --- Line stroke order assignment --- p.30 / Chapter 3.7 --- Chapter summary --- p.33 / Chapter 4. --- PROJECTIVE RECTIFICATION AND VISION-BASED CORRECTION --- p.34 / Chapter 4.1 --- Projective rectification --- p.34 / Chapter 4.2 --- Homography transformation by selected correspondences --- p.35 / Chapter 4.3 --- Homography transformation using GA --- p.39 / Chapter 4.4 --- Visual-based iterative correction example --- p.45 / Chapter 4.5 --- Chapter summary --- p.49 / Chapter 5. --- ITERATIVE ENHANCEMENT ON OFFSET EFFECT AND BRUSH THICKNESS --- p.52 / Chapter 5.1 --- Offset painting effect by Chinese brush pen --- p.52 / Chapter 5.2 --- Iterative robot drawing process --- p.53 / Chapter 5.3 --- Iterative line drawing experimental results --- p.56 / Chapter 5.4 --- Chapter summary --- p.67 / Chapter 6. --- GA-BASED BRUSH STROKE GENERATION --- p.68 / Chapter 6.1 --- Brush trajectory representation --- p.69 / Chapter 6.2 --- Brush stroke modeling --- p.70 / Chapter 6.3 --- Stroke simulation using GA --- p.72 / Chapter 6.4 --- Evolutionary computing results --- p.77 / Chapter 6.5 --- Chapter summary --- p.95 / Chapter 7. --- BRUSH STROKE FOOTPRINT CHARACTERIZATION --- p.96 / Chapter 7.1 --- Footprint video capturing --- p.97 / Chapter 7.2 --- Footprint image property --- p.98 / Chapter 7.3 --- Experimental results --- p.102 / Chapter 7.4 --- Chapter summary --- p.109 / Chapter 8. --- CONCLUSIONS AND FUTURE WORK --- p.111 / BIBLIOGRAPHY --- p.113
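Chapter 4.2 of the record above covers homography estimation from selected point correspondences for projective rectification of the camera's view of the drawing. As a point of reference only, here is a minimal direct linear transform (DLT) sketch in Python/NumPy; it shows the textbook method, not the thesis's implementation, and the GA-based variant of Chapter 4.3 is not reproduced. All names are illustrative.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H (3x3) with dst ~ H @ src in homogeneous coordinates,
    from >= 4 point correspondences, via the direct linear transform."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # h is the right singular vector with the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so that H[2, 2] == 1
```

With four or more correspondences between points in the photographed drawing and their known positions on a fronto-parallel canvas, H rectifies the camera image before it is compared against the target strokes.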
62. Calibration of an active vision system and feature tracking based on 8-point projective invariants. January 1997.
by Chen Zhi-Yi. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references. / List of Symbols --- p.1 / Chapter Chapter 1 --- Introduction / Chapter 1.1 --- Active Vision Paradigm and Calibration of Active Vision System --- p.1.1 / Chapter 1.1.1 --- Active Vision Paradigm --- p.1.1 / Chapter 1.1.2 --- A Review of the Existing Active Vision Systems --- p.1.1 / Chapter 1.1.3 --- A Brief Introduction to Our Active Vision System --- p.1.2 / Chapter 1.1.4 --- The Stages of Calibrating an Active Vision System --- p.1.3 / Chapter 1.2 --- Projective Invariants and Their Applications to Feature Tracking --- p.1.4 / Chapter 1.3 --- Thesis Overview --- p.1.4 / References --- p.1.5 / Chapter Chapter 2 --- Calibration for an Active Vision System: Camera Calibration / Chapter 2.1 --- An Overview of Camera Calibration --- p.2.1 / Chapter 2.2 --- Tsai's RAC Based Camera Calibration Method --- p.2.5 / Chapter 2.2.1 --- The Pinhole Camera Model with Radial Distortion --- p.2.7 / Chapter 2.2.2 --- Calibrating a Camera Using Monoview Noncoplanar Points --- p.2.10 / Chapter 2.3 --- Reg Willson's Implementation of R. Y. Tsai's RAC Based Camera Calibration Algorithm --- p.2.15 / Chapter 2.4 --- Experimental Setup and Procedures --- p.2.20 / Chapter 2.5 --- Experimental Results --- p.2.23 / Chapter 2.6 --- Conclusion --- p.2.28 / References --- p.2.29 / Chapter Chapter 3 --- Calibration for an Active Vision System: Head-Eye Calibration / Chapter 3.1 --- Why Head-Eye Calibration --- p.3.1 / Chapter 3.2 --- Review of the Existing Head-Eye Calibration Algorithms --- p.3.1 / Chapter 3.2.1 --- Category I: Classic Approaches --- p.3.1 / Chapter 3.2.2 --- Category II: Self-Calibration Techniques --- p.3.2 / Chapter 3.3 --- R. Tsai's Approach for Hand-Eye (Head-Eye) Calibration --- p.3.3 / Chapter 3.3.1 --- Introduction --- p.3.3 / Chapter 3.3.2 --- Definitions of Coordinate Frames and Homogeneous Transformation Matrices --- p.3.3 / Chapter 3.3.3 --- Formulation of the Head-Eye Calibration Problem --- p.3.6 / Chapter 3.3.4 --- Using Principal Vector to Represent Rotation Transformation Matrix --- p.3.7 / Chapter 3.3.5 --- Calculating Rcg and Tcg --- p.3.9 / Chapter 3.4 --- Our Local Implementation of Tsai's Head-Eye Calibration Algorithm --- p.3.14 / Chapter 3.4.1 --- Using Denavit-Hartenberg's Approach to Establish a Body-Attached Coordinate Frame for Each Link of the Manipulator --- p.3.16 / Chapter 3.5 --- Function of Procedures and Formats of Data Files --- p.3.23 / Chapter 3.6 --- Experimental Results --- p.3.26 / Chapter 3.7 --- Discussion --- p.3.45 / Chapter 3.8 --- Conclusion --- p.3.46 / References --- p.3.47 / Appendix I Procedures --- p.3.48 / Chapter Chapter 4 --- A New Tracking Method for Shape from Motion Using an Active Vision System / Chapter 4.1 --- Introduction --- p.4.1 / Chapter 4.2 --- A New Tracking Method --- p.4.1 / Chapter 4.2.1 --- Our Approach --- p.4.1 / Chapter 4.2.2 --- Using an Active Vision System to Track the Projective Basis Across Image Sequence --- p.4.2 / Chapter 4.2.3 --- Using Projective Invariants to Track the Remaining Feature Points --- p.4.2 / Chapter 4.3 --- Using Factorisation Method to Recover Shape from Motion --- p.4.11 / Chapter 4.4 --- Discussion and Future Research --- p.4.31 / References --- p.4.32 / Chapter Chapter 5 --- Experiments on Feature Tracking with 3D Projective Invariants / Chapter 5.1 --- 8-point Projective Invariant --- p.5.1 / Chapter 5.2 --- Projective Invariant Based Transfer between Distinct Views of a 3-D Scene --- p.5.4 / Chapter 5.3 --- Transfer Experiments on the Image Sequence of a Calibration Block --- p.5.6 / Chapter 5.3.1 --- Experiment 1. Real Image Sequence 1 of a Camera Calibration Block --- p.5.6 / Chapter 5.3.2 --- Experiment 2. Real Image Sequence 2 of a Camera Calibration Block --- p.5.15 / Chapter 5.3.3 --- Experiment 3. Real Image Sequence 3 of a Camera Calibration Block --- p.5.22 / Chapter 5.3.4 --- Experiment 4. Synthetic Image Sequence of a Camera Calibration Block --- p.5.27 / Chapter 5.3.5 --- Discussions on the Experimental Results --- p.5.32 / Chapter 5.4 --- Transfer Experiments on the Image Sequence of a Human Face Model --- p.5.33 / References --- p.5.44 / Chapter Chapter 6 --- Conclusions and Future Research / Chapter 6.1 --- Contributions and Conclusions --- p.6.1 / Chapter 6.2 --- Future Research --- p.6.1 / Bibliography --- p.B.1
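Chapter 2.2.1 of the record above concerns Tsai's pinhole camera model with radial distortion. The sketch below states the model's mapping from a sensor pixel to undistorted image-plane coordinates; the parameter names (cx, cy, dx, dy, sx, kappa1) follow Tsai's customary notation, but the function itself is our illustration, not code from the thesis, and the numbers in the usage comment are invented.

```python
def tsai_pixel_to_undistorted(xf, yf, cx, cy, dx, dy, sx, kappa1):
    """Tsai's radial-alignment model: move the pixel (xf, yf) to
    distorted image-plane coordinates, then undo the radial distortion
    with x_u = x_d * (1 + kappa1 * r^2)."""
    xd = dx * (xf - cx) / sx  # distorted image-plane x
    yd = dy * (yf - cy)       # distorted image-plane y
    r2 = xd * xd + yd * yd
    scale = 1.0 + kappa1 * r2
    return xd * scale, yd * scale

# e.g. tsai_pixel_to_undistorted(412, 280, cx=320, cy=240,
#                                dx=0.01, dy=0.01, sx=1.0, kappa1=2e-4)
```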
63. Natural feature extraction as a front end for simultaneous localization and mapping. Kiang, Kai-Ming. Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW. January 2006.
This thesis is concerned with algorithms for finding natural features that are then used for simultaneous localisation and mapping, commonly known as SLAM in navigation theory. The task involves capturing raw sensory inputs, extracting features from these inputs, and using the features for mapping and localising during navigation. The ability to extract natural features allows automatons such as robots to be sent to environments that no human being has previously explored, working in a way similar to how humans understand and remember where they have been. In extracting natural features from images, the way that features are represented and matched is a critical issue, since the computation involved is wasted if the wrong method is chosen. While there are many techniques capable of matching pre-defined objects correctly, few of them can be used for real-time navigation in an unexplored environment while deciding intelligently what constitutes a relevant feature in the images. Feature analysis that extracts relevant features from an image is normally a 2-step process: first, interest points are selected; then these points are represented based on local region properties. A novel technique is presented in this thesis for extracting a set of natural features that is small enough, yet robust enough, for navigation purposes. The technique takes a 3-step approach (the first step is sketched below). The first step selects interest points at the extrema of differences of Gaussians (DoG). The second step applies Textural Feature Analysis (TFA) to the local regions around the interest points. The third step selects the distinctive features using Distinctness Analysis (DA), based mainly on the probability of occurrence of the extracted features. The additional DA step has shown a significant improvement in processing speed over previous methods. Moreover, TFA/DA has been applied in a SLAM configuration observing an underwater environment, where the texture is rich in natural features. The results demonstrate an improvement in loop-closure ability compared to traditional SLAM methods, suggesting that real-time navigation in unexplored environments using natural features is now a more plausible option.
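A rough illustration of the first step only (interest points at DoG extrema), in Python with SciPy. The sigma ladder and threshold are assumptions, a full implementation would also test extrema against adjacent scales, and the TFA and DA steps are specific to the thesis and not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_interest_points(image, sigmas=(1.0, 1.6, 2.6, 4.2), thresh=0.02):
    """Return (row, col) interest points at local maxima of
    difference-of-Gaussians responses."""
    img = image.astype(float)
    img /= max(img.max(), 1e-9)  # normalize to [0, 1]
    blurred = [gaussian_filter(img, s) for s in sigmas]
    dogs = [b1 - b0 for b0, b1 in zip(blurred, blurred[1:])]
    points = []
    for d in dogs:
        # keep pixels that dominate their 3x3 neighbourhood and whose
        # response is strong enough to survive sensor noise
        mask = (d == maximum_filter(d, size=3)) & (np.abs(d) > thresh)
        points.extend(zip(*np.nonzero(mask)))
    return points
```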
64. Vision-based navigation and decentralized control of mobile robots. Low, May Peng Emily. Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW. January 2007.
The first part of this thesis documents an experimental investigation into the use of vision for wheeled-robot navigation problems; specifically, using a video camera as a source of feedback to control a wheeled robot toward a static and a moving object in an environment in real time. The wheeled-robot control algorithms depend on information from a vision system and an estimator. The vision system consists of a pan video camera and a visual gaze algorithm that searches for an object of interest and attempts to keep it continuously within the camera's limited field of view. Several vision-based algorithms are presented to recognize simple objects of interest in an environment and to calculate the parameters required by the control algorithms. An estimator is designed for state estimation of an object's motion from visual measurements. The estimator uses noisy measurements of the relative bearing to an object and of the object's size on the image plane formed by perspective projection; both measurements can be obtained from the vision system. The algorithms have been designed and experimentally investigated using a pan video camera and two wheeled robots in real time in a laboratory setting, and experimental results and discussion are presented on the performance of the vision-based control algorithms, in which a wheeled robot successfully approached an object under various motions. The second part of this thesis investigates the coordination problem of flocking in a multi-robot system using concepts from graph theory. New control laws are presented for the flocking motion of groups of mobile robots based on several leaders, and simulation results illustrate the control laws and their applications (a generic sketch of one such law follows below).
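The abstract does not state the control laws themselves, so the following Python sketch is only a generic leader-follower velocity-consensus rule of the kind commonly built on a neighbour graph; every name, gain, and structural choice here is an assumption, not the thesis's design.

```python
import numpy as np

def flocking_step(pos, vel, adj, leader_vel, is_leader, dt=0.05, gain=1.0):
    """One integration step: leaders track a reference velocity, and each
    follower relaxes toward the mean velocity of its graph neighbours."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        if is_leader[i]:
            new_vel[i] = leader_vel
            continue
        nbrs = np.flatnonzero(adj[i])  # neighbours in the adjacency matrix
        if nbrs.size:
            new_vel[i] += gain * dt * (vel[nbrs].mean(axis=0) - vel[i])
    return pos + dt * new_vel, new_vel
```

On a connected graph this drives follower velocities toward the leaders' reference, which is the basic mechanism that graph-theoretic flocking analyses make precise.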
65. ACTIVE SENSING FOR INTELLIGENT ROBOT VISION WITH RANGE IMAGING SENSOR. Fukuda, Toshio; Kubota, Naoyuki; Sun, Baiqing; Chen, Fei; Fukukawa, Tomoya; Sasaki, Hironobu. January 2010.
No description available.
66. Visual place categorization. Wu, Jianxin. January 2009.
Thesis (Ph.D.)--Computing, Georgia Institute of Technology, 2010. / Committee Chair: Rehg, James M.; Committee Member: Christensen, Henrik; Committee Member: Dellaert, Frank; Committee Member: Essa, Irfan; Committee Member: Malik, Jitendra. Part of the SMARTech Electronic Thesis and Dissertation Collection.
67. Vision based 3D obstacle detection. Shah, Syed Irtiza Ali. January 2009.
Thesis (M. S.)--Mechanical Engineering, Georgia Institute of Technology, 2010. / Committee Co-Chair: Johnson, Eric; Committee Co-Chair: Lipkin, Harvey; Committee Member: Sadegh, Nader. Part of the SMARTech Electronic Thesis and Dissertation Collection.
68. Quasi-static force analysis of an automated live-bird transfer system. Joni, Jeffry Hartono. 12 1900.
No description available.
69. Model-based vision-guided automated cutting of natural products. Sandlin, Melissa C. 08 1900.
No description available.
70. Efficient biomorphic vision for autonomous mobile robots. Mikhalsky, Maxim. January 2006.
Autonomy is the most enabling and the least developed robot capability. A mobile robot is autonomous if it is capable of independently attaining its objectives in an unpredictable environment. This requires interaction with the environment by sensing, assessing, and responding to events; such interaction has not yet been achieved. The core problem lies in a limited understanding of robot autonomy and its aspects, and it is exacerbated by the limited resources available in a small autonomous mobile robot, such as energy, information, and space. This thesis describes an efficient biomorphic visual capability that can provide purposeful interaction with the environment for a small autonomous mobile robot. The method used for achieving this capability comprises synthesis of an integral paradigm of a purposeful autonomous mobile robot, formulation of requirements for the visual capability, and development of efficient algorithmic and technological solutions. The paradigm is a product of analysis of fundamental aspects of the problem and of the insights found in inherently autonomous biological organisms. Based on this paradigm, on analysis of biological vision and the available technological basis, and on the state of the art in vision algorithms, requirements were formulated for a biomorphic visual capability that provides situation awareness for a small autonomous mobile robot. The developed visual capability comprises a sensory and processing architecture, an integral set of motion vision algorithms, and a method for visual ranging of still objects that is based on them. These vision algorithms provide motion detection, fixation, and tracking functionality with low latency and low computational complexity. The high temporal resolution of CMOS imagers is exploited to reduce the logical complexity of image analysis, and consequently the computational complexity of the algorithms (a minimal illustration of this high-frame-rate motion cue follows below). The structure of the developed algorithms conforms to the arithmetic and memory resources available in a system on a programmable chip (SoPC), which allows complete confinement of the high-bandwidth datapath within a SoPC device and therefore high-speed operation by design. The algorithms proved to be functional, which validates the developed visual capability. The experiments confirm that high temporal resolution imaging simplifies image motion structure, and ultimately the design of the robot vision system.
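As a rough illustration of why high temporal resolution simplifies image motion structure, here is a minimal frame-differencing motion cue in Python/NumPy: at a high frame rate, inter-frame displacement is small, so a simple threshold on the absolute difference already localizes the moving region. The thesis's algorithms target a SoPC datapath, which this sketch does not attempt to model; the threshold value is an assumption.

```python
import numpy as np

def motion_centroid(prev_frame, frame, thresh=12):
    """Return the centroid (row, col) of pixels that changed by more than
    `thresh` grey levels between consecutive frames, or None if static."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > thresh
    if not mask.any():
        return None  # nothing moved above the noise floor
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()  # fixation target for tracking
```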