  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Three dimensional dynamic video position sensing

Jansky, L. Andrew 17 December 1993
A comprehensive system for locating and tracking objects in two- or three-dimensional space using non-contact video sensing techniques is described. The need exists to quantify the range and proximity of objects that would be difficult or impossible to measure with standard contact-based sensor technology. Available video technology is surveyed and classified, and a hardware system that fulfills the project goal within the given budgetary constraints is assembled. The individual components of the system are described in detail. The theoretical solution for single-camera, 2-D positioning is developed, along with a device-dependent computer algorithm that performs the object location. An accurate multi-camera, 3-D positioning algorithm is also developed, and a method for calibrating the cameras is described and applied. Computer algorithms that perform the calibration and solve the multiple-view, 3-D location geometry are presented; the theoretical equations and most of the algorithms are transferable rather than hardware specific. Examples using the 2-D model are presented. The first test is a submerged, single-degree-of-freedom model subjected to wave action; the video tracking data are compared with positioning data from string potentiometers. The second test is a surface-float application where contact sensing methods were not possible. The 3-D algorithm is demonstrated in an above-water test: the longitudinal motion of a linearly constrained target is measured with a string potentiometer and compared with a two-camera, 3-D video interpretation of the motion. The calibration method is verified with the 3-D algorithm. / Graduation date: 1994
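The abstract does not reproduce the multiple-view location geometry, so the following is only a generic sketch of linear (DLT-style) two-camera triangulation, a standard way of recovering a 3-D point from two calibrated views; it should not be read as the thesis's specific algorithm, and the 3x4 projection matrices are assumed to come from a separate calibration step.

```python
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Linear (DLT-style) triangulation of one 3-D point from two views.

    P1, P2   : 3x4 camera projection matrices obtained from calibration.
    uv1, uv2 : (u, v) pixel coordinates of the same target in each view.
    Returns the 3-D point in the calibration (world) frame.
    """
    P1, P2 = np.asarray(P1, float), np.asarray(P2, float)
    (u1, v1), (u2, v2) = uv1, uv2
    # Each view contributes two linear equations in the homogeneous point X.
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

In a tracking run of this kind, the target would be located in each camera's image frame by frame and triangulated to give the 3-D trajectory that is compared against the string-potentiometer record.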
2

On application of vision and manipulator with redundancy to automatic locating and handling of objects

余永康, Yu, Wing-hong, William. January 1989
published_or_final_version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
3

Hierarchical modelling of mobile, seeing robots

Luh, Cheng-Jye, 1960- January 1989
This thesis describes the implementation of a hierarchical robot simulation environment that supports the design of robots with vision and mobility. A seeing-robot model applies a classification expert system for visual identification of laboratory objects. The visual data acquisition algorithm used by the robot vision system has been developed to exploit multiple viewing distances and perspectives. Several different simulations have been run to test the visual logic in a laboratory environment. Much work remains to integrate the vision system with the rest of the robot system.
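The abstract names a classification expert system for identifying laboratory objects but gives no rules, so the sketch below is purely hypothetical: it only illustrates what rule-based classification over extracted region features can look like, with every feature name, label, and threshold invented for illustration.

```python
# Hypothetical rule-based classifier in the spirit of a classification expert
# system: each rule tests simple features extracted from a segmented region.
# Feature names, labels, and thresholds are illustrative, not from the thesis.

def classify_lab_object(features):
    """features: dict with keys such as 'area', 'aspect_ratio', 'circularity'."""
    rules = [
        ("beaker",    lambda f: f["circularity"] > 0.85 and f["area"] > 500),
        ("test_tube", lambda f: f["aspect_ratio"] > 4.0),
        ("tray",      lambda f: f["aspect_ratio"] < 2.0 and f["area"] > 2000),
    ]
    for label, rule in rules:
        if rule(features):        # first rule that fires wins
            return label
    return "unknown"

print(classify_lab_object({"area": 800, "aspect_ratio": 1.2, "circularity": 0.9}))
# -> "beaker"
```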
4

Active visual inference of surface shape

Cipolla, Roberto January 1991
No description available.
5

Visual feedback for manipulator arm control

Shuttleworth, P. J. January 1989
No description available.
6

Recognizing parameterized objects from range data

Reid, Ian D. January 1991
No description available.
7

Pipelining: an approach for machine vision

Foster, D. J. January 1987
Much effort has been spent over the last decade in producing so-called "Machine Vision" systems for use in robotics, automated inspection, assembly and numerous other fields. Because of the large amount of data in an image (typically ¼ MByte) and the complexity of many of the algorithms used, the processing times required have been far in excess of real time on a VAX-class serial processor. We review a number of image understanding algorithms that compute a globally defined "state", and show that they may be computed using simple local operations suited to parallel implementation. In recent years, many massively parallel machines have been designed to apply local operations rapidly across an image, and we review several such vision machines. We develop an algebraic analysis of the performance of a vision machine and show that, contrary to the commonly held belief, the time taken to relay images between serial streams can far exceed the time spent processing. We proceed to investigate the roles that a variety of pipelining techniques might play, and then present three pipelined designs for vision, one of which has been built. This is a parallel pipelined bit-slice convolution processor capable of operating at video rates. The design is examined in detail, and its performance is analysed in relation to the theoretical framework of the preceding chapters. The construction and debugging of the device, which is now operational in hardware, are detailed.
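The claim that relaying images between serial streams can exceed the processing time is easy to illustrate with a toy timing model; the numbers below are assumptions chosen only to make the comparison concrete, not figures from the thesis.

```python
# Toy timing model: assumed, not measured, numbers.
PIXELS = 512 * 512        # ~0.25 MByte image at one byte per pixel, as in the abstract
TRANSFER_RATE = 1e6       # assumed bytes/second over a serial link
OP_TIME = 0.5e-6          # assumed seconds per pixel for one local operation
STAGES = 4                # number of local operations applied in sequence

# Non-pipelined chain: every stage processes the image and then relays it onward.
transfer_per_image = PIXELS / TRANSFER_RATE
process_per_stage = PIXELS * OP_TIME
serial_total = STAGES * (process_per_stage + transfer_per_image)

# Pipelined at video rate: stages overlap, so steady-state throughput is set by
# the slowest step; transfers happen concurrently with processing.
pipelined_per_image = max(process_per_stage, transfer_per_image)

print(f"transfer per image  : {transfer_per_image:.3f} s")
print(f"processing per stage: {process_per_stage:.3f} s")
print(f"serial chain latency: {serial_total:.3f} s per image")
print(f"pipelined throughput: one image every {pipelined_per_image:.3f} s")
```

With these assumed rates the transfer step alone takes longer than a stage's processing, so a non-pipelined chain is dominated by relaying images, while the pipeline's steady-state throughput is limited only by its slowest stage.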
8

Self location of vision guided autonomous mobile robots.

January 2000
Lau Ah Wai, Calvin.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. Includes bibliographical references (leaves 108-111). Abstracts in English and Chinese.

Contents:
Chapter 1: Introduction (p.1)
  1.1 An Overview (p.4)
    1.1.1 Robot Self Location (p.4)
    1.1.2 Robot Navigation (p.10)
  1.2 Scope of Thesis (p.12)
Chapter 2: Theory (p.13)
  2.1 Coordinate Systems Transformations (p.13)
  2.2 Problem Specification (p.21)
  2.3 The Process of Stereo Vision (p.22)
    2.3.1 Disparity and Depth (p.22)
    2.3.2 Vertical Edge Detection and Extraction (p.25)
    2.3.3 Line Matching Using Dynamic Programming (p.27)
Chapter 3: Mobile Robot Self Location (p.29)
  3.1 Physical Points by Stereo Reconstruction (p.29)
    3.1.1 Physical Points Refinement (p.32)
  3.2 Motion Uncertainties Modeling (p.33)
  3.3 Improved Physical Point Estimations by EKF (p.36)
  3.4 Matching Physical Points to Model by Geometric Hashing (p.40)
    3.4.1 Similarity Invariant (p.44)
  3.5 Initial Pose Estimation (p.47)
    3.5.1 Initial Pose Refinement (p.50)
  3.6 Self Location Using Other Camera Combinations (p.50)
Chapter 4: Improvements to Self Location Using Bayesian Inference (p.55)
  4.1 Statistical Characteristics of Edges (p.57)
  4.2 Evidence at One Pixel (p.60)
  4.3 Evidence Over All Pixels (p.62)
  4.4 A Simplification of Geometric Hashing (p.62)
    4.4.1 Simplification of the Similarity Invariant (p.63)
    4.4.2 Translation Invariant (p.63)
    4.4.3 Simplification to the Hashing Table (p.65)
Chapter 5: Robot Navigation (p.67)
  5.1 Propagation of Motion Uncertainties to Estimated Pose (p.68)
  5.2 Expectation Map Derived from the CAD Model (p.70)
Chapter 6: Experimental Results (p.74)
  6.1 Results Using Simulated Environment (p.74)
    6.1.1 Results and Error Analysis (p.75)
  6.2 Results Using Real Environment (p.85)
    6.2.1 Camera Calibration Using Tsai's Algorithm (p.85)
    6.2.2 Pose Estimation by Geometric Hashing (p.88)
    6.2.3 Pose Estimation by Bayesian Inference and Geometric Hashing (p.90)
    6.2.4 Comparison of Self Location Approaches (p.92)
    6.2.5 Motion Tracking (p.93)
Chapter 7: Conclusion and Discussion (p.95)
  7.1 Conclusion and Discussion (p.95)
  7.2 Contributions (p.97)
  7.3 Subjects for Future Research (p.98)
Appendix A (p.100)
  A.1 Extended Kalman Filter (p.100)
  A.2 Visualizing Uncertainty for 2D Points (p.105)
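The contents point to stereo disparity and depth as the basis for reconstructing the physical points used in self-location. As general background rather than material taken from the thesis, the standard rectified-stereo relation Z = fB/d can be sketched as follows.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard rectified-stereo relation Z = f * B / d.

    focal_px     : focal length in pixels
    baseline_m   : distance between the two camera centres in metres
    disparity_px : horizontal pixel offset of the same feature in left/right images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative values: f = 700 px, baseline = 0.12 m, disparity = 14 px -> depth = 6.0 m
print(depth_from_disparity(700, 0.12, 14))
```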
9

Development and analysis of an absolute three degree of freedom vision based orientation sensor

Klement, Martin 12 1900
No description available.
10

2D object-based visual landmark recognition in a topological mobile robot

Do, Quoc Vong. Unknown Date
This thesis addresses the issues of visual landmark recognition in autonomous robot navigation along known routes, intuitively exploiting the functions of the human visual system and its navigational ability. A feedforward-feedbackward architecture has been developed for recognising visual landmarks in real time. It integrates theoretical concepts from the pre-attentive and attentive stages of the human visual system, the selective attention adaptive resonance theory neural network and its derivatives, and computational approaches to object recognition in computer vision.

The main contributions of this thesis lie in the emulation of the pre-attentive and attentive stages in the context of object recognition, embedding various concepts from neural networks into a computational template-matching approach from computer vision. Real-time landmark recognition is achieved by mimicking the pre-attentive stage, which models a selective attention mechanism for allocating computational resources, focusing only on regions of interest. This results in a parsimonious search method that addresses the limits of current computer processing power. The recognition of visual landmarks in both clean and cluttered backgrounds, invariant to different viewpoints, is then implemented in the attentive stage. This is achieved by developing a memory feedback modulation (MFM) mechanism that lets knowledge from memory interact with and enhance the efficiency of the earlier stages of the system, and by using a viewer-centred object representation mimicked from the human visual system. The architecture has also been extended to incorporate both top-down and bottom-up facilitatory and inhibitory pathways between the memory and the earlier stages, enabling it to recognise a 2D landmark that is partially occluded by adjacent features in the neighbourhood.

The feasibility of the architecture in recognising objects in cluttered backgrounds is demonstrated via computer simulations using real images of a large number of cluttered indoor and outdoor scenes. The system's applicability to mobile robot navigation is shown through real-time navigation trials along known routes, using a robotic vehicle designed and constructed from the component level. The system was evaluated by providing the robot with a topological map of the routes prior to navigation, so that object recognition serves as landmark detection with reference to the given map, and autonomous guidance is based on recognising familiar objects to compute the robot's absolute position along the pathways.

Thesis (PhD)--University of South Australia, 2006.
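The abstract describes attention-driven template matching only at a high level; as a loose, hypothetical sketch of the general idea of confining a template search to attention-selected regions of interest (not the thesis's SAART-based architecture, and with all names and thresholds invented for illustration), a normalised cross-correlation search over candidate windows might look like this:

```python
import numpy as np

def ncc(patch, template):
    """Normalised cross-correlation score between an image patch and a template."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(t)
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def match_in_rois(image, template, rois, threshold=0.8):
    """Search for the landmark template only inside attention-selected ROIs.

    image    : 2-D grayscale array
    template : 2-D grayscale array, smaller than each ROI
    rois     : list of (x, y, w, h) windows proposed by the attention stage
    Returns (best_score, best_position) or (best_score, None) if below threshold.
    """
    th, tw = template.shape
    best_score, best_pos = -1.0, None
    for x, y, w, h in rois:
        window = image[y:y + h, x:x + w]
        # Slide the template over this window only, not the whole frame.
        for dy in range(window.shape[0] - th + 1):
            for dx in range(window.shape[1] - tw + 1):
                score = ncc(window[dy:dy + th, dx:dx + tw], template)
                if score > best_score:
                    best_score, best_pos = score, (x + dx, y + dy)
    return (best_score, best_pos) if best_score >= threshold else (best_score, None)
```

Restricting the search to a few proposed windows rather than the whole frame is what makes matching cheap enough for real-time use, which is the role the pre-attentive stage plays in the described architecture.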
