51

Quantifying optimum fault tolerance of manipulators and robotic vision systems

Ukidve, Chinmay S. January 2008 (has links)
Thesis (Ph.D.)--University of Wyoming, 2008. / Title from PDF title page (viewed on July 13, 2009). Includes bibliographical references (p. 104-107).
52

Vision-guided tracking of complex three-dimensional seams for robotic gas metal arc welding

Hamed, Maien January 2011 (has links)
Automation of welding systems is often restricted by the requirement for spatial information about the seams to be welded. When this cannot be obtained from the design of the welded parts and maintained using accurate fixturing, a seam teaching or tracking system becomes necessary. Optical seam teaching and tracking systems have many advantages over systems built on other sensor families. Direct vision promises to be a viable strategy for implementing optical seam tracking, which has mainly been done with laser vision. The current work investigated direct vision as a strategy for optical seam teaching and tracking. A robotic vision system was implemented, consisting of an articulated robot, a hand-mounted camera and a control computer. A description is given of the calibration methods, the seam and feature detection, and the three-dimensional scene reconstruction. The results showed that direct vision is a suitable strategy for seam detection and learning. A discussion of generalizing the method into an architecture for simultaneous system calibration and measurement estimation is provided.
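As a rough illustration of the seam-detection step this abstract describes (a hedged sketch, not the author's implementation: the image file, thresholds and OpenCV pipeline below are all assumptions), a direct-vision tracker might hypothesize the seam as the dominant line in the camera image:

    # Minimal sketch, assuming a grayscale image of the weld region;
    # all thresholds are illustrative, not taken from the thesis.
    import cv2
    import numpy as np

    def detect_seam_line(gray):
        """Return endpoints (x1, y1, x2, y2) of the strongest line candidate."""
        edges = cv2.Canny(gray, 50, 150)  # edge map of the seam region
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                                minLineLength=60, maxLineGap=10)
        if lines is None:
            return None
        # Take the longest detected segment as the seam hypothesis.
        return max((l[0] for l in lines),
                   key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))

    gray = cv2.imread("weld_image.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    if gray is not None:
        print(detect_seam_line(gray))

Triangulating such detections from several calibrated poses of the hand-mounted camera would then yield the three-dimensional seam points the abstract refers to.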
53

Bifocal vision: a holdsite-based approach to the acquisition of randomly stacked parts

Kornitzer, Daniel January 1988 (has links)
No description available.
54

Multi-robot workcell with vision for integrated circuit assembly

Michaud, Christian, 1958- January 1986 (has links)
No description available.
55

An automated vision system using a fast 2-dimensional moment invariants algorithm

Zakaria, Marwan F. January 1987 (has links)
No description available.
56

The Development of a Visual System for MantisBot: A Robot Modeled after the Praying Mantis

Getsy, Andrew Paul 13 September 2016 (has links)
No description available.
57

Integrating vision into a computer integrated manufacturing system

Berg, Paula M. 15 July 2010 (has links)
An industrial vision system is a useful and often integral part of a computer integrated manufacturing system. Successful integration of vision capabilities into a manufacturing system involves extracting from image data the information which has meaning to the task at hand, and communicating that information to the larger system. The goal of this research was to integrate the activities of a stand-alone vision system into the operation of a manufacturing system; more specifically, the host controller and vision system were expected to work together to determine the status of pallets moving through the system. Pallet status was based on whether the objects on the pallet were correct in shape, location, and orientation, as compared to a pallet model generated using the microcomputer-based CADKEY CAD program. Cadd.c, a C language program developed for this research, extracts object area, perimeter, centroid, and principal angle from the CADKEY model for comparison to counterparts generated by the vision system. This off-line approach to supplying known parameters to the vision system was chosen over the traditional "teach by showing" method to take advantage of existing CAD data and to avoid disruption of the production system. The actual comparison of model and image data was performed by a program written in VPL, the resident language of the GE Optomation II Vision System. The comparison program relies on another short VPL program to obtain a pixel/inch ratio which equates the disparate units of the two systems. Model parameters are passed to the vision system via hardware and software links developed as part of this research. Three C language programs enable the host computer to communicate commands and parameters, and receive program results from the vision system. Preliminary testing of the system revealed that object location and surface texture, lighting conditions, and pallet background all affected the image parameter calculations and hence the comparison process. / Master of Science
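As a hedged sketch of the four comparison parameters the abstract names (the thesis implemented this in C and VPL on the GE Optomation II; the OpenCV-based Python below is a modern stand-in, and the synthetic test image is an assumption):

    # Minimal sketch: compute area, perimeter, centroid and principal angle
    # of the largest blob in a binary image -- the quantities compared
    # against the CADKEY model parameters in this work.
    import cv2
    import numpy as np

    def part_parameters(binary):
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        c = max(contours, key=cv2.contourArea)
        m = cv2.moments(c)
        area = m["m00"]
        perimeter = cv2.arcLength(c, closed=True)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        # Principal angle from the second-order central moments.
        angle = 0.5 * np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"])
        return area, perimeter, (cx, cy), angle

    binary = np.zeros((200, 200), np.uint8)           # synthetic stand-in pallet
    cv2.rectangle(binary, (50, 80), (150, 120), 255, -1)
    print(part_parameters(binary))

A pixel/inch ratio obtained from a calibration object of known size, as in the thesis, would convert these image-space values to model units before comparison.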
58

Human-Inspired Robotic Hand-Eye Coordination

Unknown Date (has links)
My thesis covers the design and fabrication of novel humanoid robotic eyes and the process of interfacing them with the industrial robot Baxter. The mechanism can reach a maximum saccade velocity comparable to that of human eyes. Unlike current robotic eye designs, these eyes have independent left-right and up-down gaze movements achieved using a servo and DC motor, respectively. A potentiometer and rotary encoder enable closed-loop control. An Arduino board and motor driver control the assembly. The motor requires a 12V power source, and all other components are powered through the Arduino from a PC. Hand-eye coordination research influenced how the eyes were programmed to move relative to Baxter's grippers. Different modes were coded to adjust eye movement based on the durability of what Baxter is handling. Tests were performed on a component level as well as on the full assembly to prove functionality. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
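The closed-loop control the abstract describes runs on an Arduino with a motor driver; purely as a hedged illustration of the idea (the plant model, gains and loop rate below are invented for simulation, not taken from the thesis), a saccade to a target pan angle under encoder feedback might look like:

    # Toy simulation of encoder-feedback position control of the eye's DC
    # motor. Only the 12 V supply limit comes from the abstract; the motor
    # model and PD gains are assumptions made for this sketch.
    angle, velocity = 0.0, 0.0           # encoder-reported pan angle (deg), deg/s
    target = 20.0                        # commanded saccade target (deg)
    kp, kd, dt = 8.0, 0.6, 0.001         # PD gains and a 1 kHz control loop

    for step in range(2000):
        error = target - angle
        command = kp * error - kd * velocity      # voltage command to the driver
        command = max(-12.0, min(12.0, command))  # clamp to the 12 V supply
        velocity += (40.0 * command - 5.0 * velocity) * dt  # first-order motor model
        angle += velocity * dt

    print(f"settled at {angle:.2f} deg")  # should land near the 20 deg target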
59

Vision based localization and trajectory tracking of nonholonomic mobile robots

January 2014 (has links)
Localization is one of the most difficult and costly problems in mobile robotics. Fusing vision with odometry/AHRS (Attitude and Heading Reference System: three-axis gyroscopes, accelerometers and magnetometers) sensors has become a prevalent localization strategy in recent years, owing to its low cost and its effectiveness in GPS-denied environments. In this thesis, a new adaptive estimation algorithm is proposed that estimates the robot position by fusing monocular vision with odometry/AHRS sensors and exploiting the properties of perspective projection. With the new method, the robot can be localized in real time in GPS-denied, mapless environments, and the localization results are theoretically proven to converge to their true values. Compared to other methods, our algorithm is simple to implement and suitable for parallel processing. To achieve real-time performance, the algorithm is implemented in parallel on a GPU (Graphics Processing Unit), so it can easily be integrated into mobile-robot tasks such as navigation and motion control that need real-time localization information. Simulations and experiments were conducted to validate the convergence and long-term robustness of the proposed real-time localization algorithm. / With the developed vision-based localization method as a position estimator, a new controller for trajectory tracking of a nonholonomic wheeled robot is proposed that does not require direct position measurement. Unlike most existing visual servo controllers for mobile robots, the nonholonomic motion constraint is fully taken into account. It is proved by Lyapunov theory that the proposed adaptive visual servo controller achieves asymptotic tracking of a desired trajectory and convergence of the position estimate to the actual position. Experiments on a wheeled robot validate the effectiveness and robustness of the proposed controller. / Adopting a similar idea, the vision-based localization method is again embedded into a trajectory tracking controller, this time for an underactuated water-surface robot. It is again proved by Lyapunov theory that the proposed adaptive visual servo controller achieves asymptotic tracking of a desired trajectory and convergence of the position estimate to the actual position. Experiments on an underactuated water-surface robot validate the effectiveness and robustness of the proposed controller. / The contribution of this thesis can be summarized as follows: first, a novel localization algorithm based on the fusion of monocular vision and AHRS/odometry sensors is proposed. Second, with this localization method embedded as a position estimator, a new controller for visually servoed trajectory tracking of a nonholonomic wheeled robot is developed. Finally, adopting a similar strategy, the thesis proposes a new controller for visually servoed trajectory tracking of an underactuated water-surface robot without direct position measurement.
/ Wang, Kai. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2014. / Includes bibliographical references (leaves 93-100). / Abstracts also in Chinese. / Title from PDF title page (viewed on December 20, 2016).
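As a hedged illustration of the tracking problem this entry addresses: the sketch below is the classical Kanayama kinematic tracking controller for the unicycle model, not the thesis's adaptive visual servo law (which additionally estimates position online); the gains and the circular reference trajectory are assumptions.

    # Unicycle trajectory tracking under the nonholonomic constraint
    # (no sideways motion), with a Lyapunov-stable kinematic control law.
    import numpy as np

    kx, ky, kth = 1.0, 4.0, 2.0          # tracking gains (assumed)
    dt, T = 0.01, 20.0
    x, y, th = 0.5, -0.5, 0.0            # initial pose, offset from the reference

    for k in range(int(T / dt)):
        t = k * dt
        # Circular reference: unit radius, angular rate 0.2 rad/s.
        xr, yr, thr = np.cos(0.2 * t), np.sin(0.2 * t), 0.2 * t + np.pi / 2
        vr, wr = 0.2, 0.2
        # Pose error expressed in the robot frame.
        ex = np.cos(th) * (xr - x) + np.sin(th) * (yr - y)
        ey = -np.sin(th) * (xr - x) + np.cos(th) * (yr - y)
        eth = np.arctan2(np.sin(thr - th), np.cos(thr - th))
        # Kanayama-style law; Lyapunov arguments of the kind used in the
        # thesis show (ex, ey, eth) converge asymptotically to zero.
        v = vr * np.cos(eth) + kx * ex
        w = wr + vr * (ky * ey + kth * np.sin(eth))
        # Integrate the unicycle kinematics.
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt

    print(f"final tracking error: {np.hypot(xr - x, yr - y):.4f} m")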
60

A rule-based drawing robot.

January 1999 (has links)
by Tang Kai Hung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references. / Abstracts in English and Chinese. / Contents:
  Acknowledgements (p. vi)
  Abstract (p. 1)
  1 Introduction
    1.1 Motivation (p. 3)
    1.2 Objective (p. 7)
    1.3 Outline (p. 9)
  2 Color Identification
    2.1 Grabbing (p. 11)
    2.2 Digital Image Representation (p. 13)
    2.3 Color Segmentation (p. 15)
      2.3.1 Fuzzy Rule-Based Method (p. 15)
      2.3.2 Fuzzy Clustering Method (p. 20)
    2.4 Conclusion (p. 25)
  3 Shape Recognition
    3.1 Labeling (p. 29)
      3.1.1 Pre-processing (p. 29)
      3.1.2 Connected Components (p. 30)
    3.2 Blob Analysis (p. 33)
      3.2.1 Characteristic Values (p. 33)
      3.2.2 Corner Detection (p. 35)
    3.3 Type Classification (p. 37)
      3.3.1 Standard Blob (p. 37)
      3.3.2 Non-standard Object (p. 39)
    3.4 Flow Chart (p. 39)
    3.5 Point Generation (p. 42)
      3.5.1 Draw the Boundary (p. 42)
      3.5.2 Filling in Color by Lines (p. 48)
    3.6 Conclusion (p. 50)
  4 Drawing
    4.1 Difficulties & Remedies (p. 54)
      4.1.1 Data Transmission Difficulty (p. 54)
      4.1.2 Robot Drawing Plane (p. 56)
    4.2 Coordinates Conversion (p. 59)
    4.3 Quantitative Performance Measure (p. 64)
    4.4 Conclusion (p. 66)
  5 Conclusions & Future Works (p. 69)
  Appendix
  Bibliography
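One step in this contents list that lends itself to a short sketch is the fuzzy clustering color segmentation (section 2.3.2). The following is a hedged, generic fuzzy c-means on pixel colors, not the thesis's implementation; the RGB feature space, cluster count and fuzzifier are assumptions:

    # Generic fuzzy c-means: each pixel gets a graded membership in every
    # color cluster rather than a hard label.
    import numpy as np

    def fuzzy_c_means(X, c=3, m=2.0, iters=50, seed=0):
        """Cluster rows of X; return (centers, membership matrix U of shape (n, c))."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)             # memberships sum to 1 per pixel
        for _ in range(iters):
            W = U ** m                                # fuzzified memberships
            centers = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
            U = 1.0 / (d ** (2 / (m - 1)))            # standard FCM membership update
            U /= U.sum(axis=1, keepdims=True)
        return centers, U

    pixels = np.random.default_rng(1).random((500, 3))  # stand-in for image RGB data
    centers, U = fuzzy_c_means(pixels)
    labels = U.argmax(axis=1)                           # defuzzified segmentation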
