1

Semi Autonomous Vehicle Intelligence: Real Time Target Tracking For Vision Guided Autonomous Vehicles

Anderson, Jonathan D. 16 March 2007 (has links) (PDF)
Unmanned vehicles (UVs) have seen more widespread use in military, scientific, and civil sectors in recent years. These UVs range from unmanned air and ground vehicles to surface and underwater vehicles. Each of these different UVs has its own inherent strengths and weaknesses, from payload to freedom of movement. Research in this field is growing primarily because of the National Defense Act of 2001, which mandates that one-third of all military vehicles be unmanned by 2015. Research using small UVs, in particular, is growing because small UVs can go places that may be too dangerous for humans. Because of the limitations inherent in small UVs, including power consumption and payload, the selection of lightweight and low-power sensors and processors becomes critical. Low-power CMOS cameras and real-time vision processing algorithms can provide fast and reliable information to the UVs. These vision algorithms often require computational power that limits their use in traditional general-purpose processors running conventional software. The latest developments in field programmable gate arrays (FPGAs) provide an alternative for hardware and software co-design of complicated real-time vision algorithms. By tracking features from one frame to another, it becomes possible to perform many different high-level vision tasks, including object tracking and following. This thesis describes a vision guidance system for unmanned vehicles in general and the FPGA hardware implementation that performs vision tasks in real time. This guidance system uses an object-following algorithm to provide the information that allows the UV to follow a target. The heart of the object-following algorithm is the real-time rank transform, which converts the image into a more robust representation that preserves the edges found in the original image. A minimum sum of absolute differences algorithm is used to determine the best correlation between frames, and the output of this correlation is used to update the tracking of the moving target. Control code can use this information to move the UV in pursuit of a moving target such as another vehicle.
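
The tracking loop described in this abstract lends itself to a short illustration. The sketch below, in Python/NumPy rather than the FPGA fabric the thesis targets, shows the two named building blocks: a rank transform that replaces each pixel with the count of neighbors darker than it (preserving edge structure while discarding absolute intensity), and a minimum sum-of-absolute-differences (SAD) search that matches a template around the previous target position against a local window in the new frame. The window sizes, search radius, and function names are illustrative assumptions, not values from the thesis.

import numpy as np

def rank_transform(image, radius=2):
    # Replace each pixel with the number of neighbors in a (2*radius+1)^2
    # window whose intensity is lower than the center pixel. The result
    # depends only on local intensity ordering, so it is robust to lighting
    # changes while keeping the edges of the original image.
    h, w = image.shape
    win = 2 * radius + 1
    out_h, out_w = h - 2 * radius, w - 2 * radius
    center = image[radius:radius + out_h, radius:radius + out_w]
    out = np.zeros((out_h, out_w), dtype=np.uint8)
    for dy in range(win):
        for dx in range(win):
            if dy == radius and dx == radius:
                continue  # skip the center pixel itself
            neighbor = image[dy:dy + out_h, dx:dx + out_w]
            out += (neighbor < center).astype(np.uint8)
    return out

def sad_track(prev_rank, cur_rank, prev_pos, tsize=16, search=8):
    # Find the target in the current rank-transformed frame by minimizing
    # the sum of absolute differences between a template cut around the
    # previous target position and candidate windows in a local search area.
    y0, x0 = prev_pos
    template = prev_rank[y0:y0 + tsize, x0:x0 + tsize].astype(np.int32)
    best_sad, best_pos = None, prev_pos
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0:
                continue
            cand = cur_rank[y:y + tsize, x:x + tsize].astype(np.int32)
            if cand.shape != template.shape:
                continue  # candidate window falls outside the frame
            sad = np.abs(cand - template).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_pos = sad, (y, x)
    return best_pos

In an FPGA implementation both steps map naturally to streaming pipelines, which is the point the abstract makes about hardware/software co-design; the software version above only conveys the data flow.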
2

An Onboard Vision System for Unmanned Aerial Vehicle Guidance

Edwards, Barrett Bruce 17 November 2010 (has links) (PDF)
The viability of small Unmanned Aerial Vehicles (UAVs) as a stable platform for specific applications has advanced significantly in recent years. The initial focus of lightweight UAV development was to create a craft capable of stable and controllable flight, which is now largely a solved problem. The field has progressed to the point that an unmanned aircraft weighing only a few pounds can be carried in a backpack, launched by hand, and navigate through unrestricted airspace. The most basic use of a UAV is to visually observe the environment and use that information to influence decision making. Previous attempts at using visual information to control a small UAV used an off-board approach in which the video stream from an onboard camera was transmitted down to a ground station for processing and decision making. These attempts achieved limited results because the two-way transmission time introduced unacceptable amounts of latency into time-sensitive control algorithms. Onboard image processing offers a low-latency solution that avoids the negative effects of two-way communication with a ground station. The first part of this thesis will show that onboard visual processing is capable of meeting the real-time control demands of an autonomous vehicle, and will also include an evaluation of potential onboard computing platforms. FPGA-based image processing will be shown to be the ideal technology for lightweight unmanned aircraft. The second part of this thesis will focus on the onboard vision system implementation for two proof-of-concept applications. The first application describes the use of machine vision algorithms to locate and track a target landing site for a UAV; because GPS guidance was insufficient for this task, a vision system was utilized to localize the target site during approach and provide course correction updates to the UAV. The second application describes a feature detection and tracking sub-system that can be used in higher-level application algorithms.
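
The course-correction idea in the first application can be sketched with a simple pinhole-camera approximation: the detected landing target's pixel offset from the image center is scaled by the camera's field of view to give yaw and pitch corrections. The Python sketch below is only illustrative; the resolution, field-of-view values, gains, and function name are assumptions, and the thesis performs the actual processing onboard in hardware.

import math  # only for completeness; the small-angle form below needs no trig

# Assumed camera parameters (not values from the thesis).
IMAGE_W, IMAGE_H = 640, 480          # pixels
HFOV_DEG, VFOV_DEG = 60.0, 45.0      # horizontal / vertical field of view

def course_correction(target_px, target_py):
    # Map the target's pixel coordinates to yaw/pitch corrections in degrees
    # using a small-angle approximation: each pixel of offset from the image
    # center corresponds to a fixed angular fraction of the field of view.
    dx = target_px - IMAGE_W / 2.0   # positive = target right of center
    dy = target_py - IMAGE_H / 2.0   # positive = target below center
    yaw_correction = dx / IMAGE_W * HFOV_DEG
    pitch_correction = dy / IMAGE_H * VFOV_DEG
    return yaw_correction, pitch_correction

# Example: target detected 80 px right of and 30 px below the image center.
yaw, pitch = course_correction(400, 270)
print(f"yaw correction {yaw:+.1f} deg, pitch correction {pitch:+.1f} deg")

Sending these corrections from an onboard processor rather than a ground station is precisely what removes the round-trip latency the abstract identifies as the limiting factor of off-board approaches.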
3

Dense 3D Point Cloud Representation of a Scene Using Uncalibrated Monocular Vision

Diskin, Yakov 23 May 2013 (has links)
No description available.
