About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Vision Based Station-Keeping for the Unmanned Underwater Vehicle

Lee, Chen-wei 01 August 2008 (has links)
Station-keeping is an important capability of an unmanned underwater vehicle in a variety of missions, including surveillance and the inspection and repair of undersea pipelines. Station-keeping control comprises two parts: motion estimation and the station-keeping control system. In this thesis we propose a monocular vision system for determining the motion of an unmanned underwater vehicle. The vehicle is equipped with a down-looking camera, which provides images of the sea floor. The motion of the vehicle is estimated with a feature-based mosaicking method, which requires the extraction and matching of relevant features. We designed a visual servo control system for maintaining the position of the vehicle relative to a visual landmark while maintaining a fixed depth.
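Below is a minimal sketch of the feature-based frame-to-frame motion estimation idea this abstract describes. The thesis does not specify the detector or matcher; ORB features, brute-force matching, and a RANSAC homography via OpenCV are illustrative assumptions.

```python
import cv2
import numpy as np

def estimate_motion(prev_frame, curr_frame):
    """Estimate planar motion between two grayscale down-looking frames."""
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(prev_frame, None)
    kp2, des2 = orb.detectAndCompute(curr_frame, None)
    if des1 is None or des2 is None:
        return None
    # Brute-force Hamming matching with cross-checking for reliability.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < 4:  # a homography needs at least 4 correspondences
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H  # H[0, 2] and H[1, 2] approximate the drift in pixels
```

Accumulating these frame-to-frame homographies yields the mosaic-based position estimate that a station-keeping controller can regulate against.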
2

Camera Based Navigation : Matching between Sensor reference and Video image

Olgemar, Markus January 2008 (has links)
Aircraft navigation typically relies on an Inertial Navigation System (INS) and a Global Navigation Satellite System (GNSS). In navigational warfare the GNSS can be jammed, so a third navigation system is needed. The system investigated in this thesis is camera-based navigation: the position is determined from a video camera and a sensor reference. This thesis addresses the matching between the sensor reference and the video image.

Two methods have been implemented: normalized cross correlation and position determination through a homography. Normalized cross correlation creates a correlation matrix. The other method uses point correspondences between the images to determine a homography between them and obtains a position through that homography. The more point correspondences there are, the better the position determination.

The results have been quite good: the methods found the correct position when the Euler angles of the UAV were known. Of the tested methods, normalized cross correlation performed best.
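A minimal sketch of the two matching methods this abstract describes, using OpenCV. The function names, parameters, and the TM_CCORR_NORMED choice are illustrative assumptions, not the thesis's implementation.

```python
import cv2
import numpy as np

def ncc_position(reference_img, video_frame):
    """Slide the frame over the reference; return the best-match location."""
    # TM_CCORR_NORMED yields the normalized cross correlation matrix.
    corr = cv2.matchTemplate(reference_img, video_frame, cv2.TM_CCORR_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(corr)
    return max_loc, max_val  # top-left corner of the match and its score

def homography_position(frame_pts, ref_pts, frame_center):
    """Map the frame center into reference coordinates via a homography."""
    H, _ = cv2.findHomography(np.float32(frame_pts), np.float32(ref_pts),
                              cv2.RANSAC, 3.0)
    p = H @ np.array([frame_center[0], frame_center[1], 1.0])
    return p[:2] / p[2]  # homogeneous division gives the 2-D position
```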
3

Corner Detection Approach to the Building Footprint Extraction from Lidar Data

Yun, Guan-Chyun 29 January 2008 (has links)
The essential step in constructing 3-D building models of urban areas is extracting the building boundary footprint. In past research, the common procedure for extracting the building footprint has been to apply edge detection, vectorization, and generalization. However, the derived boundary lines occasionally exhibit zigzag patterns, so further footprint regularization is still needed. This study proposes a new approach from the point of view that points, lines, and polygons are the essential elements in reconstructing 3-D building models. The proposed method is based on a "corner detection approach" (CDA) and an "adjustment of building footprints and corner points" (ABFCO) algorithm, applied to Light Detection And Ranging (LiDAR) data or binary classification imagery. This study implements Harris and Local Binary Pattern (LBP) corner detection and then connects all detected points using a convex hull algorithm. However, orthogonal but non-rectangular buildings produce poor outlines after the convex hull step, so this study combines opening and dilation morphology with a find-ignored-point algorithm to repair incorrect connections. Finally, the ABFCO algorithm is applied to the points belonging to the same boundary to generalize a line segment and to determine the intersections and boundary lines of the buildings. The experimental results show that the overall accuracy of LBP corner detection is about 3.5% higher than that of Harris corner detection: it reaches about 92% for rectangular buildings and about 91% for non-rectangular buildings, and its standard deviation of boundary length, 0.29 m, is better than Harris's 0.55 m. We also compared LBP corner detection with edge detection: the overall accuracy of corner detection is about 3% higher, and its standard deviation of boundary length, 0.37 m, beats edge detection's 0.75 m. This study not only demonstrates from the data that corner detection outperforms edge detection, but also shows that the developed ABFCO algorithm helps extract more accurate building footprint lines.
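A minimal sketch of the detect-corners-then-connect idea on a binary building mask (e.g., from classified LiDAR). The Harris parameters and the response threshold are illustrative assumptions; the CDA/ABFCO refinement steps described above are not reproduced.

```python
import cv2
import numpy as np

def footprint_from_corners(building_mask):
    """Detect corners on a binary mask and connect them with a convex hull."""
    response = cv2.cornerHarris(np.float32(building_mask), blockSize=2,
                                ksize=3, k=0.04)
    # Keep strong responses as candidate footprint corners.
    ys, xs = np.where(response > 0.01 * response.max())
    pts = np.column_stack([xs, ys]).astype(np.int32)
    if len(pts) < 3:
        return None
    return cv2.convexHull(pts)  # ordered boundary corners of the hull
```

As the abstract notes, a convex hull cannot represent concave footprints, which is what the morphology and find-ignored-point steps address.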
4

Optimizing Harris Corner Detection on GPGPUs Using CUDA

Loundagin, Justin 01 March 2015 (has links) (PDF)
The objective of this thesis is to optimize the Harris corner detection algorithm on NVIDIA GPGPUs using the CUDA software platform and to measure the performance benefit. The Harris corner detection algorithm, developed by C. Harris and M. Stephens, discovers well-defined corner points within an image. The algorithm is computationally intensive, so real-time performance is difficult to achieve with a sequential software implementation. This thesis decomposes the Harris corner detection algorithm into a set of parallel stages, each of which is implemented and optimized on the CUDA platform. The performance results show that with strategic CUDA optimizations, real-time performance is feasible. The optimized CUDA implementation showed significant speedup over several platforms: standard C, MATLAB, and OpenCV. It was then applied to a feature-matching computer vision system, which likewise showed significant speedup over the other platforms.
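For reference, a minimal NumPy/SciPy sketch of the stages such a decomposition maps to CUDA kernels: gradients, structure-tensor smoothing, and the per-pixel corner response. Parameter values are illustrative assumptions; each stage is independently data-parallel, which is what makes the GPU mapping natural.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_response(gray, k=0.04, sigma=1.0):
    """Sequential reference for the Harris pipeline's parallel stages."""
    g = gray.astype(np.float64)
    # Stage 1: image gradients (one independent pass per axis).
    ix = sobel(g, axis=1)
    iy = sobel(g, axis=0)
    # Stage 2: smooth the structure-tensor products (separable convolutions).
    ixx = gaussian_filter(ix * ix, sigma)
    iyy = gaussian_filter(iy * iy, sigma)
    ixy = gaussian_filter(ix * iy, sigma)
    # Stage 3: per-pixel response R = det(M) - k * trace(M)^2.
    return ixx * iyy - ixy * ixy - k * (ixx + iyy) ** 2
```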
5

Camera Motion Blur And Its Effect On Feature Detectors

Uzer, Ferit 01 September 2010 (has links) (PDF)
Perception, and hence the use of visual sensors, is indispensable in mobile and autonomous robotics. Visual sensors such as cameras rigidly mounted on a robot frame are the most common usage scenario. In this case, the motion of the camera due to the motion of the moving platform, as well as the resulting shocks or vibrations, causes a number of distortions in video frame sequences. The two most important ones are the frame-to-frame changes of the line of sight (LOS) and the presence of motion blur in individual frames. The latter of these two, motion blur, plays a particularly dominant role in determining the performance of many vision algorithms used in mobile robotics. It is caused by the relative motion between the vision sensor and the scene during the exposure time of the frame. Motion blur is clearly an undesirable phenomenon in computer vision, not only because it degrades the quality of images but also because it causes other feature extraction procedures to degrade or fail. Although there are many studies on feature-based tracking, navigation, and object recognition algorithms in the computer vision and robotics literature, there is no comprehensive work on the effects of motion blur on different image features and their extraction. In this thesis, a survey of existing models of motion blur and approaches to motion deblurring is presented. We review recent literature on motion blur and deblurring, and we focus our attention on motion-blur-induced degradation of a number of popular feature detectors. We investigate and characterize this degradation using video sequences captured by the vision system of a mobile legged robot platform. The Harris corner detector, the Canny edge detector, and the Scale Invariant Feature Transform (SIFT) are chosen as the popular feature detectors most commonly used in mobile robotics applications. The performance degradation of these feature detectors due to motion blur is categorized to analyze the effect of legged locomotion on feature performance for perception. These analysis results are a first step towards the stabilization and restoration of video sequences captured by our experimental legged robotic platform, and towards the development of a motion-blur-robust vision system.
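A minimal sketch of the kind of degradation experiment described here: synthesize linear motion blur and compare Harris corner counts before and after. The kernel length and threshold are illustrative assumptions, and the input file name is a placeholder.

```python
import cv2
import numpy as np

def motion_blur(gray, length=15):
    """Simulate horizontal camera motion with a line-shaped blur kernel."""
    kernel = np.zeros((length, length), np.float32)
    kernel[length // 2, :] = 1.0 / length
    return cv2.filter2D(gray, -1, kernel)

def count_harris_corners(gray, rel_thresh=0.01):
    """Count pixels whose Harris response exceeds a relative threshold."""
    r = cv2.cornerHarris(np.float32(gray), 2, 3, 0.04)
    return int((r > rel_thresh * r.max()).sum())

# Hypothetical usage on one frame from the robot's camera:
# sharp = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# print(count_harris_corners(sharp), count_harris_corners(motion_blur(sharp)))
```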
6

Tvorba panoramatických fotografií / Panoramic Photo Creation

Cacek, Pavel January 2015 (has links)
This thesis deals with the problem of automatically composing panoramic photos from individual photographs. It examines, step by step, the algorithms used in creating panoramas and the methods they employ. It also presents the design of a custom panorama construction system based on the methods discussed. The system is implemented using the OpenCV library, with a graphical interface built on the Qt library. Finally, the thesis evaluates the results of the designed and implemented system on available datasets.
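A minimal sketch of panorama composition using OpenCV's high-level Stitcher, which bundles the feature detection, matching, warping, and blending steps such a system implements; the file names are placeholders.

```python
import cv2

images = [cv2.imread(p) for p in ("left.jpg", "middle.jpg", "right.jpg")]
stitcher = cv2.Stitcher_create()
status, panorama = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
```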
7

Vision-based navigation and mapping for flight in GPS-denied environments

Wu, Allen David 15 November 2010 (has links)
Traditionally, the task of determining aircraft position and attitude for automatic control has been handled by the combination of an inertial measurement unit (IMU) with a Global Positioning System (GPS) receiver. In this configuration, accelerations and angular rates from the IMU can be integrated forward in time, and position updates from the GPS can be used to bound the errors that result from this integration. However, reliance on the reception of GPS signals places artificial constraints on aircraft such as small unmanned aerial vehicles (UAVs) that are otherwise physically capable of operation in indoor, cluttered, or adversarial environments. Therefore, this work investigates methods for incorporating a monocular vision sensor into a standard avionics suite. Vision sensors possess the potential to extract information about the surrounding environment and determine the locations of features or points of interest. Having mapped out landmarks in an unknown environment, subsequent observations by the vision sensor can in turn be used to resolve aircraft position and orientation while continuing to map out new features. An extended Kalman filter framework for performing the tasks of vision-based mapping and navigation is presented. Feature points are detected in each image using a Harris corner detector, and these feature measurements are corresponded from frame to frame using a statistical Z-test. When GPS is available, sequential observations of a single landmark point allow the point's location in inertial space to be estimated. When GPS is not available, landmarks that have been sufficiently triangulated can be used for estimating vehicle position and attitude. Simulation and real-time flight test results for vision-based mapping and navigation are presented to demonstrate feasibility in real-time applications. These methods are then integrated into a practical framework for flight in GPS-denied environments and verified through the autonomous flight of a UAV during a loss-of-GPS scenario. The methodology is also extended to the application of vehicles equipped with stereo vision systems. This framework enables aircraft capable of hovering in place to maintain a bounded pose estimate indefinitely without drift during a GPS outage.
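A minimal sketch of the statistical gating idea behind the frame-to-frame correspondence step: accept a candidate match only if the innovation between the predicted and measured feature positions passes a chi-square (Z-type) test. The covariance here is an assumed placeholder; in the thesis it would come from the extended Kalman filter.

```python
import numpy as np

def gate_match(predicted_px, measured_px, S, gate=5.99):
    """Mahalanobis test against the 95% chi-square bound for 2 DOF."""
    nu = np.asarray(measured_px, float) - np.asarray(predicted_px, float)
    d2 = nu @ np.linalg.solve(S, nu)  # squared Mahalanobis distance
    return d2 <= gate

# Example with an assumed 2x2 innovation covariance of 4 px^2 per axis:
S = np.diag([4.0, 4.0])
print(gate_match((120.0, 80.0), (122.5, 81.0), S))  # True: inside the gate
```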
