  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Rail Platform Obstacle Detection Using LabVIEW Simulation

Tang, Shengjie January 2015 (has links)
With the rapid development of the rail transportation industry, rail transit has become an increasingly popular component of urban public transport systems, but obstacles fallen from the platform onto the track pose a serious hidden danger. As an enclosed public transport system, rail transit concentrates crowds both on board and on the platform, and although railway is the safest form of land transportation, an accident can still produce many casualties. Several conventional obstacle-detection approaches exist in platform monitoring systems, such as stereo vision, thermal scanning, and vision metric scanning; however, these traditional systems cannot meet the demand of detecting obstacles on the track within the platform area. In this thesis, the author designs a system at the platform based on laser sensors, virtual instrument technology, and image processing (machine vision) to increase the efficiency of the detection system. The system helps guarantee the safety of a rail vehicle entering the platform by detecting obstacles that have fallen from the platform onto the track, with a positive impact on traffic safety and the protection of human life. The author used LabVIEW to create a simulation environment in which input blocks represent the functionalities of the system, including simulated train detection and fallen-object detection. This thesis focuses mainly on fallen-object detection: using a 2D image processing method, the system can detect, before a rail vehicle enters the platform, whether any obstacle has fallen onto the track within the platform area, simultaneously categorize the size of the obstacle(s), and then raise an alarm to deliver the results.
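The detect-then-categorize step described in this abstract can be sketched with a simple frame-difference detector. This is a hedged illustration only, not the thesis's LabVIEW implementation; the function name, difference threshold, and size cutoffs are all hypothetical:

```python
import numpy as np

def detect_obstacles(reference, frame, diff_thresh=30, small_max=50, medium_max=200):
    """Compare the current camera frame to an empty-track reference image
    and categorize any detected obstacle by its pixel area."""
    diff = np.abs(frame.astype(int) - reference.astype(int))
    mask = diff > diff_thresh                # pixels that changed
    area = int(mask.sum())
    if area == 0:
        return "clear", area
    if area <= small_max:
        return "small", area
    if area <= medium_max:
        return "medium", area
    return "large", area

# Synthetic 2D grayscale images: empty track vs. track with a fallen object.
ref = np.zeros((100, 100), dtype=np.uint8)
cur = ref.copy()
cur[40:50, 40:50] = 255                      # a 10x10 fallen object
label, area = detect_obstacles(ref, cur)
print(label, area)                           # → medium 100
```

In a real deployment the raw difference mask would need filtering against lighting changes and sensor noise before sizing the blobs, but the categorize-then-alarm logic follows this shape.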
2

Stereo imaging and obstacle detection methods for vehicle guidance

Zhao, Jun, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW January 2008 (has links)
With modern-day computing power, intelligent vehicles are fast becoming a reality. An intelligent vehicle is a vehicle equipped with sensors and computing hardware that allow it to perceive the world around it and decide on appropriate action. Vision cameras are a good choice for sensing the environment, and one key task of the camera in an intelligent vehicle is to detect and localise obstacles, a prerequisite for path planning. Stereo-vision-based obstacle detection is used in this research: it does not analyse the semantic meaning of image features but directly measures the 3-D coordinates of image pixels, and is therefore suitable for obstacle detection in an unknown environment. A novel correlation-based stereo vision method is developed that greatly improves accuracy while maintaining real-time performance. Since a vision system provides a large amount of data, extracting refined information can be complex. In obstacle detection, the purpose is to distinguish obstacle pixels from ground pixels in the disparity image. The V-disparity image approach is used in this research to detect the ground plane; however, it relies heavily on sufficient road features. This research therefore develops a correlation method to locate the ground plane in the disparity image even without significant road features. Moreover, traditional V-disparity images have difficulty detecting non-flat ground, which limits their applications, so this research also develops a method to detect non-flat ground using V-disparity images, greatly widening their application.
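The V-disparity image central to this abstract is just a per-row histogram of disparity values; a flat ground plane then appears as a straight line. A minimal sketch of its construction, with a synthetic disparity image standing in for real stereo output:

```python
import numpy as np

def v_disparity(disp, max_disp):
    """Accumulate, for each image row, a histogram of disparity values.
    A flat ground plane shows up as a straight line in this image."""
    h, _ = disp.shape
    vdisp = np.zeros((h, max_disp + 1), dtype=int)
    for v in range(h):
        row = disp[v]
        vdisp[v] = np.bincount(row[row >= 0], minlength=max_disp + 1)[:max_disp + 1]
    return vdisp

# Synthetic disparity image: disparity grows linearly with row index,
# as it does for a flat ground plane seen from a forward-facing camera.
h, w, max_disp = 8, 16, 10
disp = np.tile(np.arange(h).reshape(-1, 1), (1, w))  # row v has disparity v
vd = v_disparity(disp, max_disp)
print(vd[3, 3])   # → 16  (all 16 pixels of row 3 vote for disparity 3)
```

Ground-plane detection then reduces to finding the dominant line in `vd` (e.g. with a Hough transform); pixels whose disparity sits well above that line are obstacle candidates.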
3

Monocular Obstacle Detection for Moving Vehicles

Lalonde, Jeffrey R. 18 January 2012 (has links)
This thesis presents a 3D reconstruction approach to the detection of static obstacles from a single rear-view parking camera. Corner features are tracked to estimate the vehicle’s motion and to perform multiview triangulation in order to reconstruct the scene. We model the camera motion as planar motion and use knowledge of the camera pose to efficiently solve for the motion parameters. Based on the observed motion, we select snapshots from which the scene is reconstructed. These snapshots guarantee a sufficient baseline between the images and result in more robust scene modeling. Multiview triangulation of a feature is performed only if the feature obeys the epipolar constraint. Triangulated features are semantically labelled according to their 3D location. Obstacle features are spatially clustered to reduce false detections. Finally, the distance to the nearest obstacle cluster is reported to the driver.
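The epipolar gate this abstract applies before triangulation can be sketched as a simple check that a correspondence satisfies x2ᵀ F x1 ≈ 0. This is an illustration, not the thesis's code; the fundamental matrix and tolerance below are hypothetical:

```python
import numpy as np

def satisfies_epipolar(x1, x2, F, tol=1e-6):
    """Accept a feature correspondence for triangulation only if it
    obeys the epipolar constraint x2^T F x1 ≈ 0 (homogeneous coords)."""
    x1h = np.append(x1, 1.0)
    x2h = np.append(x2, 1.0)
    return abs(x2h @ F @ x1h) < tol

# Hypothetical fundamental matrix for a pure horizontal translation:
# corresponding points must then lie on the same image row.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
print(satisfies_epipolar([10.0, 5.0], [14.0, 5.0], F))  # True: same row
print(satisfies_epipolar([10.0, 5.0], [14.0, 9.0], F))  # False: rows differ
```

With real, noisy tracks the residual is compared against a pixel-scale threshold rather than an exact zero, but the gating logic is the same.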
7

Obstacle detection and tracking using a 2D scanning laser sensor

Habermann, Danilo 27 July 2010 (has links)
An obstacle detection and tracking system using a 2D laser sensor and the Kalman filter is presented. The filter is not very effective when severe disturbances occur in the measured position of the obstacle, for instance when a tracked object passes behind a barrier, briefly interrupting the laser beam and making it impossible to obtain position information from the sensor. This work proposes a method to minimize this problem using an algorithm called the Corrector of Discrepancies.
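The dropout problem this abstract targets can be illustrated with a constant-velocity Kalman filter that simply coasts on its prediction while the beam is blocked. This is a hedged stand-in for the thesis's Corrector of Discrepancies, whose actual mechanism is not given here; the noise parameters are illustrative:

```python
import numpy as np

def track(measurements, q=1e-3, r=0.1):
    """1-D constant-velocity Kalman filter; when a measurement is
    missing (None), skip the update and coast on the prediction alone."""
    A = np.array([[1.0, 1.0], [0.0, 1.0]])   # position advances by velocity
    H = np.array([[1.0, 0.0]])               # we only measure position
    x = np.zeros(2)
    P = np.eye(2)
    out = []
    for z in measurements:
        x = A @ x                             # predict
        P = A @ P @ A.T + q * np.eye(2)
        if z is not None:                     # update only when the laser
            S = H @ P @ H.T + r               # beam is not blocked
            K = (P @ H.T) / S
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return out

# Target moves one unit per step; the laser is occluded for three steps.
zs = [0, 1, 2, 3, 4, None, None, None, 8, 9]
est = track(zs)
print(est[4], est[7])   # estimate keeps advancing toward ~7 despite the gap
```

Because the velocity state is already locked in before the occlusion, the prediction-only steps stay close to the true trajectory until measurements resume.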
8

Obstacle detection using stereo vision for unmanned ground vehicles

Olsson, Martin January 2009 (has links)
No description available.
9

Monocular Vision-Based Obstacle Detection for Unmanned Systems

Wang, Carlos January 2011 (has links)
Many potential indoor applications exist for autonomous vehicles, such as automated surveillance, inspection, and document delivery. A key requirement for autonomous operation is that the vehicles be able to detect and map obstacles in order to avoid collisions. This work develops a comprehensive 3D scene reconstruction algorithm, based on known vehicle motion and vision data, that is specifically tailored to the indoor environment. Visible-light cameras are one of the many sensors available for capturing information from the environment; their key advantages over other sensors are that they are lightweight, power-efficient, and cost-effective, and provide abundant information about the scene. The emphasis on 3D indoor mapping enables the assumption that a large majority of the area to be mapped consists of planar surfaces such as floors, walls, and ceilings, which can be exploited to simplify the complex task of dense reconstruction of the environment from monocular vision data. In this thesis, the Planar Surface Reconstruction (PSR) algorithm is presented. It extracts surface information from images and combines it with 3D point estimates to generate a reliable and complete environment map. It was designed for single cameras, with the primary assumptions that the objects in the environment are flat, static, and chromatically unique. The algorithm finds and tracks Scale Invariant Feature Transform (SIFT) features across a sequence of images to calculate 3D point estimates. Individual surface information is extracted using a combination of the Kuwahara filter and mean shift segmentation, and is then coupled with the 3D point estimates to fit these surfaces into the environment map. The resulting map consists of both surfaces and points that are assumed to represent obstacles in the scene. A ground vehicle platform was developed for real-time implementation of the algorithm, and experiments were conducted to assess it.
Both clean and cluttered scenarios were used to evaluate the quality of the surfaces generated by the algorithm. The clean scenario satisfies the primary assumptions underlying the PSR algorithm and consequently produced accurate surface details of the scene, while the cluttered scenario generated lower-quality, but still promising, results. These findings show that incorporating object surface recognition into dense 3D reconstruction can significantly improve the overall quality of the environment map.
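The "fit these surfaces" step in the abstract amounts to attaching a plane to a cluster of triangulated 3D points. One common way to do that (a sketch under the assumption that the surface is not vertical, not necessarily the PSR algorithm's exact method) is a least-squares fit of z = ax + by + c:

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to 3-D feature points,
    one simple way to attach a planar surface to triangulated points."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs  # (a, b, c)

# Points sampled from the plane z = 2x - y + 3 (e.g. a wall segment).
rng = np.random.default_rng(0)
xy = rng.uniform(-5, 5, size=(50, 2))
z = 2 * xy[:, 0] - xy[:, 1] + 3
a, b, c = fit_plane(np.column_stack([xy, z]))
print(round(a, 6), round(b, 6), round(c, 6))  # → 2.0 -1.0 3.0
```

With noisy triangulated points a robust variant (e.g. RANSAC around this fit) would be used so that outlier features do not skew the surface.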
10

Optical flow templates for mobile robot environment understanding

Roberts, Richard Joseph William 08 June 2015 (has links)
In this work we develop optical flow templates, a practical tool for inferring robot egomotion and semantic superpixel labeling from optical flow in imaging systems with arbitrary optics. In doing so, we offer the robotics and computer vision communities a valuable understanding of the geometric relationships and mathematical methods involved in interpreting optical flow. This work is motivated by what we perceive as directions for advancing the current state of the art in obstacle detection and scene understanding for mobile robots. Specifically, many existing methods build 3D point clouds, which are not directly useful for autonomous navigation and require further processing; both building the point clouds and the later processing steps are challenging and computationally intensive. Additionally, many current methods require a calibrated camera, which introduces calibration challenges and places limitations on the types of camera optics that may be used: wide-angle lenses, systems with mirrors, and multi-camera rigs all require different calibration models and can be difficult or even impossible to calibrate. Finally, current pixel and superpixel obstacle-labeling algorithms typically rely on image appearance; while appearance is informative, image motion is a direct effect of the scene structure that determines whether a region of the environment is an obstacle. The egomotion estimation and obstacle labeling methods we develop here, based on optical flow templates, require very little computation per frame and do not require building point clouds, any specific type of camera optics, or a calibrated camera, and they label obstacles using optical flow alone, without image appearance. In this thesis we start with optical flow subspaces for egomotion estimation and detection of “motion anomalies”.
We then extend this to multiple subspaces and develop mathematical reasoning to select between them, comprising optical flow templates. Using these we classify environment shapes and label superpixels. Finally, we show how performing all learning and inference directly from image spatio-temporal gradients greatly improves computation time and accuracy.
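The subspace idea in this abstract can be sketched in a few lines: project the observed flow field onto the span of a flow-template basis and treat the residual as the anomaly signal. This is an illustrative toy, not the thesis's formulation; the one-column "radial expansion" basis below is hypothetical:

```python
import numpy as np

def flow_residual(flow, basis):
    """Project an observed optical-flow vector (stacked u,v components)
    onto the span of the flow-template basis; the residual norm flags
    motion that no egomotion within the template can explain."""
    coeffs, *_ = np.linalg.lstsq(basis, flow, rcond=None)
    return np.linalg.norm(flow - basis @ coeffs)

# Toy 1-D basis: pure forward translation makes flow expand radially.
x = np.linspace(-1, 1, 5)
radial = np.concatenate([x, x])          # (u, v) both scale with position
basis = radial[:, None]                  # single-column template basis
consistent = 3.0 * radial                # flow explained by the template
anomalous = consistent.copy()
anomalous[2] += 2.0                      # an independently moving pixel
print(flow_residual(consistent, basis) < 1e-9)   # True
print(flow_residual(anomalous, basis) > 0.5)     # True
```

A real template would have several basis columns (one per egomotion degree of freedom), and the residual would be evaluated per superpixel rather than over the whole image, but the projection-and-residual structure is the same.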
