  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A Prototype Polarimetric Camera for Unmanned Ground Vehicles

Umansky, Mark 26 August 2013 (has links)
Unmanned ground vehicles increasingly employ a combination of active sensors, such as LIDAR, with passive sensors, such as cameras, to perform all levels of perception, including detection, recognition, and classification. Typical cameras measure the intensity of light at a variety of wavelengths to classify objects in different areas of an image. A polarimetric camera measures not only the intensity of light but also its state of polarization, the orientation that the electric field of the light wave takes as it travels. Identifying the state of polarization can be used to segment highly polarizing areas in a natural environment, such as the surface of water. The polarimetric camera designed and built for this thesis was created with low cost in mind, as commercial polarimetric cameras are very expensive. It uses multiple beam splitters to split incoming light into four machine vision cameras, each behind a linear polarizing filter set to a specific orientation. Using the data from each camera, the Stokes vector can be calculated on a pixel-by-pixel basis to determine which areas of the image are more polarized. Test images of various scenes that included running water, standing water, mud, and vehicles showed promise in using polarization data to highlight and identify areas of interest. This data could be used by a UGV to make more informed decisions in an autonomous navigation mode. / Master of Science
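The per-pixel Stokes computation described in the abstract can be sketched as follows. This is a minimal NumPy sketch assuming the four polarizers are oriented at 0°, 45°, 90°, and 135° (a common choice for linear Stokes imaging; the abstract does not state the actual orientations used):

```python
import numpy as np

def stokes_from_polarizers(i0, i45, i90, i135):
    """Per-pixel linear Stokes parameters from four intensity images taken
    behind linear polarizers at 0/45/90/135 degrees (assumed orientations)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # +45 vs. -45 degree component
    # Degree of linear polarization: 1 for fully polarized, 0 for unpolarized.
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
    aop = 0.5 * np.arctan2(s2, s1)       # angle of polarization (radians)
    return s0, s1, s2, dolp, aop
```

Thresholding the `dolp` image would then highlight strongly polarizing regions such as water surfaces.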
2

Broadband World Modeling and Scene Reconstruction

Goldman, Benjamin Joseph 24 May 2013 (has links)
Perception is a key feature in how any creature or autonomous system relates to its environment. While there are many types of perception, this thesis focuses on improving visual robotic perception systems. By implementing a broadband passive sensing system in conjunction with current perception algorithms, this thesis explores scene reconstruction and world modeling. The process involves two main steps. The first is stereo correspondence using block matching algorithms, with filtering to improve the quality of the matching process. The disparity maps are then transformed into 3D point clouds. These point clouds are filtered again before registration. The registration uses a SAC-IA matching technique to align the point clouds with minimum error. The registered final cloud is then filtered again to smooth and downsample the large amount of data. This process was implemented through a software architecture that utilizes Qt, OpenCV, and the Point Cloud Library. It was tested using a variety of experiments on each component of the process. It shows promise for being able to replace or augment existing UGV perception systems in the future. / Master of Science
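The stereo correspondence step can be illustrated with a naive sum-of-absolute-differences (SAD) block matcher. This is a pedagogical NumPy sketch of the core idea, not the filtered OpenCV pipeline the thesis actually uses, and the window size and disparity range are illustrative defaults:

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=16):
    """Naive SAD block matching on rectified grayscale images: for each left
    pixel, search horizontally in the right image for the best-matching block."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.float32)
            best, best_d = np.inf, 0
            # Candidate disparities, clipped so the window stays in bounds.
            for d in range(min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(np.float32)
                sad = np.abs(patch - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

Each disparity value maps to depth via the camera baseline and focal length, giving the 3D point cloud used in the later registration stage.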
3

Semantic mapping for service robots: building and using maps for mobile manipulators in semi-structured environments

Trevor, Alexander J. B. 08 June 2015 (has links)
Although much progress has been made in the field of robotic mapping, many challenges remain, including efficient semantic segmentation using RGB-D sensors, map representations that include complex features (structures and objects), and interfaces for interactive annotation of maps. This thesis addresses how prior knowledge of semi-structured human environments can be leveraged to improve segmentation, mapping, and semantic annotation of maps. We present an organized connected component approach for segmenting RGB-D data into planes and clusters. These segments serve as input to our mapping approach, which utilizes them as planar landmarks and object landmarks for Simultaneous Localization and Mapping (SLAM), providing information necessary for service robot tasks and improving data association and loop closure. Because these features are meaningful to humans, mapped features can be annotated to establish common ground, simplifying tasking. A modular, open-source software framework, the OmniMapper, is also presented; it allows a number of different sensors and features to be combined into a single map representation and enables easy addition of new feature types.
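The planar-landmark idea rests on extracting plane parameters from segmented 3D points. As a simplified stand-in for the organized connected-component segmentation above (not the thesis's actual algorithm), a least-squares plane fit via SVD illustrates how a segment becomes a landmark; the function names and inlier tolerance are illustrative:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit via SVD: returns a unit normal n and offset d
    such that n . p + d ~= 0 for points p on the plane."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]              # direction of least variance
    d = -normal @ centroid
    return normal, d

def plane_inliers(points, normal, d, tol=0.01):
    """Indices of points within `tol` meters of the plane."""
    return np.where(np.abs(points @ normal + d) < tol)[0]
```

The resulting (normal, offset) pair is the kind of compact landmark a SLAM back end can use for data association and loop closure.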
4

Sensor Fused Scene Reconstruction and Surface Inspection

Moodie, Daniel Thien-An 17 April 2014 (has links)
Optical three-dimensional (3D) mapping routines are used in inspection robots to detect faults by creating 3D reconstructions of environments. To detect surface faults, sub-millimeter depth resolution is required to determine minute differences caused by coating loss and pitting. Sensors that can detect these small depth differences cannot quickly create contextual maps of large environments. To solve the 3D mapping problem, a sensor-fused approach is proposed that can gather contextual information about large environments with one depth sensor and a SLAM routine, while local surface defects can be measured with an actuated optical profilometer. The depth sensor uses a modified Kinect Fusion to create a contextual map of the environment. A custom actuated optical profilometer is created and then calibrated. The two systems are then registered to each other to place local surface scans from the profilometer into a scene context created by Kinect Fusion. The resulting system can create a contextual map of large-scale features (0.4 m) with less than 10% error, while the optical profilometer can create surface reconstructions with sub-millimeter resolution. The combination of the two allows for the detection and quantification of surface faults within a contextual reconstruction. / Master of Science
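Placing a local profilometer scan into the global scene amounts to composing rigid transforms: the sensor pose estimated by the SLAM routine with the calibrated sensor-to-profilometer extrinsic. A minimal sketch of that composition (the frame names and calibration values are assumptions, not the thesis's actual calibration):

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def register_scan(points, T_world_sensor, T_sensor_profilometer):
    """Map profilometer points (N, 3) into the world frame by composing the
    SLAM sensor pose with the sensor-to-profilometer extrinsic."""
    T = T_world_sensor @ T_sensor_profilometer
    homog = np.column_stack([points, np.ones(len(points))])
    return (homog @ T.T)[:, :3]
```

With this composition, every sub-millimeter surface patch inherits a consistent global position inside the coarse Kinect Fusion map.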
5

IRIS: Intelligent Roadway Image Segmentation

Brown, Ryan Charles 23 June 2014 (has links)
The problem of roadway navigation and obstacle avoidance for unmanned ground vehicles has typically required very expensive sensing to work properly. To reduce the cost of sensing, it is proposed that an algorithm be developed that uses a single visual camera to image the roadway, determine where the lane of travel is in the image, and segment that lane. The algorithm would need to be as accurate as current lane finding algorithms as well as faster than a standard k-means segmentation across the entire image. This algorithm, named IRIS, was developed and tested on several sets of roadway images. The algorithm was tested for accuracy and speed, and was found to be better than 86% accurate across all data sets for an optimal choice of algorithm parameters. IRIS was also found to be faster than a k-means segmentation across the entire image, fulfilling the design goals for the algorithm. IRIS is a feasible system for lane identification and segmentation, but it is not yet a viable one: more work is needed to increase the speed of the algorithm, improve the accuracy of lane detection, and extend the inherent lane model to more complex road types. IRIS represents a significant step forward in single-camera roadway perception. / Master of Science
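The whole-image k-means segmentation that IRIS is benchmarked against can be sketched as plain Lloyd's iteration over pixel features. This is a minimal NumPy baseline, not the IRIS algorithm itself; the simple deterministic initialization is an illustrative choice:

```python
import numpy as np

def kmeans(pixels, k=2, iters=20):
    """Plain Lloyd's k-means over an (N, C) array of pixel features.
    Returns (centers, labels)."""
    # Simple deterministic init: k evenly spaced samples from the input.
    centers = pixels[np.linspace(0, len(pixels) - 1, k, dtype=int)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest center.
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers, labels
```

Running this over every pixel of a full frame is exactly the cost IRIS avoids by first narrowing attention to the lane region.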
6

Visual object perception in unstructured environments

Choi, Changhyun 12 January 2015 (has links)
As robotic systems move from well-controlled settings to increasingly unstructured environments, they are required to operate in highly dynamic and cluttered scenarios. Finding an object, estimating its pose, and tracking its pose over time within such scenarios are challenging problems. Although various approaches have been developed to tackle these problems, the scope of objects addressed and the robustness of solutions remain limited. In this thesis, we target robust object perception using visual sensory information, spanning from the traditional monocular camera to the more recently introduced RGB-D sensor, in unstructured environments. Toward this goal, we address four critical challenges to robust 6-DOF object pose estimation and tracking that current state-of-the-art approaches have, as yet, failed to solve. The first challenge is how to increase the scope of objects by allowing visual perception to handle both textured and textureless objects. A large number of 3D object models are widely available in online object model databases, and these object models provide significant prior information including geometric shapes and photometric appearances. We note that using both geometric and photometric attributes available from these models enables us to handle both textured and textureless objects. This thesis presents our efforts to broaden the spectrum of objects to be handled by combining geometric and photometric features. The second challenge is how to dependably estimate and track the pose of an object despite the clutter in backgrounds. Difficulties in object perception grow with the degree of clutter. Background clutter is likely to lead to false measurements, and false measurements tend to result in inaccurate pose estimates. To tackle significant clutter in backgrounds, we present two multiple pose hypotheses frameworks: a particle filtering framework for tracking and a voting framework for pose estimation.
Handling of object discontinuities during tracking, such as severe occlusions, disappearances, and blurring, presents another important challenge. In an ideal scenario, a tracked object is visible throughout the entirety of tracking. However, when an object happens to be occluded by other objects or disappears due to the motions of the object or the camera, difficulties ensue. Because the continuous tracking of an object is critical to robotic manipulation, we propose to devise a method to measure tracking quality and to re-initialize tracking as necessary. The final challenge we address is performing these tasks within real-time constraints. Our particle filtering and voting frameworks, while time-consuming, are composed of repetitive, simple and independent computations. Inspired by that observation, we propose to run massively parallelized frameworks on a GPU for those robotic perception tasks which must operate within strict time constraints.
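The multiple-hypothesis idea behind the particle filtering framework can be illustrated in one dimension: a population of pose hypotheses is diffused, reweighted by a measurement likelihood, and resampled each frame. This toy predict-weight-resample cycle is a sketch of the general technique, not the thesis's 6-DOF GPU tracker, and the Gaussian noise parameters are assumptions:

```python
import numpy as np

def particle_filter_step(particles, weights, measurement,
                         motion_std=0.5, meas_std=1.0, rng=None):
    """One predict-weight-resample cycle of a toy 1-D particle filter."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Predict: diffuse hypotheses with motion noise.
    particles = particles + rng.normal(0.0, motion_std, size=len(particles))
    # Weight: Gaussian likelihood of the measurement under each hypothesis.
    weights = np.exp(-0.5 * ((particles - measurement) / meas_std) ** 2)
    weights /= weights.sum()
    # Resample: draw hypotheses in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

Because each hypothesis is updated by the same simple, independent computation, the step parallelizes naturally, which is the observation motivating the GPU implementation above.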
7

Approche géométrique couleur pour le traitement des images catadioptriques / A geometric-color approach for processing catadioptric images

Aziz, Fatima 11 December 2018 (has links)
This manuscript investigates omnidirectional catadioptric color images as Riemannian manifolds. This geometric representation offers insights into resolving the problems related to the distortions introduced by the catadioptric system, in the context of color perception for autonomous systems. The manuscript starts with an overview of omnidirectional vision, the different systems used, and the geometric projection models. We then present the basic notions and tools of Riemannian geometry and their use in image processing, which leads us to introduce the differential operators on Riemannian manifolds needed in this study. We develop a method for constructing a hybrid metric tensor adapted to color catadioptric images. This tensor has the dual characteristic of depending on both the geometric position of the image points and their photometric coordinates. An important part of this thesis is the exploitation of the proposed metric tensor in catadioptric image processing. Indeed, the Gaussian function is at the core of several filters and operators for various applications, such as noise reduction or the extraction of low-level features from the Gaussian scale-space representation. We thus build a new Gaussian kernel that depends on the Riemannian metric tensor. It has the advantage of being directly applicable on the catadioptric image plane, while also being spatially variant and dependent on local image information. In the final part of this thesis, we discuss robotic applications of the hybrid metric: we define the navigable free space and distance transforms in the omni-image, then extract the geodesic medial axis, a topological representation relevant for autonomous navigation, which we use to define an optimal trajectory planning method.
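The flavor of a Gaussian kernel whose effective metric mixes geometric and photometric distance can be conveyed with a flat-image stand-in: a filter whose per-pixel weights decay with both spatial distance and intensity difference. This sketch is in the spirit of the hybrid geometric-color kernel, not the thesis's actual Riemannian catadioptric construction; the bandwidths are illustrative:

```python
import numpy as np

def hybrid_gaussian_filter(img, radius=2, sigma_s=2.0, sigma_c=0.1):
    """Smooth a grayscale image with a Gaussian whose effective metric
    combines spatial distance and photometric (intensity) distance.
    A flat-image sketch of a hybrid geometric-color kernel."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = img[y0:y1, x0:x1].astype(float)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # Squared "hybrid" distance: spatial term plus photometric term.
            d2 = ((yy - y) ** 2 + (xx - x) ** 2) / sigma_s**2 \
                 + (patch - img[y, x]) ** 2 / sigma_c**2
            wgt = np.exp(-0.5 * d2)
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out
```

Because the weights depend on local image content, the kernel varies across the image, the property the thesis obtains, on the true catadioptric geometry, from the Riemannian metric tensor.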
8

Text Localization for Unmanned Ground Vehicles

Kirchhoff, Allan Richard 16 October 2014 (has links)
Unmanned ground vehicles (UGVs) are increasingly being used for civilian and military applications. Passive sensors, such as visible cameras, are being used for navigation and object detection. An additional object of interest in many environments is text. Text information can supplement the autonomy of unmanned ground vehicles. Text most often appears in the environment in the form of road signs and storefront signs. Road hazard information, unmapped route detours, and traffic information are available to human drivers through road signs. Premade road maps lack these traffic details, but with text localization the vehicle could fill the information gaps. Leading text localization algorithms achieve ~60% accuracy; however, practical applications are cited to require at least 80% accuracy [49]. The goal of this thesis is to test existing text localization algorithms against challenging scenes, identify the best candidate, and optimize it for scenes a UGV would encounter. Promising text localization methods were tested against a custom dataset created to best represent scenes a UGV would encounter. The dataset includes road signs and storefront signs against complex backgrounds. The methods tested were adaptive thresholding, the stroke filter, and the stroke width transform. A temporal tracking proof of concept was also tested: it tracked text through a series of frames in order to reduce false positives. The best results were obtained using the stroke width transform with temporal tracking, which achieved an accuracy of 79%. That level of performance approaches the requirements for practical applications. Without temporal tracking, the stroke width transform yielded an accuracy of 46%. The runtime was 8.9 seconds per image, which is 44.5 times slower than necessary for real-time object tracking. Converting the MATLAB code to C++ and running the text localization on a GPU could provide the necessary speedup. / Master of Science
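The temporal-tracking idea, suppressing detections that do not persist across frames, can be sketched as a simple overlap test against recent frames. This is a hedged sketch of the general technique, not the thesis's proof-of-concept implementation; the IoU threshold and frame count are illustrative:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

def persistent_detections(frames, min_frames=3, min_iou=0.5):
    """Keep only detections from the newest frame that can be matched
    (IoU >= min_iou) in each of the `min_frames` most recent frames.
    `frames` is a list of per-frame lists of boxes, oldest first."""
    if len(frames) < min_frames:
        return []
    kept = []
    for box in frames[-1]:
        ok = all(any(iou(box, prev) >= min_iou for prev in f)
                 for f in frames[-min_frames:-1])
        if ok:
            kept.append(box)
    return kept
```

A detection that flickers into a single frame never accumulates enough history to pass, which is how this filtering trades a little latency for fewer false positives.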
9

Evaluation of probabilistic representations for modeling and understanding shape based on synthetic and real sensory data / Utvärdering av probabilistiska representationer för modellering och förståelse av form baserat på syntetisk och verklig sensordata

Zarzar Gandler, Gabriela January 2017 (has links)
The advancements in robotic perception in recent years have empowered robots to better execute tasks in various environments. The perception of objects in the robot's workspace relies significantly on how sensory data is represented. In this context, 3D models of object surfaces have been studied as a means to provide useful insights into the shape of objects and ultimately enhance robotic perception. This involves several challenges, because sensory data generally presents artifacts, such as noise and incompleteness. To tackle this problem, we employ Gaussian Process Implicit Surfaces (GPIS), a non-parametric probabilistic reconstruction of object surfaces from 3D data points. This thesis investigates different configurations of GPIS as a means of extracting shape information. In our approach we interpret an object's surface as the level set of an underlying sparse Gaussian Process (GP) with a variational formulation. Results show that the variational formulation for sparse GPs enables a reliable approximation to the full GP solution. Experiments are performed on a synthetic and a real sensory data set. We evaluate results by assessing how close the reconstructed surfaces are to the ground-truth correspondences, and how well objects from different categories are clustered based on the obtained representation. Finally, we conclude that the proposed solution derives adequate surface representations to reason about object shape and to discriminate objects based on shape information.
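The core GPIS construction, regressing an implicit function whose zero level set is the surface, can be sketched with exact (non-sparse, non-variational) GP regression. This is a minimal 2D sketch, not the thesis's sparse variational formulation; the convention of training values 0 on the surface, -1 inside, and +1 outside, plus the RBF length scale, are assumptions:

```python
import numpy as np

def rbf(A, B, ls=0.5):
    """Squared-exponential kernel between point sets A (n, d) and B (m, d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gpis_fit(X, y, noise=1e-4):
    """Exact GP regression 'fit': precompute alpha = (K + noise*I)^-1 y for
    implicit-surface targets (0 on the surface, -1 inside, +1 outside)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    return np.linalg.solve(K, y)

def gpis_predict(Xq, X, alpha):
    """Posterior mean of the implicit function at query points Xq (m, d)."""
    return rbf(Xq, X) @ alpha
```

The sign of the posterior mean then classifies query points as inside or outside, and extracting its zero level set recovers the reconstructed surface; the sparse variational version studied in the thesis replaces the full kernel matrix with an inducing-point approximation.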
