1

Modélisation et commande d'un convoi de véhicules urbains par vision / Modeling and control of a convoy of urban vehicles using vision

Avanzini, Pierre 06 December 2010 (has links)
Vehicle platooning is addressed in this thesis with the aim of helping to reduce congestion and pollution in urban areas. The research focuses on the cooperative navigation of a fleet of communicating vehicles and relies on a global control approach: each vehicle is controlled from information shared by the entire fleet, using exact linearization techniques. This manuscript presents two theoretical contributions, each introducing a new navigation functionality, and a third, practical contribution consisting of their implementation. In the first part, a manual navigation mode is investigated in which the first vehicle, guided by an operator, defines and broadcasts the path to be followed by the platoon members. In that case, the numerical representation of the trajectory must be extended without disturbing the portion previously generated, so that the following vehicles servoed on it can be controlled precisely and smoothly. To meet these requirements, the trajectory is modeled with B-Spline curves, and an iterative algorithm extends the curve, according to an optimization criterion, from the successive positions collected by the lead vehicle as it moves. A parametric analysis finally yields an optimal trajectory synthesis in terms of fidelity and stability of the representation. In the second part, a localization strategy based on monocular vision is integrated into the platoon control. The approach relies on a 3D map of the environment built beforehand from a video sequence. Such a virtual world, however, exhibits local distortions with respect to the real metric one, which degrades the performance of the platoon control laws. An analysis of these distortions shows that satisfactory platooning performance can be recovered provided a set of scale factors estimated locally along the reference trajectory is available. Several strategies have therefore been designed to estimate these scale factors online, either from odometric data feeding an observer or from range (telemetric) data integrated in an optimization process. As before, the influence of the parameters has been evaluated to highlight the best configurations for experimental use. Finally, the algorithms developed above have been implemented in full-scale demonstrations involving up to four CyCab and RobuCab vehicles. Particular attention has been paid to the temporal consistency of the data, which are collected asynchronously within the platoon: the NTP protocol synchronizes the vehicle clocks, and the AROCCAM middleware timestamps the data and manages the control scheduling. The vehicle motion model can thus be integrated to obtain an accurate estimate of the platoon state at the instant the control law is evaluated.
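As a rough illustration of the path-representation idea described above, the sketch below fits a B-spline to a handful of leader positions and samples a follower reference pose from it using SciPy's splprep/splev. The waypoints, smoothing factor, and parametric offset are made-up values, and this is not the thesis's iterative extension algorithm.

```python
# Minimal sketch (not the thesis's algorithm): represent a leader path as a
# B-spline and sample a follower reference pose from it.
import numpy as np
from scipy.interpolate import splprep, splev

# Successive (x, y) positions collected from the lead vehicle (hypothetical data)
leader_xy = np.array([
    [0.0, 0.0], [1.0, 0.2], [2.0, 0.8], [3.0, 1.8], [4.0, 3.0], [5.0, 4.0],
])

# Fit a cubic B-spline through the leader positions.
# s > 0 adds smoothing, which helps reject noisy localization fixes.
tck, u = splprep(leader_xy.T, k=3, s=0.01)

# A follower tracking behind the leader can be given a reference pose by
# evaluating the spline (and its tangent) at a shifted parameter value.
u_follower = max(u[-1] - 0.2, 0.0)          # crude parametric offset (assumption)
x_ref, y_ref = splev(u_follower, tck)        # reference position
dx, dy = splev(u_follower, tck, der=1)       # tangent -> reference heading
heading_ref = np.arctan2(dy, dx)

print(f"follower reference: ({x_ref:.2f}, {y_ref:.2f}), heading {heading_ref:.2f} rad")
```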
2

Monocular vision based localization and mapping

Jama, Michal January 1900 (has links)
Doctor of Philosophy / Department of Electrical and Computer Engineering / Balasubramaniam Natarajan / Dale E. Schinstock / In this dissertation, two applications related to vision-based localization and mapping are considered: (1) improving location estimates from a satellite navigation system by using on-board camera images, and (2) deriving position information from a video stream and using it to aid the autopilot of an unmanned aerial vehicle (UAV). The first part of the dissertation presents a method for analyzing bundle adjustment (BA), the minimization process used in stereo-imagery-based 3D terrain reconstruction to refine estimates of camera poses (positions and orientations). Imagery obtained with pushbroom cameras is of particular interest. This work proposes a method to identify cases in which BA does not work as intended, i.e., cases in which the pose estimates returned by BA are no more accurate than the estimates provided by a satellite navigation system because of the existence of degrees of freedom (DOF) in the BA. Using inaccurate pose estimates causes warping and scaling effects in the reconstructed terrain and prevents the terrain from being used in scientific analysis. The main contributions of this part of the work are: 1) the formulation of a method for detecting DOF in BA; and 2) the identification of two camera geometries, commonly used to obtain stereo imagery, that have DOF. This part also presents results demonstrating that avoiding the DOF can give significant accuracy gains in aerial imagery. The second part of the dissertation proposes a vision-based system for UAV navigation: a monocular vision-based simultaneous localization and mapping (SLAM) system, which measures the position and orientation of the camera and builds a map of the environment using the video stream from a single camera. This differs from common SLAM solutions that use depth-measuring sensors such as LIDAR, stereoscopic cameras, or depth cameras. The SLAM solution was built by significantly modifying and extending a recent open-source SLAM solution that is fundamentally different from the traditional approach to solving the SLAM problem. The modifications provide the position measurements necessary for the navigation solution on a UAV while simultaneously building the map, all while maintaining control of the UAV. The main contributions of this part are: 1) an extension of the map-building algorithm that enables it to be used realistically while controlling a UAV and simultaneously building the map; 2) improved performance of the SLAM algorithm at lower camera frame rates; and 3) the first known demonstration of a monocular SLAM algorithm successfully controlling a UAV while simultaneously building the map. This work demonstrates that a fully autonomous UAV that uses monocular vision for navigation is feasible and can be effective in Global Positioning System (GPS)-denied environments.
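For readers unfamiliar with the notion of degrees of freedom in bundle adjustment, the sketch below shows one generic way to expose unconstrained parameter directions: inspect near-zero eigenvalues of the Gauss-Newton Hessian J^T J. It uses a toy Jacobian and is an illustration only, not the dissertation's detection method.

```python
# Illustrative sketch: parameter directions that the residuals do not constrain
# show up as (near-)zero eigenvalues of the Gauss-Newton Hessian J^T J.
import numpy as np

def unconstrained_directions(J, rel_tol=1e-8):
    """Return eigenvalues/eigenvectors of J^T J whose eigenvalues are ~0.

    J : (num_residuals, num_params) Jacobian of the reprojection residuals,
        evaluated at the current BA estimate (hypothetical input here).
    """
    H = J.T @ J
    eigvals, eigvecs = np.linalg.eigh(H)          # ascending eigenvalues
    cutoff = rel_tol * eigvals.max()
    mask = eigvals < cutoff
    return eigvals[mask], eigvecs[:, mask]        # each column = one free direction

# Toy example: a Jacobian that leaves one parameter combination unconstrained.
J = np.array([[1.0, -1.0, 0.0],
              [0.0,  0.0, 1.0],
              [2.0, -2.0, 1.0]])
vals, dirs = unconstrained_directions(J)
print("near-zero eigenvalues:", vals)
print("free directions (columns):\n", dirs)
```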
3

Monocular Vision and Image Correlation to Accomplish Autonomous Localization

Schlachtman, Matthew Paul 01 June 2010 (has links)
For autonomous navigation, robots and vehicles must have accurate estimates of their current state (i.e., location and orientation) within an inertial coordinate frame. If a map is given a priori, the process of determining this state is known as localization. When operating outdoors, localization is often assumed to be a solved problem when GPS measurements are available. However, in urban canyons and other areas where GPS accuracy is degraded, additional techniques using other sensors and filtering are required. This thesis aims to provide one such technique based on monocular vision. First, the system requires that a map be generated, consisting of a set of geo-referenced video images. This map is generated offline, before autonomous navigation is required. When an autonomous vehicle is later deployed, it is equipped with an on-board camera. As the vehicle moves and obtains images, it compares its current images with images from the pre-generated map. This comparison uses an image correlation method developed at Johns Hopkins University by Rob Thompson, Daniel Gianola, and Christopher Eberl. The output of the comparison is fed into a particle filter to provide an estimate of the vehicle location. Experimentation demonstrates the particle filter's ability to successfully localize the vehicle within a small map consisting of a short section of road. Notably, no initial assumption of the vehicle's location within this map is required.
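A minimal sketch of this kind of particle filter follows. The correlation_score() stub stands in for the Johns Hopkins image-correlation code; the motion-noise values, particle count, and the fake measurement model are assumptions made purely so the example runs end to end.

```python
# Minimal particle-filter sketch with a stand-in image-correlation measurement.
import numpy as np

rng = np.random.default_rng(0)

def correlation_score(pose, camera_image, geo_map):
    """Stand-in for the image-correlation measurement (assumption): the real
    score would come from correlating the live camera image with the
    geo-referenced map image near `pose`; here a Gaussian around a
    hypothetical true pose is used so the sketch runs."""
    true_pose = np.array([5.0, 2.0, 0.0])
    return np.exp(-0.5 * np.sum((pose - true_pose) ** 2))

def particle_filter_step(particles, weights, odometry, camera_image, geo_map,
                         motion_noise=(0.1, 0.1, 0.02)):
    # 1. Motion update: propagate each (x, y, heading) particle by odometry plus noise.
    particles = particles + odometry + rng.normal(0.0, motion_noise, particles.shape)
    # 2. Measurement update: reweight particles by their image-correlation score.
    scores = np.array([correlation_score(p, camera_image, geo_map) for p in particles])
    weights = weights * np.maximum(scores, 1e-12)
    weights = weights / weights.sum()
    # 3. Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Toy usage: 200 particles spread over a short stretch of road.
particles = rng.normal([4.0, 1.5, 0.0], [2.0, 2.0, 0.3], size=(200, 3))
weights = np.full(200, 1.0 / 200)
particles, weights = particle_filter_step(particles, weights,
                                          odometry=np.array([0.5, 0.1, 0.0]),
                                          camera_image=None, geo_map=None)
print("pose estimate:", np.average(particles, axis=0, weights=weights))
```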
4

Monocular Vision based Particle Filter Localization in Urban Environments

Leung, Keith Yu Kit 17 September 2007 (has links)
This thesis presents the design and experimental results of a monocular vision based particle filter localization system for urban settings that uses aerial orthoimagery as a reference map. The topics of perception and localization are reviewed, along with their modeling in a probabilistic framework. The computer vision techniques used to create the feature map and to extract features from camera images are discussed. Localization results indicate that the design is viable.
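As a loose illustration of the map-building step mentioned above, the sketch below extracts features from an aerial orthoimage and matches live camera features against them with OpenCV. The choice of ORB features, the geo-referencing arithmetic, and the function names are assumptions for the example, not the thesis's actual pipeline.

```python
# Sketch only: build a feature map from an aerial orthoimage and match
# camera-image features against it (assumes OpenCV is available).
import cv2
import numpy as np

def build_feature_map(ortho_image_path, metres_per_pixel, origin_utm):
    """Return keypoint descriptors plus the map coordinates of each keypoint."""
    ortho = cv2.imread(ortho_image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(ortho, None)
    # Convert pixel locations to map coordinates (hypothetical geo-referencing).
    coords = np.array([(origin_utm[0] + kp.pt[0] * metres_per_pixel,
                        origin_utm[1] - kp.pt[1] * metres_per_pixel)
                       for kp in keypoints])
    return descriptors, coords

def match_to_map(camera_image, map_descriptors):
    """Match features in the current camera image against the map descriptors."""
    orb = cv2.ORB_create(nfeatures=1000)
    _, desc = orb.detectAndCompute(camera_image, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return matcher.match(desc, map_descriptors)   # match quality can feed the filter
```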
5

Neuronal basis of horizontal eye velocity-to-position integration

Debowy, Owen G. 20 January 2007 (has links)
Motion of an image across the retina degrades visual acuity, so eye position must be held stationary. The horizontal eye velocity-to-position neural integrator (PNI), located in the caudal hindbrain of vertebrates, is believed to be responsible, since its neuronal firing rate is sustained and proportional to eye position. The physiological mechanism for PNI function has been envisioned to be either (1) network dynamics within or between the bilateral PNI, including brainstem/cerebellar pathways, or (2) cellular properties of PNI neurons. These hypotheses were investigated by recording PNI neuronal activity in goldfish during experimental paradigms consisting of disconjugacy, commissurectomy, and cerebellectomy. In goldfish, the eye position time constant (τ) is modifiable by short-term (~1 hr) visual feedback training to either drift away from, or towards, the center of the oculomotor range. Although eye movements are yoked in direction and timing, disconjugate motion during τ modification suggested that separate PNIs exist for each eye. Correlation of PNI neural activity with eye position during disconjugacy demonstrated the presence of two discrete neuronal populations exhibiting ipsilateral and conjugate eye sensitivity. During monocular PNI plasticity, τ was differentially modified for each eye, corroborating the coexistence of distinct neuronal populations within the PNI. The hypothesized role of reciprocal inhibitory feedback between the bilateral PNI was tested by commissurectomy. Both sustained PNI activity and τ remained, with a concurrent nasal shift in eye position and a decrease in the oculomotor range. τ modification also was unaffected, suggesting that PNI function is independent of midline connections. The mammalian cerebellum has been suggested to play a dominant role in both τ and τ modification. In goldfish, cerebellar inactivation by either aspiration or pharmacology both prevented and abolished τ modifications, but did not affect eye position holding. PNI neurons still exhibited eye-position-related firing and modulation during training. By excluding all network circuitry either intrinsic or extrinsic to the PNI, these results favor a cellular mechanism as the major determinant of sustained neural activity and eye position holding. By contrast, while cerebellar pathways are important for sustaining a large τ (> 20 s), they are unequivocally essential for τ modification.
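To make the idea of a velocity-to-position integrator with time constant τ concrete, the sketch below simulates a textbook leaky integrator, dE/dt = -E/τ + v. It is a generic abstraction for illustration only, not a model fitted to the recordings described in this thesis, and the pulse amplitude and τ values are made up.

```python
# Leaky-integrator illustration of velocity-to-position integration:
# the eye-position command E integrates the velocity command v while decaying
# toward center with time constant tau. Large tau => stable gaze holding;
# small tau => drift back toward center after a saccade.
import numpy as np

def simulate_integrator(tau, duration=10.0, dt=0.001):
    t = np.arange(0.0, duration, dt)
    E = np.zeros_like(t)
    v = np.zeros_like(t)
    v[(t > 1.0) & (t < 1.05)] = 200.0      # brief saccadic velocity pulse (deg/s)
    for i in range(1, len(t)):
        E[i] = E[i - 1] + (-E[i - 1] / tau + v[i - 1]) * dt
    return t, E

for tau in (0.5, 20.0):                     # seconds; > 20 s behaves like an intact PNI
    t, E = simulate_integrator(tau)
    print(f"tau = {tau:5.1f} s: eye position ~5 s after the pulse = {E[t > 6.0][0]:.2f} deg")
```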
6

Learning with ALiCE II

Lockery, Daniel Alexander 14 September 2007 (has links)
The problem considered in this thesis is the development of an autonomous prototype robot capable of gathering sensory information from its environment, allowing it to provide feedback on the condition of specific targets to aid in the maintenance of hydro equipment. The context for the solution to this problem is the power grid environment operated by the local hydro utility. The intent is to monitor power line structures by travelling along the skywire located at the top of the towers, which provides a view of everything beneath it, including, for example, insulators, conductors, and towers. The contributions of this thesis are a novel robot design with the potential to prevent hazardous situations, and the use of reinforcement learning algorithms modified with rough coverage feedback to establish behaviours. / October 2007
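As a generic illustration of feedback-modified reinforcement learning, the sketch below shows a standard Q-learning update whose reward is scaled by a coverage term in [0, 1]. The rough-set-based coverage computation used in the thesis is not reproduced; the class, parameters, and the shaping rule are invented for the example.

```python
# Generic sketch: tabular Q-learning with a reward scaled by a feedback term.
import random
from collections import defaultdict

class CoverageShapedQLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)          # (state, action) -> value estimate
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, coverage, next_state):
        shaped = reward * coverage           # feedback-modified reward (assumption)
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = shaped + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```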
7

Reconstruction techniques for fixed 3-D lines and fixed 3-D points using the relative pose of one or two cameras

Kalghatgi, Roshan Satish 18 January 2012 (has links)
In general, stereovision can be defined as a two-part problem. The first is the correspondence problem: determining the image point in each image of a set of images that corresponds to the same physical point P. We will call this set of image points N. The second is the reconstruction problem: once a set of image points N corresponding to point P has been determined, N is used to extract three-dimensional information about point P. This master's thesis presents three novel solutions to the reconstruction problem: two techniques for determining the location of a 3-D point and one for determining a line expressed in a three-dimensional coordinate system. These techniques are tested and validated using a unique 3-D finger detection algorithm. The techniques presented are unique because of their simplicity and because they do not require the cameras to be placed in specific locations or orientations or to have specific alignments. On the contrary, it is shown that the techniques presented in this thesis allow the two cameras to assume almost any relative pose, provided that the object of interest is within their field of view. The relative pose of the cameras at a given instant in time, together with basic equations from the perspective image model, is used to form a system of equations that, when solved, reveals the 3-D coordinates of a particular fixed point of interest or the three-dimensional equation of a fixed line of interest. Finally, it is shown that a single moving camera can perform the same line and point detection accomplished by two cameras by altering the pose of the camera. The results presented in this work benefit typical stereovision applications because of their computational ease in comparison with other point and line reconstruction techniques. More importantly, this work allows a single moving camera to perceive three-dimensional position information, which effectively removes the two-camera constraint for a stereo vision system. When used with other monocular cues such as texture or color, the work presented in this thesis could be as accurate as binocular stereo vision at interpreting three-dimensional information. Thus, this work could potentially increase the three-dimensional perception of a robot that normally uses one camera, such as an eye-in-hand robot or a snake-like robot.
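A standard linear (DLT) triangulation routine of the kind this reconstruction problem builds on is sketched below. The intrinsics, poses, and test point are made-up values, and this is the textbook formulation rather than the thesis's own point and line techniques.

```python
# Standard linear (DLT) triangulation of a 3-D point from two camera views.
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Recover the 3-D point seen at pixel uv1 by camera P1 and uv2 by camera P2.

    P1, P2 : 3x4 projection matrices (K @ [R | t]).
    uv1, uv2 : (u, v) pixel coordinates of the same physical point.
    """
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                      # de-homogenize

# Toy setup: identical intrinsics, second camera translated 0.2 m along x.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

X_true = np.array([0.1, 0.05, 1.0, 1.0])     # a point 1 m in front of camera 1
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print("reconstructed:", triangulate_point(P1, P2, uv1, uv2))   # ~ [0.1, 0.05, 1.0]
```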
