  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Monitoring-Camera-Assisted SLAM for Indoor Positioning and Navigation

Zheng, Haoyue January 2021 (has links)
In the information age, intelligent indoor positioning and navigation services are required in many application scenarios. However, most current visual positioning systems cannot function alone and have to rely on additional information from other modules. Nowadays, public places are usually equipped with monitoring cameras, which can be exploited as anchors for positioning, enabling the vision module to work independently. In this thesis, a high-precision indoor positioning and navigation system is proposed that integrates monitoring cameras and smartphone cameras. First, based on feature matching and geometric relationships, the system obtains the transformation scale from relative lengths in the cameras' perspective to actual distances in the floor plan. Second, through scale transformation, projection, rotation, and translation, the user's initial position in the real environment is determined. Then, as the user moves forward, the system continues to track the user and provide correct navigation prompts. The designed system is implemented and tested in different application scenarios. Experiments show that our system achieves a positioning accuracy of 0.46 m and a successful navigation rate of 90.6%, outperforming state-of-the-art schemes by 13% and 3% respectively. Moreover, the system latency is only 0.2 s, which meets real-time demands. In summary, assisted by widely deployed monitoring cameras, our system can provide users with accurate and reliable indoor positioning and navigation services. / Thesis / Master of Applied Science (MASc)
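The initial-position step described in this abstract (scale transformation, projection, rotation, translation) amounts to a 2D similarity transform from the camera's ground-plane frame to floor-plan coordinates. A minimal sketch, with all parameter names and values purely illustrative rather than taken from the thesis:

```python
import math

def to_floor_plan(point, scale, theta, tx, ty):
    """Map a point from the camera's ground-plane frame to floor-plan
    coordinates via scale, rotation, and translation (a 2D similarity)."""
    x, y = point
    xs, ys = scale * x, scale * y                      # scale transformation
    xr = xs * math.cos(theta) - ys * math.sin(theta)   # rotation
    yr = xs * math.sin(theta) + ys * math.cos(theta)
    return xr + tx, yr + ty                            # translation

# Example: a user 2 m ahead in the camera frame, camera rotated 90 degrees
# relative to the floor plan and offset to (5, 3); lands near (5, 5).
print(to_floor_plan((2.0, 0.0), 1.0, math.pi / 2, 5.0, 3.0))
```

In the thesis the scale factor is estimated from feature matching; here it is simply passed in.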
2

Modélisation et commande d'un convoi de véhicules urbains par vision / Modeling and control of a convoy of urban vehicles by vision

Avanzini, Pierre 06 December 2010 (has links)
Vehicle platooning is addressed in this thesis with the societal aim of reducing congestion and pollution in urban areas. The research focuses on cooperative navigation of a fleet of communicating vehicles and relies on a global control approach: each vehicle is controlled from information shared by the entire fleet, using exact linearization techniques. This manuscript presents two theoretical contributions, introducing two new navigation functionalities, and a third, practical contribution consisting of their implementation.
First, a manual navigation mode is introduced in which the lead vehicle, guided by an operator, defines and broadcasts the path to be followed by platoon members. The numerical representation of the trajectory must be extended without disturbing the portion previously generated, so that the entire platoon can precisely and smoothly follow the leader's track. To meet these requirements, the trajectory is modeled with B-spline curves, and an iterative path-building algorithm has been developed from the successive positions collected by the lead vehicle during its motion. A parametric analysis finally resulted in the design of an optimal trajectory with respect to the fidelity of the path representation and its robustness to disturbances that could arise during the creation procedure. Second, a localization strategy relying on monocular vision has been integrated into the platoon control algorithms. The localization approach is based on a learning phase during which a video sequence is used to build a 3D map of the environment. However, such a virtual visual world is slightly distorted with respect to the actual metric one, which affects the performance of the platoon control laws. An analysis of the distortion demonstrated that platooning performance can be recovered, provided that a set of local scale factors attached to the reference trajectory is available. Several strategies were then designed to estimate these scale factors online, either from an observer relying on odometric data or from an optimization process based on telemetric data. As before, intensive simulations were run to evaluate parameter influence and highlight the best configurations to use during experiments. Finally, the above algorithms were implemented in full-scale demonstrators with up to four vehicles (CyCab and RobuCab).
Particular attention was paid to data temporal consistency: since data are collected asynchronously within the platoon, the NTP protocol was used to synchronize the vehicles' clocks, so that the AROCCAM middleware can timestamp vehicle data and manage the control scheduling. Thus, vehicle evolution models can be integrated to obtain an accurate estimate of the platoon state at the time the control laws are evaluated.
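The leader-generated trajectory in this work is modeled with B-spline curves. As a rough illustration of how such a curve is evaluated (this is the standard De Boor recursion, not the thesis's iterative extension algorithm; knots and control points below are made up):

```python
def de_boor(k, x, t, c, p):
    """De Boor's algorithm: evaluate a degree-p B-spline with knot vector t
    and 2D control points c at parameter x, where k satisfies t[k] <= x < t[k+1]."""
    d = [c[j + k - p] for j in range(p + 1)]        # local control points
    for r in range(1, p + 1):                       # triangular blending scheme
        for j in range(p, r - 1, -1):
            a = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = tuple((1 - a) * u + a * v for u, v in zip(d[j - 1], d[j]))
    return d[p]

# A degree-1 B-spline simply interpolates its control polygon:
knots = [0.0, 0.0, 1.0, 2.0, 2.0]
ctrl = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
print(de_boor(1, 0.5, knots, ctrl, 1))  # -> (0.5, 0.5)
```

The appeal of B-splines for a growing leader path is local support: appending control points leaves the previously generated portion of the curve untouched.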
3

Monocular Visual SLAM Based on Inverse Depth Parametrization

Rivero Pindado, Victor January 2010 (has links)
The first objective of this research was to carry out a study of visual SLAM (Simultaneous Localization and Mapping) techniques, specifically the monocular variant, which is less studied than stereo. These techniques are well established in robotics: they reconstruct a map of the robot's environment while maintaining the robot's position within that map. We chose to investigate a method that encodes feature points by the inverse of their depth, measured from the first time each feature was observed. This parametrization permits an efficient and accurate representation of uncertainty during undelayed initialization and beyond, all within the standard extended Kalman filter (EKF). The study was initially to be consolidated by developing an application implementing this method. After various difficulties, it was decided to use a MATLAB platform developed by the author of the SLAM method himself. By then we had developed the calibration, feature extraction, and matching stages; from that point, the application was adapted to the characteristics of our camera and video. We recorded a video with our camera following a known trajectory to check the path computed by the application, corroborating the work and studying the limitations and advantages of this method.
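The inverse-depth encoding described above can be illustrated by the back-conversion from an inverse-depth feature to a Euclidean 3D point: the point is the anchor camera position plus a unit ray (from azimuth/elevation angles) divided by the inverse depth. A sketch under one assumed axis convention; the convention in the actual EKF implementation may differ:

```python
import math

def inverse_depth_to_xyz(x0, y0, z0, theta, phi, rho):
    """Convert an inverse-depth feature (anchor camera position (x0, y0, z0),
    azimuth theta, elevation phi, inverse depth rho) to a Euclidean point:
    p = anchor + (1 / rho) * m(theta, phi). Axis convention is illustrative."""
    m = (math.cos(phi) * math.sin(theta),   # unit ray defined by the angles
         -math.sin(phi),
         math.cos(phi) * math.cos(theta))
    return tuple(a + mi / rho for a, mi in zip((x0, y0, z0), m))

# A feature seen straight ahead (theta = phi = 0) at inverse depth 0.5,
# i.e. 2 m in front of an anchor at the origin:
print(inverse_depth_to_xyz(0.0, 0.0, 0.0, 0.0, 0.0, 0.5))  # -> (0.0, 0.0, 2.0)
```

The benefit of storing rho rather than depth is that very distant (rho near 0) and nearby features are both well behaved under the EKF's Gaussian uncertainty model.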
4

Lane Departure and Front Collision Warning System Using Monocular and Stereo Vision

Xie, Bingqian 24 April 2015 (has links)
Driver assistance systems such as lane departure and front collision warning have attracted great attention for their promising use in road driving. Thus, this research focuses on implementing lane departure and front collision warning at the same time. For the system to be useful in real situations, it is critical that the whole process run in near real time. We therefore chose the Hough transform as the main algorithm for detecting lanes on the road: it is a fast and robust algorithm, which makes it possible to process as many frames per second as possible. The Hough transform extracts lane-boundary information, from which we decide whether the car is departing its lane based on the car's position within it. We then exploit the leading car's symmetry to detect it, and combine this with the CamShift tracking algorithm to bridge gaps when detection fails. Finally, we introduce camera calibration and stereo calibration, and show how to calculate real distances from a depth map.
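The Hough transform step mentioned in this abstract votes each edge pixel into a (rho, theta) accumulator and keeps the strongest bins as line candidates. A minimal sketch on a toy edge set (the dictionary accumulator and parameters are illustrative; real lane detectors such as OpenCV's work on edge images and quantize more carefully):

```python
import math

def hough_lines(edges, n_theta=180, top=1):
    """Vote each edge pixel into (rho, theta-index) accumulator bins and
    return the strongest line(s) as (rho, theta_degrees, votes).
    edges: list of (x, y) edge-pixel coordinates."""
    votes = {}
    for x, y in edges:
        for i in range(n_theta):
            theta = i * math.pi / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[(rho, i)] = votes.get((rho, i), 0) + 1
    ranked = sorted(votes.items(), key=lambda kv: -kv[1])
    return [(rho, i * 180 // n_theta, n) for (rho, i), n in ranked[:top]]

# A perfectly vertical edge at x = 5 puts all 60 votes in one bin
# (rho = 5, theta = 0 degrees), so it dominates the accumulator:
print(hough_lines([(5, y) for y in range(60)]))  # -> [(5, 0, 60)]
```

The robustness the abstract relies on comes from this voting: missing or noisy edge pixels only weaken a bin, they do not break the line.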
6

Asymmetric monocular smooth pursuit performance of people with infantile esotropia /

Zanette, Christopher G. January 2008 (has links)
Thesis (M.Sc.)--York University, 2008. Graduate Programme in Biology. / Typescript. Includes bibliographical references (leaves 87-100). Also available on the Internet. MODE OF ACCESS via web browser by entering the following URL: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR51629
7

Monocular Adaptation of Vestibulo-Ocular Reflex (VOR)

Sehizadeh, Mina January 2005 (has links)
Purpose: This study asks whether active horizontal angular vestibulo-ocular reflex (VOR) gain is capable of monocular adaptation after 4 hours of wearing 10 dioptres (D) of induced anisometropia in healthy human adults. Method: The participants (average age 28 years) wore a contact-lens/spectacle combination for 4 hours. The spectacle power was +5.00 D (images magnified 8.65%) in front of the right eye and −5.00 D (images minified 5.48%) for the left eye, while the contact-lens power equalled each subject's habitual correction summed with the opposite of the spectacle power. Eye and head position data were collected in complete darkness, in one-minute trials before adaptation and every 30 minutes for 2 hours after adaptation. The data, obtained with a video-based eye-tracking system, were analyzed offline using the Fast Fourier Transform in MathCAD 11.1 software to calculate VOR gain, which was compared between the right and left eyes before and after adaptation. Results: In the first post-adaptation trial, a significant decrease in VOR gain (~6%) occurred in the left eye in response to the minifying lens. The right-eye VOR gain did not change significantly in the first post-adaptation trial (~2% decrease). During the remaining trials in the 2-hour follow-up, both eyes showed a significant decrease compared to the baseline trial, which might indicate habituation of the VOR from repeated testing, or fatigue. Conclusion: There was monocular adaptation of the VOR in response to the combined contact lenses and spectacles, but it was incomplete and not as expected. Trying different amounts of anisometropia in one or both directions, a longer adaptation period (more than 4 hours), or monitoring gain for more than 2 hours after adaptation with longer separation between trials might show different results.
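The Fourier-based gain computation described above reduces to comparing eye and head amplitudes at the head-shake frequency. A sketch on synthetic data; the study's actual MathCAD analysis pipeline is not reproduced here, and the signals and frequencies below are made up:

```python
import math

def amplitude_at(signal, freq_hz, fs):
    """Single-bin discrete Fourier amplitude of a real signal at freq_hz,
    sampled at fs Hz (exact when the record holds whole cycles)."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq_hz * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * i / fs) for i, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

def vor_gain(eye, head, freq_hz, fs):
    """VOR gain as the eye/head amplitude ratio at the head-shake frequency."""
    return amplitude_at(eye, freq_hz, fs) / amplitude_at(head, freq_hz, fs)

# Synthetic 1 Hz head shake sampled at 100 Hz; the eye counter-rotates
# at 94% of the head amplitude, so the gain comes out as 0.94:
fs, f = 100, 1.0
t = [i / fs for i in range(1000)]
head = [10.0 * math.sin(2 * math.pi * f * ti) for ti in t]
eye = [9.4 * math.sin(2 * math.pi * f * ti) for ti in t]
print(round(vor_gain(eye, head, f, fs), 2))  # -> 0.94
```

A full FFT yields this ratio for every frequency at once; restricting to the stimulus frequency is enough to illustrate the gain definition.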
9

The effects of losing an eye early in life on face processing /

Kelly, Krista R. January 2008 (has links)
Thesis (M.A.)--York University, 2008. Graduate Programme in Psychology. / Typescript. Includes bibliographical references (leaves 72-94). Also available on the Internet. MODE OF ACCESS via web browser by entering the following URL: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR45951
10

Improved monocular videogrammetry for generating 3D dense point clouds of built infrastructure

Rashidi, Abbas 27 August 2014 (has links)
Videogrammetry is an affordable and easy-to-use technology for spatial 3D scene recovery. When applied to the civil engineering domain, a number of issues have to be taken into account. First, videotaping large-scale civil infrastructure scenes usually results in large video files filled with blurry, noisy, or simply redundant frames. This is often due to a higher frame-rate-to-camera-speed ratio than necessary, camera and lens imperfections, and uncontrolled motions of the camera that result in motion blur. Only a small percentage of the collected video frames is required to achieve robust results, but choosing the right frames is a tough challenge. Second, the point cloud generated by a monocular videogrammetric pipeline is only defined up to scale: the user has to know at least one dimension of an object in the scene to scale up the entire scene. This issue significantly narrows the applications of generated point clouds in the civil engineering domain, since measurement is an essential part of every as-built documentation technology. Finally, due to various reasons, including insufficient coverage during videotaping or the texture-less areas common in most indoor/outdoor civil engineering scenes, the quality of the generated point clouds is sometimes poor. This deficiency appears as outliers or as holes and gaps on the surfaces of point clouds. Several researchers have focused on this particular problem; however, the major issue with all currently existing algorithms is that they treat holes and gaps as part of a smooth surface. This approach is not robust at the intersections of different surfaces or at corners where there are sharp edges. A robust hole/gap-filling algorithm should maintain sharp edges and corners, since they usually contain useful information, specifically for applications in the civil and infrastructure engineering domain.
To tackle these issues, this research presents and validates an improved videogrammetric pipeline for as-built documentation of indoor/outdoor applications in civil engineering. The research consists of three main components: 1. Optimized selection of key frames for processing. It is necessary to choose a number of informative key frames to get the best results from the videogrammetric pipeline. This step is particularly important for outdoor environments, as it is impossible to process the large number of frames in a long video clip. 2. Automated calculation of the absolute scale of the scene. A novel approach for obtaining the absolute scale of point clouds using 2D and 3D patterns is proposed and validated. 3. Point cloud cleaning and filling of holes on the surfaces of generated point clouds. The proposed algorithm fills holes and gaps on point cloud surfaces while maintaining sharp edges. To narrow the scope of the research, the main focus is on two applications: 1. as-built documentation of bridges and buildings as outdoor case studies, and 2. as-built documentation of offices and rooms as indoor case studies. Other potential applications of monocular videogrammetry in the civil engineering domain are out of scope. Three metrics, accuracy, completeness, and processing time, are utilized to evaluate the proposed algorithms.
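The key-frame selection component above can be sketched as a greedy filter that keeps a frame only when it differs sufficiently from the last kept frame. The thesis's actual criterion is an optimized selection, so the mean-absolute-difference proxy and threshold below are purely illustrative:

```python
def select_key_frames(frames, threshold=12.0):
    """Greedy key-frame selection: keep a frame only when it differs enough
    from the last kept frame (mean absolute pixel difference used here as a
    crude proxy for camera motion). frames: equal-length intensity lists."""
    keys = [0]                                  # always keep the first frame
    for i in range(1, len(frames)):
        ref = frames[keys[-1]]
        diff = sum(abs(a - b) for a, b in zip(frames[i], ref)) / len(ref)
        if diff >= threshold:
            keys.append(i)
    return keys

# Toy "video": brightness ramps by 10 per frame, so with a threshold of 25
# only every third frame is informative enough to keep:
video = [[10.0 * f] * 4 for f in range(10)]     # frame f is uniformly 10*f
print(select_key_frames(video, threshold=25.0))  # -> [0, 3, 6, 9]
```

Pruning redundant frames this way directly attacks the first issue the abstract raises: large videos whose frames mostly repeat each other.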
