1

Cooperative Perception for Connected Autonomous Vehicle Edge Computing System

Chen, Qi 08 1900 (has links)
This dissertation first conducts a study on raw-data-level cooperative perception for enhancing the detection ability of self-driving systems in connected autonomous vehicles (CAVs). A LiDAR (Light Detection and Ranging) point-cloud-based 3D object detection method is deployed to improve detection performance by expanding the effective sensing area, capturing critical information in multiple scenarios, and improving detection accuracy. In addition, a point-cloud-feature-based cooperative perception framework is proposed for an edge computing system serving CAVs; the intrinsically small size of the features is exploited to achieve real-time edge computing without running the risk of congesting the network. To distinguish small objects such as pedestrians and cyclists in 3D data, an end-to-end multi-sensor fusion model is developed for 3D object detection from multi-sensor data. Experiments show that by solving perception on camera and LiDAR jointly, the detection model leverages the advantages of high-resolution images and the physical-world mapping of LiDAR data, leading the KITTI 3D object detection benchmark. Finally, an application of cooperative perception is deployed on the edge to heal the live map for autonomous vehicles. Through 3D reconstruction and multi-sensor fusion detection, experiments on a real-world dataset demonstrate that a high-definition (HD) map on the edge can provide well-sensed local data for CAV navigation.
2

Multi-sensor Information Fusion for Classification of Driver's Physiological Sensor Data

Barua, Shaibal January 2013 (has links)
Physiological sensor signal analysis is common practice in the medical domain for the diagnosis and classification of various physiological conditions. Clinicians frequently use physiological sensor signals to diagnose an individual's psychophysiological parameters, i.e., stress, tiredness, fatigue, etc. However, parameters obtained from physiological sensors can vary with an individual's age, gender, physical condition, etc., and analyzing data from a single sensor can mislead the diagnosis. One proposition is that sensor signal fusion can provide a more reliable and efficient outcome than data from a single sensor, and fusion is becoming significant in numerous diagnostic fields, including medical diagnosis and classification. Case-Based Reasoning (CBR) is another well-established and recognized method in the health sciences. Here, an entropy-based algorithm, Multivariate Multiscale Entropy analysis, has been selected to fuse multiple sensor signals. Other physiological sensor measurements are also taken into consideration for system evaluation. A CBR system is proposed to classify 'healthy' and 'stressed' persons using both the fused features and other physiological features, i.e., Heart Rate Variability (HRV), Respiratory Sinus Arrhythmia (RSA), and Finger Temperature (FT). The evaluation and performance analysis of the system have been carried out, and the results of classification based on data fusion and physiological measurements are presented in this thesis.
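The entropy-based fusion this abstract describes builds on sample entropy, a score of signal irregularity; multiscale and multivariate variants extend it across time scales and sensor channels. A minimal single-channel sketch (the function name and parameter values are illustrative, not taken from the thesis):

```python
import math

def sample_entropy(signal, m=2, r=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    sequences matching for m points (within tolerance r, Chebyshev
    distance) also match for m + 1 points. Lower = more regular."""
    n = len(signal)

    def matches(length):
        # Compare the same n - m templates for both lengths, as in
        # the standard SampEn definition.
        templates = [signal[i:i + length] for i in range(n - m)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

A strictly alternating signal scores near zero while noise scores higher; a multiscale version applies this to coarse-grained copies of the signal, and the multivariate version builds joint templates across sensor channels.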
3

FUSION OF VIDEO AND MULTI-WAVEFORM FMCW RADAR FOR TRAFFIC SURVEILLANCE

Gale, Nicholas C. 19 September 2011 (has links)
No description available.
4

Self-Powered Intelligent Traffic Monitoring Using IR Lidar and Camera

Tian, Yi 06 February 2017 (has links)
This thesis presents a novel self-powered infrastructural traffic monitoring approach that estimates traffic information by combining three detection techniques. The traffic information obtained from the presented approach includes vehicle counts, speed estimates, and vehicle classification by size. Two categories of sensors are used: an IR Lidar and an IR camera. With these two sensors, three detection techniques are applied: Time-of-Flight (ToF) based, vision based, and laser-spot-flow based. Each technique outputs an observation of vehicle location at each time step. By fusing the three observations in a Kalman filter framework, vehicle location is estimated, from which the other traffic information of interest (vehicle counts, speed, and class) is derived. In this process, high reliability is achieved by combining the strengths of each technique. To achieve self-powering, a dynamic power management strategy is developed to reduce the system's total energy cost and optimize the power supply based on traffic pattern recognition: the power manager adjusts the power supply by reconfiguring the system setup according to its estimate of the current traffic condition. A system prototype has been built, and multiple field experiments and simulations were conducted to demonstrate traffic monitoring accuracy and power reduction efficacy. / Master of Science
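The fusion step outlined above can be sketched as a constant-velocity Kalman filter in which each technique's location observation is applied as a separate scalar update. The noise values and the 1-D lane coordinate are illustrative assumptions, not the thesis's actual parameters:

```python
class FusionKalman1D:
    """Constant-velocity Kalman filter along one lane coordinate.

    State x = [position, velocity]; each detection technique supplies a
    scalar position observation z with its own variance r, applied as a
    sequential update (valid for independent sensor noises)."""

    def __init__(self, q=0.1):
        self.x = [0.0, 0.0]                  # position, velocity
        self.P = [[10.0, 0.0], [0.0, 10.0]]  # initial uncertainty
        self.q = q                           # process-noise intensity

    def predict(self, dt):
        p, v = self.x
        self.x = [p + v * dt, v]
        a, b, c, d = self.P[0][0], self.P[0][1], self.P[1][0], self.P[1][1]
        # P <- F P F^T + Q  with F = [[1, dt], [0, 1]], Q = q * I
        self.P = [[a + dt * (b + c) + dt * dt * d + self.q, b + dt * d],
                  [c + dt * d, d + self.q]]

    def update(self, z, r):
        # Scalar measurement z = position + noise, i.e. H = [1, 0]
        s = self.P[0][0] + r                 # innovation variance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s
        innov = z - self.x[0]
        self.x = [self.x[0] + k0 * innov, self.x[1] + k1 * innov]
        a, b, c, d = self.P[0][0], self.P[0][1], self.P[1][0], self.P[1][1]
        # P <- (I - K H) P
        self.P = [[(1 - k0) * a, (1 - k0) * b],
                  [c - k1 * a, d - k1 * b]]
```

Each frame, one `predict` is followed by one `update` per technique; a more accurate technique (smaller `r`) pulls the estimate harder.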
5

Safe human-robot interaction based on multi-sensor fusion and dexterous manipulation planning

Corrales Ramón, Juan Antonio 21 July 2011 (has links)
This thesis presents several new techniques for developing safe and flexible human-robot interaction tasks in which human operators cooperate with robotic manipulators. The contributions fall into two fields: safety strategies that modify the normal behavior of the robotic manipulator when the human operator is near the robot, and dexterous manipulation tasks for in-hand manipulation of objects with a multi-fingered robotic hand installed at the end-effector of a robotic manipulator. / Funded by the Valencian Government through the research project "Infraestructura 05/053", and by the Spanish Ministry of Education and Science through the pre-doctoral grant AP2005-1458 and the research projects DPI2005-06222 and DPI2008-02647, which constitute the research framework of this thesis.
6

Automatic geo-referencing by integrating camera vision and inertial measurements

Randeniya, Duminda I. B 01 June 2007 (has links)
An alternative sensor system to the inertial measurement unit (IMU) is essential for intelligent land navigation when the vehicle travels in a GPS-deprived environment. The sensor system used to update the IMU for a reliable navigation solution must be passive, depending on no outside signal. This dissertation presents the results of an effort to integrate position and orientation data from vision and inertial sensors. Information from a sequence of images, captured by a monocular camera attached to a survey vehicle at a maximum frequency of 3 frames per second, was used to correct the inertial system installed in the same vehicle for its inherent error accumulation. Specifically, the rotations and translations estimated from point correspondences tracked through a sequence of images were used in the integration. Two types of tasks must be performed for such an effort. The first is calibration: estimating the intrinsic properties of the vision sensors (cameras), such as the focal length and lens distortion parameters, and determining the transformation between the camera and inertial systems. Calibrating the two-sensor system indoors does not provide an appropriate and practical transformation for outdoor maneuvers, owing to unavoidable differences between outdoor and indoor conditions. Moreover, custom calibration objects are not feasible in outdoor operational conditions, because the larger field of view would require relatively large calibration objects. Hence calibration becomes a critical issue, particularly if the integrated system is used in Intelligent Transportation Systems applications. The calibration must be performed before the integration process in order to successfully estimate the rotations and translations from the vision system. The second task is the effective fusion of the inertial and vision sensor systems.
The automated algorithm that identifies point correspondences in images enables use in real-time autonomous driving maneuvers. To verify the accuracy of the established correspondences, independent constraints such as epipolar lines and correspondence flow directions were used, and a pre-filter was employed to smooth out the noise associated with the vision sensor (camera) measurements. A novel approach was used to obtain the geodetic coordinates (latitude, longitude, and altitude) from the normalized translations determined by the vision sensor. Finally, the position estimates based on the vision sensor were integrated with those of the inertial system in a decentralized format using a Kalman filter. The vision/inertial integrated position estimates compare successfully with (1) the output of the inertial/GPS system and (2) an actual survey performed on the same roadway. This comparison demonstrates that vision can indeed be used to supplement the inertial measurements during potential GPS outages. The derived intrinsic properties and the transformation between the individual sensors were also verified during two separate test runs on an actual roadway section.
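The epipolar-line check mentioned above can be sketched as follows: given a fundamental matrix F relating two views, a correspondence (x, x') is accepted only if x' lies close to the epipolar line F·x. The gate value and the example matrix below are illustrative assumptions:

```python
import math

def epipolar_distance(F, x, xp):
    """Distance (in pixels) from point xp in image 2 to the epipolar
    line F @ x induced by point x in image 1 (points homogeneous)."""
    line = [sum(F[i][k] * x[k] for k in range(3)) for i in range(3)]
    return abs(sum(line[k] * xp[k] for k in range(3))) / math.hypot(line[0], line[1])

def accept(F, x, xp, gate=1.5):
    """Keep a correspondence only if it satisfies the epipolar constraint."""
    return epipolar_distance(F, x, xp) <= gate
```

For a pure sideways camera translation with identity calibration, F reduces to the skew matrix of the translation, and the constraint simply says corresponding points share the same image row.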
7

Improved detection and tracking of objects in surveillance video

Denman, Simon Paul January 2009 (has links)
Surveillance networks are typically monitored by a few people viewing several monitors displaying the camera feeds. It is then very difficult for a human operator to effectively detect events as they happen. Recently, computer vision research has begun to address ways to automatically process some of this data to assist human operators. Object tracking, event recognition, crowd analysis and human identification at a distance are being pursued as a means to aid human operators and improve the security of areas such as transport hubs. The task of object tracking is key to the effective use of more advanced technologies. To recognize an event, people and objects must be tracked. Tracking also enhances the performance of tasks such as crowd analysis or human identification. Before an object can be tracked, it must be detected. Motion segmentation techniques, widely employed in tracking systems, produce a binary image in which objects can be located. However, these techniques are prone to errors caused by shadows and lighting changes. Detection routines often fail, either due to erroneous motion caused by noise and lighting effects, or due to the detection routines being unable to split occluded regions into their component objects. Particle filters can be used as a self-contained tracking system, making it unnecessary for detection to be carried out separately except for an initial (often manual) detection to initialise the filter. Particle filters use one or more extracted features to evaluate the likelihood of an object existing at a given point in each frame. Such systems, however, do not easily allow multiple objects to be tracked robustly, and do not explicitly maintain the identity of tracked objects. This dissertation investigates improvements to the performance of object tracking algorithms through improved motion segmentation and the use of a particle filter.
A novel hybrid motion segmentation / optical flow algorithm, capable of simultaneously extracting multiple layers of foreground and optical flow in surveillance video frames, is proposed. The algorithm is shown to perform well in the presence of adverse lighting conditions, and the optical flow is capable of extracting a moving object. The proposed algorithm is integrated within a tracking system and evaluated using the ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO - Evaluation for video understanding) database, and significant improvement in detection and tracking performance is demonstrated when compared to a baseline system. A Scalable Condensation Filter (SCF), a particle filter designed to work within an existing tracking system, is also developed. The creation and deletion of modes and the maintenance of identity are handled by the underlying tracking system, which in turn benefits from the improved performance in uncertain conditions, arising from occlusion and noise, that a particle filter provides. The system is evaluated using the ETISEO database. The dissertation then investigates fusion schemes for multi-spectral tracking systems. Four fusion schemes for combining a thermal and a visual colour modality are evaluated using the OTCBVS (Object Tracking and Classification in and Beyond the Visible Spectrum) database. It is shown that a middle fusion scheme yields the best results and demonstrates a significant improvement in performance when compared to a system using either mode individually. Findings from the thesis contribute to improving the performance of semi-automated video processing and therefore improving security in areas under surveillance.
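A bootstrap particle filter of the kind this abstract builds on can be sketched in a few lines: diffuse particles through a motion model, weight them by the observation likelihood of an extracted feature, and resample. The 1-D position state and Gaussian likelihood below are simplifying assumptions for illustration, not the SCF itself:

```python
import math
import random

def pf_step(particles, observation, motion_std=0.5, obs_std=1.0, rng=random):
    """One predict-weight-resample cycle of a bootstrap particle filter
    tracking a 1-D object position."""
    # Predict: propagate each hypothesis through a random-walk motion model.
    moved = [p + rng.gauss(0.0, motion_std) for p in particles]
    # Weight: Gaussian likelihood of the observed feature location.
    weights = [math.exp(-0.5 * ((observation - p) / obs_std) ** 2) for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set in proportion to the weights.
    return rng.choices(moved, weights=weights, k=len(moved))
```

Repeated steps concentrate the particle cloud on the observed object; a multi-object extension, as the abstract notes, additionally needs mode management and identity maintenance.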
8

Non-linear state error based extended Kalman filters with applications to navigation

Barrau, Axel 15 September 2015 (has links)
The present thesis explores the use of non-linear state errors to devise extended Kalman filters (EKFs). First, we depart from the theory of invariant observers on Lie groups and propose a more general yet simpler framework that yields non-linear error variables with the novel, unexpected property of being governed by a (partially) linear differential equation. This result is leveraged to ensure local stability of the invariant EKF (IEKF) under standard observability assumptions, when extended to this class of (non-invariant) systems. Applications to real industrial problems, in partnership with the company SAGEM, illustrate the remarkable performance gap over the conventional EKF. A second route is to turn the noise on and consider the invariant errors as stochastic processes. Convergence in law of the error to a fixed probability distribution, independent of the initialization, is obtained if the error with noise turned off is globally convergent, which in turn allows gains to be chosen in advance by optimizing the asymptotic error distribution. The last route consists in stepping back a little and exploring general EKFs (beyond the Lie group case) relying on a non-linear state error. Novel global mathematical properties are derived. In particular, these methods are shown to remedy the famous problem of false observability created by the EKF when applied to simultaneous localization and mapping (SLAM).
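The key property this abstract exploits, that a suitable non-linear (group) error obeys dynamics independent of the estimated trajectory, can be checked numerically on SE(2). Below, two experiments start from different poses but share the same odometry inputs and the same initial discrepancy; the group error η = X̂⁻¹X evolves identically in both, unlike a naive component-wise error. This is an illustrative sketch, not code from the thesis:

```python
import math

def pose(x, y, th):
    """Homogeneous 3x3 matrix of an SE(2) pose."""
    c, s = math.cos(th), math.sin(th)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv(t):
    """Inverse of an SE(2) pose: (R, p) -> (R^T, -R^T p)."""
    c, s, x, y = t[0][0], t[1][0], t[0][2], t[1][2]
    return [[c, s, -(c * x + s * y)], [-s, c, s * x - c * y], [0.0, 0.0, 1.0]]

def group_error(x_hat, x_true):
    """Non-linear state error eta = X_hat^{-1} X."""
    return mul(inv(x_hat), x_true)

inputs = [pose(1.0, 0.0, 0.1)] * 5  # identical odometry increments
disc = pose(0.2, -0.1, 0.05)        # identical initial estimation error

def run(start):
    x, x_hat = start, mul(start, disc)
    for u in inputs:                # left-invariant dynamics X <- X U
        x, x_hat = mul(x, u), mul(x_hat, u)
    return group_error(x_hat, x)
```

Since η = (U₁…Uₖ)⁻¹ D⁻¹ (U₁…Uₖ) depends only on the inputs and the discrepancy D, the error trajectory is the same no matter where the vehicle actually is, which is what makes gain tuning trajectory-independent.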
9

Guaranteed Localization and Mapping for Autonomous Vehicles

Wang, Zhan 19 October 2018 (has links)
With the rapid development and broad application of robot technology, research on intelligent mobile robots has been scheduled in the high-technology development plans of many countries. Autonomous navigation plays an increasingly important role in this research field, and localization and map building are the core problems a robot must solve to achieve it. Probabilistic techniques (such as the Extended Kalman Filter and the Particle Filter) have long been used to solve the robotic localization and mapping problem. Despite their good performance in practical applications, they can suffer from inconsistency in non-linear, non-Gaussian scenarios. This thesis studies interval analysis based methods for the robotic localization and mapping problem. Instead of making hypotheses on the probability distribution, all sensor noises are assumed to be bounded within known limits. On this foundation, the thesis formulates localization and mapping in the framework of the Interval Constraint Satisfaction Problem (ICSP) and applies consistent interval techniques to solve them in a guaranteed way. To deal with the "uncorrected yaw" problem encountered by Interval Constraint Propagation (ICP) based localization approaches, the thesis proposes a new ICP algorithm for real-time vehicle localization, which employs a low-level consistency algorithm and is capable of correcting heading uncertainty. The thesis then presents an interval analysis based SLAM algorithm (IA-SLAM) dedicated to a monocular camera. Bounded-error parameterization and undelayed initialization of natural landmarks are proposed. The SLAM problem is formed as an ICSP and solved via interval constraint propagation techniques. A shaving method for landmark uncertainty contraction and an ICSP graph based optimization method are put forward to improve the obtained result, and a theoretical analysis of mapping consistency illustrates the strength of IA-SLAM. Moreover, based on the proposed IA-SLAM algorithm, the thesis presents a low-cost and consistent approach for outdoor vehicle localization. It works in a two-stage framework (visual teach and repeat) and is validated with a car-like vehicle equipped with dead-reckoning sensors and a monocular camera.
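The interval constraint propagation this abstract relies on can be illustrated with a single forward-backward contractor: a box known to contain the vehicle is tightened by the constraint that its distance to a beacon lies in a measured interval. The beacon layout and the assumption that the box lies up-right of the beacon (all offsets non-negative) are illustrative simplifications:

```python
import math

def isect(a, b):
    """Intersection of two intervals (lo, hi); raises if empty."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if lo > hi:
        raise ValueError("empty intersection: constraints are inconsistent")
    return (lo, hi)

def contract_range(box, beacon, rng):
    """Forward-backward contraction of box = ((x_lo, x_hi), (y_lo, y_hi))
    under dist(position, beacon) in rng, assuming the box lies to the
    upper-right of the beacon."""
    bx, by = beacon
    x, y = box
    dx, dy = (x[0] - bx, x[1] - bx), (y[0] - by, y[1] - by)
    sx, sy = (dx[0] ** 2, dx[1] ** 2), (dy[0] ** 2, dy[1] ** 2)
    # Forward: s = dx^2 + dy^2 must agree with the squared range interval.
    s = isect((sx[0] + sy[0], sx[1] + sy[1]), (rng[0] ** 2, rng[1] ** 2))
    # Backward: push the tightened sum back onto each square, then each offset.
    sx = isect(sx, (s[0] - sy[1], s[1] - sy[0]))
    sy = isect(sy, (s[0] - sx[1], s[1] - sx[0]))
    dx = isect(dx, (math.sqrt(max(sx[0], 0.0)), math.sqrt(sx[1])))
    dy = isect(dy, (math.sqrt(max(sy[0], 0.0)), math.sqrt(sy[1])))
    return ((dx[0] + bx, dx[1] + bx), (dy[0] + by, dy[1] + by))
```

Chaining such contractors over all constraints (odometry, landmarks) until a fixed point is the essence of solving an ICSP; the result is guaranteed to still contain the true pose whenever the noise bounds hold.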
10

Multi-sources fusion based vehicle localization in urban environments under a loosely coupled probabilistic framework

Wei, Lijun 17 July 2013 (has links)
In some dense urban environments (e.g., a street with tall buildings around), the vehicle localization provided by a Global Positioning System (GPS) receiver might be inaccurate or even unavailable due to signal reflection (multi-path) or poor satellite visibility. To improve the accuracy and robustness of assisted navigation systems, and thus guarantee driving security and service continuity on the road, this thesis presents a vehicle localization approach that exploits the redundancy and complementarity of multiple sensors. First, GPS localization is complemented by onboard dead-reckoning (DR) (inertial measurement unit, odometer, gyroscope), stereovision based visual odometry, horizontal laser range finder (LRF) based scan alignment, and a 2D GIS road network map based map-matching method to provide a coarse vehicle pose estimate. A sensor selection step validates the coherence of the observations from the multiple sensors; only the information provided by the validated sensors is combined under a loosely coupled probabilistic framework with an information filter. Then, if the GPS receiver encounters long-term outages, the accumulated localization error of the DR-only method is bounded by adding a GIS building map layer. Two onboard LRF systems (one horizontal, one vertical) are mounted on the roof of the vehicle and used to detect building facades in the urban environment. The detected facades are projected onto the 2D ground plane and associated with the GIS building map layer to correct the vehicle pose error, especially the lateral error. The facade landmarks extracted from the vertical LRF scan are stored in a new GIS map layer. The proposed approach is tested and evaluated on real data sequences. Experimental results show that the fusion of the stereoscopic system and the LRF can continue to localize the vehicle during short GPS outages and correct GPS positioning errors such as GPS jumps; the road map helps obtain an approximate estimate of the vehicle position by projecting it onto the corresponding road segment; and the integration of building information helps refine the initial pose estimate when GPS signals are lost for a long time.
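The loosely coupled information-filter fusion with a sensor-selection gate can be sketched for a single coordinate: each validated source contributes its information (inverse variance), and sources inconsistent with the consensus are discarded first. The gate value and the sensor figures in the example are illustrative assumptions:

```python
import math

def info_fuse(estimates):
    """Fuse independent (mean, variance) estimates in information form:
    information Y = sum(1/var) and information state y = sum(mean/var)
    simply add; the fused mean is y / Y with variance 1 / Y."""
    Y = sum(1.0 / var for _, var in estimates)
    y = sum(mean / var for mean, var in estimates)
    return y / Y, 1.0 / Y

def validated_fuse(estimates, gate=3.0):
    """Sensor selection: drop estimates more than `gate` standard
    deviations away from the median before fusing the survivors."""
    med = sorted(m for m, _ in estimates)[len(estimates) // 2]
    kept = [(m, v) for m, v in estimates if abs(m - med) <= gate * math.sqrt(v)]
    return info_fuse(kept)
```

The additive information form is what makes the loose coupling convenient: a sensor that fails validation is simply left out of the sums for that cycle, and the fused variance is always at least as tight as the best surviving sensor's.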
