201.
Multiple sensor fusion for detection, classification and tracking of moving objects in driving environments / Fusion multi-capteur pour la détection, classification et suivi d'objets mobiles en environnement routier. Chavez Garcia, Ricardo Omar, 25 September 2014.
Les systèmes avancés d'assistance au conducteur (ADAS) aident les conducteurs à effectuer des tâches de conduite complexes et à éviter ou atténuer les situations dangereuses. Le véhicule détecte le monde extérieur au moyen de capteurs, puis construit et met à jour un modèle interne de la configuration de l'environnement. La perception du véhicule consiste à établir des relations spatiales et temporelles entre le véhicule et les obstacles statiques et mobiles dans l'environnement. Cette perception se compose de deux tâches principales : la localisation et cartographie simultanées (SLAM) traite de la modélisation des parties statiques, et la détection et le suivi d'objets en mouvement (DATMO) est responsable de la modélisation des parties mobiles de l'environnement. Afin de réaliser un bon raisonnement et contrôle, le système doit modéliser correctement l'environnement. La détection précise et la classification des objets en mouvement sont des aspects essentiels d'un système de suivi d'objets. La classification des objets en mouvement est nécessaire pour déterminer le comportement possible des objets entourant le véhicule, et elle est généralement réalisée au niveau du suivi des objets. La connaissance de la classe des objets en mouvement dès le niveau de la détection peut aider à améliorer leur suivi. La plupart des solutions de perception actuelles ne considèrent les informations de classification que comme une information additionnelle pour la sortie finale de la perception. Aussi, la gestion de l'information incomplète est une exigence importante pour les systèmes de perception. Une information incomplète peut provenir de causes liées aux capteurs, telles que les problèmes de calibrage et les dysfonctionnements, ou de perturbations de la scène, comme les occlusions, les conditions météorologiques et le déplacement des objets. Les principales contributions de cette thèse se concentrent sur l'étape DATMO.
Précisément, nous pensons que l'inclusion de la classe de l'objet comme élément clé de sa représentation, ainsi que la gestion de l'incertitude des détections de plusieurs capteurs, peuvent améliorer les résultats de la tâche de perception. Par conséquent, nous abordons les problèmes de l'association de données, de la fusion de capteurs, de la classification et du suivi à différents niveaux au sein de l'étape DATMO. Même si nous nous concentrons sur un ensemble de trois capteurs principaux (radar, lidar et caméra), nous proposons une architecture modifiable pour inclure d'autres types ou nombres de capteurs. Premièrement, nous définissons une représentation composite de l'objet pour inclure les informations de classe et d'état de l'objet depuis le début de la tâche de perception. Deuxièmement, nous proposons, mettons en œuvre et comparons deux architectures de perception afin de résoudre le problème de DATMO, selon le niveau où l'association des objets, la fusion et la classification des informations sont incluses et appliquées. Nos méthodes de fusion de données sont basées sur la théorie de l'évidence, qui est utilisée pour gérer et inclure l'incertitude de la détection des capteurs et de la classification des objets. Troisièmement, nous proposons une approche d'association de données basée sur la théorie de l'évidence pour établir une relation entre deux listes de détections d'objets. Quatrièmement, nous intégrons nos approches de fusion dans le cadre d'une application véhicule en temps réel. Cette intégration a été réalisée dans un démonstrateur véhicule réel du projet européen interactIVe. Finalement, nous avons analysé et évalué expérimentalement les performances des méthodes proposées. Nous avons comparé nos approches de fusion entre elles et à une méthode de l'état de l'art en utilisant des données réelles de différents scénarios de conduite.
Ces comparaisons se concentrent sur la détection, la classification et le suivi de différents objets en mouvement : piétons, vélos, voitures et camions. / Advanced driver assistance systems (ADAS) help drivers to perform complex driving tasks and to avoid or mitigate dangerous situations. The vehicle senses the external world using sensors and then builds and updates an internal model of the environment configuration. Vehicle perception consists of establishing the spatial and temporal relationships between the vehicle and the static and moving obstacles in the environment. Vehicle perception is composed of two main tasks: simultaneous localization and mapping (SLAM) deals with modelling static parts; and detection and tracking of moving objects (DATMO) is responsible for modelling moving parts in the environment. In order to perform good reasoning and control, the system has to model the surrounding environment correctly. The accurate detection and classification of moving objects is a critical aspect of a moving object tracking system; therefore, multiple sensors are commonly part of an intelligent vehicle system. Classification of moving objects is needed to determine the possible behaviour of the objects surrounding the vehicle, and it is usually performed at tracking level. Knowledge about the class of moving objects at detection level can help improve their tracking. Most of the current perception solutions consider classification information only as aggregate information for the final perception output. Also, management of incomplete information is an important requirement for perception systems. Incomplete information can originate from sensor-related causes, such as calibration issues and hardware malfunctions, or from scene perturbations, like occlusions, weather issues and object shifting. It is important to manage these situations by taking them into account in the perception process.
The main contributions of this dissertation focus on the DATMO stage of the perception problem. Precisely, we believe that including the object's class as a key element of the object's representation, and managing the uncertainty from multiple sensor detections, can improve the results of the perception task, i.e., produce a more reliable list of moving objects of interest represented by their dynamic state and appearance information. Therefore, we address the problems of sensor data association and sensor fusion for object detection, classification, and tracking at different levels within the DATMO stage. Although we focus on a set of three main sensors (radar, lidar, and camera), we propose a modifiable architecture to include other types or numbers of sensors. First, we define a composite object representation to include class information as a part of the object state from early stages to the final output of the perception task. Second, we propose, implement, and compare two different perception architectures to solve the DATMO problem, according to the level where object association, fusion, and classification information is included and performed. Our data fusion approaches are based on the evidential framework, which is used to manage and include the uncertainty from sensor detections and object classifications. Third, we propose an evidential data association approach to establish a relationship between two sources of evidence from object detections. We observe how the class information improves the final result of the DATMO component. Fourth, we integrate the proposed fusion approaches as a part of a real-time vehicle application. This integration has been performed in a real vehicle demonstrator from the interactIVe European project. Finally, we analysed and experimentally evaluated the performance of the proposed methods.
We compared our evidential fusion approaches against each other and against a state-of-the-art method using real data from different driving scenarios. These comparisons focused on the detection, classification and tracking of different moving objects: pedestrians, bikes, cars and trucks.
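The evidential (Dempster-Shafer) fusion this abstract describes can be illustrated with a minimal sketch of Dempster's combination rule. The two mass functions below, a lidar shape classifier and a camera appearance classifier with made-up masses over the frame {pedestrian, bike, car, truck}, are hypothetical inputs, not the thesis's actual data or code:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) with
    Dempster's rule, normalizing out the conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass assigned to the empty set
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

theta = frozenset({"pedestrian", "bike", "car", "truck"})  # frame of discernment
m_lidar = {frozenset({"car", "truck"}): 0.6, theta: 0.4}   # "big object"
m_camera = {frozenset({"car"}): 0.7, theta: 0.3}           # "looks like a car"
m = dempster_combine(m_lidar, m_camera)
```

Combining the two sources concentrates mass on the singleton {car} while the residual mass on the full frame Θ keeps the ignorance explicit, which is what lets an evidential tracker carry class uncertainty from detection level onward.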
202.
Controle de posição com múltiplos sensores em um robô colaborativo utilizando liquid state machines. Sala, Davi Alberto, January 2017.
A ideia de usar redes neurais biologicamente inspiradas na computação tem sido amplamente utilizada nas últimas décadas. O fato essencial neste paradigma é que um neurônio pode integrar e processar informações, e esta informação pode ser revelada por sua atividade de pulsos. Ao descrever a dinâmica de um único neurônio usando um modelo matemático, uma rede pode ser implementada utilizando um conjunto desses neurônios, onde a atividade pulsante de cada neurônio irá conter contribuições, ou informações, da atividade pulsante da rede em que está inserido. Neste trabalho é apresentado um controlador de posição no eixo Z utilizando fusão de sensores baseado no paradigma de Redes Neurais Recorrentes. O sistema proposto utiliza uma Máquina de Estado Líquido (LSM) para controlar o robô colaborativo BAXTER. O framework foi projetado para trabalhar em paralelo com as LSMs que executam trajetórias em formas fechadas de duas dimensões; com o objetivo de manter uma caneta de feltro em contato com a superfície de desenho, dados de sensores de força e distância são alimentados ao controlador. O sistema foi treinado utilizando dados de um controlador Proporcional Integral Derivativo (PID), fundindo dados de ambos os sensores. Resultados mostram que a LSM foi capaz de aprender o comportamento do controlador PID em diferentes situações. / The idea of employing biologically inspired neural networks to perform computation has been widely used over the last decades. The essential fact in this paradigm is that a neuron can integrate and process information, and this information can be revealed by its spiking activity. By describing the dynamics of a single neuron using a mathematical model, a network can be implemented using a set of these neurons, in which the spiking activity of each neuron will contain contributions, or information, from the spiking activity of the network in which it is embedded. A positioning controller based on Spiking Neural Networks for sensor fusion, suitable to run on a neuromorphic computer, is presented in this work.
The proposed framework uses the paradigm of reservoir computing to control the collaborative robot BAXTER. The system was designed to work in parallel with Liquid State Machines that perform trajectories in 2D closed shapes. In order to keep a felt pen touching a drawing surface, data from force and distance sensors are fed to the controller. The system was trained using data from a Proportional Integral Derivative (PID) controller, merging the data from both sensors. The results show that the LSM can learn the behavior of a PID controller in different situations.
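The reservoir-computing idea, training only a linear readout on top of a fixed recurrent network to imitate a PID teacher, can be sketched with a rate-based echo-state reservoir standing in for the spiking LSM. The input signals, PID gains, and reservoir size below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 2, 50                      # inputs: distance error and force
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def run_reservoir(U):
    """Drive a fixed tanh rate reservoir with the input sequence U (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in U:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Teacher: a discrete PID acting on synthetic error and force signals
t = np.linspace(0.0, 10.0, 500)
dt = t[1] - t[0]
err = np.sin(t)                          # hypothetical distance error
force = 0.1 * np.cos(t)                  # hypothetical force reading
kp, ki, kd = 1.0, 0.2, 0.05
pid = kp * err + ki * np.cumsum(err) * dt + kd * np.gradient(err, dt)

X = run_reservoir(np.column_stack([err, force]))
lam = 1e-6                               # ridge regularization
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ pid)
pred = X @ W_out                         # readout mimicking the PID teacher
```

Only the readout `W_out` is trained, which is the defining trait of reservoir computing; a true LSM would replace the tanh units with spiking neuron models and low-pass-filtered spike trains as states.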
203.
Navegação terrestre usando unidade de medição inercial de baixo desempenho e fusão sensorial com filtro de Kalman adaptativo suavizado / Terrestrial navigation using low-grade inertial measurement unit and sensor fusion with smoothed adaptive Kalman filter. Douglas Daniel Sampaio Santana, 01 June 2011.
Apresenta-se o desenvolvimento de modelos matemáticos e algoritmos de fusão sensorial para navegação terrestre usando uma unidade de medição inercial (UMI) de baixo desempenho e o Filtro Estendido de Kalman. Os modelos foram desenvolvidos com base nos sistemas de navegação inercial strapdown (SNIS). O termo baixo desempenho refere-se a UMIs que por si só não são capazes de efetuar o auto-alinhamento por girocompassing. A incapacidade de se navegar utilizando apenas uma UMI de baixo desempenho motiva a investigação de técnicas que permitam aumentar o grau de precisão do SNIS com a utilização de sensores adicionais. Esta tese descreve o desenvolvimento do modelo completo de uma fusão sensorial para a navegação inercial de um veículo terrestre usando uma UMI de baixo desempenho, um hodômetro e uma bússola eletrônica. Marcas topográficas (landmarks) foram instaladas ao longo da trajetória de teste para se medir o erro da estimativa de posição nesses pontos. Apresenta-se o desenvolvimento do Filtro de Kalman Adaptativo Suavizado (FKAS), que estima conjuntamente os estados e o erro dos estados estimados do sistema de fusão sensorial. Descreve-se um critério quantitativo que emprega as incertezas de posição estimadas pelo FKAS para se determinar a priori, dados os sensores disponíveis, o intervalo de tempo máximo que se pode navegar dentro de uma margem de confiabilidade desejada. Conjuntos reduzidos de landmarks são utilizados como sensores fictícios para testar o critério de confiabilidade proposto. Destacam-se ainda os modelos matemáticos aplicados à navegação terrestre, unificados neste trabalho. Os resultados obtidos mostram que, contando somente com os sensores inerciais de baixo desempenho, a navegação terrestre torna-se inviável após algumas dezenas de segundos.
Usando os mesmos sensores inerciais, a fusão sensorial produziu resultados muito superiores, permitindo reconstruir trajetórias com deslocamentos da ordem de 2,7 km (ou 15 minutos) com erro final de estimativa de posição da ordem de 3 m. / This work presents the development of the mathematical models and the algorithms of a sensor fusion system for terrestrial navigation using a low-grade inertial measurement unit (IMU) and the Extended Kalman Filter. The models were developed on the basis of the strapdown inertial navigation systems (SINS). Low-grade designates an IMU that is not able to perform gyrocompassing self-alignment. The impossibility of navigating relying solely on a low-grade IMU is the motivation for investigating techniques to improve the SINS accuracy with the use of additional sensors. This thesis describes the development of a comprehensive model of a sensor fusion for the inertial navigation of a ground vehicle using a low-grade IMU, an odometer and an electronic compass. Landmarks were placed along the test trajectory in order to allow the measurement of the error of the position estimation at these points. The development of the Smoothed Adaptive Kalman Filter (SAKF), which jointly estimates the states and the errors of the estimated states of the sensor fusion system, is presented. A quantitative criterion is also presented, which employs the position uncertainties estimated by the SAKF to determine, given the available sensors, the maximum time interval that one can navigate within a desired reliability margin. Reduced sets of landmarks are used as fictitious sensors to test the proposed reliability criterion. Also noteworthy are the mathematical models applied to terrestrial navigation that were unified in this work. The results show that, relying only on the low-grade inertial sensors, terrestrial navigation becomes impracticable after a few tens of seconds.
Using the same inertial sensors, the sensor fusion produced far better results, allowing the reconstruction of trajectories with displacements of about 2.7 km (or 15 minutes) with a final error of position estimation of about 3 m.
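The core benefit reported here, occasional absolute fixes arresting dead-reckoning drift, can be sketched with a position-only Kalman filter. The biased odometer, noise levels, and landmark schedule below are invented toy values, far simpler than the thesis's strapdown SNIS and SAKF models:

```python
import numpy as np

def predict(x, P, v, psi, dt, q):
    """Dead-reckoning prediction: integrate odometer speed v along compass
    heading psi (position-only state, a toy stand-in for strapdown SINS)."""
    x = x + dt * v * np.array([np.cos(psi), np.sin(psi)])
    P = P + q * np.eye(2)
    return x, P

def update(x, P, z, r):
    """Correct with an absolute position fix (e.g. a surveyed landmark)."""
    S = P + r * np.eye(2)                 # H = I: direct position measurement
    K = P @ np.linalg.inv(S)
    return x + K @ (z - x), (np.eye(2) - K) @ P

rng = np.random.default_rng(42)
dt, psi, v_true = 0.1, 0.3, 1.0
x_true = np.zeros(2)
x_dr, P_dr = np.zeros(2), 0.01 * np.eye(2)     # dead reckoning only
x_fu, P_fu = np.zeros(2), 0.01 * np.eye(2)     # DR + landmark fusion
for k in range(200):
    x_true = x_true + dt * v_true * np.array([np.cos(psi), np.sin(psi)])
    # Odometer with a scale-factor-like bias plus noise: pure DR drifts away
    v_meas = v_true + 0.1 + 0.05 * rng.standard_normal()
    x_dr, P_dr = predict(x_dr, P_dr, v_meas, psi, dt, 1e-3)
    x_fu, P_fu = predict(x_fu, P_fu, v_meas, psi, dt, 1e-3)
    if (k + 1) % 20 == 0:                       # landmark fix every 20 steps
        z = x_true + 0.05 * rng.standard_normal(2)
        x_fu, P_fu = update(x_fu, P_fu, z, r=0.05**2)

err_dr = np.linalg.norm(x_dr - x_true)
err_fu = np.linalg.norm(x_fu - x_true)
```

Even this crude filter shows the qualitative result above: the unaided dead-reckoning error grows without bound, while sparse landmark fixes keep the fused estimate bounded.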
204.
Monitoramento de operações de retificação usando fusão de sensores / Monitoring of grinding operations using sensor fusion. Luciano Alcindo Schühli, 02 August 2007.
O presente trabalho trata da análise experimental de um sistema de monitoramento baseado na técnica de fusão de sensores, aplicado em uma retificadora cilíndrica externa. A fusão é realizada entre os sinais de potência e emissão acústica para obtenção do parâmetro FAP (Fast Abrasive Power) através do método desenvolvido por Valente (2003). Através da simulação de problemas encontrados nos processos de retificação (falha de sobremetal, colisão, desbalanceamento e vibração), foram captados os sinais de potência e emissão acústica e a partir destes gerado o parâmetro FAP, comparando seu desempenho, na detecção dos problemas, com os outros dois sinais. Para a análise foram construídos os gráficos das variações dos sinais em relação ao tempo de execução do processo e os mapas do FAP e acústico. O sistema de monitoramento avaliado tem como característica baixa complexidade de instalação e execução. Os dados experimentais revelam que o FAP apresenta uma velocidade de resposta maior que a potência e levemente amortecida em relação à emissão acústica. O nível do seu sinal é igual ao da potência, mantendo-se homogêneo durante o processo, ao contrário da emissão acústica, que pode ser influenciada por diversos outros parâmetros, tais como geometria da peça, distância do sensor e montagem do sensor, entre outros, que independem da interação ferramenta-peça. O resultado é uma resposta dinâmica e confiável, associada à energia do sistema. Estas características são interessantes para o monitoramento de processos de retificação (excluindo a dressagem), sendo superiores àquelas apresentadas isoladamente pelos sinais de potência e emissão acústica. / The present study deals with an experimental analysis of a monitoring system based on a sensor fusion strategy applied to a cylindrical grinding machine. It fuses the power and acoustic emission signals with the main goal of obtaining the FAP (Fast Abrasive Power) parameter using the method developed by Valente (2003).
Initially, the power and acoustic emission signals were captured under operational dysfunction conditions during the grinding process (stock imperfection, collision, unbalancing and vibration). Then, based on these signals, the FAP parameter was generated and its capability in characterizing operational dysfunctions was evaluated against the performance of an individual analysis of the power and acoustic emission signals. For this analysis, FAP and acoustic maps, plus plots showing the FAP signals vs. working time, were produced. The experimental data revealed that the FAP presents a faster response than the power signal and a slightly damped response when compared against the acoustic signal. The signal level of the FAP is similar to the power signal and is homogeneous during the machining process. In contrast to the FAP, the acoustic emission signal may be affected by parameters that are not related to the tool-workpiece interaction, such as workpiece geometry and sensor positioning. The dynamic response of the FAP is reliable and linked to the energy of the system. Finally, it should be highlighted that the monitoring system based on the FAP parameter presents low complexity in both implementation and execution. Such characteristics are superior to those observed when using either the power or acoustic emission signal alone, and highly valuable in a system designed to monitor grinding processes.
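The exact FAP formulation of Valente (2003) is not given in the abstract, so the sketch below uses a generic stand-in, a weighted sum of min-max normalized power and acoustic-emission signals with a simulated collision, just to show how a fused index can react to a dysfunction that each signal alone characterizes differently. All signal values are fabricated:

```python
import numpy as np

n = 600
power = np.linspace(2.0, 3.0, n)      # kW: slow rise during infeed (made up)
ae = 0.2 * np.ones(n)                 # V: acoustic-emission RMS (made up)
power[300:] += 1.5                    # collision: sustained power jump
ae[300:305] += 2.0                    # collision: short AE burst

def fused_index(power, ae, alpha=0.5):
    """Weighted sum of min-max normalized power and AE signals (an
    illustrative fusion index, not Valente's FAP formulation)."""
    p = (power - power.min()) / np.ptp(power)
    a = (ae - ae.min()) / np.ptp(ae)
    return alpha * p + (1.0 - alpha) * a

index = fused_index(power, ae)
jumps = np.diff(index) > 0.2          # simple jump detector on the fused index
detected_at = int(np.argmax(jumps)) + 1
```

The fused index inherits the fast rise of the AE burst and the sustained level of the power signal, which is the qualitative behavior the abstract attributes to the FAP.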
205.
Detecção de obstáculos usando fusão de dados de percepção 3D e radar em veículos automotivos / Obstacle detection using 3D perception and radar data fusion in automotive vehicles. Luis Alberto Rosero Rosero, 30 January 2017.
Este projeto de mestrado visa a pesquisa e o desenvolvimento de métodos e algoritmos, relacionados ao uso de radares, visão computacional, calibração e fusão de sensores em veículos autônomos/inteligentes para fazer a detecção de obstáculos. O processo de detecção de obstáculos se divide em três etapas, a primeira é a leitura de sinais de Radar, do LiDAR e a captura de dados da câmera estéreo devidamente calibrados, a segunda etapa é a fusão de dados obtidos na etapa anterior (Radar+câmera, Radar+LIDAR 3D), a terceira etapa é a extração de características das informações obtidas, identificando e diferenciando o plano de suporte (chão) dos obstáculos, e finalmente realizando a detecção dos obstáculos resultantes da fusão dos dados. Assim é possível diferenciar os diversos tipos de elementos identificados pelo Radar e que são confirmados e unidos aos dados obtidos por visão computacional ou LIDAR (nuvens de pontos), obtendo uma descrição mais precisa do contorno, formato, tamanho e posicionamento destes. Na tarefa de detecção é importante localizar e segmentar os obstáculos para posteriormente tomar decisões referentes ao controle do veículo autônomo/inteligente. É importante destacar que o Radar opera em condições adversas (pouca ou nenhuma iluminação, com poeira ou neblina), porém permite obter apenas pontos isolados representando os obstáculos (esparsos). Por outro lado, a câmera estéreo e o LIDAR 3D permitem definir os contornos dos objetos representando mais adequadamente seu volume, porém no caso da câmera esta é mais suscetível a variações na iluminação e a condições restritas ambientais e de visibilidade (p.ex. poeira, neblina, chuva). 
Também devemos destacar que antes do processo de fusão é importante alinhar espacialmente os dados dos sensores, isto é, calibrar adequadamente os sensores para poder transladar dados fornecidos por um sensor referenciado no próprio sistema de coordenadas para um outro sistema de coordenadas de outro sensor ou para um sistema de coordenadas global. Este projeto foi desenvolvido usando a plataforma CaRINA II desenvolvida junto ao Laboratório LRM do ICMC/USP São Carlos. Por fim, o projeto foi implementado usando o ambiente ROS, OpenCV e PCL, permitindo a realização de experimentos com dados reais de Radar, LIDAR e câmera estéreo, bem como realizando uma avaliação da qualidade da fusão dos dados e detecção de obstáculos com estes sensores. / This master's project aims to research and develop methods and algorithms related to the use of radars, computer vision, calibration and sensor data fusion in autonomous/intelligent vehicles to detect obstacles. The obstacle detection process is divided into three stages: the first is the reading of the Radar and LiDAR signals and the data capture of the properly calibrated stereo camera; the second stage is the fusion of the data obtained in the previous stage (Radar + camera, Radar + 3D LIDAR); the third step is the extraction of characteristics of the information obtained, identifying and differentiating the support plane (ground) from the obstacles, and finally performing the detection of the obstacles resulting from the fusion of the data. Thus it is possible to differentiate the types of elements identified by the Radar that are confirmed and joined to the data obtained by computer vision or LIDAR (point clouds), obtaining a more precise description of their contour, shape, size and positioning. During the detection task it is important to locate and segment the obstacles to later make decisions regarding the control of the autonomous/intelligent vehicle.
It is important to note that Radar operates in adverse conditions (little or no light, with dust or fog), but provides only isolated, sparse points representing the obstacles. The stereo camera and 3D LIDAR, on the other hand, allow defining the contours of objects, representing their shape and size more adequately; the camera, however, is more susceptible to variations in lighting and to restricted environmental and visibility conditions (e.g., dust, haze, rain). Before fusion, it is important to spatially align the sensor data, calibrating the sensors appropriately, in order to translate data referenced in one sensor's own coordinate system to the coordinate system of another sensor or to a global coordinate system. This project was developed using the CaRINA II platform developed by the LRM Laboratory at ICMC/USP São Carlos. Finally, the project was implemented using the ROS, OpenCV and PCL environments, allowing experiments with real data from Radar, LIDAR and stereo camera, as well as an evaluation of the quality of the data fusion and obstacle detection with these sensors.
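The calibration step described above, translating points from one sensor's coordinate system to another's, amounts to applying a rigid-body homogeneous transform built from the extrinsic rotation and translation. The radar-to-LiDAR extrinsics below are made-up values for illustration:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def transform_points(T, pts):
    """Map (N, 3) points from the source sensor frame to the target frame."""
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ homo.T).T[:, :3]

# Hypothetical extrinsics: radar yawed 90 degrees and offset 2 m along x
yaw = np.pi / 2
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0, 0.0, 1.0]])
T_radar_to_lidar = make_transform(R, np.array([2.0, 0.0, 0.0]))
radar_targets = np.array([[1.0, 0.0, 0.0]])   # 1 m ahead in the radar frame
in_lidar = transform_points(T_radar_to_lidar, radar_targets)
```

Once every detection lives in a common frame, sparse radar returns can be matched against camera or LiDAR point-cloud clusters, which is what the fusion stages above rely on.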
206.
Color Fusion and Super-Resolution for Time-of-Flight Cameras. Zins, Matthieu, January 2017.
The recent emergence of time-of-flight cameras has opened up new possibilities in the world of computer vision. These compact sensors, capable of recording the depth of a scene in real-time, are very advantageous in many applications, such as scene or object reconstruction. This thesis first addresses the problem of fusing depth data with color images. A complete process to combine a time-of-flight camera with a color camera is described and its accuracy is evaluated. The results show that a satisfying precision is reached and that the calibration step is very important. The second part of the work consists of applying super-resolution techniques to the time-of-flight camera in order to improve its low resolution. Different types of super-resolution algorithms exist, but this thesis focuses on the combination of multiple shifted depth maps. The proposed framework is made of two steps: registration and reconstruction. Different methods for each step are tested and compared according to the improvements reached in terms of level of detail, sharpness and noise reduction. The results obtained show that Lucas-Kanade performs best for the registration and that a non-uniform interpolation gives the best results in terms of reconstruction. Finally, a few suggestions are made about future work and extensions for our solutions.
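The "combination of multiple shifted depth maps" can be sketched as a shift-and-average reconstruction on a finer grid, assuming registration has already produced the sub-pixel shifts. The synthetic check below uses exact half-pixel shifts, a best-case setup, and is not the thesis's Lucas-Kanade pipeline:

```python
import numpy as np

def fuse_shifted(maps, shifts, scale=2):
    """Place each low-resolution depth map on a grid 'scale' times finer at
    its known sub-pixel shift, then average where samples overlap (a simple
    non-uniform-placement reconstruction; registration is assumed solved)."""
    h, w = maps[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for m, (dy, dx) in zip(maps, shifts):
        oy = int(round(dy * scale))
        ox = int(round(dx * scale))
        acc[oy::scale, ox::scale] += m
        cnt[oy::scale, ox::scale] += 1
    cnt[cnt == 0] = 1.0                   # avoid dividing empty cells
    return acc / cnt

# Synthetic check: four half-pixel-shifted low-res views of a fine depth map
truth = np.arange(64, dtype=float).reshape(8, 8)
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
views = [truth[int(2 * dy)::2, int(2 * dx)::2] for dy, dx in shifts]
sr = fuse_shifted(views, shifts, scale=2)
```

With noise-free, exactly half-pixel-shifted views the fine grid is recovered exactly; with real time-of-flight data the shifts come from registration and the averaging also suppresses depth noise.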
207.
Autonomous road vehicles localization using satellites, lane markings and vision / Localisation de véhicules routiers autonomes en utilisant des mesures de satellites et de caméra sur des marquages au sol. Tao, Zui, 29 February 2016.
L'estimation de la pose (position et attitude) en temps réel est une fonction clé pour les véhicules autonomes routiers. Cette thèse vise à étudier des systèmes de localisation pour ces véhicules en utilisant des capteurs automobiles à faible coût. Trois types de capteurs sont considérés : des capteurs à l'estime qui existent déjà dans les automobiles modernes, des récepteurs GNSS mono-fréquence avec antenne patch et une caméra de détection de la voie regardant vers l'avant. Les cartes très précises sont également des composants clés pour la navigation des véhicules autonomes. Dans ce travail, une carte de marquage de voies avec une précision de l'ordre du décimètre est considérée. Le problème de la localisation est étudié dans un repère de travail local Est-Nord-Haut. En effet, les sorties du système de localisation sont utilisées en temps réel comme entrées dans un planificateur de trajectoire et un contrôleur de mouvement pour faire en sorte qu'un véhicule soit capable d'évoluer de façon autonome à faible vitesse sans personne à bord. Ceci permet de développer des applications de voiturier autonome aussi appelées « valet de parking ». L'utilisation d'une caméra de détection de voie rend possible l'exploitation des informations de marquage de voie stockées dans une carte géoréférencée. Un module de détection de marquage détecte la voie hôte du véhicule et fournit la distance latérale entre le marquage de voie détecté et le véhicule. La caméra est également capable d'identifier le type des marquages détectés au sol (par exemple, de type continu ou pointillé). Comme la caméra donne des mesures relatives, une étape importante consiste à relier les mesures à l'état du véhicule. Un modèle d'observation raffiné de la caméra est proposé. Il exprime les mesures métriques de la caméra en fonction du vecteur d'état du véhicule et des paramètres des marquages au sol détectés. Cependant, l'utilisation seule d'une caméra a des limites.
Par exemple, les marquages des voies peuvent être absents dans certaines parties de la zone de navigation et la caméra ne parvient pas toujours à détecter les marquages au sol, en particulier dans les zones d'intersection. Un récepteur GNSS, qui est obligatoire pour le démarrage à froid, peut également être utilisé en continu dans le système de localisation multi-capteur du fait qu'il permet de compenser la dérive de l'estime. Les erreurs de positionnement GNSS ne peuvent pas être modélisées simplement comme des bruits blancs, en particulier avec des récepteurs mono-fréquence à faible coût travaillant de manière autonome, en raison des perturbations atmosphériques sur les signaux des satellites et des erreurs d'orbites. Un récepteur GNSS peut également être affecté par de fortes perturbations locales qui sont principalement dues aux multi-trajets. Cette thèse étudie des modèles de mise en forme des biais d'erreur GNSS qui sont utilisés dans le solveur de localisation en augmentant le vecteur d'état. Une variation brutale due au multi-trajet est considérée comme une valeur aberrante qui doit être rejetée par le filtre. Selon le flux d'informations entre le récepteur GNSS et les autres composants du système de localisation, les architectures de fusion de données sont communément appelées « couplage lâche » (positions et vitesses GNSS) ou « couplage serré » (pseudo-distances et Doppler sur les satellites en vue). Cette thèse étudie les deux approches. En particulier, une approche invariante selon la route est proposée pour gérer une modélisation raffinée de l'erreur GNSS dans l'approche par couplage lâche, puisque la caméra ne peut améliorer la performance de localisation que dans la direction latérale de la route. / Estimating the pose (position and attitude) in real-time is a key function for autonomous road vehicles. This thesis aims at studying vehicle localization performance using low-cost automotive sensors.
Three kinds of sensors are considered: dead reckoning (DR) sensors that already exist in modern vehicles, mono-frequency GNSS (Global Navigation Satellite System) receivers with patch antennas and a front-looking lane detection camera. Highly accurate maps enhanced with road features are also key components for autonomous vehicle navigation. In this work, a lane marking map with decimeter-level accuracy is considered. The localization problem is studied in a local East-North-Up (ENU) working frame. Indeed, the localization outputs are used in real-time as inputs to a path planner and a motion generator to make a valet vehicle able to drive autonomously at low speed with nobody on board the car. The use of a lane detection camera makes it possible to exploit lane marking information stored in the georeferenced map. A lane marking detection module detects the vehicle's host lane and provides the lateral distance between the detected lane marking and the vehicle. The camera is also able to identify the type of the detected lane markings (e.g., solid or dashed). Since the camera gives relative measurements, an important step is to link the measures with the vehicle's state. A refined camera observation model is proposed. It expresses the camera metric measurements as a function of the vehicle's state vector and the parameters of the detected lane markings. However, the use of a camera alone has some limitations. For example, lane markings can be missing in some parts of the navigation area and the camera sometimes fails to detect the lane markings, in particular at cross-roads. GNSS, which is mandatory for cold-start initialization, can also be used continuously in the multi-sensor localization system, as is often done, since GNSS compensates for the DR drift.
GNSS positioning errors cannot be modeled simply as white noise, in particular with low-cost mono-frequency receivers working in a standalone way, due to the unknown delays when the satellite signals cross the atmosphere and to real-time satellite orbit errors. GNSS can also be affected by strong biases which are mainly due to multipath effects. This thesis studies GNSS bias shaping models that are used in the localization solver by augmenting the state vector. An abrupt bias due to multipath is treated as an outlier that has to be rejected by the filter. Depending on the information flows between the GNSS receiver and the other components of the localization system, data-fusion architectures are commonly referred to as loosely coupled (GNSS fixes and velocities) or tightly coupled (raw pseudoranges and Dopplers for the satellites in view). This thesis investigates both approaches. In particular, a road-invariant approach is proposed to handle a refined modeling of the GNSS error in the loosely coupled approach, since the camera can only improve the localization performance in the lateral direction of the road. Finally, this research discusses some map-matching issues, for instance when the uncertainty domain of the vehicle state becomes large if the camera is blind. It is challenging in this case to distinguish between different lanes when the camera retrieves lane marking measurements. As many outdoor experiments have been carried out with equipped vehicles, every problem addressed in this thesis is evaluated with real data. The different studied approaches that perform the data fusion of DR, GNSS, camera and lane marking map are compared, and several conclusions are drawn on the choice of fusion architecture.
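A common way to realize the "bias shaping model" mentioned above is a first-order Gauss-Markov process appended to the filter's state vector. The sketch below only simulates such a process (time constant and noise level are invented) to show the time-correlated error that a white-noise model cannot capture:

```python
import numpy as np

def gauss_markov_bias(n, dt, tau, sigma, rng):
    """First-order Gauss-Markov sequence, a standard shaping model for
    time-correlated GNSS positioning error: b[k+1] = exp(-dt/tau) * b[k] + w[k].
    The driving noise is scaled so the process is stationary with std sigma."""
    phi = np.exp(-dt / tau)
    q = sigma * np.sqrt(1.0 - phi**2)
    b = np.empty(n)
    b[0] = sigma * rng.standard_normal()
    for k in range(1, n):
        b[k] = phi * b[k - 1] + q * rng.standard_normal()
    return b

rng = np.random.default_rng(7)
# Hypothetical values: 10 Hz filter, 20 s correlation time, 1.5 m error std
bias = gauss_markov_bias(n=20000, dt=0.1, tau=20.0, sigma=1.5, rng=rng)
```

In a state-augmented filter, the same `phi` becomes the bias's entry in the transition matrix, letting the solver estimate and subtract the slowly varying GNSS error while an abrupt multipath jump still shows up as an innovation outlier to be rejected.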
|
208 |
Sensor fusion and fault diagnostics in non-linear dynamical systems. Nilsson, Albin January 2020 (has links)
Sensors are essential components in most modern control systems and are used in increasingly complex ways to improve system precision and reliability. Since they are generally susceptible to faults, it is common to perform on-line fault diagnostics on sensor data to verify nominal behavior. This is especially important for safety-critical systems, where it can be imperative to identify, and react to, a fault before it increases in severity. An example of such a safety-critical system is the propulsion control of a vehicle. In this thesis, three different model-based methods for Fault Detection and Isolation (FDI) are developed and tested with the aim of detecting and isolating sensor faults in the powertrain of an electric, center-articulated, four-wheel-drive vehicle. First, kinematic models are derived that combine data from all sensors related to propulsion. Second, the kinematic models are implemented in system observers to produce fault-sensitive, zero-mean residuals. Finally, fault isolation algorithms are derived, which detect and indicate different types of faults by evaluating the observer residuals. The results show that all FDI methods can detect and isolate stochastic faults with high certainty, but that offset-type faults are hard to distinguish from modeling errors and are therefore easily attenuated by the system observers. Faults in accelerometer sensors need extra measures to be detectable, owing to the environment where the vehicle is typically operated. A nonlinear system model shows good conformity to the vehicle system, lending confidence to its further use as a driver for propulsion control.
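The residual-evaluation step described above can be illustrated with a toy example. The sketch below assumes a redundant left/right wheel-speed pair on a straight path, whose difference is a zero-mean residual under nominal conditions; the windowed threshold test is a deliberately simplified stand-in for the thesis's observer-based isolation logic.

```python
import statistics

# Toy residual-based fault detection: under nominal straight-line driving the
# two wheel speeds agree, so their difference forms a zero-mean residual.

def residual(ws_left, ws_right):
    """Zero-mean residual between redundant wheel-speed measurements."""
    return ws_left - ws_right

def detect_fault(residuals, window=20, threshold=0.5):
    """Flag a fault when the recent residual mean drifts off zero (offset
    fault) or the residual scatter grows too large (stochastic fault)."""
    recent = residuals[-window:]
    mean = statistics.fmean(recent)
    spread = statistics.pstdev(recent)
    return abs(mean) > threshold or spread > threshold
```

As the abstract notes, offset-type faults are the harder case in practice: an observer fed with the faulty sensor tends to absorb a constant offset into its state estimate, which is why a raw-residual test like this one is easier to fool with offsets than with stochastic faults.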
|
209 |
[pt] AUTO LOCALIZAÇÃO DE ROBÔS MÓVEIS POR FUSÃO DE SENSORES NA PRESENÇA DE INTERFERÊNCIA ELETROMAGNÉTICA / [en] SELF-LOCALIZATION OF MOBILE ROBOTS THROUGH SENSOR FUSION IN THE PRESENCE OF ELECTROMAGNETIC INTERFERENCE 18 March 2021 (has links)
[en] Internal inspection of storage tanks can be long, costly, and even detrimental to the inspector's health. An alternative to human inspection is the use of robotic systems. These systems can be teleoperated from outside the tanks, making it possible to carry out the inspection more safely and quickly and, in some cases, without having to empty the tank. In order to report the location of any defects, the mobile robot must be able to know its relative position inside the tank. Self-localization is of great importance for mobile robot navigation. Inspection robots are, for the most part, vehicles with fixed magnetic wheels or tracks. This configuration adds two difficulties that need to be addressed in the localization task. First, wheel slip is intrinsic to the operation of this type of vehicle, and its effect must be taken into account to model the behavior properly. Second, the strong magnetic field generated by the magnetic wheels interferes with the measurements of magnetic sensors such as compasses. In this work, a Kalman filter was developed and implemented for the localization of a robot with four fixed magnetic wheels, based on the fusion of inertial sensors and odometry. In the vehicle modeling, a kinematic model was used as the basis for a dynamic model, which made it possible to account for the intrinsic slippage of the system. In the sensor fusion, the on-board magnetometer measurements were discarded because of the strong interference produced by the wheels and the large separation that would be needed for the sensor to remain unaffected by this noise. Simulations and experiments demonstrated the efficiency of the implemented filter.
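One common structure for fusing inertial sensors and odometry without a magnetometer, as the abstract describes, is to estimate the heading from the gyro while augmenting the state with the gyro bias and correcting with odometry-derived heading. The sketch below is a minimal illustration of that idea; the differential-wheel heading measurement, the class name, and all noise values are assumptions, not the thesis's models.

```python
import math
import numpy as np

# Minimal heading filter without a magnetometer: the heading is predicted by
# integrating the bias-corrected gyro rate, and corrected with the heading
# implied by differential wheel odometry, whose noise is inflated to reflect
# the slip intrinsic to fixed magnetic wheels.

class HeadingFilter:
    def __init__(self, dt=0.02):
        self.dt = dt
        self.x = np.zeros(2)            # [heading (rad), gyro bias (rad/s)]
        self.P = np.diag([0.1, 0.01])
        self.Q = np.diag([1e-5, 1e-7])  # heading / bias random-walk noise
        self.R = 0.05                   # odometry heading noise (slip-inflated)

    def predict(self, gyro_rate):
        # Integrate the bias-corrected gyro rate; the bias is a random walk.
        self.x[0] += (gyro_rate - self.x[1]) * self.dt
        F = np.array([[1.0, -self.dt],
                      [0.0, 1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, heading_odo):
        # Odometry heading, e.g. from integrating (v_right - v_left) / track.
        y = (heading_odo - self.x[0] + math.pi) % (2 * math.pi) - math.pi
        S = self.P[0, 0] + self.R
        K = self.P[:, 0] / S            # measurement matrix H = [1, 0]
        self.x += K * y
        self.P -= np.outer(K, self.P[0, :])
```

The bias state makes the drift of the gyro observable through the odometry corrections, which is what allows the magnetometer to be dispensed with entirely.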
|
210 |
Indoor 5G Positioning using Multipath Measurements. Lidström, Andreas, Andersson, Martin January 2022 (has links)
Positioning with high precision and reliability is considered an important feature of new wireless radio networks such as 5G. In areas where satellite positioning is unavailable or not reliable enough, 5G can serve as an alternative. An example is inside factories, where autonomous vehicles may need to be positioned in complex environments. This work investigates whether multipath propagation of radio signals can be exploited to improve indoor positioning. A 5G simulator that models the propagation of a reference signal in a factory environment is used. Distances corresponding to several paths between the user equipment (UE) and the transmission/reception point (TRP) can be estimated from the received reference signal. These distance estimates are used together with a partially known map of the environment to develop and evaluate the algorithms in this thesis. The developed multipath-assisted algorithms are based on two different target tracking methods: an extended Kalman filter (EKF) and a particle filter (PF). Both alternatives use a data association algorithm to determine how measurements should be paired with propagation paths. Both filters that exploit multipath propagation are shown to greatly improve positioning accuracy compared to a line-of-sight (LOS) based alternative. The multipath-assisted algorithms achieve an accuracy below 0.9 m in 90% of cases in a complex environment, which is more than ten times better than the LOS-based alternative considered here. The PF also shows an ability to track a UE in a complex environment using very few TRPs, whereas the EKF and LOS-based methods do not succeed in this case.
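The core multipath-assisted idea, predicting reflected path lengths from a known map and associating measured distances to predicted paths, can be sketched as follows. This is a hypothetical 2-D example with a single known wall, so the one reflected path is obtained by mirroring the TRP across it; nearest-neighbor association and all geometry and noise values are illustrative simplifications, not the thesis's algorithms.

```python
import numpy as np

# Multipath-assisted particle weighting: each particle predicts the LOS
# distance and one wall-bounce distance (via the mirrored TRP), measured
# distances are associated to the nearest predicted path, and the particle
# is weighted by a Gaussian likelihood of the association errors.

TRP = np.array([0.0, 0.0])
WALL_X = 10.0                                          # vertical wall at x = 10 m
TRP_MIRROR = np.array([2 * WALL_X - TRP[0], TRP[1]])   # TRP mirrored in wall

def predicted_distances(p):
    """LOS distance and one wall-bounce distance for a position p."""
    return np.array([np.linalg.norm(p - TRP),
                     np.linalg.norm(p - TRP_MIRROR)])

def pf_update(particles, weights, measured, sigma=0.3):
    """Reweight particles using nearest-neighbor data association between
    measured path distances and each particle's predicted paths."""
    for i, p in enumerate(particles):
        pred = predicted_distances(p)
        lik = 1.0
        for z in measured:
            err = np.min(np.abs(pred - z))   # associate z to closest path
            lik *= np.exp(-0.5 * (err / sigma) ** 2)
        weights[i] *= lik
    weights /= weights.sum()
    return weights
```

Note that with a single wall, the LOS and reflected distances alone leave a mirror-symmetric ghost position; in this sketch the particles are confined to one side of the TRP axis so that only the true mode survives, whereas the thesis relies on motion models and more paths to resolve such ambiguities.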
|