151. Filtres de Kalman étendus reposant sur une variable d'erreur non linéaire avec applications à la navigation / Non-linear state error based extended Kalman filters with applications to navigation. Barrau, Axel. 15 September 2015.
The present thesis explores the use of non-linear state errors to design extended Kalman filters (EKFs). First, we depart from the theory of invariant observers on Lie groups and propose a more general yet simpler framework that yields non-linear error variables with the novel and unexpected property of being governed by a (partially) linear differential equation. This result is leveraged to prove local stability of the invariant EKF (IEKF), under standard observability assumptions, when it is extended to the class of (non-invariant) systems introduced here. Applications to real industrial problems, carried out in partnership with the company SAGEM, illustrate the remarkable performance gap over the conventional EKF. A second route we investigate is to turn the noise on and consider the invariant errors as stochastic processes. For observers that converge globally when noise is ignored, we show that turning the noise on makes the error converge in law to a fixed probability distribution that is independent of the initialization, which in turn allows gains to be chosen in advance by optimizing the asymptotic error distribution. The last route consists in stepping back a little and studying general EKFs (beyond the Lie group case) that rely on a non-linear state error. Novel global mathematical properties are derived. In particular, these methods are shown to remedy the well-known problem of false observability created by the EKF when it is applied to simultaneous localization and mapping (SLAM).
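As a toy illustration of the kind of non-linear error variable discussed in this abstract, the following Python/NumPy sketch (written for this listing, not taken from the thesis) propagates two attitude trajectories on SO(3) that share the same initial invariant error R_hat^T R and the same known angular-velocity input, and checks numerically that their invariant errors stay identical even though the trajectories themselves differ, while the naive additive matrix error does not. This trajectory-independence is one facet of the (partially) autonomous error equation mentioned above; the noise-free gyro and the specific rotations used here are assumptions made purely for the demonstration.

```python
import numpy as np

def skew(w):
    """Map a 3-vector to its skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(w):
    """Rodrigues formula: exponential map from so(3) to SO(3)."""
    t = np.linalg.norm(w)
    if t < 1e-12:
        return np.eye(3)
    K = skew(w / t)
    return np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * (K @ K)

rng = np.random.default_rng(0)
dt, steps = 0.01, 500
eta0 = expm_so3(np.array([0.2, -0.1, 0.3]))   # common initial error R_hat^T R

# Two (true, estimated) attitude pairs with different absolute orientations
# but exactly the same initial invariant error eta0.
R1 = expm_so3(rng.normal(size=3)); R1_hat = R1 @ eta0.T
R2 = expm_so3(rng.normal(size=3)); R2_hat = R2 @ eta0.T

for _ in range(steps):
    w = rng.normal(size=3)            # known (noise-free) angular velocity input
    G = expm_so3(w * dt)              # incremental rotation over one step
    R1, R1_hat = R1 @ G, R1_hat @ G   # attitude kinematics dR/dt = R * skew(w)
    R2, R2_hat = R2 @ G, R2_hat @ G

eta1, eta2 = R1_hat.T @ R1, R2_hat.T @ R2
print(np.max(np.abs(eta1 - eta2)))                     # ~1e-13: the invariant error is
                                                       # independent of the trajectory
print(np.max(np.abs((R1_hat - R1) - (R2_hat - R2))))   # O(1): the additive error is not
```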
152. Gaining Depth: Time-of-Flight Sensor Fusion for Three-Dimensional Video Content Creation. Schwarz, Sebastian. January 2014.
The successful revival of three-dimensional (3D) cinema has generated a great deal of interest in 3D video. However, contemporary eyewear-assisted displaying technologies are not well suited for the less restricted scenarios outside movie theaters. The next generation of 3D displays, autostereoscopic multiview displays, overcome the restrictions of traditional stereoscopic 3D and can provide an important boost for 3D television (3DTV). At the same time, such displays require scene depth information in order to reduce the amount of necessary input data. Acquiring this information is quite complex and challenging, thus restricting content creators and limiting the amount of available 3D video content. Nonetheless, without broad and innovative 3D television programs, even next-generation 3DTV will lack customer appeal. Therefore, simplified 3D video content generation is essential for the medium's success. This dissertation surveys the advantages and limitations of contemporary 3D video acquisition. Based on these findings, a combination of dedicated depth sensors, so-called Time-of-Flight (ToF) cameras, and video cameras is investigated with the aim of simplifying 3D video content generation. The concept of Time-of-Flight sensor fusion is analyzed in order to identify suitable courses of action for high quality 3D video acquisition. In order to overcome the main drawback of current Time-of-Flight technology, namely the high sensor noise and low spatial resolution, a weighted optimization approach for Time-of-Flight super-resolution is proposed. This approach incorporates video texture, measurement noise and temporal information for high quality 3D video acquisition from a single video plus Time-of-Flight camera combination. Objective evaluations show benefits with respect to state-of-the-art depth upsampling solutions. Subjective visual quality assessment confirms the objective results, with viewer preference increasing by a factor of four. Furthermore, the presented super-resolution approach can be applied to other applications, such as depth video compression, providing bit rate savings of approximately 10 percent compared to competing depth upsampling solutions. The work presented in this dissertation has been published in two scientific journals and five peer-reviewed conference proceedings. In conclusion, Time-of-Flight sensor fusion can help to simplify 3D video content generation, consequently supporting a larger variety of available content. Thus, this dissertation provides important inputs towards broad and innovative 3D video content, hopefully contributing to the future success of next-generation 3DTV.
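To make the idea of texture- and noise-weighted depth upsampling concrete, here is a minimal Python/NumPy sketch. It is not the weighted optimization approach of the dissertation but a simpler joint-bilateral-style upsampling: each high-resolution depth value is a weighted average of nearby low-resolution ToF samples, with weights combining spatial distance, video-texture similarity and an assumed per-sample noise confidence. All parameter values and the toy input are invented for the example.

```python
import numpy as np

def weighted_depth_upsample(depth_lr, conf_lr, gray_hr, scale,
                            sigma_s=1.5, sigma_t=0.1):
    """Upsample a low-res ToF depth map guided by a high-res grayscale frame.

    depth_lr : (h, w) low-resolution depth
    conf_lr  : (h, w) confidence in [0, 1], e.g. inverse of measurement noise
    gray_hr  : (h*scale, w*scale) grayscale video frame in [0, 1]
    """
    H, W = gray_hr.shape
    h, w = depth_lr.shape
    depth_hr = np.zeros((H, W))
    radius = 2                                   # neighborhood radius in low-res pixels
    for y in range(H):
        for x in range(W):
            cy, cx = y / scale, x / scale        # position in the low-res grid
            y0, x0 = int(round(cy)), int(round(cx))
            num, den = 0.0, 1e-9
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y0 + dy, x0 + dx
                    if not (0 <= yy < h and 0 <= xx < w):
                        continue
                    # spatial weight, texture (intensity) weight, noise weight
                    ws = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma_s ** 2))
                    gy = min(int(yy * scale), H - 1)
                    gx = min(int(xx * scale), W - 1)
                    wt = np.exp(-(gray_hr[y, x] - gray_hr[gy, gx]) ** 2 / (2 * sigma_t ** 2))
                    wn = conf_lr[yy, xx]
                    wgt = ws * wt * wn
                    num += wgt * depth_lr[yy, xx]
                    den += wgt
            depth_hr[y, x] = num / den
    return depth_hr

# toy example: 16x16 depth upsampled to 64x64, guided by a synthetic edge image
gray = np.zeros((64, 64)); gray[:, 32:] = 1.0
depth = np.zeros((16, 16)); depth[:, 8:] = 2.0
conf = np.ones((16, 16))
print(weighted_depth_upsample(depth, conf, gray, scale=4).shape)
```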
153. Étude de stratégies de diagnostic embarqué des réseaux filaires complexes / Study of embedded diagnosis strategies in complex wired networks. Ben Hassen, Wafa. 20 October 2014.
This study addresses embedded diagnosis of complex wired networks. Based on the reflectometry method, it aims at detecting and accurately locating electrical faults. The growing demand for on-line diagnosis, i.e. diagnosing the network while the target system is running, raises serious interference problems, and the interference becomes more critical in complex networks where several reflectometers inject their test signals simultaneously. The objective is to develop new embedded diagnosis strategies for complex wired networks that resolve these interference problems and eliminate the ambiguity related to fault location. The first contribution is the development of a new method called OMTDR (Orthogonal Multi-tone Time Domain Reflectometry), which uses orthogonal modulated digital signals for interference mitigation and thereby enables on-line diagnosis. For better coverage of the network, the second contribution integrates communication between the reflectometers and uses sensor data fusion to facilitate decision making. The third contribution addresses the diagnosis strategy itself, i.e. the optimization of the diagnosis performance of a complex network under operational constraints; Bayesian networks are used to study the impact of different factors and to estimate a confidence level, and thereby the reliability, of the diagnosis results.
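The following Python sketch gives a flavour of the orthogonality idea behind OMTDR; it is not the thesis's implementation, and the subcarrier allocation, delays and reflection coefficients are invented for the example (circular delays stand in for a cyclic prefix). Two reflectometers transmit simultaneously on disjoint sets of orthogonal subcarriers, and each can still recover the round-trip delay to a fault from its own tones without interference from the other.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256                       # FFT size (number of subcarriers)
tones_a = np.arange(0, N, 2)  # reflectometer A owns the even subcarriers
tones_b = np.arange(1, N, 2)  # reflectometer B owns the odd subcarriers

def multitone(tones):
    """Random BPSK on the given subcarriers, zero elsewhere (one OFDM-like symbol)."""
    X = np.zeros(N, dtype=complex)
    X[tones] = rng.choice([-1.0, 1.0], size=len(tones))
    return X, np.fft.ifft(X)

Xa, xa = multitone(tones_a)
Xb, xb = multitone(tones_b)

# Line model: A's own signal returns from a fault after a round-trip delay of
# 40 samples; B's simultaneous signal also leaks into A's port after 65 samples.
r = 0.6 * np.roll(xa, 40) + 0.4 * np.roll(xb, 65)
r = r + 0.01 * rng.standard_normal(N)

# A estimates the line's reflection response using only its own subcarriers.
R = np.fft.fft(r)
H_a = R[tones_a] / Xa[tones_a]   # B's spectrum is zero on these tones: no interference
h_a = np.fft.ifft(H_a)           # (time-aliased) impulse response of length N/2
print(np.argmax(np.abs(h_a)))    # -> 40, the round-trip delay to the fault
```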
154. Development of an autonomous unmanned aerial vehicle: specification of a fixed-wing vertical takeoff and landing aircraft / Desenvolvimento de um veículo aéreo não tripulado autônomo: especificação de uma aeronave asa-fixa capaz de decolar e aterrissar verticalmente. Natássya Barlate Floro da Silva. 29 March 2018.
Several configurations of Unmanned Aerial Vehicles (UAVs) have been proposed to support different applications. One of them is the tailsitter, a fixed-wing aircraft that takes off and lands on its own tail, combining the high-endurance advantage of fixed-wing aircraft with the ability, shared with helicopters and multicopters, to operate without a runway during takeoff and landing. However, a tailsitter has a complex operation with multiple flight stages, each with its own particularities and requirements, which emphasises the necessity of a reliable autopilot for its use as a UAV. The literature already introduces tailsitter UAVs with complex mechanisms or with multiple counter-rotating propellers, but not one with a single propeller and without auxiliary structures to assist in takeoff and landing. This thesis presents a tailsitter UAV, named AVALON (Autonomous VerticAL takeOff and laNding), and its autopilot, composed of three main units: a Sensor Unit, a Navigation Unit and a Control Unit. In order to choose the most appropriate techniques for the autopilot, different solutions are evaluated. For the Sensor Unit, an Extended Kalman Filter and an Unscented Kalman Filter estimate spatial information from the data of multiple sensors. For the Navigation Unit, the Lookahead, Pure Pursuit and Line-of-Sight, Nonlinear Guidance Law and Vector Field path-following algorithms are extended to incorporate altitude information. In addition, a structure based on classical methods with decoupled Proportional-Integral-Derivative controllers is compared to a new control structure based on dynamic inversion. Together, all these techniques show the efficacy of AVALON's autopilot. AVALON is thus a small electric tailsitter UAV with a simple design, a single propeller and no auxiliary structures to assist in takeoff and landing, capable of executing all flight stages.
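As a flavour of the path-following extension with altitude mentioned above, the Python sketch below computes a carrot point a fixed distance ahead on a 3D waypoint path and derives heading and altitude commands from it. It is a generic lookahead follower written for this listing, not AVALON's code, and the lookahead distance and path are illustrative assumptions.

```python
import numpy as np

def carrot_point(waypoints, pos, lookahead):
    """Return a target point `lookahead` metres ahead of the projection of
    `pos` onto a piecewise-linear 3D path given by `waypoints` (N x 3)."""
    wps = np.asarray(waypoints, dtype=float)
    # 1) closest point on the path: project onto each segment
    best_d, best_seg, best_t = np.inf, 0, 0.0
    for i in range(len(wps) - 1):
        a, b = wps[i], wps[i + 1]
        ab = b - a
        t = np.clip(np.dot(pos - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        d = np.linalg.norm(pos - (a + t * ab))
        if d < best_d:
            best_d, best_seg, best_t = d, i, t
    # 2) walk `lookahead` metres further along the path from that projection
    remaining = lookahead
    i = best_seg
    point = wps[i] + best_t * (wps[i + 1] - wps[i])
    while i < len(wps) - 1:
        seg_end = wps[i + 1]
        step = np.linalg.norm(seg_end - point)
        if step >= remaining:
            point = point + (seg_end - point) * (remaining / (step + 1e-9))
            break
        remaining -= step
        point, i = seg_end, i + 1
    return point

path = [(0, 0, 50), (100, 0, 60), (100, 100, 60)]
pos = np.array([20.0, -5.0, 52.0])
target = carrot_point(path, pos, lookahead=30.0)
heading_cmd = np.arctan2(target[1] - pos[1], target[0] - pos[0])  # desired yaw
altitude_cmd = target[2]                                          # desired altitude
print(target, np.degrees(heading_cmd), altitude_cmd)
```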
155. Tecnologia assistiva para detecção de quedas: desenvolvimento de sensor vestível integrado ao sistema de casa inteligente / Assistive technology for fall detection: development of a wearable sensor integrated with a smart home system. Torres, Guilherme Gerzson. January 2018.
The use of assistive technologies to provide a better quality of life for the elderly is growing rapidly. One line of research in this area is the use of devices for fall detection, a problem whose incidence keeps increasing due to many factors, including greater longevity and more elderly people living alone. This work presents the development of a wearable device, a sensor node for ultra-low-power wireless sensor networks, and describes the expansion of a KNX system into which the device is integrated. The device is able to detect falls, which aids the monitoring of elderly people and improves their safety. Monitoring is performed with a 3-axis accelerometer and gyroscope attached to the user's chest; falls are detected by a threshold algorithm operating on the fused sensor data. The wearable sensor is an EnOcean node, which provides a wireless connection to a smart home automation system compliant with the KNX standard through the Home Assistant platform. Alarm telegrams are automatically sent when a fall is detected and trigger an actuator belonging to the KNX system. In addition to validating EnOcean technology for use in wearable devices, the developed prototype produced no false positives in tests performed with two users of different body types, in which each of eight types of movement (four falls and four non-falls) was reproduced 100 times. The tests revealed a sensitivity and specificity of up to 96% and 100%, respectively.
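A threshold-based fall detector of the kind described above can be sketched in a few lines of Python. This is not the thesis's algorithm; the thresholds, window length and the free-fall-then-impact logic below are illustrative assumptions. A fall candidate is flagged when a short period of near free fall in the acceleration magnitude is followed by a large impact peak together with a fast rotation.

```python
import numpy as np

def detect_fall(acc, gyro, fs=100.0,
                free_fall_g=0.4, impact_g=2.5, rot_rate=2.0, window_s=1.0):
    """Return True if a fall-like pattern is found.

    acc  : (N, 3) accelerometer samples in units of g
    gyro : (N, 3) gyroscope samples in rad/s
    """
    a_mag = np.linalg.norm(acc, axis=1)
    g_mag = np.linalg.norm(gyro, axis=1)
    win = int(window_s * fs)
    for i in np.where(a_mag < free_fall_g)[0]:      # candidate free-fall samples
        j = min(i + win, len(a_mag))
        # impact shortly after the free fall, accompanied by fast rotation
        if a_mag[i:j].max() > impact_g and g_mag[i:j].max() > rot_rate:
            return True
    return False

# toy trace: 1 s standing, a 0.3 s drop, an impact spike, then lying still
acc = np.tile([0.0, 0.0, 1.0], (300, 1))
acc[100:130] = [0.0, 0.0, 0.1]     # near free fall
acc[130] = [0.0, 2.8, 1.5]         # impact
gyro = np.zeros((300, 3))
gyro[110:140, 0] = 3.0             # fast pitch rotation while falling
print(detect_fall(acc, gyro))      # -> True
```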
156. Guaranteed Localization and Mapping for Autonomous Vehicles / Localisation et cartographie garanties pour les véhicules autonomes. Wang, Zhan. 19 October 2018.
With the rapid development and wide application of robot technology, research on intelligent mobile robots has been scheduled in the high-technology development plans of many countries. Autonomous navigation plays an increasingly important role in this research field, and localization and map building are the core problems a robot must solve to achieve it. Probabilistic techniques (such as the Extended Kalman Filter and the Particle Filter) have long been used to solve the robotic localization and mapping problem. Despite their good performance in practical applications, they can suffer from inconsistency in non-linear, non-Gaussian scenarios. This thesis studies interval-analysis-based methods for the robotic localization and mapping problem. Instead of making hypotheses on the probability distribution, all sensor noises are assumed to be bounded within known limits. On this foundation, the thesis formulates localization and mapping as Interval Constraint Satisfaction Problems (ICSPs) and applies consistent interval techniques to solve them in a guaranteed way. To deal with the "uncorrected yaw" problem encountered by Interval Constraint Propagation (ICP) based localization approaches, the thesis proposes a new ICP algorithm for real-time vehicle localization, which employs a low-level consistency algorithm and is capable of correcting heading uncertainty. The thesis then presents an interval-analysis-based SLAM algorithm (IA-SLAM) dedicated to a monocular camera. Bounded-error parameterization and undelayed initialization of natural landmarks are proposed; the SLAM problem is formulated as an ICSP and solved via interval constraint propagation techniques. A shaving method for landmark uncertainty contraction and an ICSP-graph-based optimization method are put forward to improve the obtained result, and a theoretical analysis of mapping consistency illustrates the strength of IA-SLAM. Moreover, based on the proposed IA-SLAM algorithm, the thesis presents a low-cost and consistent approach for outdoor vehicle localization. It works in a two-stage framework (visual teach and repeat) and is validated with a car-like vehicle equipped with dead-reckoning sensors and a monocular camera.
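To give a feel for the interval constraint propagation used in this line of work, here is a small self-contained Python sketch (a basic forward-backward contractor written for this listing, not code from the thesis). It contracts a box [x] x [y] of possible vehicle positions under a bounded-error range measurement to a beacon, i.e. the constraint sqrt((x-bx)^2 + (y-by)^2) in [r]; the box, beacon and range bounds are invented for the example.

```python
import math

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if lo > hi:
        raise ValueError("empty intersection: measurement inconsistent with the box")
    return (lo, hi)

def sqr(a):
    """Image of an interval under x -> x^2."""
    lo, hi = a
    if lo >= 0:
        return (lo * lo, hi * hi)
    if hi <= 0:
        return (hi * hi, lo * lo)
    return (0.0, max(lo * lo, hi * hi))

def back_sqr(a, z):
    """Contract interval a subject to a^2 in z (with z a subset of [0, inf))."""
    lo, hi = math.sqrt(max(z[0], 0.0)), math.sqrt(max(z[1], 0.0))
    if a[0] >= 0:
        return intersect(a, (lo, hi))
    if a[1] <= 0:
        return intersect(a, (-hi, -lo))
    return intersect(a, (-hi, hi))      # hull of the two sign branches

def contract_range(x, y, beacon, r):
    """One forward-backward pass on sqrt((x-bx)^2 + (y-by)^2) in r."""
    bx, by = beacon
    dx, dy = (x[0] - bx, x[1] - bx), (y[0] - by, y[1] - by)
    # forward: propagate through the squares, the sum and the square root
    sx, sy = sqr(dx), sqr(dy)
    s = intersect((sx[0] + sy[0], sx[1] + sy[1]), (r[0] ** 2, r[1] ** 2))
    # backward: redistribute the contracted sum onto each operand
    sx = intersect(sx, (s[0] - sy[1], s[1] - sy[0]))
    sy = intersect(sy, (s[0] - sx[1], s[1] - sx[0]))
    dx, dy = back_sqr(dx, sx), back_sqr(dy, sy)
    return (dx[0] + bx, dx[1] + bx), (dy[0] + by, dy[1] + by)

# position known to a 10 m x 2 m box, range to a beacon at (0, 0) in [4.9, 5.1] m
x, y = (0.0, 10.0), (2.0, 4.0)
print(contract_range(x, y, (0.0, 0.0), (4.9, 5.1)))   # x shrinks to roughly [2.83, 4.69]
```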
157. Optimal Information-Weighted Kalman Consensus Filter. Shiraz Khan (8782250). 30 April 2020.
Distributed estimation algorithms have received considerable attention lately, owing to advancements in computing, communication and battery technologies. They offer increased scalability, robustness and efficiency. In applications such as formation flight, where any discrepancy between sensor estimates has severe consequences, it becomes crucial to require consensus of the estimates amongst all sensors. The Kalman Consensus Filter (KCF) is a seminal work in the field of distributed consensus-based estimation which accomplishes this.

However, the KCF algorithm is mathematically sub-optimal and does not account for the cross-correlation between the estimates of different sensors. Other popular algorithms, such as the Information-weighted Consensus Filter (ICF), rely on ad-hoc definitions and approximations, rendering them sub-optimal as well. Another major drawback of KCF is that it uses unweighted consensus, i.e., each sensor assigns equal weight to the estimates of its neighbors. This has been shown to severely degrade the performance of KCF when some sensors cannot observe the target, and can even render the algorithm unstable.

In this work, we develop a novel algorithm, which we call the Optimal Kalman Consensus Filter for Weighted Directed Graphs (OKCF-WDG), that addresses both of these limitations of existing algorithms. OKCF-WDG integrates the KCF formulation with that of matrix-weighted consensus. The algorithm achieves consensus on a weighted digraph, enabling a directed flow of information within the network. This aspect is shown to offer significant performance improvements over KCF, as information may be directed from well-performing sensors to sensors that have high estimation error due to environmental factors or sensor limitations. We validate the algorithm through simulations and compare it to existing algorithms. The proposed algorithm outperforms existing algorithms by a considerable margin, especially when some sensors are naive (i.e., cannot observe the target).
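For readers unfamiliar with the Kalman Consensus Filter family referenced above, the Python sketch below shows a generic unweighted KCF-style step: each sensor performs a local Kalman update with its own measurement and then adds a consensus term that pulls its estimate toward those of its neighbors. It is not the OKCF-WDG algorithm of this thesis; the graph, gains and the naive-sensor scenario are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
x_true = 5.0                                           # static scalar target state
n_sensors = 4
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # ring communication graph
observes = [True, True, True, False]                   # sensor 3 cannot see the target
R = 0.5 ** 2                                           # measurement noise variance
eps = 0.2                                              # consensus gain

x_hat = np.zeros(n_sensors)                            # local state estimates
P = np.full(n_sensors, 10.0)                           # local error variances

for _ in range(50):
    z = x_true + 0.5 * rng.standard_normal(n_sensors)  # noisy local measurements
    x_prev = x_hat.copy()
    for i in range(n_sensors):
        if observes[i]:
            # standard scalar Kalman update with the local measurement
            K = P[i] / (P[i] + R)
            x_hat[i] = x_prev[i] + K * (z[i] - x_prev[i])
            P[i] = (1 - K) * P[i]
        else:
            x_hat[i] = x_prev[i]
        # consensus term: pull the estimate toward the neighbors' previous estimates
        x_hat[i] += eps * sum(x_prev[j] - x_prev[i] for j in neighbors[i])

print(np.round(x_hat, 3))   # all four estimates end up close to 5.0,
                            # including the naive sensor, thanks to consensus
```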
158. Multimodal Sensor Fusion with Object Detection Networks for Automated Driving. Schröder, Enrico. 07 January 2022.
Object detection is one of the key tasks of environment perception for highly automated vehicles. To achieve a high level of performance and fault tolerance, automated vehicles are equipped with an array of different sensors to observe their environment. Perception systems for automated vehicles usually rely on Bayesian fusion methods to combine information from different sensors late in the perception pipeline in a highly abstract, low-dimensional representation. Newer research on deep learning object detection proposes fusion of information in higher-dimensional space directly in the convolutional neural networks to significantly increase performance. However, the resulting deep learning architectures violate key non-functional requirements of a real-world safety-critical perception system for a series-production vehicle, notably modularity, fault tolerance and traceability.
This dissertation presents a modular multimodal perception architecture for detecting objects using camera, lidar and radar data that is entirely based on deep learning and that was designed to respect the above requirements. The presented method is applicable to any region-based, two-stage object detection architecture (such as Faster R-CNN by Ren et al.). Information is fused in the high-dimensional feature space of a convolutional neural network. The feature map of a convolutional neural network is shown to be a suitable representation in which to fuse multimodal sensor data and a suitable interface for combining different parts of object detection networks in a modular fashion. The implementation centers on a novel neural network architecture that learns a transformation of feature maps from one sensor modality and input space to another and can thereby map feature representations into a common feature space. It is shown how transformed feature maps from different sensors can be fused in this common feature space to increase object detection performance by up to 10% compared to the unimodal baseline networks. Feature extraction front ends of the architecture are interchangeable, and different sensor modalities can be integrated with little additional training effort. Variants of the presented method are able to predict object distance from monocular camera images and to detect objects from radar data.
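The following PyTorch sketch illustrates the general pattern described in this abstract: feature maps from two sensor branches are brought into a common feature space by a learned transformation and concatenated before a shared fusion layer. It is a minimal stand-in written for this listing, not the dissertation's architecture, and the channel counts and layer choices are arbitrary.

```python
import torch
import torch.nn as nn

class FeatureSpaceFusion(nn.Module):
    """Fuse camera and lidar feature maps in a common feature space."""
    def __init__(self, cam_ch=256, lidar_ch=64, fused_ch=256):
        super().__init__()
        # learned transformation of lidar features into the camera feature space
        self.lidar_to_common = nn.Sequential(
            nn.Conv2d(lidar_ch, cam_ch, kernel_size=1), nn.ReLU(inplace=True))
        # fusion layer operating on the concatenated feature maps
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * cam_ch, fused_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True))

    def forward(self, cam_feat, lidar_feat):
        # both feature maps are assumed to be spatially aligned (same H x W)
        lidar_common = self.lidar_to_common(lidar_feat)
        return self.fuse(torch.cat([cam_feat, lidar_common], dim=1))

# toy feature maps from two modality-specific backbones on a 40 x 60 grid
cam_feat = torch.randn(1, 256, 40, 60)
lidar_feat = torch.randn(1, 64, 40, 60)
fused = FeatureSpaceFusion()(cam_feat, lidar_feat)
print(fused.shape)   # torch.Size([1, 256, 40, 60]) -> input to the detection head
```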
Results are verified using a large labeled, multimodal automotive dataset created during the course of this dissertation. The processing pipeline and methodology for creating this dataset along with detailed statistics are presented as well.
159. Estimating Relative Position and Orientation Based on UWB-IMU Fusion for Fixed Wing UAVs. Sandvall, Daniel; Sevonius, Eric. January 2023.
In recent years, the interest in flying multiple Unmanned Aerial Vehicles (UAVs) in formation has increased. One challenging aspect of achieving this is the relative positioning within the swarm. This thesis evaluates two different methods for estimating the relative position and orientation between two fixed wing UAVs by fusing range measurements from Ultra-wideband (UWB) sensors and orientation estimates from Inertial Measurement Units (IMUs). To investigate the estimation problem, the accuracy of the UWB nodes' range measurements is first evaluated. The resulting information is then used to develop a simulation environment in which two fixed wing UAVs fly in formation, and in which the two estimation solutions are developed. The first solution is based on the Extended Kalman Filter (EKF) and the second on Factor Graph Optimization (FGO). In addition to evaluating these methods, two further questions are investigated: the impact of varying the placement and number of UWB sensors, and whether using additional sensors can increase the accuracy of the estimates.

To evaluate the EKF and FGO solutions, multiple scenarios are simulated at different distances, with different amounts of change in the relative position, and with different accuracies of the range measurements. The results show that both solutions successfully estimate the relative position and orientation. The FGO-based solution performs better at estimating the relative position, while both algorithms perform similarly when estimating the relative orientation. However, both algorithms perform worse when exposed to more realistic range measurements. The thesis concludes that both solutions work well in simulation, where the Root Mean Square Error (RMSE) of the position estimates is 0.428 m and 0.275 m for the EKF and FGO solutions, respectively, and the RMSE of the orientation estimates is 0.016 radians and 0.013 radians, respectively. However, to perform well on hardware, the accuracy of the UWB measurements must be increased. It is also concluded that adding more sensors and placing multiple UWB sensors on each UAV improves the accuracy of the estimates. In simulation, the lowest RMSE is achieved by fusing barometer data from both UAVs in the FGO algorithm, resulting in an RMSE of 0.229 m for the estimated relative position.
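As a simplified illustration of the EKF-based alternative discussed above, the Python sketch below runs a 2D toy filter that estimates a relative position from UWB-style range measurements with a constant-velocity model, linearizing the range function h(p) = ||p|| at each update. It is not the thesis's filter; all noise levels and the scenario are assumed for the example, and the final printout also hints at why range information alone is not enough, which is one motivation for fusing IMU data and using several UWB nodes as investigated above.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, steps = 0.1, 200
sigma_r = 0.3                                     # UWB range noise std-dev [m] (assumed)

# state: relative position and velocity in 2D, x = [px, py, vx, vy]
F = np.eye(4); F[0, 2] = F[1, 3] = dt
Q = np.diag([0.01, 0.01, 0.1, 0.1]) * dt          # process noise (assumed)
x_true = np.array([20.0, -5.0, -1.0, 0.5])
x_hat = np.array([10.0, 0.0, 0.0, 0.0])
P = np.diag([25.0, 25.0, 4.0, 4.0])

for _ in range(steps):
    # ground truth and one noisy UWB range measurement
    x_true = F @ x_true
    z = np.linalg.norm(x_true[:2]) + sigma_r * rng.standard_normal()

    # EKF prediction
    x_hat = F @ x_hat
    P = F @ P @ F.T + Q

    # EKF update with h(x) = sqrt(px^2 + py^2), Jacobian H = [px/r, py/r, 0, 0]
    r_pred = np.linalg.norm(x_hat[:2])
    H = np.array([x_hat[0] / r_pred, x_hat[1] / r_pred, 0.0, 0.0]).reshape(1, 4)
    S = H @ P @ H.T + sigma_r ** 2
    K = (P @ H.T) / S
    x_hat = x_hat + (K * (z - r_pred)).ravel()
    P = (np.eye(4) - K @ H) @ P

rng_err = abs(np.linalg.norm(x_hat[:2]) - np.linalg.norm(x_true[:2]))
pos_err = np.linalg.norm(x_hat[:2] - x_true[:2])
print(round(rng_err, 3), round(pos_err, 3))
# The range itself is tracked closely, but a single range per step leaves the bearing
# weakly observable, which is one reason for adding IMU data and for studying the
# number and placement of UWB nodes.
```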
160. Building an Efficient Occupancy Grid Map Based on Lidar Data Fusion for Autonomous Driving Applications. Salem, Marwan. January 2019.
The Localization and Map building module is a core building block in the design of an autonomous vehicle. It describes the vehicle's ability to create an accurate model of its surroundings while maintaining its own position in the environment at the same time. In this thesis work, we contribute to the autonomous driving research area by providing a proof-of-concept of integrating SLAM solutions into commercial vehicles, improving the robustness of the Localization and Map building module. The proposed system applies Bayesian inference within the occupancy grid mapping framework and utilizes a Rao-Blackwellized Particle Filter (RBPF) for estimating the vehicle trajectory. The work was done at Scania CV, where a heavy-duty vehicle equipped with a multi-Lidar sensor architecture was used. Low-level sensor fusion of the different Lidars was performed, and a parallelized implementation of the algorithm was achieved on a GPU. When tested on datasets frequently used in the community, the implemented algorithm outperformed the scan-matching technique and showed acceptable performance in comparison with another state-of-the-art RBPF implementation that incorporates several improvements to the algorithm. The performance of the complete system was evaluated on a designed set of real scenarios. The proposed system showed a significant improvement in the estimated trajectory and provided accurate occupancy representations of the vehicle's surroundings. The fusion module was found to build more informative occupancy grids than those obtained from the individual sensors.
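The Bayesian occupancy grid update at the heart of the mapping framework above can be sketched as follows in Python. This is a minimal 2D log-odds update with a simple inverse sensor model, not the parallelized GPU implementation of the thesis, and the grid size, resolution and log-odds increments are assumed for illustration: every cell traversed by a lidar ray is updated as free, and the cell containing the ray endpoint as occupied.

```python
import numpy as np

L_FREE, L_OCC = -0.4, 0.85          # log-odds increments of the inverse sensor model
GRID, RES = 100, 0.2                # 100 x 100 cells, 0.2 m per cell

def to_cell(p):
    return int(p[0] / RES), int(p[1] / RES)

def update_grid(logodds, origin, scan):
    """Fuse one lidar scan (list of 2D hit points; sensor frame = map frame here)."""
    ox, oy = to_cell(origin)
    for hit in scan:
        hx, hy = to_cell(hit)
        n = max(abs(hx - ox), abs(hy - oy))
        for k in range(1, n):                       # cells along the ray: observed free
            cx = ox + round(k * (hx - ox) / n)
            cy = oy + round(k * (hy - oy) / n)
            logodds[cy, cx] += L_FREE
        logodds[hy, hx] += L_OCC                    # endpoint cell: observed occupied
    return logodds

logodds = np.zeros((GRID, GRID))    # log-odds 0 corresponds to probability 0.5 (unknown)
origin = (10.0, 10.0)
angles = np.linspace(-np.pi / 4, np.pi / 4, 50)
scan = [(origin[0] + 6.0 * np.cos(a), origin[1] + 6.0 * np.sin(a)) for a in angles]
logodds = update_grid(logodds, origin, scan)

prob = 1.0 / (1.0 + np.exp(-logodds))   # back to occupancy probabilities
print(prob.max(), prob.min())           # endpoint cells well above 0.5, traversed cells below
```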