121 |
Pose Estimation and 3D Bounding Box Prediction for Autonomous Vehicles Through Lidar and Monocular Camera Sensor Fusion
Wale, Prajakta Nitin 08 August 2024 (has links)
This thesis investigates the integration of transfer learning with ResNet-101 and compares its performance with VGG-19 for 3D object detection in autonomous vehicles. ResNet-101 is a deep convolutional neural network with 101 layers, and VGG-19 is one with 19 layers. The research emphasizes the fusion of camera and lidar outputs to enhance the accuracy of 3D bounding box estimation, which is critical in occluded environments. Selecting an appropriate backbone for feature extraction is pivotal for achieving high detection accuracy. To address this challenge, we propose a method leveraging transfer learning with ResNet-101, pretrained on large-scale image datasets, to improve feature extraction capabilities. An averaging technique is applied to the outputs of the two sensor branches to obtain the final bounding box. The experimental results demonstrate that the ResNet-101 based model outperforms the VGG-19 based model in terms of accuracy and robustness. This study provides valuable insights into the effectiveness of transfer learning and multi-sensor fusion in advancing 3D object detection for autonomous driving. / Master of Science / In the realm of computer vision, the quest for more accurate and robust 3D object detection pipelines remains an ongoing pursuit. This thesis investigates advanced techniques to improve 3D object detection by comparing two popular deep learning models, ResNet-101 and VGG-19. The study focuses on enhancing detection accuracy by combining the outputs from two distinct methods: one that uses a monocular camera to estimate 3D bounding boxes and another that employs lidar's bird's-eye view (BEV) data, converting it to image-based 3D bounding boxes. This fusion of outputs is critical in environments where objects may be partially obscured. By leveraging transfer learning, a method in which models pre-trained on larger datasets are fine-tuned for a specific application, the research shows that ResNet-101 significantly outperforms VGG-19 in terms of accuracy and robustness. The approach involves averaging the outputs from both methods to refine the final 3D bounding box estimation. This work highlights the effectiveness of combining different detection methodologies and using advanced machine learning techniques to advance 3D object detection technology.
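As a rough illustration of the steps described above, the Python sketch below loads a pretrained ResNet-101 as a feature-extraction backbone and averages a camera-derived and a lidar-derived 3D box into a single estimate. The 7-parameter box layout (centre, dimensions, yaw), the circular handling of yaw, the equal weighting, and the torchvision API version are illustrative assumptions rather than the exact pipeline of the thesis.

import math
import torch
import torchvision

# Transfer learning: start from ResNet-101 pretrained on ImageNet and reuse it
# as a feature extractor (torchvision >= 0.13 weights API assumed).
backbone = torchvision.models.resnet101(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()   # expose the 2048-d feature vector

def average_boxes(cam_box, lidar_box):
    """Fuse two 3D boxes given as (x, y, z, l, w, h, yaw) by simple averaging.

    Centre and dimensions are averaged arithmetically; yaw is averaged on the
    unit circle so that angles near +/-pi do not cancel out.
    """
    fused = [(c + l) / 2.0 for c, l in zip(cam_box[:6], lidar_box[:6])]
    yaw = math.atan2(
        (math.sin(cam_box[6]) + math.sin(lidar_box[6])) / 2.0,
        (math.cos(cam_box[6]) + math.cos(lidar_box[6])) / 2.0,
    )
    return fused + [yaw]

# Example: one box predicted from the monocular image, one from the lidar BEV.
camera_box = [12.1, -0.4, 1.6, 3.9, 1.7, 1.5, 0.10]
lidar_box = [12.4, -0.2, 1.5, 4.1, 1.8, 1.5, 0.06]
print(average_boxes(camera_box, lidar_box))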
|
122 |
Performance Enhancement Of Intrusion Detection System Using Advances In Sensor Fusion
Thomas, Ciza 04 1900 (has links)
The technique of sensor fusion addresses the issues relating to the optimality of decision-making in a multiple-sensor framework. Advances in sensor fusion make it possible to detect both rare and new attacks. This thesis discusses this assertion in detail and describes the theoretical and experimental work done to show its validity.
The attack-detector relationship is initially modeled and validated to understand the detection scenario. The different metrics available for the evaluation of intrusion detection systems are also introduced. The usefulness of the data set used for experimental evaluation has been demonstrated. The issues connected with intrusion detection systems are analyzed, and the need for incorporating multiple detectors and their fusion is established in this work. Sensor fusion provides advantages with respect to reliability and completeness, in addition to intuitive and meaningful results. The goal of this work is to investigate how to combine data from diverse intrusion detection systems in order to improve the detection rate and reduce the false-alarm rate. The primary objective of the proposed thesis work is to develop a theoretical and practical basis for enhancing the performance of intrusion detection systems using advances in sensor fusion with easily available intrusion detection systems. This thesis introduces the mathematical basis for sensor fusion in order to provide enough support for the acceptability of sensor fusion in the performance enhancement of intrusion detection systems. The thesis also shows the practical feasibility of performance enhancement using advances in sensor fusion and discusses various sensor fusion algorithms, their characteristics, and related design and implementation issues. We show that it is possible to enhance the performance of intrusion detection systems by setting proper threshold bounds and also by rule-based fusion. We introduce an architecture called data-dependent decision fusion as a framework for building intrusion detection systems using sensor fusion based on data dependency. Furthermore, we provide information about the types of data, the data skewness problems, and the most effective algorithm for detecting different types of attacks. This thesis also proposes and incorporates a modified evidence theory for the fusion unit, which performs very well for the intrusion detection application. Future improvements in individual IDSs can also be easily incorporated in this technique in order to obtain better detection capabilities. Experimental evaluation shows that the proposed methods have the capability of detecting a significant percentage of rare and new attacks. The improved performance of the IDS using the algorithms developed in this thesis, if deployed fully, would contribute to an enormous reduction in successful attacks over time. This has been demonstrated in the thesis and is a step in the right direction towards making cyberspace safer.
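Since the fusion unit described above builds on a modified evidence theory, a minimal sketch of the classical Dempster-Shafer combination rule over a two-element frame {attack, normal} may help fix ideas; the binary frame and the example mass assignments are illustrative assumptions, not the thesis's modified rule.

def dempster_combine(m1, m2):
    """Combine two mass functions over the frame {'attack', 'normal'}.

    Masses are dicts keyed by frozenset focal elements; Dempster's rule sums
    products of intersecting focal elements and renormalises by 1 - conflict.
    """
    combined = {}
    conflict = 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully contradict each other")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

A = frozenset({"attack"})
N = frozenset({"normal"})
ANY = frozenset({"attack", "normal"})

# Hypothetical beliefs reported by two independent IDS sensors for one event.
ids1 = {A: 0.6, N: 0.1, ANY: 0.3}
ids2 = {A: 0.5, N: 0.2, ANY: 0.3}
print(dempster_combine(ids1, ids2))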
|
123 |
Machine Learning for Radar in Health Applications: Using machine learning with multiple radars to enhance fall detection
Raskov, Kristoffer, Christiansson, Oliver January 2022 (has links)
Two mm-wave frequency modulated continuous wave (FMCW) radars were combined with a recurrent neural network (RNN) to perform fall detection. The purpose was to find methods to implement a multi-radar setup for healthcare monitoring and to study the resulting models’ resilience to interference and other obstacles, such as re-arranging the radars in the room. Single-board computers (SBCs) controlled the radars to record and transfer data over Ethernet to a PC. The Ethernet connection also allowed synchronization with the network time protocol (NTP), which was necessary to put the data from the two sensors in correspondence. The proposed RNN used two bidirectional long-short term memory (Bi-LSTM) layers with L2-regularization and dropout layers. It had an overall accuracy of 95.15% and 98.11% recall with a test set. Performance in live testing varied with different arrangements, with an accuracy of 98% with the radars along the same wall, 94% with the radars diagonally, and 90% with an alternative arrangement that the RNN model had not seen during training. However, the latter arrangement resulted in a recall of 95.7%, with false alarms reducing the overall performance. In conclusion, the model performed adequately for fall detection, even with different radar arrangements but could still be sensitive to interference. / Två millimetervågs-radarsystem av typen frequency modulated continuous wave (FMCW) kombinerades för att med hjälp av ett recurrent neural network (RNN) utföra falldetektering. Syftet var att finna metoder för att implementera en multiradarplatform för hälsoövervakning samt att studera de resulterande modellernas tolerans mot interferens och andra hinder så som att radarsystemen placeras på olika sätt i rummet. Enkortsdatorer kontrollerade radarsystemen för att kunna spela in och överföra data över Ethernet till en PC. Ethernetanslutningen möjliggjorde även synkronisering över network time protocol (NTP), vilket var nödvändigt för att sammanlänka datan från de båda sensorerna. Det föreslagna RNN:et använde två dubbelriktade (bidirectional) long-short term memory (Bi-LSTM) lager med L2-regularisering och dropout-lager. Det hade en total noggrannhet på 95.15% och 98.11% recall med ett test-set. Prestandan vid testning i drift varierade beroende på olika uppställningar av radarmodulerna, med en noggrannhet på 98% då de placerades längs samma vägg, 94% då de placerades diagonalt och 90% vid en alternativ uppställning som RNN-modellen inte hade sett när den tränades. Det senare resulterade dock i 95.7% recall, där falsklarm var den främsta felkällan. Sammanfattningsvis presterade modellen bra för falldetektering, även med olika uppställningar, men den verkar fortfarande vara känslig för interferens.
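A minimal PyTorch sketch of the kind of two-layer bidirectional LSTM classifier described above is given below; the input feature size, classifying from the last time step, and weight decay standing in for L2 regularisation are assumptions made for illustration, not the authors' exact network or feature pipeline from the two radars.

import torch
import torch.nn as nn

class FallDetector(nn.Module):
    """Two stacked bidirectional LSTM layers with dropout, then a binary head."""
    def __init__(self, n_features=64, hidden=128, dropout=0.3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            batch_first=True, bidirectional=True,
                            dropout=dropout)
        self.drop = nn.Dropout(dropout)
        self.head = nn.Linear(2 * hidden, 2)        # fall / no fall

    def forward(self, x):                           # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(self.drop(out[:, -1]))     # classify from last time step

model = FallDetector()
# weight_decay plays the role of L2 regularisation on the parameters.
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
logits = model(torch.randn(8, 100, 64))             # 8 sequences of 100 frames
print(logits.shape)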
|
124 |
Architectures adaptives et reconfigurables de fusion de données dans les systèmes de positionnement pour la navigation / Adaptive and reconfigurable data fusion architectures in positioning navigation systems
Liu, Guopei January 2008 (has links)
In vehicle positioning systems, any of the sensors can, at any moment, fail temporarily or permanently or stop sending information. This has repercussions on safety and health, as well as on financial or even legal information. Although new design practices tend to minimize sensor failures, it is recognized that such events can still occur. In such a case, the faulty sensor must be identified and isolated to avoid corrupting the overall estimates and, ultimately, the system must be able to reconfigure itself to overcome the deficiency caused by the failure. In short, a navigation system must be robust and adaptive. This thesis proposes several data fusion architectures capable of adapting to sensor failures. The various approaches use a Kalman filter in combination with fault detection to produce robust positioning modules. The modules must be able to operate in situations where the GPS input is corrupted or unavailable, or where one or more position sensors are faulty or stuck. The working principle is to modify the Kalman filter gains based on the normalized errors between the estimated states and the observations. To evaluate the proposed architecture, various sensor faults and performance degradations were implemented and simulated. The experiments show that the proposed solutions can compensate for most of the errors associated with sensor faults or performance degradations, and that the resulting positioning accuracy is significantly improved.
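To make the stated working principle concrete, the following sketch shows one way a Kalman filter update can be gated on the normalized error (innovation) between the estimated state and the observation; the chi-square threshold, the noise-inflation rule, and the toy 2-D measurement model are illustrative assumptions, not the architectures proposed in the thesis.

import numpy as np

def gated_kf_update(x, P, z, H, R, gate=9.21):
    """One Kalman measurement update with an innovation-based fault test.

    The normalised innovation squared (NIS) is compared against a chi-square
    threshold (9.21 ~ 99% for 2 degrees of freedom); a failed test is treated
    as a suspected sensor fault and the measurement noise is inflated, which
    shrinks the Kalman gain instead of letting the fault corrupt the estimate.
    """
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    nis = float(y.T @ np.linalg.inv(S) @ y)
    if nis > gate:                      # suspected fault: down-weight the sensor
        R = R * (nis / gate)
        S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # (possibly reduced) Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new, nis

# Toy example: 2-D position state observed directly by one positioning sensor.
x0 = np.array([0.0, 0.0])
P0 = np.eye(2)
H = np.eye(2)
R = np.eye(2) * 0.25
z = np.array([5.0, -4.0])               # wildly inconsistent reading
print(gated_kf_update(x0, P0, z, H, R))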
|
125 |
A Kalman Filter Based Attitude Heading Reference System Using a Low Cost Inertial Measurement Unit
Leccadito, Matthew 30 July 2013 (has links)
This paper describes the development of a sensor fusion algorithm based on a Kalman filter architecture, in combination with a low cost Inertial Measurement Unit (IMU), for an Attitude Heading Reference System (AHRS). A low cost IMU takes advantage of MEMS technology, enabling cheap, compact, low grade sensors. The use of low cost IMUs is primarily targeted towards Unmanned Aerial Vehicle (UAV) applications due to the requirements for small package size, light weight, and low energy consumption. The highly dynamic nature of smaller airframes, coupled with the typical vibration-induced noise of UAVs, requires an efficient, reliable, and robust AHRS for vehicle control. To eliminate the singularities at 90° on the pitch and roll axes, and to keep the computational efficiency high, quaternions are used for the attitude state representation.
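Because the AHRS above represents attitude with a quaternion to avoid the 90° singularities, the following sketch shows the standard first-order quaternion propagation from rate-gyro measurements; the 100 Hz sample rate and the toy yaw-rate input are illustrative assumptions, not the thesis's filter.

import numpy as np

def quat_multiply(q, r):
    """Hamilton product of two quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def propagate(q, gyro_rad_s, dt):
    """First-order integration of q_dot = 0.5 * q (x) [0, omega]."""
    omega = np.concatenate(([0.0], gyro_rad_s))
    q = q + 0.5 * dt * quat_multiply(q, omega)
    return q / np.linalg.norm(q)        # re-normalise to keep a unit quaternion

q = np.array([1.0, 0.0, 0.0, 0.0])      # start level
gyro = np.array([0.0, 0.0, np.deg2rad(10.0)])   # 10 deg/s yaw rate
for _ in range(100):                    # 1 s at 100 Hz
    q = propagate(q, gyro, 0.01)
print(q)                                # ~10 degree rotation about z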
|
126 |
Development of an autonomous unmanned aerial vehicle: specification of a fixed-wing vertical takeoff and landing aircraft / Desenvolvimento de um veículo aéreo não tripulado autônomo: especificação de uma aeronave asa-fixa capaz de decolar e aterrissar verticalmente
Silva, Natássya Barlate Floro da 29 March 2018 (has links)
Several configurations of Unmanned Aerial Vehicles (UAVs) were proposed to support different applications. One of them is the tailsitter, a fixed-wing aircraft that takes off and lands on its own tail, with the high endurance advantage from fixed-wing aircraft and, as helicopters and multicopters, not requiring a runway during takeoff and landing. However, a tailsitter has a complex operation with multiple flight stages, each one with its own particularities and requirements, which emphasises the necessity of a reliable autopilot for its use as a UAV. The literature already introduces tailsitter UAVs with complex mechanisms or with multiple counter-rotating propellers, but not one with only one propeller and without auxiliary structures to assist in the takeoff and landing. This thesis presents a tailsitter UAV, named AVALON (Autonomous VerticAL takeOff and laNding), and its autopilot, composed of 3 main units: Sensor Unit, Navigation Unit and Control Unit. In order to choose the most appropriate techniques for the autopilot, different solutions are evaluated. For Sensor Unit, Extended Kalman Filter and Unscented Kalman Filter estimate spatial information from multiple sensors data. Lookahead, Pure Pursuit and Line-of-Sight, Nonlinear Guidance Law and Vector Field path-following algorithms are extended to incorporate altitude information for Navigation Unit. In addition, a structure based on classical methods with decoupled Proportional-Integral-Derivative controllers is compared to a new control structure based on dynamic inversion. Together, all these techniques show the efficacy of AVALONs autopilot. Therefore, AVALON results in a small electric tailsitter UAV with a simple design, with only one propeller and without auxiliary structures to assist in the takeoff and landing, capable of executing all flight stages. / Diversas configurações de Veículos Aéreos Não Tripulados (VANTs) foram propostas para serem utilizadas em diferentes aplicações. Uma delas é o tailsitter, uma aeronave de asa fixa capaz de decolar e pousar sobre a própria cauda. Esse tipo de aeronave apresenta a vantagem de aeronaves de asa fixa de voar sobre grandes áreas com pouco tempo e bateria e, como helicópteros e multicópteros, não necessita de pista para decolar e pousar. Porém, um tailsitter possui uma operação complexa, com múltiplos estágios de voo, cada um com suas peculiaridades e requisitos, o que enfatiza a necessidade de um piloto automático confiável para seu uso como um VANT. A literatura já introduz VANTs tailsitters com mecanismos complexos ou múltiplos motores contra-rotativos, mas não com apenas um motor e sem estruturas para auxiliar no pouso e na decolagem. Essa tese apresenta um VANT tailsitter, chamado AVALON (Autonomous VerticAL takeOff and laNding), e seu piloto automático, composto por 3 unidades principais: Unidade Sensorial, Unidade de Navegação e Unidade de Controle. Diferentes soluções são avaliadas para a escolha das técnicas mais apropriadas para o piloto automático. Para a Unidade Sensorial, Extended Kalman Filter e Unscented Kalman Filter estimam a informação espacial de múltiplos dados de diversos sensores. Os algoritmos de seguimento de trajetória Lookahead, Pure Pursuit and Line-of-Sight, Nonlinear Guidance Law e Vector Field são estendidos para considerar a informação da altitude para a Unidade de Navegação. 
Além do mais, uma estrutura baseada em métodos clássicos com controladores Proporcional- Integral-Derivativo desacoplados é comparada a uma nova estrutura de controle baseada em dinâmica inversa. Juntas, todas essas técnicas demonstram a eficácia do piloto automático do AVALON. Portanto, AVALON resulta em um VANT tailsitter pequeno e elétrico, com um design simples, apenas um motor e sem estruturas para auxiliar o pouso e a decolagem, capaz de executar todos os estágios de voo.
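As an illustration of the lookahead-style path following extended with altitude that the Navigation Unit evaluates, the sketch below places a virtual target a fixed distance ahead on a 3D path segment and derives course and altitude commands from it; the lookahead distance, the straight-segment path, and the specific command outputs are assumptions for illustration, not AVALON's implementation.

import numpy as np

def lookahead_guidance(pos, wp_a, wp_b, lookahead=30.0):
    """Carrot-chasing guidance towards a straight segment from wp_a to wp_b.

    The aircraft position is projected onto the segment, a virtual target is
    placed `lookahead` metres further along it, and the commanded course and
    altitude point at that target (the altitude command is the 3-D extension).
    """
    d = wp_b - wp_a
    u = d / np.linalg.norm(d)                        # unit vector along the path
    s = np.clip(np.dot(pos - wp_a, u), 0.0, np.linalg.norm(d))
    carrot = wp_a + (s + lookahead) * u              # virtual target on the path
    to_carrot = carrot - pos
    course_cmd = np.arctan2(to_carrot[1], to_carrot[0])   # heading command [rad]
    altitude_cmd = carrot[2]                               # altitude command [m]
    return course_cmd, altitude_cmd

# Toy example: climbing leg from (0, 0, 50) m to (500, 0, 120) m, aircraft offset south.
pos = np.array([100.0, -20.0, 60.0])
course, alt = lookahead_guidance(pos, np.array([0.0, 0.0, 50.0]),
                                 np.array([500.0, 0.0, 120.0]))
print(np.degrees(course), alt)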
|
127 |
Localisation précise d'un véhicule par couplage vision/capteurs embarqués/systèmes d'informations géographiques / Localisation of a vehicle through low-cost sensors and geographic information systems fusion
Salehi, Achkan 11 April 2018 (has links)
La fusion entre un ensemble de capteurs et de bases de données dont les erreurs sont indépendantes est aujourd'hui la solution la plus fiable et donc la plus répandue de l'état de l'art au problème de la localisation. Les véhicules semi-autonomes et autonomes actuels, ainsi que les applications de réalité augmentée visant les contextes industriels exploitent des graphes de capteurs et de bases de données de tailles considérables, dont la conception, la calibration et la synchronisation n'est, en plus d'être onéreuse, pas triviale. Il est donc important afin de pouvoir démocratiser ces technologies, d'explorer la possibilité de l'exploitation de capteurs et bases de données bas-coûts et aisément accessibles. Cependant, ces sources d'information sont naturellement plus incertaines, et plusieurs obstacles subsistent à leur utilisation efficace en pratique. De plus, les succès récents mais fulgurants des réseaux profonds dans des tâches variées laissent penser que ces méthodes peuvent représenter une alternative peu coûteuse et efficace à certains modules des systèmes de SLAM actuels. Dans cette thèse, nous nous penchons sur la localisation à grande échelle d'un véhicule dans un repère géoréférencé à partir d'un système bas-coût. Celui-ci repose sur la fusion entre le flux vidéo d'une caméra monoculaire, des modèles 3d non-texturés mais géoréférencés de bâtiments, des modèles d'élévation de terrain et des données en provenance soit d'un GPS bas-coût soit de l'odométrie du véhicule. Nos travaux sont consacrés à la résolution de deux problèmes. Le premier survient lors de la fusion par terme barrière entre le VSLAM et l'information de positionnement fournie par un GPS bas-coût. Cette méthode de fusion est à notre connaissance la plus robuste face aux incertitudes du GPS, mais est plus exigeante en matière de ressources que la fusion via des fonctions de coût linéaires. Nous proposons une optimisation algorithmique de cette méthode reposant sur la définition d'un terme barrière particulier. Le deuxième problème est le problème d'associations entre les primitives représentant la géométrie de la scène (e.g. points 3d) et les modèles 3d des bâtiments. Les travaux précédents se basent sur des critères géométriques simples et sont donc très sensibles aux occultations en milieu urbain. Nous exploitons des réseaux convolutionnels profonds afin d'identifier et d'associer les éléments de la carte correspondants aux façades des bâtiments aux modèles 3d. Bien que nos contributions soient en grande partie indépendantes du système de SLAM sous-jacent, nos expériences sont basées sur l'ajustement de faisceaux contraint basé images-clefs. Les solutions que nous proposons sont évaluées sur des séquences de synthèse ainsi que sur des séquences urbaines réelles sur des distances de plusieurs kilomètres. Ces expériences démontrent des gains importants en performance pour la fusion VSLAM/GPS, et une amélioration considérable de la robustesse aux occultations dans la définition des contraintes. / The fusion between sensors and databases whose errors are independent is the most reliable and therefore most widespread solution to the localization problem. Current autonomous and semi-autonomous vehicles, as well as augmented reality applications targeting industrial contexts, exploit large sensor and database graphs that are difficult and expensive to synchronize and calibrate. Thus, the democratization of these technologies requires exploring the possibility of exploiting low-cost and easily accessible sensors and databases. These information sources are naturally tainted by higher uncertainty levels, and many obstacles to their effective and efficient practical usage persist. Moreover, the recent but dazzling successes of deep neural networks in various tasks seem to indicate that they could be a viable and low-cost alternative to some components of current SLAM systems. In this thesis, we focused on large-scale localization of a vehicle in a georeferenced coordinate frame from a low-cost system, which is based on the fusion between a monocular video stream, 3d non-textured but georeferenced building models, terrain elevation models and data either from a low-cost GPS or from vehicle odometry. Our work targets the resolution of two problems. The first one is related to the fusion, via barrier term optimization, of VSLAM and positioning measurements provided by a low-cost GPS. This method is, to the best of our knowledge, the most robust against GPS uncertainties, but it is more demanding in terms of computational resources. We propose an algorithmic optimization of that approach based on the definition of a novel barrier term. The second problem is the data association problem between the primitives that represent the geometry of the scene (e.g. 3d points) and the 3d building models. Previous works in that area use simple geometric criteria and are therefore very sensitive to occlusions in urban environments. We exploit deep convolutional neural networks in order to identify and associate the elements of the map that correspond to 3d building model façades. Although our contributions are for the most part independent from the underlying SLAM system, we based our experiments on constrained key-frame based bundle adjustment. The solutions that we propose are evaluated on synthetic sequences as well as on real urban datasets. These experiments show important performance gains for VSLAM/GPS fusion, and considerable improvements in the robustness of building constraints to occlusions.
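The particular barrier term defined in the thesis is not reproduced here; the sketch below only illustrates the general idea of a log-barrier penalty that keeps an estimated camera position inside a confidence ball around a GPS fix, which would be added to the usual reprojection cost in a constrained bundle adjustment. The radius, the weight mu, and the spherical confidence region are illustrative assumptions.

import numpy as np

def gps_log_barrier(cam_pos, gps_pos, radius, mu=1.0):
    """Log-barrier penalty keeping the camera position inside a GPS confidence ball.

    The term is finite while ||cam_pos - gps_pos|| < radius and grows without
    bound as the estimate approaches the boundary, so the optimiser may move
    freely inside the ball (trusting vision) but cannot drift outside it.
    """
    sq_dist = float(np.sum((cam_pos - gps_pos) ** 2))
    slack = radius ** 2 - sq_dist
    if slack <= 0.0:
        return np.inf                   # outside the ball: infeasible
    return -mu * np.log(slack / radius ** 2)   # ~0 at the centre, +inf at the edge

# Toy example: GPS fix at (10, 20, 0) with a 5 m confidence radius.
gps = np.array([10.0, 20.0, 0.0])
print(gps_log_barrier(np.array([11.0, 21.0, 0.0]), gps, 5.0))   # small penalty
print(gps_log_barrier(np.array([14.9, 20.0, 0.0]), gps, 5.0))   # large penalty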
|
128 |
Safe human-robot interaction based on multi-sensor fusion and dexterous manipulation planning
Corrales Ramón, Juan Antonio 21 July 2011 (has links)
This thesis presents several new techniques for developing safe and flexible human-robot interaction tasks where human operators cooperate with robotic manipulators. The contributions of this thesis are divided into two fields: the development of safety strategies which modify the normal behavior of the robotic manipulator when the human operator is near the robot, and the development of dexterous manipulation tasks for in-hand manipulation of objects with a multi-fingered robotic hand installed at the end-effector of a robotic manipulator. / This work was supported by the Valencian Government through the research project "Infraestructura 05/053" and by the Spanish Ministry of Education and Science through the pre-doctoral grant AP2005-1458 and the research projects DPI2005-06222 and DPI2008-02647, which constitute the research framework of this thesis.
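One common family of safety strategies of the kind mentioned above scales the manipulator's commanded speed with the measured human-robot separation; the short sketch below illustrates that idea only, with the distance thresholds and the linear ramp chosen arbitrarily rather than taken from the thesis.

def speed_scale(distance_m, stop_dist=0.3, full_speed_dist=1.5):
    """Scale factor in [0, 1] applied to the manipulator's commanded velocity.

    Below `stop_dist` the robot halts, beyond `full_speed_dist` it runs at its
    nominal speed, and in between the speed ramps up linearly with distance.
    """
    if distance_m <= stop_dist:
        return 0.0
    if distance_m >= full_speed_dist:
        return 1.0
    return (distance_m - stop_dist) / (full_speed_dist - stop_dist)

for d in (0.2, 0.6, 1.0, 2.0):          # human-robot distances in metres
    print(d, speed_scale(d))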
|
129 |
An Alternative Sensor Fusion Method For Object Orientation Using Low-Cost MEMS Inertial Sensors
Bouffard, Joshua Lee 01 January 2016 (has links)
This thesis develops an alternative sensor fusion approach for object orientation using low-cost MEMS inertial sensors. The alternative approach focuses on the unique challenges of small UAVs. Such challenges include vibration-induced noise on the accelerometer and bias offset errors of the rate gyroscope. To overcome these challenges, a sensor fusion algorithm combines the measured data from the accelerometer and rate gyroscope to achieve a single output free from vibrational noise and bias offset errors.
One of the most prevalent sensor fusion algorithms used for orientation estimation is the Extended Kalman filter (EKF). The EKF performs the fusion process by first creating the process model using the nonlinear equations of motion and then establishing a measurement model. With the process and measurement models established, the filter operates by propagating the mean and covariance of the states through time.
The success of the EKF relies on the ability to establish a representative process and measurement model of the system. In most applications, the EKF measurement model utilizes the accelerometer and GPS-derived accelerations to determine an estimate of the orientation. However, if the GPS-derived accelerations are not available, the measurement model becomes less reliable when subjected to harsh vibrational environments. This situation led to the alternative approach, which focuses on the correlation between the rate gyroscope and the accelerometer-derived angle. The correlation between the two sensors then determines how much the algorithm will use one sensor over the other. The result is a measurement that does not suffer from the vibrational noise or from bias offset errors.
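The abstract does not spell out the exact blending law, so the sketch below only illustrates the general idea: a complementary-style filter whose accelerometer weight is driven by the recent correlation between the gyro-propagated angle and the accelerometer-derived angle. The window length, the weight mapping, and the toy signals are assumptions made for illustration.

import numpy as np

def fuse_attitude(gyro_rate, accel_angle, dt, window=50):
    """Blend gyro-integrated and accelerometer-derived angles sample by sample.

    The accelerometer weight grows with the recent correlation between the two
    angle histories: when vibration corrupts the accelerometer the histories
    decorrelate and the filter leans on the gyro, and when the two sensors
    agree the accelerometer is trusted more, which bounds the gyro drift.
    """
    fused = 0.0
    gyro_hist, accel_hist, out = [], [], []
    for rate, acc in zip(gyro_rate, accel_angle):
        gyro_angle = fused + rate * dt               # propagate from last fused angle
        gyro_hist.append(gyro_angle)
        accel_hist.append(acc)
        if len(gyro_hist) > window:
            gyro_hist.pop(0)
            accel_hist.pop(0)
        alpha = 0.02                                 # default accelerometer weight
        if len(gyro_hist) >= 5 and np.std(gyro_hist) > 0 and np.std(accel_hist) > 0:
            corr = np.corrcoef(gyro_hist, accel_hist)[0, 1]
            alpha = 0.02 + 0.08 * max(0.0, corr)     # weight in [0.02, 0.10]
        fused = (1.0 - alpha) * gyro_angle + alpha * acc
        out.append(fused)
    return np.array(out)

# Toy signals: constant 5 degree true pitch, a 0.5 deg/s gyro bias, noisy accelerometer.
t = np.arange(0.0, 10.0, 0.01)
rng = np.random.default_rng(1)
est = fuse_attitude(np.full_like(t, 0.5), 5.0 + rng.normal(0.0, 2.0, t.size), 0.01)
print(est[-1])                                       # settles a little below 5 degrees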
|
130 |
Multiple Platform Bias Error Estimation / Estimering av Biasfel med Multipla Plattformar
Wiklund, Åsa January 2004 (has links)
Sensor fusion has long been recognized as a means to improve target tracking. Sensor fusion deals with the merging of several signals into one to get a better and more reliable result. To get an improved and more reliable result you have to trust the incoming data to be correct and not to contain unknown systematic errors. This thesis tries to find and estimate the size of the systematic errors that appear when we have a multi-platform environment and data is shared among the units. To be more precise, the error estimated within the scope of this thesis appears when platforms cannot determine their positions correctly and share target tracking data with their own corrupted position as a basis for determining the target's position. The algorithms developed in this thesis use Kalman filter theory, including the extended Kalman filter and the information filter, to estimate the platform location bias error. Three algorithms are developed with satisfying results. Depending on time constraints and computational demands, any one of the algorithms could be preferred.
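A minimal numeric illustration of the underlying idea is sketched below: if a platform's shared target reports are consistently offset from a reference track of the same target, that offset can be estimated as a slowly varying bias state with a Kalman filter. The direct-difference measurement model, the noise levels, and the availability of a reference track are illustrative assumptions, not the three algorithms developed in the thesis.

import numpy as np

def estimate_platform_bias(reported, reference, r_var=25.0, q_var=0.01):
    """Estimate a constant 2-D platform location bias from shared track data.

    Each measurement is the difference between the target position reported by
    the remote platform and a reference track of the same target; under the
    model 'reported = truth + bias + noise' this difference observes the bias
    directly, so a random-walk Kalman filter converges to the offset.
    """
    bias = np.zeros(2)                  # state: (east, north) bias in metres
    P = np.eye(2) * 100.0               # initial uncertainty
    R = np.eye(2) * r_var               # measurement noise of one difference
    Q = np.eye(2) * q_var               # allows the bias to drift slowly
    for z in reported - reference:      # one difference per shared plot
        P = P + Q                       # predict (bias modelled as a random walk)
        K = P @ np.linalg.inv(P + R)    # Kalman gain
        bias = bias + K @ (z - bias)    # update
        P = (np.eye(2) - K) @ P
    return bias, P

rng = np.random.default_rng(0)
true_bias = np.array([40.0, -25.0])                   # platform offset in metres
reference = rng.uniform(0, 1000, size=(200, 2))       # fused target track
reported = reference + true_bias + rng.normal(0, 5, (200, 2))
print(estimate_platform_bias(reported, reference)[0]) # ~ [40, -25]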
|