91 |
Amélioration des méthodes de navigation vision-inertiel par exploitation des perturbations magnétiques stationnaires de l’environnement / Improving Visual-Inertial Navigation Using Stationary Environmental Magnetic Disturbances. Caruso, David, 01 June 2018.
This thesis addresses the issue of positioning in 6-DOF that arises from augmented reality applications and focuses on solutions based on embedded sensors. Nowadays, the performance reached by visual-inertial navigation systems is starting to be adequate for AR applications. Nonetheless, those systems rely on position corrections from visual sensors at a relatively high frequency to mitigate the rapid drift of low-cost inertial sensors, which is a problem when the visual environment is unfavorable. In parallel, recent work has shown that it is feasible to leverage the magnetic field to reduce inertial integration drift, thanks to a new type of low-cost IMU that includes, in addition to the traditional accelerometers and gyrometers, a network of magnetometers. Yet this magnetic dead-reckoning approach fails if the stationarity and non-uniformity hypotheses on the magnetic field do not hold in the vicinity of the sensor. We develop a robust dead-reckoning solution that simultaneously combines information from all these sources: magnetic, visual, and inertial sensors. We present several approaches to the fusion problem, based on either filtering or non-linear optimization, and we develop an efficient way to include magnetic error terms in a classical bundle adjustment, inspired by ideas already used for inertial terms. We evaluate the performance of these estimators on data from real sensors and demonstrate the benefits of the fusion compared to visual-inertial and magneto-inertial solutions. Finally, we study theoretical properties of the estimators that are linked to invariance theory.
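As a rough illustration of how magnetic terms can sit alongside visual terms in a bundle-adjustment-style cost, the sketch below jointly optimizes a short trajectory against relative-pose ("visual") residuals and residuals comparing magnetometer readings with a stationary, non-uniform field model. The linear field model, the weights and all variable names are illustrative assumptions, not the formulation used in the thesis.

```python
# Toy joint optimization over a short 2D trajectory: "visual" relative-pose
# residuals plus magnetic residuals against a stationary, non-uniform field
# model m(p) = B0 + G @ p. All values and names are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
N = 5
true_pos = np.cumsum(rng.normal(0.0, 1.0, (N, 2)), axis=0)   # ground-truth poses

B0 = np.array([20.0, -5.0])                # assumed field at the origin
G = np.array([[0.3, 0.1], [-0.2, 0.4]])    # assumed spatial field gradient

def field(p):
    return B0 + G @ p

vis_meas = np.diff(true_pos, axis=0) + rng.normal(0.0, 0.05, (N - 1, 2))
mag_meas = np.array([field(p) for p in true_pos]) + rng.normal(0.0, 0.1, (N, 2))

def residuals(x):
    pos = x.reshape(N, 2)
    r_vis = (np.diff(pos, axis=0) - vis_meas).ravel()                # visual terms
    r_mag = (np.array([field(p) for p in pos]) - mag_meas).ravel()   # magnetic terms
    return np.concatenate([r_vis, 0.5 * r_mag])                      # 0.5 = relative weight

sol = least_squares(residuals, x0=np.zeros(2 * N))
print("max position error:", np.abs(sol.x.reshape(N, 2) - true_pos).max())
```

Because the assumed field is non-uniform and stationary, the magnetic residuals constrain absolute position, which is exactly the information that pure inertial integration lacks.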
|
92 |
Head impact detection with sensor fusion and machine learning. Strandberg, Aron, January 2022.
Head injury is common in many sports and elsewhere, and is often associated with different difficulties. One major problem is to identify the injury and assess its severity. Sometimes there is no sign of head injury, but a serious neck distortion has occurred, causing symptoms similar to head injuries, e.g. concussion or mild TBI (traumatic brain injury). This study investigated whether direct and indirect measurements of head kinematics, combined with machine learning and 3D visualization, can be used to identify head injury and assess its severity. Injury statistics have found that many severe head injuries are caused by oblique impacts. An oblique impact gives rise to both linear and rotational kinematics. Since the human brain is very sensitive to rotational kinematics, violent rotations of the head can result in large shear strains in the brain. White matter and white-matter connections in the brain are then disrupted by acceleration and deceleration, or by rotational acceleration, which in turn causes traumatic brain injuries such as diffuse axonal injury (DAI). Lately there have been many studies in this field using different types of new technologies, but the most prevalent trend is the rise of wearable sensors, which have become smaller, faster and more energy efficient, and which have been integrated into mouthguards and into inertial measurement units (IMUs) the size of a SIM card that measure and report a body's specific force. It has been shown that a 6-axis IMU (3-axis rotational and 3-axis acceleration measurements) may improve head injury prediction, but more data is needed for confirmation against existing head injury criteria, and new criteria that consider directional sensitivity need to be developed. Today, IMUs are typically used in self-driving cars, aircraft, spacecraft, satellites, etc. More and more studies that evaluate and utilize IMUs in new, uncharted fields have shown promise, especially in sports and in the neuroscience and medical fields. This study proposes a method to visualize head kinematics in 3D during a possible head injury event so that medical professionals can indirectly identify and assess the injury, as well as a direct method to identify head impacts and assess their severity with machine learning. Reconstructed head impacts and non-head impacts have been recorded using an open-source 9-axis IMU sensor and a proprietary 6-axis IMU sensor. To assess the head injury and its severity, existing head injury criteria such as the Abbreviated Injury Scale (AIS), Head Injury Criterion (HIC), Head Impact Power (HIP), Severity Index (SI) and Generalized Acceleration Model for Brain Injury Threshold (GAMBIT) have been introduced. To detect head impacts, including their severity, and non-head impacts, a Random Forest (RF) classifier and Support Vector Machine (SVM) classifiers with linear and radial basis function kernels have been proposed; the prediction results have been promising.
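To make one of the criteria above concrete, the sketch below computes the standard Head Injury Criterion (HIC) from a sampled resultant linear acceleration trace in g: HIC is the maximum over windows [t1, t2] of (t2 - t1) multiplied by the mean acceleration over that window raised to the power 2.5, here with the common 15 ms window limit. The half-sine test pulse and sampling rate are made-up illustrations, not data from the study.

```python
import numpy as np

def hic(accel_g, fs, max_window=0.015):
    """Head Injury Criterion: max over windows [t1, t2] (up to max_window s) of
    (t2 - t1) * (mean acceleration over the window) ** 2.5, acceleration in g."""
    dt = 1.0 / fs
    cum = np.concatenate([[0.0], np.cumsum(accel_g) * dt])  # running integral of a(t)
    n = len(accel_g)
    best = 0.0
    for i in range(n):
        j_max = min(n, i + int(max_window * fs))
        for j in range(i + 1, j_max + 1):
            T = (j - i) * dt
            avg = (cum[j] - cum[i]) / T
            best = max(best, T * avg ** 2.5)
    return best

# Illustrative 6 ms half-sine pulse peaking at 150 g, sampled at 10 kHz.
fs = 10_000
t = np.arange(0, 0.006, 1 / fs)
pulse = 150 * np.sin(np.pi * t / 0.006)
print(f"HIC15 ~ {hic(pulse, fs):.0f}")
```

A classifier such as the RF or SVM mentioned above could then use HIC alongside peak linear and rotational kinematics as input features.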
|
93 |
Sensor Fusion of GPS and Accelerometer Data for Estimation of Vehicle Dynamics / Sensorfusion av GPS och accelerometerdata för estimering av fordonsdynamik. Malmberg, Mats, January 2014.
Connected vehicles are a growing market. There are currently several such services available, but many of them are constrained in the sense that they are bound to recently produced cars and are either expensive or strongly limited in the services that they provide. In this master thesis we investigate the possibility of implementing a generic platform that is low cost and simple to install in any vehicle, but that still has the ability to provide a wide range of services. It is proposed that a crucial step in such a system is to reconstruct the vehicle's kinematics, as this enables the development of a wide range of services through feature extraction and interpretation of the results from a dynamics perspective. A mathematical model that describes how the kinematics can be reconstructed is proposed, and a filter that performs this reconstruction is implemented. Based on this reconstruction, two filters that interpret the output are implemented as a proof of concept for the proposed mathematical model. The complete filter solution is tested on measurement data from actual driving scenarios, and it is shown that we can identify when the vehicle makes a hard turn and find where the surrounding road conditions are poor.
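A generic illustration of this kind of reconstruction is a small kinematic Kalman filter in which accelerometer samples drive the high-rate prediction step and sparse GPS position fixes correct the accumulated drift. The one-dimensional model and noise values below are invented for the sketch; this is not the filter actually developed in the thesis.

```python
import numpy as np

# Minimal 1D constant-acceleration Kalman filter: the accelerometer drives the
# prediction step, sparse GPS position fixes drive the update step.
dt = 0.01
F = np.array([[1, dt], [0, 1]])        # state: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])    # acceleration input matrix
H = np.array([[1.0, 0.0]])             # GPS measures position only
Q = np.diag([1e-4, 1e-3])              # process noise (assumed)
R = np.array([[4.0]])                  # GPS noise, roughly 2 m std (assumed)

x = np.zeros((2, 1))
P = np.eye(2)

def predict(x, P, accel):
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, gps_pos):
    y = np.array([[gps_pos]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(2) - K @ H) @ P

for k in range(1000):                  # 100 Hz accelerometer, 1 Hz GPS
    x, P = predict(x, P, accel=0.2)
    if k % 100 == 0:
        x, P = update(x, P, gps_pos=0.5 * 0.2 * (k * dt) ** 2)
print(x.ravel())
```

Features such as hard turns or rough road segments could then be extracted from the reconstructed velocity and acceleration signals rather than from the raw sensor streams.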
|
94 |
Grid-Based Multi-Sensor Fusion for On-Road Obstacle Detection: Application to Autonomous Driving / Rutnätsbaserad multisensorfusion för detektering av hinder på vägen: tillämpning på självkörande bilar. Gálvez del Postigo Fernández, Carlos, January 2015.
Self-driving cars have recently become a challenging research topic, with the aim of making transportation safer and more efficient. Current advanced driving assistance systems (ADAS) allow cars to drive autonomously by following lane markings, identifying road signs and detecting pedestrians and other vehicles. In this thesis work we improve the robustness of autonomous cars by designing an on-road obstacle detection system. The proposed solution consists of the low-level fusion of radar and lidar within the occupancy grid framework. Two inference theories are implemented and evaluated: Bayesian probability theory and the Dempster-Shafer theory of evidence. Obstacle detection is performed through image processing of the occupancy grid. Finally, the additional features of Dempster-Shafer theory are leveraged by proposing a sensor performance estimation module and performing advanced conflict management. The work has been carried out at Volvo Car Corporation, where real experiments on a test vehicle have been performed under different environmental conditions and with different types of objects. The system has been evaluated according to the quality of the resulting occupancy grids, the detection rate, and the information content in terms of entropy. The results show a significant improvement in detection rate over single-sensor approaches. Furthermore, the Dempster-Shafer implementation may slightly outperform the Bayesian one when there is conflicting information, although its high computational cost limits its practical application. Finally, we demonstrate that the proposed solution is easily scalable to include additional sensors.
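To give a feel for the Bayesian variant of this cell-wise fusion, the sketch below updates an occupancy grid in log-odds form: each sensor contributes an inverse-sensor-model occupancy probability per cell and the contributions simply add. Grid size, the single obstacle cell and the per-sensor probabilities are invented for illustration; the actual radar and lidar inverse sensor models from the thesis are not reproduced.

```python
import numpy as np

# Bayesian (log-odds) occupancy fusion of two sensors over the same grid.
def logodds(p):
    return np.log(p / (1.0 - p))

grid = np.zeros((100, 100))            # log-odds, 0 means unknown (p = 0.5)

def fuse(grid, p_occupied_map):
    return grid + logodds(p_occupied_map)

# Example: lidar is confident about one obstacle cell, radar less so.
lidar = np.full(grid.shape, 0.5); lidar[50, 60] = 0.95
radar = np.full(grid.shape, 0.5); radar[50, 60] = 0.7

grid = fuse(fuse(grid, lidar), radar)
prob = 1.0 - 1.0 / (1.0 + np.exp(grid))   # back to probability
print(prob[50, 60])                        # higher than either sensor alone
```

In the Dempster-Shafer variant, each cell would instead carry masses on {occupied}, {free} and the full frame {occupied, free}, combined with Dempster's rule, which is what makes conflict between radar and lidar explicitly measurable.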
|
95 |
NOVEL ENTROPY FUNCTION BASED MULTI-SENSOR FUSION IN SPACE AND TIME DOMAIN: APPLICATION IN AUTONOMOUS AGRICULTURAL ROBOT. Md Nazmuzzaman Khan (10581479), 07 May 2021.
How can we transform an agricultural vehicle into an autonomous weeding robot? A robot that can run autonomously through a vegetable field, classify multiple types of weeds from a real-time video feed and then spray specific herbicides based on the previously classified weeds. In this research, we address some of the theoretical and practical challenges of such a transformation.

First, we propose a solution for real-time crop row detection for autonomous navigation of an agricultural vehicle, using domain knowledge and an unsupervised machine learning based approach. We apply a projective transformation to map the camera image plane to an image plane directly above the crop rows, so that parallel crop rows remain parallel. We then use color-based segmentation to separate crop and weed pixels from the background, and apply the hierarchical density-based spatial clustering of applications with noise (HDBSCAN) algorithm to differentiate between crop row clusters and weed clusters. Finally, we use random sample consensus (RANSAC) for robust line fitting through the detected crop row clusters. We test our algorithm against four well-established crop row detection methods in terms of processing time and accuracy. Our proposed method, Clustering Algorithm based RObust LIne Fitting (CAROLIF), shows significantly better accuracy than the three other methods, with an average intersection over union (IoU) value of 73%. We also test our algorithm on a video taken from an agricultural vehicle in a corn field in Indiana. CAROLIF shows promising results under lighting variation, vibration and unusual crop-weed growth.

Then we propose a robust weed classification system based on a convolutional neural network (CNN) and a novel decision-level, evidence-based multi-sensor fusion algorithm. We create a small dataset of three different weeds (giant ragweed, pigweed and cocklebur) commonly found in corn fields and train three different CNN architectures on it. Based on classification accuracy and inference time, we choose a VGG16 transfer learning architecture for real-time weed classification. To create a robust and stable weed classification pipeline, a multi-sensor fusion algorithm based on Dempster-Shafer (DS) evidence theory with a novel entropy function is proposed. The proposed entropy function is inspired by Shannon and Deng entropy but, under the DS framework, captures uncertainty better in certain scenarios. Our algorithm has two advantages over other sensor fusion algorithms. First, it can be applied in both the space and time domains to fuse results from multiple sensors and produce more robust results. Second, it can detect which sensor in the array is faulty and compensate by giving it a lower weight in real time. The algorithm calculates the evidence distance between sensors and determines whether one sensor agrees or disagrees with another; it then rewards the sensors that agree according to their information quality, calculated using our novel entropy function. The proposed algorithm can combine highly conflicting evidence from multiple sensors and overcomes the limitations of the original DS combination rule. Tested on real and simulated data, it shows a better convergence rate, anti-disturbance ability and transition properties than other methods in the open literature.

Finally, we present a fuzzy-logic based approach to measure the confidence in the bounding-box (BB) position of an object detected by a CNN detector. The CNN detector gives the BB position and the percentage confidence of the object inside the BB on each image plane, but how do we know that the BB position is correct? When an object is detected by multiple cameras, the BB may appear in different places on each camera image plane depending on detection accuracy and camera position, whereas in 3D space the object is at the exact same position for both cameras. We use this relation between the camera image planes to create a fuzzy-fusion system that calculates a confidence value for the detection. Based on the fuzzy rules and the accuracy of the BB position, the system outputs confidence at three levels ('Low', 'OK' and 'High'). The proposed system gives correct confidence scores in scenarios where objects are correctly detected, partially detected and incorrectly detected.
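For orientation, the sketch below shows Dempster's classical combination rule over the weed classes named above, with two cameras as evidence sources. The mass values are made up for illustration, and the thesis's actual contribution (entropy-based weighting so that a faulty or conflicting sensor is down-weighted before combination) is not reproduced here.

```python
from itertools import product

# Dempster's rule: intersect focal elements, accumulate products of masses,
# and renormalize by the total non-conflicting mass.
def dempster(m1, m2):
    fused, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

frame = frozenset({"ragweed", "pigweed", "cocklebur"})
cam1 = {frozenset({"ragweed"}): 0.7, frame: 0.3}             # fairly sure: ragweed
cam2 = {frozenset({"ragweed", "pigweed"}): 0.6, frame: 0.4}  # ragweed or pigweed

print(dempster(cam1, cam2))   # belief concentrates on {"ragweed"}
```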
|
96 |
Data Acquisition and Processing Pipeline for E-Scooter Tracking Using 3D Lidar and Multi-Camera Setup. Betrabet, Siddhant S., 12 1900.
Indiana University-Purdue University Indianapolis (IUPUI)

Analyzing the behavior of objects on the road is a complex task that requires data from various sensors and their fusion to recreate the movement of objects with a high degree of accuracy. A data collection and processing system is thus needed to track the objects accurately and to build an accurate and clear map of their trajectories relative to the various coordinate frames of interest. Detection and tracking of moving objects (DATMO) and simultaneous localization and mapping (SLAM) are the tasks that need to be performed in conjunction to create a clear map of the road comprising the moving and static objects.

These computational problems are commonly solved and used to aid scenario reconstruction for the objects of interest. The tracking of objects can be done in various ways, utilizing sensors such as monocular or stereo cameras, Light Detection and Ranging (LIDAR) sensors, and Inertial Navigation Systems (INS). One relatively common method for solving DATMO and SLAM utilizes a 3D LIDAR and multiple monocular cameras in conjunction with an inertial measurement unit (IMU). This provides redundancy, so that object classification and tracking can be maintained through sensor fusion in cases where traditional sensor-specific algorithms prove ineffectual because one of the sensors falls short due to its limitations. Using an IMU together with sensor fusion methods largely eliminates the need for an expensive INS rig. Fusing these sensors allows for more effective tracking, exploiting the maximum potential of each sensor while enabling methods that increase perceptual accuracy.

The focus of this thesis is the dock-less e-scooter, and the primary goal is to track its movements effectively and accurately with respect to cars on the road and to the world. Since cars are far more commonly observed on the road than e-scooters, we propose a data collection system that can be built on top of an e-scooter, together with an offline processing pipeline, to collect data for understanding the behavior of the e-scooters themselves. In this thesis, we explore a data collection system involving a 3D LIDAR sensor, multiple monocular cameras and an IMU mounted on an e-scooter, as well as an offline method for processing the recorded data to aid scenario reconstruction.
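One elementary building block in this kind of lidar-camera fusion is projecting lidar returns into a camera image so that image-space detections (e.g. of an e-scooter) can be associated with 3D points. The sketch below shows a generic pinhole projection with placeholder intrinsics and a placeholder lidar-to-camera extrinsic calibration; none of these values come from the thesis setup.

```python
import numpy as np

# Generic pinhole projection of lidar points into a camera image.
K = np.array([[900.0,   0.0, 640.0],
              [  0.0, 900.0, 360.0],
              [  0.0,   0.0,   1.0]])   # placeholder camera intrinsics
R = np.array([[0.0, -1.0,  0.0],        # maps lidar x-forward/y-left/z-up
              [0.0,  0.0, -1.0],        # to camera x-right/y-down/z-forward
              [1.0,  0.0,  0.0]])
t = np.array([0.0, -0.1, 0.2])          # lidar origin in the camera frame [m]

def project(points_lidar):
    cam = points_lidar @ R.T + t             # transform into the camera frame
    cam = cam[cam[:, 2] > 0.1]               # keep points in front of the camera
    uvw = cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]            # perspective division -> pixels
    return uv, cam[:, 2]

pts = np.array([[5.0, 0.5, 0.0],             # two example lidar returns
                [10.0, -1.0, 0.3]])
uv, depth = project(pts)
print(uv, depth)
```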
|
97 |
Novel Capacitive Sensors for Chemical and Physical Monitoring in Microfluidic Devices. Rajan, Parthiban, 12 June 2019.
No description available.
|
98 |
Quadcopter stabilization based on IMU and Monocamera Fusion. Pérez Rodríguez, Arturo, January 2023.
Unmanned aerial vehicles (UAVs), commonly known as drones, have revolutionized numerous fields ranging from aerial photography to surveillance and logistics. Achieving stable flight is essential for their successful operation, ensuring accurate data acquisition, reliable manoeuvring, and safe operation. This thesis explores the feasibility of employing a frontal mono camera and sensor fusion techniques to enhance drone stability during flight. The objective of this research is to investigate whether a frontal mono camera, combined with sensor fusion algorithms, can be used to effectively stabilize a drone in various flight scenarios. By leveraging machine vision techniques and integrating data from onboard gyroscopes, the proposed approach aims to provide real-time feedback for controlling the drone. The methodology for this study involves the Crazyflie 2.1 drone platform equipped with a frontal camera and an Inertial Measurement Unit (IMU). The drone's flight data, including position, orientation, and velocity, is continuously monitored and analyzed using a Kalman Filter (KF). This algorithm processes the data from the camera and the IMU to estimate the drone's state accurately. Based on these estimates, corrective commands are generated and sent to the drone's control system to maintain stability. To evaluate the effectiveness of the proposed system, a series of flight tests is conducted under different environmental conditions and flight manoeuvres. Performance metrics such as drift, level of oscillation, and overall flight stability are analyzed and compared against baseline experiments with conventional stabilization methods. Additional simulated tests are carried out to study the effect of communication delay. The expected outcomes of this research will contribute to the advancement of drone stability systems. If successful, the combination of a frontal camera and sensor fusion can provide a cost-effective and lightweight solution for stabilizing drones.
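As a simplified stand-in for the Kalman-filter fusion described above, the sketch below blends fast gyro integration with slower, noisier camera-derived attitude fixes in a complementary-filter fashion: the gyro keeps the estimate responsive while the camera keeps the bias-induced drift bounded. The rates, noise levels and blend factor are invented; this is not the Crazyflie implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, alpha = 0.002, 0.98          # 500 Hz IMU loop, blend factor (assumed)
true_roll, roll_est = 0.0, 0.0
gyro_bias = 0.02                 # rad/s bias that pure integration would accumulate

for k in range(5000):            # 10 s of simulated flight
    true_rate = 0.5 * np.sin(0.004 * k)
    true_roll += true_rate * dt
    gyro = true_rate + gyro_bias + rng.normal(0, 0.01)
    roll_est += gyro * dt                        # fast gyro propagation
    if k % 25 == 0:                              # 20 Hz camera attitude fix
        cam_roll = true_roll + rng.normal(0, 0.02)
        roll_est = alpha * roll_est + (1 - alpha) * cam_roll
print(abs(roll_est - true_roll))   # bounded error instead of unbounded drift
```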
|
99 |
A Vision-Based Relative Navigation Approach for Autonomous Multirotor Aircraft. Leishman, Robert C., 29 April 2013.
Autonomous flight in unstructured, confined, and unknown GPS-denied environments is a challenging problem. Solutions could be tremendously beneficial for scenarios that require information about areas that are difficult to access and that present a great amount of risk. The goal of this research is to develop a new framework that enables improved solutions to this problem and to validate the approach with experiments using a hardware prototype.

In Chapter 2 we examine the consequences and practical aspects of using an improved dynamic model for multirotor state estimation, using only IMU measurements. The improved model correctly explains the measurements available from the accelerometers on a multirotor. We provide hardware results demonstrating the improved attitude, velocity and even position estimates that can be achieved through the use of this model.

We propose a new architecture to simplify some of the challenges that constrain GPS-denied aerial flight in Chapter 3. At the core, the approach combines visual graph-SLAM with a multiplicative extended Kalman filter (MEKF). More importantly, we depart from the common practice of estimating global states and instead keep the position and yaw states of the MEKF relative to the current node in the map. This relative navigation approach provides a tremendous benefit compared to maintaining estimates with respect to a single global coordinate frame. We discuss the architecture of this new system and provide important details for each component. We verify the approach with goal-directed autonomous flight-test results.

The MEKF is the basis of the new relative navigation approach and is detailed in Chapter 4. We derive the relative filter and show how the states must be augmented and marginalized each time a new node is declared. The relative estimation approach is verified using hardware flight-test results accompanied by comparisons to motion capture truth. Additionally, flight results with estimates in the control loop are provided.

We believe that the relative, vision-based framework described in this work is an important step in furthering the capabilities of indoor aerial navigation in confined, unknown environments. Current approaches incur challenging problems by requiring globally referenced states. Utilizing a relative approach allows more flexibility, as the critical, real-time processes of localization and control do not depend on computationally demanding optimization and loop-closure processes.
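The defining trait of an MEKF is that attitude error is estimated as a small rotation vector and folded back into the quaternion multiplicatively rather than additively. The fragment below is a generic sketch of that reset step, with a [w, x, y, z] quaternion convention and made-up numbers; it is not code from the thesis.

```python
import numpy as np

# Multiplicative attitude correction: the EKF estimates a small error angle
# delta_theta, which is applied by quaternion multiplication and renormalized.
def quat_mult(q, r):
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([w0*w1 - x0*x1 - y0*y1 - z0*z1,
                     w0*x1 + x0*w1 + y0*z1 - z0*y1,
                     w0*y1 - x0*z1 + y0*w1 + z0*x1,
                     w0*z1 + x0*y1 - y0*x1 + z0*w1])

def apply_error(q, delta_theta):
    dq = np.concatenate([[1.0], 0.5 * delta_theta])  # small-angle error quaternion
    q_new = quat_mult(q, dq)
    return q_new / np.linalg.norm(q_new)             # renormalize

q = np.array([1.0, 0.0, 0.0, 0.0])                   # current attitude estimate
delta_theta = np.array([0.01, -0.02, 0.005])         # error angle from an EKF update
print(apply_error(q, delta_theta))
```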
|
100 |
Visual-Inertial Odometry for Autonomous Ground Vehicles. Burusa, Akshay Kumar, January 2017.
Monocular cameras are prominently used for estimating the motion of unmanned aerial vehicles. With growing interest in autonomous vehicle technology, the use of monocular cameras in ground vehicles is on the rise. This is especially favorable for localization in situations where Global Navigation Satellite Systems (GNSS) are unreliable, such as open-pit mining environments. However, most monocular camera based approaches suffer from the ambiguity of scale information, and ground vehicles pose a greater difficulty due to their high speeds and fast movements. This thesis aims to estimate the scale of monocular vision data by using an inertial sensor in addition to the camera. It is shown that the simultaneous estimation of pose and scale in autonomous ground vehicles is possible through the fusion of visual and inertial sensors in an Extended Kalman Filter (EKF) framework. However, the convergence of the scale estimate is sensitive to several factors, including the initialization error. An accurate estimate of scale enables an accurate estimate of pose, which facilitates the localization of ground vehicles in the absence of GNSS and provides a reliable fall-back option.
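One way to picture the scale estimation described above is a toy EKF whose state carries the metric position and velocity together with the unknown monocular scale: the accelerometer provides metric information through the prediction step, while the vision measurement constrains only the up-to-scale position. Everything below (the 1D model, noise values, initial scale guess) is an invented illustration, not the filter used in the thesis.

```python
import numpy as np

# Toy 1D EKF with the monocular scale in the state: x = [p (m), v (m/s), s].
# The accelerometer drives the prediction; the camera measures z = p / s.
rng = np.random.default_rng(0)
dt, true_s = 0.05, 2.0
x = np.array([0.0, 0.0, 1.0])            # deliberately wrong initial scale guess
P = np.diag([1.0, 1.0, 1.0])
Q = np.diag([1e-4, 1e-3, 1e-6])
R = 1e-4

p_true, v_true = 0.0, 0.0
for k in range(2000):
    a = 2.0 * np.sin(0.05 * k)           # true acceleration (needs excitation)
    p_true += v_true * dt + 0.5 * a * dt**2
    v_true += a * dt

    am = a + rng.normal(0.0, 0.02)       # noisy accelerometer sample
    F = np.array([[1.0, dt, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
    x = np.array([x[0] + x[1] * dt + 0.5 * am * dt**2, x[1] + am * dt, x[2]])
    P = F @ P @ F.T + Q

    z = p_true / true_s + rng.normal(0.0, 0.01)     # up-to-scale vision position
    h = x[0] / x[2]
    H = np.array([1.0 / x[2], 0.0, -x[0] / x[2]**2])
    S = H @ P @ H + R
    K = P @ H / S
    x = x + K * (z - h)
    P = (np.eye(3) - np.outer(K, H)) @ P

print("estimated scale:", x[2])          # should move toward true_s = 2.0
```

With sufficient acceleration excitation the scale state becomes observable and should approach the true value; with a poor initial guess or little excitation it converges slowly, which mirrors the sensitivity to initialization noted above.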
|