101

Multi-agent System Distributed Sensor Fusion Algorithms

Bhattacharya, Shaondip January 2017 (has links)
The concept of consensus filters for sensor fusion is not an entirely new proposition, but one with internally implemented Bayesian fusion is. This work documents a novel state update algorithm for sensor fusion based on variance-weighted Bayesian fusion of data, implemented on a single-integrator consensus algorithm. Comparative demonstrations of how consensus over a pinning network is reached are presented, alongside a weighted Bayesian Luenberger-type observer and a 'Consensus on estimates' algorithm. To the best of our knowledge, this type of filter is novel and has not been encountered in previous literature on this topic. In this work, we also extend the proof for a distributed Luenberger-type observer design to the case where the network under consideration is a strongly connected digraph.
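As a rough illustration of the two ingredients named above, inverse-variance (Bayesian) fusion of Gaussian estimates and a single-integrator consensus update, here is a minimal sketch. The function names, the step size `eps`, and the Laplacian-based update are generic textbook forms, not the thesis's actual algorithm:

```python
import numpy as np

def bayes_fuse(mu1, var1, mu2, var2):
    """Inverse-variance (Bayesian) fusion of two Gaussian estimates:
    the fused variance is the harmonic combination, and each mean is
    weighted by its precision (1/variance)."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

def consensus_step(x, adjacency, eps=0.1):
    """Single-integrator consensus: each node moves toward the average
    of its neighbours' states via the graph Laplacian L = D - A."""
    L = np.diag(adjacency.sum(axis=1)) - adjacency
    return x - eps * L @ x
```

Iterating `consensus_step` on a connected undirected graph drives all node states toward the network average while preserving the mean, which is the mechanism pinning-network demonstrations of this kind rely on.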
102

Autonomous Fire Detection Robot Using Modified Voting Logic

Rehman, Adeel ur January 2015 (has links)
Recent developments at the Fukushima Nuclear Power Plant in Japan have created urgency in the scientific community to develop solutions for hostile industrial environments in the event of a breakdown or natural disaster. An indoor industrial environment presents many hazardous scenarios, such as fire, failure of high-speed rotary machines, and chemical leaks. Fire is one of the leading causes of workplace injuries and fatalities. The current fire protection systems available in the market mainly consist of sprinkler systems and personnel on duty. With a sprinkler system, several things can go wrong: spraying water on a fire caused by an oil leak may spread it, the water may damage machinery that is not under threat from the fire, and it can destroy expensive raw material, finished goods, and valuable electronic and printed data. There is a dire need for an inexpensive autonomous system that can detect and approach the source of these hazards. This thesis focuses mainly on industrial fires, but applying the same or similar techniques to different sensors may allow the system to detect and approach other hostile situations in an industrial workplace. Autonomous robots can be equipped to detect potential fire threats and find the source while avoiding obstacles during navigation. The proposed system uses Modified Voting Logic Fusion to approach and declare a potential fire source autonomously. The robot follows the increasing gradient of light and heat intensity to identify the threat and approach its source.
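The details of Modified Voting Logic Fusion are not given in the abstract; the sketch below shows only plain weighted k-of-n voting over thresholded sensor readings, with all names, thresholds, and weights invented for illustration:

```python
def voting_logic(readings, thresholds, weights, vote_threshold):
    """Weighted k-of-n voting: each sensor casts its weight as a vote
    when its reading meets its threshold; a detection is declared when
    the accumulated vote mass reaches vote_threshold."""
    votes = sum(w for r, t, w in zip(readings, thresholds, weights)
                if r >= t)
    return votes >= vote_threshold

# Illustrative use: light, smoke, and temperature channels, each
# normalized to [0, 1], with equal weights and a 2-of-3 rule.
fire_declared = voting_logic([0.9, 0.2, 0.8], [0.5, 0.5, 0.5],
                             [1, 1, 1], 2)
```

Requiring agreement from multiple channels before declaring a fire is what suppresses single-sensor false alarms (e.g. a bright lamp tripping only the light channel).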
103

Smartphone Based Indoor Positioning Using Wi-Fi Round Trip Time and IMU Sensors / Smartphone-baserad inomhuspositionering med Wi-Fi Round-Trip Time och IMU-sensorer

Aaro, Gustav January 2020 (has links)
While GPS has long been an industry standard for localization of an entity or person anywhere in the world, it loses much of its accuracy and value when used indoors. To enable services such as indoor navigation, other methods must be used. A new standard of the Wi-Fi protocol, IEEE 802.11mc (Wi-Fi RTT), enables distance estimation between the transmitter and the receiver based on the Round-Trip Time (RTT) delay of the signal. Using these distance estimations and the known locations of the transmitting Access Points (APs), an estimate of the receiver's location can be determined. In this thesis, a smartphone Wi-Fi RTT based Indoor Positioning System (IPS) is presented using an Unscented Kalman Filter (UKF). The UKF, using only RTT-based distance estimations as input, is established as a baseline implementation. Two extensions are then presented to improve the positioning performance: 1) a dead reckoning algorithm using smartphone sensors that are part of the Inertial Measurement Unit (IMU) as an additional input to the UKF, and 2) a method to detect and adjust distance measurements made in Non-Line-of-Sight (NLoS) conditions. The implemented IPS is evaluated in an office environment in both favorable situations (plenty of Line-of-Sight conditions) and sub-optimal situations (dominant NLoS conditions). Using both extensions, meter-level accuracy is achieved in both cases, as well as a 90th-percentile error of less than 2 meters.
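As a hedged sketch of the two measurement steps described above, converting an 802.11mc round-trip time to a range and then solving for position from several AP ranges, here is a generic linear least-squares trilateration (a simplification, not the thesis's UKF):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def rtt_distance(rtt_s, proc_delay_s):
    """One-way distance from a measured round-trip time, after
    subtracting the responder's reported processing delay."""
    return C * (rtt_s - proc_delay_s) / 2.0

def trilaterate(aps, dists):
    """Linear least-squares 2D position fix from >= 3 AP positions and
    ranges, obtained by subtracting the first range equation from the
    others to cancel the quadratic term."""
    aps, d = np.asarray(aps, float), np.asarray(dists, float)
    A = 2 * (aps[1:] - aps[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + (aps[1:] ** 2).sum(1) - (aps[0] ** 2).sum())
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With more than three APs the least-squares solution averages out some per-range noise, which is why adding responders generally tightens the fix.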
104

Harnessing Multiscale Nonimaging Optics for Automotive Flash LiDAR and Heterogeneous Semiconductor Integration

January 2020 (has links)
abstract: Though a single mode of energy transfer, optical radiation meaningfully interacts with its surrounding environment over a wide range of physical length scales. For this reason, its reconstruction and measurement are of great importance in remote sensing, as these multi-scale interactions encode a great deal of information about distant objects, surfaces, and physical phenomena. For some remote sensing applications, obtaining a desired quantity of interest does not necessitate the explicit mapping of each point in object space to an image space with lenses or mirrors. Instead, only edge rays or physical boundaries of the sensing instrument are considered, while the spatial intensity distribution of optical energy received from a distant object informs its position, optical characteristics, or physical/chemical state. Admittedly specialized, the principles and consequences of non-imaging optics are nevertheless applicable to heterogeneous semiconductor integration and automotive light detection and ranging (LiDAR), two important emerging technologies. Indeed, a review of relevant engineering literature finds two under-addressed remote sensing challenges. The semiconductor industry lacks an optical strain metrology with displacement resolution smaller than 100 nanometers capable of measuring strain fields between high-density interconnect lines. Meanwhile, little attention is paid to the per-meter sensing characteristics of scene-illuminating flash LiDAR in the context of automotive applications, despite the technology's much lower cost. It is here that non-imaging optics offers intriguing instrument designs and explanations of observed sensor performance at vastly different length scales. In this thesis, an effective non-contact technique for mapping nanoscale mechanical strain fields and out-of-plane surface warping via laser diffraction is demonstrated, with application as a novel metrology for next-generation semiconductor packages.
Additionally, the object detection distance of low-cost automotive flash LiDAR, on the order of tens of meters, is understood through principles of optical energy transfer from the surface of a remote object to an extended multi-segment detector. Such information is of consequence when designing an automotive perception system to recognize various roadway objects in low-light scenarios. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2020
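A minimal sketch of the kind of per-meter energy-transfer reasoning described above, using a simplified flash-LiDAR link budget for an extended Lambertian target that fills the illuminated field (so echo power falls as 1/R², not the 1/R⁴ of an unresolved point target). All parameters are illustrative, not the thesis's model:

```python
import math

def received_power(p_tx, target_refl, aperture_area, range_m,
                   atm_transmission=1.0):
    """Simplified flash-LiDAR echo power for an extended Lambertian
    target: transmit power times target reflectivity, attenuated by
    two-way atmospheric transmission, collected by the receiver
    aperture, falling off with the square of range."""
    return (p_tx * target_refl * atm_transmission ** 2
            * aperture_area / (math.pi * range_m ** 2))
```

Under this model, doubling the range quarters the received power, which is one way to reason about why detection distance for low-cost flash units saturates at tens of meters.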
105

Sensor Fusion for 3D Object Detection for Autonomous Vehicles

Massoud, Yahya 14 October 2021 (has links)
Thanks to major advancements in hardware and computational power, sensor technology, and artificial intelligence, the race for fully autonomous driving systems is heating up. With countless challenging conditions and driving scenarios, researchers are tackling the most difficult problems in driverless cars. One of the most critical components is the perception module, which enables an autonomous vehicle to "see" and "understand" its surrounding environment. Given that modern vehicles can have a large number of sensors and available data streams, this thesis presents a deep learning-based framework that leverages multimodal data – i.e. sensor fusion – to perform the task of 3D object detection and localization. We provide an extensive review of the advancements of deep learning-based methods in computer vision, specifically in 2D and 3D object detection tasks. We also study the progress of the literature in both single-sensor and multi-sensor data fusion techniques. Furthermore, we present an in-depth explanation of our proposed approach, which performs sensor fusion using input streams from LiDAR and camera sensors, aiming to simultaneously perform 2D, 3D, and Bird's Eye View detection. Our experiments highlight the importance of learnable data fusion mechanisms and multi-task learning, the impact of different CNN design decisions, speed-accuracy tradeoffs, and ways to deal with overfitting in multi-sensor data fusion frameworks.
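A common building block in LiDAR-camera fusion pipelines of this kind is projecting LiDAR points into the image so the two modalities can be associated. The sketch below is a generic pinhole projection under assumed extrinsics `T_cam_lidar` and intrinsics `K`, not the thesis's learned fusion mechanism:

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project Nx3 LiDAR points into the image plane: transform into
    the camera frame with a 4x4 extrinsic matrix, apply the 3x3 pinhole
    intrinsics K, then divide by depth. Points behind the camera are
    flagged via the returned boolean mask."""
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, in_front
```

Once each LiDAR point has pixel coordinates, per-point image features (or 2D detections) can be gathered and concatenated with the point features, which is the usual entry point for learnable fusion.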
106

A Novel Fusion Technique for 2D LIDAR and Stereo Camera Data Using Fuzzy Logic for Improved Depth Perception

Saksena, Harsh 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Obstacle detection, avoidance, and path finding for autonomous vehicles require precise information about the vehicle's environment for faultless navigation and decision making. As such, vision and depth perception sensors have become an integral part of autonomous vehicles in current research and development in the autonomous industry. The advancements made in vision sensors such as radar, Light Detection And Ranging (LIDAR) sensors, and compact high-resolution cameras are encouraging; however, individual sensors can be prone to error and misinformation due to environmental factors such as scene illumination, object reflectivity, and object transparency. Sensor fusion, the utilization of multiple sensors perceiving similar or relatable information over a network, is applied to provide more robust and complete system information and to minimize the overall perceived error of the system. 3D LIDAR and monocular cameras are the most commonly utilized vision sensors for the implementation of sensor fusion. 3D LIDARs boast high accuracy and resolution for depth capture in any given environment and have a broad range of applications, such as terrain mapping and 3D reconstruction. Despite 3D LIDAR being the superior sensor for depth, its high cost and sensitivity to its environment make it a poor choice for mid-range applications such as autonomous rovers, RC cars, and robots. 2D LIDARs are more affordable, more easily available, and have a wider range of applications than 3D LIDARs, making them the more obvious choice for budget projects. The primary objective of this thesis is to implement a smart and robust sensor fusion system using a 2D LIDAR and a stereo depth camera to capture depth and color information of an environment.
The depth points generated by the LIDAR are fused with the depth map generated by the stereo camera by a fuzzy system that implements smart fusion and corrects gaps in the depth information of the stereo camera. The use of a fuzzy system for sensor fusion of 2D LIDAR and stereo camera data is a novel approach to the sensor fusion problem, and the output of the fuzzy fusion provides higher depth confidence than either individual sensor. In this thesis, we explore the multiple layers of sensor and data fusion applied to the vision system, both on the camera and LIDAR data individually and in relation to each other. We detail the development and implementation of the fuzzy logic based fusion approach, the fuzzification of the input data, the method of selection of the fuzzy system for depth-specific fusion for the given vision system, and how fuzzy logic can be utilized to provide information that is vastly more reliable than the information provided by the camera and LIDAR separately.
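A toy version of confidence-weighted depth fusion in the spirit described above; the triangular membership function, the working ranges, and the defuzzification by weighted average are illustrative stand-ins, not the thesis's actual fuzzy system:

```python
def fuzzy_confidence(depth, valid=True, d_min=0.3, d_max=10.0):
    """Triangular membership: confidence is 0 outside the sensor's
    assumed working range [d_min, d_max] and peaks at mid-range."""
    if not valid or not (d_min <= depth <= d_max):
        return 0.0
    mid = (d_min + d_max) / 2.0
    half = (d_max - d_min) / 2.0
    return 1.0 - abs(depth - mid) / half

def fuse_depth(d_lidar, d_stereo, valid_lidar=True, valid_stereo=True):
    """Defuzzification by confidence-weighted average: when one sensor
    has zero confidence (e.g. a gap in the stereo depth map), the other
    sensor's reading fills it."""
    w_l = fuzzy_confidence(d_lidar, valid_lidar)
    w_s = fuzzy_confidence(d_stereo, valid_stereo)
    if w_l + w_s == 0:
        return None  # neither sensor trusted at this pixel
    return (w_l * d_lidar + w_s * d_stereo) / (w_l + w_s)
```

The gap-filling behaviour is the key property: an out-of-range or invalid stereo value gets weight 0, so the LIDAR depth passes through unchanged instead of being averaged with garbage.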
107

Towards an improvement of BLE Direction Finding accuracy using Dead Reckoning with inertial sensors / Mot en förbättring av precisionen hos BLE Direction Finding genom användning av Dead Reckoning

Rumar, Tove, Juelsson Larsen, Ludvig January 2021 (has links)
While GPS positioning has been a well-used technology for many years in outdoor environments, a ubiquitous solution for indoor positioning is yet to be found, as GPS positioning is unreliable indoors. This thesis focuses on the combination of Inertial Sensor Dead Reckoning and positions obtained from the Bluetooth Low Energy (BLE) Direction Finding technique. The main objective is to reduce the error rate and error size of a BLE Direction Finding system. The positioned object is a Micro-Electro-Mechanical System (MEMS) device with an accelerometer and a gyroscope, placed on a trolley. The accelerometer and gyroscope are used to obtain an orientation, a velocity vector, and in turn a position, which is combined with the BLE Direction Finding position. To further reduce the error rate of the system, a Stationary Detection functionality is implemented. Because the trolley's movement pattern causes noise in the sensor signals, and because of the limited sensor setup, it is not possible to increase the accuracy of the system using the proposed method. However, the Stationary Detection is able to correctly determine a stationary state, thereby decreasing the error rate and power consumption.
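Stationary (zero-velocity) detection of the kind described is often implemented as a variance test over short IMU windows; the window contents and thresholds below are placeholders, not the thesis's tuned values:

```python
import statistics

def is_stationary(accel_window, gyro_window,
                  accel_var_thresh=0.01, gyro_var_thresh=0.001):
    """Declare the device stationary when the variances of recent
    accelerometer and gyroscope magnitude samples both fall below
    empirically chosen thresholds. While stationary, dead-reckoning
    velocity can be zeroed and radio updates throttled, cutting both
    drift and power draw."""
    return (statistics.pvariance(accel_window) < accel_var_thresh
            and statistics.pvariance(gyro_window) < gyro_var_thresh)
```

Requiring both channels to be quiet avoids false stationary declarations during smooth, constant-velocity pushes where the accelerometer alone looks flat.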
108

Sensor Integration for Low-Cost Crash Avoidance

Roussel, Stephane M 01 November 2009 (has links)
This report summarizes the development of sensor integration for low-cost crash avoidance for over-land commercial trucks. The goal of the project was to build and test a system composed of low-cost, commercially available sensors arranged on a truck trailer to monitor the environment around the truck. The system combines the data from each sensor to increase overall reliability using a probabilistic data fusion approach. A combination of ultrasonic and magnetoresistive sensors was used in this study. In addition, radar and digital imaging were investigated as reference signals and possible candidates for additional sensor integration; however, the primary focus of this work is the integration of the ultrasonic and magnetoresistive sensors. During the investigation the individual sensors were evaluated for their use in the system, including communication with vendors and lab and field testing. In addition, the sensors were modeled using an analytical mathematical model to help understand and predict sensor behavior. Next, an algorithm was developed to fuse the data from the individual sensors. A probabilistic approach was used based on Bayesian filtering with a prediction-correction algorithm. Sensor fusion was implemented using a joint probability algorithm. The output of the system is a prediction of the likelihood of the presence of a vehicle in a given region near the host truck trailer. The algorithm was demonstrated on the fusion of an ultrasonic sensor and a magnetic sensor. Testing was conducted using both a light pickup truck and a class 8 truck. Various scenarios were evaluated to determine system performance, including vehicles passing the host truck from behind and the host truck passing vehicles.
Scenarios were also included to test the system's ability to distinguish vehicles from non-vehicle objects, such as sign posts, walls, or railroads, that could produce electronic signals similar to those of vehicles and confuse the system. The test results indicate that the system was successful at predicting the presence and absence of vehicles and at eliminating false positives from non-vehicle objects, with overall accuracy ranging from 90 to 100% depending on the scenario. Additional improvements in performance are expected from future refinements of the algorithm discussed in the report. The report includes a discussion of mapping the algorithm output to the implementation of current and future safety and crash avoidance technologies based on the confidence of the algorithm output and the seriousness of the impending crash scenario. For example, irreversible countermeasures such as firing an airbag or engaging the brakes should only be initiated if the confidence of the signal is very high, while reversible countermeasures such as warnings to the driver or nearby vehicles can be initiated with relatively lower confidence. The results indicate that the system shows good potential as a low-cost alternative to competing systems that require multiple high-cost sensors. Truck fleet operators will likely adopt the technology only if its costs are justified by reduced damage and insurance costs; therefore, developing an effective crash avoidance system at low cost is required for the technology to be adopted on a large scale.
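The prediction-correction fusion described above can be sketched, for a single binary "vehicle present in zone" hypothesis, as a recursive Bayes update over independent sensor reports. The detection and false-alarm probabilities below are illustrative, not the report's calibrated values:

```python
def bayes_update(prior, p_detect, p_false_alarm, detection):
    """Bayesian correction step: update the probability that a vehicle
    occupies the zone given one sensor's binary report, using the
    sensor's detection and false-alarm probabilities."""
    if detection:
        num = p_detect * prior
        den = num + p_false_alarm * (1.0 - prior)
    else:
        num = (1.0 - p_detect) * prior
        den = num + (1.0 - p_false_alarm) * (1.0 - prior)
    return num / den

def fuse_reports(prior, sensors):
    """Joint update over conditionally independent sensors: apply the
    correction once per (p_detect, p_false_alarm, detection) report,
    e.g. one ultrasonic and one magnetic report per cycle."""
    p = prior
    for pd, pfa, det in sensors:
        p = bayes_update(p, pd, pfa, det)
    return p
```

Agreeing reports drive the posterior quickly toward 0 or 1, while a lone spurious report (a sign post exciting only the magnetic channel) moves it far less, which is the mechanism behind the false-positive rejection described above.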
109

Amélioration des méthodes de navigation vision-inertiel par exploitation des perturbations magnétiques stationnaires de l’environnement / Improving Visual-Inertial Navigation Using Stationary Environmental Magnetic Disturbances

Caruso, David 01 June 2018 (has links)
This thesis addresses the issue of positioning in 6-DOF that arises from augmented reality applications and focuses on embedded-sensor-based solutions. Nowadays, the performance reached by visual-inertial navigation systems is starting to be adequate for AR applications. Nonetheless, those systems rely on position corrections from visual sensors at a relatively high frequency to mitigate the quick drift of low-cost inertial sensors, which is a problem when the visual environment is unfavorable. In parallel, recent works by the company Sysnav have shown it is feasible to leverage the magnetic field to reduce inertial integration drift thanks to a new type of low-cost IMU that includes, in addition to the accelerometers and gyrometers, a network of magnetometers. Yet this magnetic approach to dead reckoning fails if the stationarity and non-uniformity hypotheses on the magnetic field are unfulfilled in the vicinity of the sensor. We develop a robust dead-reckoning solution that simultaneously combines information from all these sources: magnetic, visual, and inertial sensors. We present several approaches to the fusion problem, using either a filtering or a non-linear optimization paradigm, and we develop an efficient way to use a magnetic error term in a classical bundle adjustment, inspired by ideas already used for inertial terms. We evaluate the performance of these estimators on data from real sensors and demonstrate the benefits of the fusion compared to visual-inertial and magneto-inertial solutions. Finally, we study theoretical properties of the estimators linked to invariance theory.
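One way to picture the magnetic error term described above: if the field is stationary, a magnetometer reading rotated into the world frame should match a field map evaluated at the estimated position. The sketch below is only this residual with a toy field map; the thesis's actual bundle-adjustment integration is far more involved:

```python
import numpy as np

def magnetic_residual(field_map, pos, measured_field, R_world_body):
    """Stationary-field error term: the body-frame magnetometer
    reading, rotated into the world frame, should match the (assumed
    static but spatially non-uniform) field map at the estimated
    position. A non-zero residual penalizes inconsistent pose
    estimates in the optimization."""
    predicted = field_map(pos)
    return R_world_body @ measured_field - predicted
```

In a bundle adjustment, squared norms of such residuals would be summed alongside the visual reprojection and inertial terms; non-uniformity of the field is what makes the term informative about position.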
110

Accurate Localization Given Uncertain Sensors

Kramer, Jeffrey A 08 April 2010 (has links)
The necessity of accurate localization in mobile robotics is obvious: if a robot does not know where it is, it cannot navigate accurately to reach goal locations. Robots learn about their environment via sensors. Small robots require small, efficient, and, if they are to be deployed in large numbers, inexpensive sensors. The sensors used by robots to perceive the world are inherently inaccurate, providing noisy, erroneous data or even no data at all. Combined with estimation error due to imperfect modeling of the robot, there are many obstacles to successful localization. Sensor fusion is used to overcome these difficulties, combining the available sensor data to derive a more accurate pose estimate for the robot. In this thesis, we dissect and analyze a wide variety of sensor fusion algorithms, with the goal of using a suite of inexpensive sensors to provide real-time localization for a robot given unknown sensor errors and malfunctions. The sensor fusion algorithms fuse GPS, INS, compass, and control inputs into a more accurate position estimate. The filters discussed include a SPKF-PF (Sigma-Point Kalman Filter - Particle Filter), an MHSPKF (Multi-hypothesis Sigma-Point Kalman Filter), an FSPKF (Fuzzy Sigma-Point Kalman Filter), a DFSPKF (Double Fuzzy Sigma-Point Kalman Filter), an EKF (Extended Kalman Filter), an MHEKF (Multi-hypothesis Extended Kalman Filter), an FEKF (Fuzzy Extended Kalman Filter), and a standard SIS PF (Sequential Importance Sampling Particle Filter). Our goal is to provide a concise toolbox of algorithms for researchers, while also providing a solution to a difficult sensor fusion problem: an algorithm that is of low computational complexity (< O(n³)), real-time, accurate (equal to or more accurate than DGPS (differential GPS) given lower-quality sensors), and robust, able to provide a useful localization solution even when sensors are faulty or inaccurate.
The goal is to find a balance between power requirements, computational complexity, chip requirements, and accuracy/robustness that provides the best of breed for small robots with inaccurate sensors. While other fusion algorithms work well, the Sigma-Point Kalman Filter solves this problem best, providing accurate localization and fast response; the Fuzzy EKF is a close second in the shorter sample with less error, and the Sigma-Point Kalman Particle Filter does very well in a longer example with more error. Fuzzy control is also discussed, especially the reasons for its applicability and its use in sensor fusion.
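For readers unfamiliar with the filter family compared above, a minimal 1D Kalman filter fusing INS-derived displacement (prediction) with GPS position (correction) shows the predict/correct skeleton that all the listed variants share; the noise values are arbitrary placeholders:

```python
def kalman_1d(z_gps, u_ins, x0=0.0, p0=1.0, q=0.01, r=4.0):
    """Minimal 1D Kalman filter: INS displacement u drives the
    prediction, GPS position z drives the correction. q is process
    noise (INS drift), r is measurement noise (GPS error)."""
    x, p = x0, p0
    estimates = []
    for z, u in zip(z_gps, u_ins):
        # predict: propagate state with INS displacement, grow variance
        x, p = x + u, p + q
        # correct: blend in the GPS measurement by the Kalman gain
        k = p / (p + r)
        x, p = x + k * (z - x), (1.0 - k) * p
        estimates.append(x)
    return estimates
```

The SPKF, EKF, and fuzzy variants compared above differ in how they linearize nonlinear models and how they set the gain, but each cycles through this same prediction-correction loop.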
