81

Fusion de données capteurs étendue pour applications vidéo embarquées / Extended sensor fusion for embedded video applications

Alibay, Manu 18 December 2015
Le travail réalisé au cours de cette thèse se concentre sur la fusion des données d'une caméra et de capteurs inertiels afin d'effectuer une estimation robuste de mouvement pour des applications vidéo embarquées. Les appareils visés sont principalement les téléphones intelligents et les tablettes. On propose une nouvelle technique d'estimation de mouvement 2D temps réel, qui combine les mesures visuelles et inertielles. L'approche introduite se base sur le RANSAC préemptif, en l'étendant via l'ajout de capteurs inertiels. L'évaluation des modèles de mouvement se fait selon un score hybride, un lagrangien dynamique permettant une adaptation à différentes conditions et types de mouvements. Ces améliorations sont effectuées à faible coût, afin de permettre une implémentation sur plateforme embarquée. L'approche est comparée aux méthodes visuelles et inertielles. Une nouvelle méthode d'odométrie visuelle-inertielle temps réel est présentée. L'interaction entre les données visuelles et inertielles est maximisée en effectuant la fusion dans de multiples étapes de l'algorithme. À travers des tests conduits sur des séquences acquises avec la vérité terrain, nous montrons que notre approche produit des résultats supérieurs aux techniques classiques de l'état de l'art. / This thesis deals with sensor fusion between camera and inertial sensor measurements in order to provide a robust motion estimation algorithm for embedded video applications. The targeted platforms are mainly smartphones and tablets. We present a real-time, 2D online camera motion estimation algorithm combining inertial and visual measurements. The proposed algorithm extends the preemptive RANSAC motion estimation procedure with inertial sensor data, introducing a dynamic Lagrangian hybrid scoring of the motion models to make the approach adaptive to various image and motion contents. All these improvements are made at little computational cost, keeping the complexity of the algorithm low enough for embedded platforms. The approach is compared with pure inertial and pure visual procedures. A novel approach to real-time hybrid monocular visual-inertial odometry for embedded platforms is introduced. The interaction between vision and inertial sensors is maximized by performing fusion at multiple levels of the algorithm. Through tests conducted on sequences acquired with ground-truth data, we show that our method outperforms classical hybrid techniques in ego-motion estimation.
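
As an editorial illustration of the kind of preemptive RANSAC extension described above, the sketch below scores motion hypotheses with a hybrid visual-inertial cost and halves the hypothesis set after each data block. The translation-only motion model, the fixed weight `lam` standing in for the dynamic Lagrangian, and the gyro-derived prediction `t_gyro` are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np

def preemptive_ransac(prev_pts, curr_pts, t_gyro, n_hyp=64, block=20, lam=0.5):
    """Estimate a 2D inter-frame translation from matched features.

    prev_pts, curr_pts : (N, 2) matched feature coordinates
    t_gyro             : (2,) translation predicted from the inertial sensors
    """
    rng = np.random.default_rng(0)
    n = len(prev_pts)
    # Each hypothesis is the displacement of one randomly chosen match.
    idx = rng.integers(0, n, size=n_hyp)
    hyps = curr_pts[idx] - prev_pts[idx]                  # (n_hyp, 2)
    scores = np.zeros(n_hyp)
    order = rng.permutation(n)
    start, alive = 0, np.arange(n_hyp)
    while len(alive) > 1 and start < n:
        chunk = order[start:start + block]
        res = curr_pts[chunk, None, :] - prev_pts[chunk, None, :] - hyps[alive]
        visual = (np.linalg.norm(res, axis=2) < 2.0).sum(axis=0)   # inlier count
        inertial = np.linalg.norm(hyps[alive] - t_gyro, axis=1)    # gyro penalty
        scores[alive] += visual - lam * inertial
        # Preemption: keep only the better-scoring half of the hypotheses.
        alive = alive[np.argsort(scores[alive])[len(alive) // 2:]]
        start += block
    return hyps[alive[np.argmax(scores[alive])]]

# Toy usage: 100 matches displaced by (5, -3) plus noise; the gyro roughly agrees.
prev = np.random.default_rng(4).uniform(0, 100, (100, 2))
curr = prev + np.array([5.0, -3.0]) + np.random.default_rng(5).normal(0, 0.3, (100, 2))
print(preemptive_ransac(prev, curr, t_gyro=np.array([4.8, -2.9])))
```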
82

Multi-sensor Information Fusion for Classification of Driver's Physiological Sensor Data

Barua, Shaibal January 2013
Physiological sensor signal analysis is common practice in the medical domain for the diagnosis and classification of various physiological conditions. Clinicians frequently use physiological sensor signals to assess an individual's psychophysiological parameters, e.g., stress, tiredness, and fatigue. However, parameters obtained from physiological sensors can vary with an individual's age, gender, physical condition, etc., and analyzing data from a single sensor could mislead the diagnosis. One proposition today is that sensor signal fusion can provide a more reliable and efficient outcome than data from a single sensor, and it is becoming significant in numerous diagnostic fields, including medical diagnosis and classification. Case-Based Reasoning (CBR) is another well-established and recognized method in the health sciences. Here, an entropy-based algorithm, Multivariate Multiscale Entropy analysis, has been selected to fuse multiple sensor signals. Other physiological sensor signal measurements are also taken into consideration for system evaluation. A CBR system is proposed to classify 'healthy' and 'stressed' persons using both the fused features and other physiological features, i.e., Heart Rate Variability (HRV), Respiratory Sinus Arrhythmia (RSA), and Finger Temperature (FT). The evaluation and performance analysis of the system have been carried out, and the results of the classification based on data fusion and physiological measurements are presented in this thesis.
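
The entropy-based fusion named above can be sketched as follows: each physiological channel is coarse-grained at several time scales and a sample-entropy value is computed per scale, yielding one fused feature vector for the CBR classifier. This per-channel simplification (parameters `m`, `r`, and the scale set are assumed values) omits the cross-channel embedding of the full Multivariate Multiscale Entropy algorithm.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1D signal after z-normalization."""
    x = (x - x.mean()) / x.std()
    def match_rate(mm):
        # Templates of length mm, truncated so both lengths use len(x) - m rows.
        t = np.lib.stride_tricks.sliding_window_view(x, mm)[:len(x) - m]
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)
        n = len(t)
        return ((d < r).sum() - n) / (n * (n - 1))   # exclude self-matches
    b, a = match_rate(m), match_rate(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def coarse_grain(x, scale):
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def multiscale_entropy_features(signals, scales=(1, 2, 3, 4, 5)):
    # One entropy per (channel, scale) pair forms the fused feature vector.
    return np.array([[sample_entropy(coarse_grain(s, sc)) for sc in scales]
                     for s in signals]).ravel()

# Toy usage: three synthetic channels -> a 15-element fused feature vector.
rng = np.random.default_rng(1)
print(multiscale_entropy_features([rng.standard_normal(600) for _ in range(3)]).shape)
```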
83

Multi-agent System Distributed Sensor Fusion Algorithms

Bhattacharya, Shaondip January 2017
The concept of consensus filters for sensor fusion is not an entirely new proposition, but one with an internally implemented Bayesian fusion is. This work documents a novel state update algorithm for sensor fusion that applies Bayesian fusion of data with variance on top of a single-integrator consensus algorithm. Comparative demonstrations of how consensus over a pinning network is reached are presented, along with a weighted Bayesian Luenberger-type observer and a 'Consensus on estimates' algorithm. To the best of our knowledge, this type of filter is novel and has not been encountered in previous literature on the topic. In this work, we also extend the proof for a distributed Luenberger-type observer design to include the case where the network being considered is a strongly connected digraph.
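
A hedged sketch of the core idea, Bayesian (inverse-variance) fusion embedded inside a single-integrator consensus update, is given below. The ring topology, gain `eps`, and the static per-node variances are illustrative assumptions; the thesis's filter also covers pinning networks and directed graphs, which this toy omits.

```python
import numpy as np

def bayesian_consensus_step(estimates, variances, adjacency, eps=0.2):
    """One synchronous round: each node fuses its own and its neighbors'
    estimates by inverse-variance (Bayesian) weighting, then moves toward
    that fused value. Variances are held fixed in this simplified sketch."""
    new = estimates.copy()
    for i in range(len(estimates)):
        idx = np.append(np.flatnonzero(adjacency[i]), i)   # closed neighborhood
        w = 1.0 / variances[idx]                           # inverse-variance weights
        fused = np.sum(w * estimates[idx]) / w.sum()
        new[i] = estimates[i] + eps * (fused - estimates[i])
    return new

# Toy ring of four nodes, each holding a noisy measurement of a true value 10.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
var = np.array([0.25, 4.0, 1.0, 9.0])
est = 10.0 + np.sqrt(var) * np.random.default_rng(2).standard_normal(4)
for _ in range(100):
    est = bayesian_consensus_step(est, var, A)
print(est)   # all four nodes settle near a common, variance-weighted value
```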
84

Autonomous Fire Detection Robot Using Modified Voting Logic

Rehman, Adeel ur January 2015
Recent events at the Fukushima Nuclear Power Plant in Japan have created urgency in the scientific community to develop solutions for hostile industrial environments in case of a breakdown or natural disaster. There are many hazardous scenarios in an indoor industrial environment, such as the risk of fire, failure of high-speed rotary machines, and chemical leaks. Fire is one of the leading causes of workplace injuries and fatalities. The fire protection systems currently available in the market consist mainly of sprinkler systems and personnel on duty. With a sprinkler system, several things can go wrong: spraying water on a fire caused by an oil leak may even spread it, the water may damage machinery that is not under fire threat, and it can destroy expensive raw material, finished goods, and valuable electronic and printed data. There is a dire need for an inexpensive autonomous system that can detect and approach the source of such hazardous scenarios. This thesis focuses mainly on industrial fires, but the same or similar techniques applied to different sensors could allow the system to detect and approach other hostile situations in industrial workplaces. Autonomous robots can be equipped to detect potential fire threats and locate the source while avoiding obstacles during navigation. The proposed system uses Modified Voting Logic Fusion to approach and declare a potential fire source autonomously. The robot follows the increasing gradient of light and heat intensity to identify the threat and approach the source.
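
As a toy illustration of voting-logic fusion (the sensor set, thresholds, and accepted vote combinations are all assumptions, not the thesis's design), a declaration step could look like:

```python
def declare_fire(light, heat, smoke, th=(0.7, 0.6, 0.5)):
    """Vote-based fire declaration from normalized sensor intensities in [0, 1]."""
    votes = [light > th[0], heat > th[1], smoke > th[2]]
    # Assumed "modified" combinations: any two agreeing sensors, or one very
    # strong heat reading, are enough to declare a source and start the approach.
    return sum(votes) >= 2 or heat > 0.9

print(declare_fire(light=0.8, heat=0.65, smoke=0.2))   # True: light and heat agree
```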
85

Smartphone Based Indoor Positioning Using Wi-Fi Round Trip Time and IMU Sensors / Smartphone-baserad inomhuspositionering med Wi-Fi Round-Trip Time och IMU-sensorer

Aaro, Gustav January 2020
While GPS has long been an industry standard for localization of an entity or person anywhere in the world, it loses much of its accuracy and value when used indoors. To enable services such as indoor navigation, other methods must be used. A new standard of the Wi-Fi protocol, IEEE 802.11mc (Wi-Fi RTT), enables distance estimation between the transmitter and the receiver based on the Round-Trip Time (RTT) delay of the signal. Using these distance estimations and the known locations of the transmitting Access Points (APs), an estimate of the receiver's location can be determined. In this thesis, a smartphone Wi-Fi RTT-based Indoor Positioning System (IPS) is presented using an Unscented Kalman Filter (UKF). The UKF, using only RTT-based distance estimations as input, is established as a baseline implementation. Two extensions are then presented to improve the positioning performance: 1) a dead reckoning algorithm using smartphone sensors that are part of the Inertial Measurement Unit (IMU) as an additional input to the UKF, and 2) a method to detect and adjust distance measurements that have been made in Non-Line-of-Sight (NLoS) conditions. The implemented IPS is evaluated in an office environment in both favorable situations (plenty of Line-of-Sight conditions) and sub-optimal situations (dominant NLoS conditions). Using both extensions, meter-level accuracy is achieved in both cases, as well as a 90th-percentile error of less than 2 meters.
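
The thesis filters positions with a UKF; as a simpler illustration of the underlying geometry, the sketch below recovers a position from RTT range estimates and known AP locations via Gauss-Newton least squares. The AP coordinates and noise level are invented for the example.

```python
import numpy as np

def rtt_position(anchors, distances, iters=20):
    """Gauss-Newton least-squares fix from RTT range estimates.
    anchors: (k, 2) known AP positions; distances: (k,) measured ranges."""
    p = anchors.mean(axis=0)                  # initial guess: AP centroid
    for _ in range(iters):
        diff = p - anchors
        rng = np.linalg.norm(diff, axis=1)    # predicted ranges from p
        J = diff / rng[:, None]               # Jacobian d(range)/d(p)
        p = p + np.linalg.lstsq(J, distances - rng, rcond=None)[0]
    return p

# Toy usage: four APs at the corners of a 10 m room, receiver at (3, 4).
aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true = np.array([3.0, 4.0])
d = np.linalg.norm(aps - true, axis=1) + np.random.default_rng(3).normal(0, 0.1, 4)
print(rtt_position(aps, d))                   # close to (3, 4)
```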
86

Harnessing Multiscale Nonimaging Optics for Automotive Flash LiDAR and Heterogeneous Semiconductor Integration

January 2020
Though a single mode of energy transfer, optical radiation meaningfully interacts with its surrounding environment over a wide range of physical length scales. For this reason, its reconstruction and measurement are of great importance in remote sensing, as these multi-scale interactions encode a great deal of information about distant objects, surfaces, and physical phenomena. For some remote sensing applications, obtaining a desired quantity of interest does not necessitate the explicit mapping of each point in object space to an image space with lenses or mirrors. Instead, only edge rays or physical boundaries of the sensing instrument are considered, while the spatial intensity distribution of optical energy received from a distant object informs its position, optical characteristics, or physical/chemical state. Admittedly specialized, the principles and consequences of non-imaging optics are nevertheless applicable to heterogeneous semiconductor integration and automotive light detection and ranging (LiDAR), two important emerging technologies. Indeed, a review of relevant engineering literature finds two under-addressed remote sensing challenges. The semiconductor industry lacks an optical strain metrology with displacement resolution smaller than 100 nanometers capable of measuring strain fields between high-density interconnect lines. Meanwhile, little attention is paid to the per-meter sensing characteristics of scene-illuminating flash LiDAR in the context of automotive applications, despite the technology's much lower cost. It is here that non-imaging optics offers intriguing instrument designs and explanations of observed sensor performance at vastly different length scales. In this thesis, an effective non-contact technique for mapping nanoscale mechanical strain fields and out-of-plane surface warping via laser diffraction is demonstrated, with application as a novel metrology for next-generation semiconductor packages. Additionally, the object detection distance of low-cost automotive flash LiDAR, on the order of tens of meters, is understood through principles of optical energy transfer from the surface of a remote object to an extended multi-segment detector. Such information is of consequence when designing an automotive perception system to recognize various roadway objects in low-light scenarios. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2020
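
The diffraction-based strain measurement rests on the grating equation: for first-order diffraction, line pitch d = λ / sin θ, so a shift in the measured diffraction angle maps to a pitch change and hence an in-plane strain. The numbers below are invented to show the arithmetic, not values from the dissertation.

```python
import numpy as np

wavelength = 532e-9                              # assumed green laser, meters
theta_ref, theta_meas = np.radians(32.00), np.radians(31.95)

pitch_ref = wavelength / np.sin(theta_ref)       # grating equation, order m = 1
pitch_meas = wavelength / np.sin(theta_meas)
strain = (pitch_meas - pitch_ref) / pitch_ref    # fractional pitch elongation
print(f"strain = {strain:.3e}")                  # ~1.4e-3 for this 0.05 deg shift
```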
87

Sensor Fusion for 3D Object Detection for Autonomous Vehicles

Massoud, Yahya 14 October 2021
Thanks to major advancements in hardware and computational power, sensor technology, and artificial intelligence, the race for fully autonomous driving systems is heating up. With a countless number of challenging conditions and driving scenarios, researchers are tackling the most challenging problems in driverless cars. One of the most critical components is the perception module, which enables an autonomous vehicle to "see" and "understand" its surrounding environment. Given that modern vehicles can have a large number of sensors and available data streams, this thesis presents a deep learning-based framework that leverages multimodal data, i.e. sensor fusion, to perform the task of 3D object detection and localization. We provide an extensive review of the advancements of deep learning-based methods in computer vision, specifically in 2D and 3D object detection tasks. We also study the progress of the literature in both single-sensor and multi-sensor data fusion techniques. Furthermore, we present an in-depth explanation of our proposed approach that performs sensor fusion using input streams from LiDAR and camera sensors, aiming to simultaneously perform 2D, 3D, and Bird's Eye View detection. Our experiments highlight the importance of learnable data fusion mechanisms and multi-task learning, the impact of different CNN design decisions, speed-accuracy tradeoffs, and ways to deal with overfitting in multi-sensor data fusion frameworks.
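
A minimal PyTorch sketch of one learnable fusion mechanism of the kind the experiments point to, a gated 1x1-convolution merge of camera and LiDAR bird's-eye-view feature maps, is shown below. Channel counts and the gating design are assumptions, not the architecture proposed in the thesis.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, cam_ch=64, lidar_ch=64, out_ch=128):
        super().__init__()
        self.proj = nn.Conv2d(cam_ch + lidar_ch, out_ch, kernel_size=1)
        # A learned per-channel gate decides how much each modality contributes.
        self.gate = nn.Sequential(
            nn.Conv2d(cam_ch + lidar_ch, out_ch, kernel_size=1), nn.Sigmoid())

    def forward(self, cam_feat, lidar_feat):
        x = torch.cat([cam_feat, lidar_feat], dim=1)
        return self.proj(x) * self.gate(x)

fusion = GatedFusion()
cam = torch.randn(1, 64, 100, 100)    # image features warped to BEV (assumed)
lidar = torch.randn(1, 64, 100, 100)  # LiDAR BEV features
print(fusion(cam, lidar).shape)       # torch.Size([1, 128, 100, 100])
```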
88

A Novel Fusion Technique for 2D LIDAR and Stereo Camera Data Using Fuzzy Logic for Improved Depth Perception

Saksena, Harsh 08 1900
Indiana University-Purdue University Indianapolis (IUPUI) / Obstacle detection, avoidance and path finding for autonomous vehicles requires precise information about the vehicle's environment for faultless navigation and decision making. As such, vision and depth-perception sensors have become an integral part of autonomous vehicles in current research and development of the autonomous industry. The advancements made in vision sensors such as radars, Light Detection And Ranging (LIDAR) sensors, and compact high-resolution cameras are encouraging; however, individual sensors can be prone to error and misinformation due to environmental factors such as scene illumination, object reflectivity, and object transparency. Sensor fusion, the utilization of multiple sensors perceiving similar or related information over a network, is implemented to provide more robust and complete system information and to minimize the overall perceived error of the system. 3D LIDARs and monocular cameras are the most commonly utilized vision sensors for implementing sensor fusion. 3D LIDARs boast high accuracy and resolution for depth capture in any given environment and have a broad range of applications, such as terrain mapping and 3D reconstruction. Despite the 3D LIDAR being the superior sensor for depth, its high cost and sensitivity to its environment make it a poor choice for mid-range applications such as autonomous rovers, RC cars, and robots. 2D LIDARs are more affordable, more easily available, and have a wider range of applications than 3D LIDARs, making them the more obvious choice for budget projects. The primary objective of this thesis is to implement a smart and robust sensor fusion system using a 2D LIDAR and a stereo depth camera to capture depth and color information of an environment. The depth points generated by the LIDAR are fused with the depth map generated by the stereo camera by a fuzzy system that implements smart fusion and corrects any gaps in the depth information of the stereo camera. The use of a fuzzy system for sensor fusion of a 2D LIDAR and a stereo camera is a novel approach to the sensor fusion problem, and the output of the fuzzy fusion provides higher depth confidence than either sensor provides individually. In this thesis, we explore the multiple layers of sensor and data fusion applied to the vision system, both on the camera and LIDAR data individually and in relation to each other. We detail the development and implementation of the fuzzy-logic-based fusion approach, the fuzzification of the input data, the method of selection of the fuzzy system for depth-specific fusion for the given vision system, and how fuzzy logic can be utilized to provide information that is vastly more reliable than the information provided by the camera and LIDAR separately.
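
A hedged sketch of the fuzzy blending step follows: triangular memberships over stereo confidence drive a per-pixel weight between the stereo depth map and the LIDAR reading. The membership parameters and the hole-filling rule are assumptions for illustration, not the fuzzy system developed in the thesis.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def fuzzy_depth_fusion(stereo_depth, stereo_conf, lidar_depth):
    """Per-pixel blend of stereo depth with the (interpolated) LIDAR depth.
    stereo_conf in [0, 1]; membership parameters are assumed values."""
    low = tri(stereo_conf, -0.5, 0.0, 0.6)     # "stereo unreliable"
    high = tri(stereo_conf, 0.4, 1.0, 1.5)     # "stereo reliable"
    w_lidar = low / (low + high + 1e-9)        # defuzzified LIDAR weight
    fused = w_lidar * lidar_depth + (1.0 - w_lidar) * stereo_depth
    # Holes in the stereo map fall back to the LIDAR reading outright.
    return np.where(np.isnan(stereo_depth), lidar_depth, fused)

# Toy usage on a single scan line: low-confidence pixels snap to the LIDAR.
stereo = np.array([2.0, np.nan, 2.2, 5.0])
conf = np.array([0.9, 0.0, 0.8, 0.1])
lidar = np.array([2.1, 2.1, 2.1, 2.1])
print(fuzzy_depth_fusion(stereo, conf, lidar))
```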
89

Towards an improvement of BLE Direction Finding accuracy using Dead Reckoning with inertial sensors / Mot en förbättring av precisionen hos BLE Direction Finding genom användning av Dead Reckoning

Rumar, Tove, Juelsson Larsen, Ludvig January 2021
Whilst GPS positioning has been a well-used technology for many years in outdoor environments, a ubiquitous solution for indoor positioning is yet to be found, as GPS positioning is unreliable indoors. This thesis focuses on the combination of inertial-sensor Dead Reckoning and positions obtained from the Bluetooth Low Energy (BLE) Direction Finding technique. The main objective is to reduce the error rate and error size of a BLE Direction Finding system. The positioned object is a Micro-Electrical Mechanical System (MEMS) with an accelerometer and a gyroscope, placed on a trolley. The accelerometer and gyroscope are used to obtain an orientation, a velocity vector, and in turn a position, which is combined with the BLE Direction Finding position. To further reduce the error rate of the system, a Stationary Detection functionality is implemented. Because of the trolley movement pattern causing noise in the sensor signals, and the limited sensor setup, it is not possible to increase the accuracy of the system using the proposed method. However, the Stationary Detection is able to correctly determine a stationary state, thus decreasing the error rate and power consumption. / GPS är en väl använd teknologi sedan många år, men på grund av dess bristande precision vid inomhuspositionering, behöver en ny teknologi för detta område hittas. Denna studie är fokuserad på Dead Reckoning som ett stöd till ett Bluetooth Direction Finding positioneringssystem. Det främsta målet är att minska felfrekvensen och felstorleken i BLE Direction Finding systemet. Föremålet som positioneras är en Micro-Electrical Mechanical System (MEMS) med en accelerometer och ett gyroskop, placerad på en vagn. Accelerometern och gyroskopet används för att erhålla en orientering, hastighetsvektor och därefter en position som kombineras med den position som ges av BLE Direction Finding. För att minska felfrekvensen ytterligare hos systemet, implementeras en funktionalitet som detekterar om MEMS-enheten är stillastående, kallad Stationary Detection. På grund av vagnens rörelsemönster, som bidrar till brus hos sensorsignalerna, samt den begränsade sensorkonfigurationen, är det inte möjligt att förbättra systemets precision med den föreslagna metoden. Dock kan Stationary Detection korrekt fastställa ett stationärt tillstånd och därmed minska felfrekvensen och energiförbrukningen för enheten.
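
The interaction between Stationary Detection and dead reckoning can be sketched as below; the thresholds and the zero-velocity reset are assumed details consistent with, but not taken from, the thesis.

```python
import numpy as np

def is_stationary(accel_win, gyro_win, acc_th=0.05, gyro_th=0.02):
    """Stationary when accelerometer variance and gyro rates both stay below
    small thresholds over a short window (threshold values are assumptions)."""
    return accel_win.std(axis=0).max() < acc_th and np.abs(gyro_win).max() < gyro_th

def dead_reckon_step(pos, vel, accel, dt, stationary):
    """Integrate acceleration unless stationary; a zero-velocity update then
    prevents integration drift from accumulating while the trolley is parked."""
    if stationary:
        return pos, np.zeros_like(vel)
    vel = vel + accel * dt
    return pos + vel * dt, vel
```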
90

Sensor Integration for Low-Cost Crash Avoidance

Roussel, Stephane M 01 November 2009
This report summarizes the development of sensor integration for low-cost crash avoidance for over-land commercial trucks. The goal of the project was to build and test a system composed of low-cost, commercially available sensors arranged on a truck trailer to monitor the environment around the truck. The system combines the data from each sensor to increase detection reliability using a probabilistic data fusion approach. A combination of ultrasonic and magnetoresistive sensors was used in this study. In addition, radar and digital imaging were investigated as reference signals and possible candidates for additional sensor integration; however, the primary focus of this work is the integration of the ultrasonic and magnetoresistive sensors. During the investigation, the individual sensors were evaluated for their use in the system. This included communication with vendors and lab and field testing. In addition, the sensors were modeled using an analytical mathematical model to help understand and predict sensor behavior. Next, an algorithm was developed to fuse the data from the individual sensors. A probabilistic approach was used, based on Bayesian filtering with a prediction-correction algorithm. Sensor fusion was implemented using a joint probability algorithm. The output of the system is a prediction of the likelihood of the presence of a vehicle in a given region near the host truck trailer. The algorithm was demonstrated on the fusion of an ultrasonic sensor and a magnetic sensor. Testing was conducted using both a light pickup truck and a Class 8 truck. Various scenarios were evaluated to determine the system performance, including vehicles passing the host truck from behind and the host truck passing vehicles. Scenarios were also included to test the system's ability to distinguish other vehicles from objects that are not vehicles, such as sign posts, walls, or railroads, which could produce electronic signals similar to those of vehicles and confuse the system. The test results indicate that the system was successful at predicting the presence and absence of vehicles and at eliminating false positives from objects that are not vehicles, with overall accuracy ranging from 90 to 100% depending on the scenario. Some additional improvement in performance is expected with future refinements of the algorithm discussed in the report. The report includes a discussion of mapping the algorithm output to current and future safety and crash avoidance technologies based on the level of confidence of the algorithm output and the seriousness of the impending crash scenario. For example, irreversible countermeasures such as firing an airbag or engaging the brakes should only be initiated if the confidence of the signal is very high, while reversible countermeasures such as warnings to the driver or nearby vehicles can be initiated with relatively lower confidence. The results indicate that the system shows good potential as a low-cost alternative to competing systems that require multiple high-cost sensors. Truck fleet operators will likely adopt the technology only if the costs are justified by reduced damage and insurance costs; therefore, developing an effective crash avoidance system at low cost is required for the technology to be adopted on a large scale.
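
The prediction-correction idea can be illustrated with a recursive Bayes update of the vehicle-presence probability; the likelihood values below are invented for the example.

```python
def bayes_presence_update(prior, likelihood_present, likelihood_absent):
    """One correction step: fold one sensor reading's likelihoods into the
    probability that a vehicle occupies the monitored region."""
    num = likelihood_present * prior
    return num / (num + likelihood_absent * (1.0 - prior))

# Toy usage: an ultrasonic then a magnetic reading both suggest a vehicle.
p = 0.2                                   # prediction-step prior
p = bayes_presence_update(p, 0.8, 0.3)    # ultrasonic: P(z|present) = 0.8
p = bayes_presence_update(p, 0.7, 0.2)    # magnetic reading
print(round(p, 3))                        # 0.7 after both corrections
```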
