141.
Sensor Fused Scene Reconstruction and Surface Inspection. Moodie, Daniel Thien-An, 17 April 2014.
Optical three-dimensional (3D) mapping routines are used in inspection robots to detect faults by creating 3D reconstructions of environments. Detecting surface faults requires sub-millimeter depth resolution to resolve the minute differences caused by coating loss and pitting. Sensors that can detect these small depth differences, however, cannot quickly create contextual maps of large environments.
To solve this 3D mapping problem, a sensor-fused approach is proposed that gathers contextual information about large environments with one depth sensor and a SLAM routine, while local surface defects are measured with an actuated optical profilometer. The depth sensor uses a modified Kinect Fusion to create a contextual map of the environment. A custom actuated optical profilometer is built and calibrated. The two systems are then registered to each other so that local surface scans from the profilometer are placed into the scene context created by Kinect Fusion.
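As a rough illustration of that registration step, the sketch below maps a local profilometer scan into the global scene frame through a single rigid transform; the 4x4 homogeneous-matrix formulation and all names are illustrative assumptions, not the thesis's actual calibration pipeline.

```python
import numpy as np

def to_homogeneous(points):
    """Append a 1 to each 3-D point so 4x4 transforms can act on it."""
    return np.hstack([points, np.ones((points.shape[0], 1))])

def place_scan_in_scene(profilometer_points, T_scene_sensor):
    """Map an Nx3 local profilometer scan into the global scene frame.

    T_scene_sensor is the 4x4 rigid transform from the profilometer frame
    to the Kinect Fusion scene frame, assumed known here from calibration
    and the actuator's forward kinematics.
    """
    pts_h = to_homogeneous(profilometer_points)      # N x 4
    return (T_scene_sensor @ pts_h.T).T[:, :3]       # back to N x 3

# Example: a 1 cm surface patch offset 0.5 m along the scene's x-axis.
T = np.eye(4)
T[0, 3] = 0.5
local_scan = np.random.rand(100, 3) * 0.01
global_scan = place_scan_in_scene(local_scan, T)
```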
The resulting system can create a contextual map of large-scale features (0.4 m) with less than 10% error, while the optical profilometer produces surface reconstructions with sub-millimeter resolution. The combination of the two allows surface faults to be detected and quantified by the profilometer and placed within a contextual reconstruction. / Master of Science
142.
Non-Field-of-View Acoustic Target Estimation. Takami, Kuya, 12 October 2015.
This dissertation proposes a new framework for non-field-of-view (NFOV) sound source localization and tracking in indoor environments. The approach exploits sound signal information to localize a target, combining auditory sensors with other sensors within a grid-based recursive estimation structure that supports tracking with nonlinear and non-Gaussian observations.
Three approaches to NFOV target localization are investigated, all of which estimate target positions within the recursive Bayesian estimation (RBE) framework. The first technique uses a numerical fingerprinting solution based on acoustic cues from a fixed microphone array in a complex indoor environment: the interaural level differences (ILDs) of a microphone pair in a given environment are compiled into an a priori database and used to compute the observation likelihood during estimation. The approach was validated in a parametrically controlled test environment, followed by validation in real environments. The second approach combines acoustic sensors with an optical sensor to assist target estimation under NFOV conditions; this hybrid of the two sensors constructs the observation likelihood through sensor fusion. The third, model-based technique localizes the target by exploiting wave propagation physics, namely sound diffraction and reflection, and thus requires no a priori knowledge database of the kind needed by the first two techniques.
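The grid-based estimation structure shared by these techniques can be summarized in a few lines. Below is a minimal sketch of one RBE predict/update cycle on a 2-D grid, assuming a Gaussian random-walk motion model and a given observation likelihood (which the first technique would draw from the ILD fingerprint database); all parameter values are invented for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rbe_step(belief, likelihood, motion_sigma=1.0):
    """One predict/update cycle of grid-based recursive Bayesian estimation.

    belief      : 2-D array, current posterior over grid cells (sums to 1)
    likelihood  : 2-D array, P(observation | target in cell); a stand-in
                  for a lookup into an ILD fingerprint database
    motion_sigma: spread of a Gaussian random-walk motion model (assumed)
    """
    predicted = gaussian_filter(belief, sigma=motion_sigma)  # predict
    posterior = predicted * likelihood                       # update
    return posterior / posterior.sum()                       # normalize

# Toy run: uniform prior, likelihood peaked near one cell.
belief = np.full((50, 50), 1.0 / 2500)
xx, yy = np.meshgrid(np.arange(50), np.arange(50))
likelihood = np.exp(-((xx - 30) ** 2 + (yy - 20) ** 2) / 20.0)
belief = rbe_step(belief, likelihood)
print(np.unravel_index(belief.argmax(), belief.shape))  # ~ (row 20, col 30)
```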
To demonstrate the localization performance of the proposed approach, a series of parameterized numerical and experimental studies was conducted, confirming the validity of the formulation and its applicability to real environments. / Ph. D.
143.
From robotics to healthcare: toward clinically-relevant 3-D human pose tracking for lower limb mobility assessments. Mitjans i Coma, Marc, 11 September 2024.
With an increase in age comes an increase in the risk of frailty and mobility decline, which can lead to dangerous falls and can even be a cause of mortality. Despite these serious consequences, healthcare systems remain reactive, highlighting the need for technologies to predict functional mobility decline. In this thesis, we present an end-to-end autonomous functional mobility assessment system that seeks to bridge the gap between robotics research and clinical rehabilitation practices. Unlike many fully integrated black-box models, our approach emphasizes the need for a system that is both reliable as well as transparent to facilitate its endorsement and adoption by healthcare professionals and patients.
Our proposed system is characterized by the sensor fusion of multimodal data using an optimization framework known as factor graphs. This method, widely used in robotics, enables us to obtain visually interpretable 3-D estimations of the human body in recorded footage. These representations are then used to implement autonomous versions of standardized assessments employed by physical therapists for measuring lower-limb mobility, using a combination of custom neural networks and explainable models.
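In their simplest form, the factor graphs mentioned above reduce to a sparse nonlinear least-squares problem over the state trajectory. The sketch below fuses noisy per-frame detections of a single 1-D joint coordinate with a smoothness factor; the factor choices and weights are illustrative assumptions, not the system's actual graph.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(x, z, w_meas=1.0, w_smooth=5.0):
    """Stack measurement factors (x_t ~ z_t) and smoothness factors
    (x_t ~ x_{t-1}); each factor contributes one weighted residual."""
    meas = w_meas * (x - z)
    smooth = w_smooth * np.diff(x)
    return np.concatenate([meas, smooth])

t = np.linspace(0, 1, 100)
truth = np.sin(2 * np.pi * t)                 # true joint trajectory
z = truth + 0.3 * np.random.randn(100)        # noisy per-frame detections
sol = least_squares(residuals, x0=z, args=(z,))
smoothed = sol.x                              # MAP estimate of the trajectory
```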
To improve the accuracy of the estimations, we investigate the application of the Koopman operator framework to learn linear representations of human dynamics; we leverage these outputs as prior information to enhance temporal consistency across entire movement sequences. Furthermore, inspired by the inherent stability of natural human movement, we propose ways to impose stability constraints on the dynamics during the training of linear Koopman models. In this light, we propose a sufficient condition for the stability of discrete-time linear systems that can be represented as a set of convex constraints, and we demonstrate how it can be seamlessly integrated into larger-scale gradient descent optimization methods.
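The thesis's specific convex condition is not reproduced in this abstract; a classical example of such a constraint is the spectral-norm bound ||A||_2 <= 1, which implies stability of x_{k+1} = A x_k because rho(A) <= ||A||_2 and is convex in A. The sketch below enforces it inside gradient descent by projecting singular values after each step, under that assumption.

```python
import numpy as np

def project_to_stable(A, margin=1e-3):
    """Project A onto {A : ||A||_2 <= 1 - margin}, a convex set sufficient
    for stability because the spectral radius is bounded by ||A||_2."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.minimum(s, 1.0 - margin)) @ Vt

def fit_stable_koopman(X, Y, lr=1e-3, steps=2000):
    """Projected gradient descent for min ||A X - Y||_F^2 s.t. ||A||_2 <= 1.
    X, Y are (n_states x n_samples) snapshot pairs (lifted Koopman
    observables in the full method)."""
    A = np.zeros((Y.shape[0], X.shape[0]))
    for _ in range(steps):
        grad = (A @ X - Y) @ X.T          # gradient (up to a factor of 2)
        A = project_to_stable(A - lr * grad)
    return A

rng = np.random.default_rng(0)
A_true = project_to_stable(rng.standard_normal((4, 4)))
X = rng.standard_normal((4, 500))
A_hat = fit_stable_koopman(X, A_true @ X)
print(np.max(np.abs(np.linalg.eigvals(A_hat))))   # <= 1 by construction
```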
Lastly, we report the performance of our human pose detection and autonomous mobility assessment systems by evaluating them on outcome mobility datasets collected in controlled laboratory settings and in unconstrained real-life home environments. While further research is still needed, the study results indicate that the system shows promising performance in assessing mobility in home environments. These findings underscore the significant potential of this and similar technologies to revolutionize physical therapy practices.
144.
Pose Estimation and 3D Bounding Box Prediction for Autonomous Vehicles Through Lidar and Monocular Camera Sensor Fusion. Wale, Prajakta Nitin, 08 August 2024.
This thesis investigates the integration of transfer learning with ResNet-101 and compares its performance with VGG-19 for 3D object detection in autonomous vehicles. ResNet-101 is a deep convolutional neural network with 101 layers; VGG-19 is one with 19 layers. The research emphasizes the fusion of camera and lidar outputs to enhance the accuracy of 3D bounding box estimation, which is critical in occluded environments. Selecting an appropriate backbone for feature extraction is pivotal for achieving high detection accuracy. To address this challenge, we propose a method leveraging transfer learning with ResNet-101, pretrained on large-scale image datasets, to improve feature extraction capabilities. An averaging technique is applied to the outputs of the two sensor pipelines to obtain the final bounding box. The experimental results demonstrate that the ResNet-101-based model outperforms the VGG-19-based model in terms of accuracy and robustness. This study provides valuable insights into the effectiveness of transfer learning and multi-sensor fusion in advancing 3D object detection for autonomous driving. / Master of Science / In the realm of computer vision, the quest for more accurate and robust 3D object detection pipelines remains an ongoing pursuit. This thesis investigates advanced techniques to improve 3D object detection by comparing two popular deep learning models, ResNet-101 and VGG-19. The study focuses on enhancing detection accuracy by combining the outputs from two distinct methods: one that uses a monocular camera to estimate 3D bounding boxes and another that employs lidar's bird's-eye view (BEV) data, converting it to image-based 3D bounding boxes. This fusion of outputs is critical in environments where objects may be partially obscured. By leveraging transfer learning, a method in which models pretrained on larger datasets are fine-tuned for a specific application, the research shows that ResNet-101 significantly outperforms VGG-19 in terms of accuracy and robustness. The approach averages the outputs from both methods to refine the final 3D bounding box estimation. This work highlights the effectiveness of combining different detection methodologies and using advanced machine learning techniques to advance 3D object detection technology.
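A toy version of the output-averaging step described above: each pipeline contributes a 3D box as (center, size, yaw), and the fused box is their element-wise average, with yaw averaged on the unit circle. This parametrization and the equal weighting are assumptions for illustration, not the thesis's exact formulation.

```python
import numpy as np

def average_boxes(box_cam, box_lidar):
    """Fuse two 3-D boxes given as dicts with 'center' (x, y, z),
    'size' (l, w, h) and 'yaw' (radians). Yaw is averaged via its
    sine/cosine to avoid wrap-around artifacts."""
    center = (np.asarray(box_cam["center"]) + np.asarray(box_lidar["center"])) / 2
    size = (np.asarray(box_cam["size"]) + np.asarray(box_lidar["size"])) / 2
    yaw = np.arctan2(
        np.sin(box_cam["yaw"]) + np.sin(box_lidar["yaw"]),
        np.cos(box_cam["yaw"]) + np.cos(box_lidar["yaw"]),
    )
    return {"center": center, "size": size, "yaw": yaw}

cam_box = {"center": (10.2, 3.1, 0.9), "size": (4.5, 1.8, 1.5), "yaw": 0.10}
lidar_box = {"center": (10.0, 3.0, 1.0), "size": (4.3, 1.9, 1.6), "yaw": 0.16}
print(average_boxes(cam_box, lidar_box))
```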
145.
Enhancing data-driven process quality control in metal additive manufacturing: sensor fusion, physical knowledge integration, and anomaly detection. Zamiela, Christian E., 10 May 2024.
This dissertation provides critical methodological advancements in sensor fusion and physics-informed machine learning for metal additive manufacturing (MAM) to assist practitioners in detecting structural anomalies for quality control. In MAM, there is an urgent need to improve knowledge of the internal layer fusion process and of the geometric variation occurring during directed energy deposition. A core challenge lies in the cyclic heating process, which results in various structural abnormalities and deficiencies that reduce the reproducibility of manufactured components; these abnormalities include microstructural heterogeneities, porosity, deformation and distortion, and residual stresses. Data-driven monitoring in MAM is needed to capture process variability, but it is hampered by the difficulty of capturing the thermal history distribution and subsurface structural changes, given the limitations of in-situ data collection, physical domain knowledge integration, and multi-modal, multi-physics data fusion. The research gaps in developing system-based, generalizable artificial intelligence (AI) and machine learning (ML) for anomaly detection are threefold: (1) limited fusion of different sensor data types without handcrafted feature selection; (2) a lack of physical domain knowledge integration across systems, geometries, and materials; and (3) the need for sensor and system integration platforms that enable a holistic view for quality control predictions in the additive manufacturing process. In this dissertation, three studies utilize various data types and ML methodologies for predicting in-process anomalies. First, a complementary sensor fusion methodology joins thermal and ultrasonic image data, capturing layer fusion and structural knowledge, for layer-wise porosity segmentation. Second, a physics-informed data-driven methodology that joins thermal infrared image data with a Goldak heat flux model improves thermal history simulation and deformation detection. Lastly, a physics-informed machine learning methodology constrained by thermal physical functions uses in-process multi-modal monitoring data from a digital twin environment to predict distortion in the weld bead. This dissertation provides practitioners with data-driven and physics-based interpolation methods, multi-modal sensor fusion, and anomaly detection insights, trained and validated on three case studies.
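The Goldak heat source mentioned in the second study has a standard double-ellipsoid closed form; a sketch follows, with every parameter value invented for illustration rather than taken from the dissertation.

```python
import numpy as np

def goldak_flux(x, y, z, Q, a_f, a_r, b, c, f_f=0.6, f_r=1.4):
    """Goldak double-ellipsoid volumetric heat flux [W/m^3].

    x is measured along the travel direction from the source center; the
    front lobe (x >= 0) uses semi-axis a_f, the rear lobe a_r, with
    f_f + f_r = 2 by convention. Q is the absorbed power. All numbers
    below are placeholders.
    """
    a = np.where(x >= 0.0, a_f, a_r)
    f = np.where(x >= 0.0, f_f, f_r)
    coeff = 6.0 * np.sqrt(3.0) * f * Q / (a * b * c * np.pi * np.sqrt(np.pi))
    return coeff * np.exp(-3 * (x / a) ** 2 - 3 * (y / b) ** 2 - 3 * (z / c) ** 2)

# Peak flux at the source center for a 1 kW source with mm-scale lobes.
print(goldak_flux(0.0, 0.0, 0.0, Q=1000.0, a_f=2e-3, a_r=4e-3, b=2e-3, c=1e-3))
```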
146.
A Lightweight Camera-LiDAR Fusion Framework for Traffic Monitoring Applications / A Camera-LiDAR Fusion Framework. Sochaniwsky, Adrian, January 2024.
Intelligent Transportation Systems are advanced technologies used to reduce traffic and increase road safety for vulnerable road users. Real-time traffic monitoring is an important technology for collecting and reporting the information required to achieve these goals through the detection and tracking of road users inside an intersection. To be effective, these systems must be robust to all environmental conditions. This thesis explores the fusion of camera and Light Detection and Ranging (LiDAR) sensors to create an accurate, real-time traffic monitoring system. Sensor fusion leverages complementary characteristics of the sensors to increase system performance in low-light and inclement weather conditions. To achieve this, three primary components are developed: a 3D LiDAR detection pipeline, a camera detection pipeline, and a decision-level sensor fusion module. The proposed pipeline is lightweight, running at 46 Hz on modest computer hardware, and accurate, scoring 3% higher than the camera-only pipeline on the Higher Order Tracking Accuracy metric. The camera-LiDAR fusion system is built on the ROS 2 framework, which provides a well-defined and modular interface for developing and evaluating new detection and tracking algorithms.
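As a hedged illustration of what such a decision-level module can look like (not the thesis's actual design), the sketch below matches camera and LiDAR detections by ground-plane distance with the Hungarian algorithm, merges matched pairs, and passes unmatched detections through; the gating threshold and merge rule are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse_detections(cam, lidar, gate=1.5):
    """Decision-level fusion of two detection lists.

    cam, lidar: (N, 2) and (M, 2) arrays of ground-plane centroids [m].
    Matched pairs closer than `gate` are merged by averaging; unmatched
    detections from either sensor are kept as-is.
    """
    if len(cam) == 0 or len(lidar) == 0:
        return np.vstack([cam.reshape(-1, 2), lidar.reshape(-1, 2)])
    cost = np.linalg.norm(cam[:, None, :] - lidar[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    fused, used_c, used_l = [], set(), set()
    for r, c in zip(rows, cols):
        if cost[r, c] <= gate:
            fused.append((cam[r] + lidar[c]) / 2)
            used_c.add(r)
            used_l.add(c)
    fused += [cam[i] for i in range(len(cam)) if i not in used_c]
    fused += [lidar[j] for j in range(len(lidar)) if j not in used_l]
    return np.array(fused)

cam = np.array([[4.0, 1.0], [12.0, -2.0]])
lidar = np.array([[4.2, 1.1], [30.0, 5.0]])
print(fuse_detections(cam, lidar))   # one merged pair plus two singletons
```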
Overall, the fusion of camera and LiDAR sensors will enable future traffic monitoring systems to provide cities with real-time information critical for increasing safety and convenience for all road users. / Thesis / Master of Applied Science (MASc) / Accurate traffic monitoring systems are needed to improve the safety of road users. These systems allow the intersection to "see" vehicles and pedestrians, providing near-instant information to assist future autonomous vehicles, and provide data to city planners and officials to enable reductions in traffic, emissions, and travel times. This thesis aims to design, build, and test a traffic monitoring system that uses a camera and 3D laser scanner to find and track road users in an intersection. By combining a camera and 3D laser scanner, this system aims to perform better than either sensor alone. Furthermore, this thesis collects test data to prove the system is accurate and able to see vehicles and pedestrians during the day and night, and tests whether it runs fast enough for "live" use.
147.
Applications of Sensor Fusion to Classification, Localization and Mapping. Abdelbar, Mahi Othman Helmi Mohamed Helmi Hussein, 30 April 2018.
Sensor fusion is an essential framework in many engineering fields. It is a relatively new paradigm for integrating data from multiple sources to synthesize new information that would generally not have been feasible to obtain from the individual parts. Within the wireless communications field, many emerging technologies, such as Wireless Sensor Networks (WSN), the Internet of Things (IoT), and spectrum sharing schemes, depend on large numbers of distributed nodes working collaboratively and sharing information. In addition, there is a huge proliferation of smartphones in the world with a growing set of cheap, powerful embedded sensors. Smartphone sensors can collectively monitor a diverse range of human activities and the surrounding environment far beyond the scale of what was possible before. Wireless communications thus open up great opportunities for the application of sensor fusion techniques at multiple levels.
In this dissertation, we identify two key problems in wireless communications that can greatly benefit from sensor fusion algorithms: Automatic Modulation Classification (AMC), and indoor localization and mapping based on smartphone sensors. Automatic Modulation Classification is a key technology in Cognitive Radio (CR) networks, spectrum sharing, and wireless military applications. Although extensively researched, the performance of signal classification at a single node is largely bounded by channel conditions, which are often unreliable. Applying sensor fusion techniques to the signal classification problem within a network of distributed nodes is presented as a means to overcome the detrimental channel effects faced by single nodes and to provide more reliable classification performance.
Indoor localization and mapping has gained increasing interest in recent years. Currently deployed positioning techniques, such as the widely successful Global Positioning System (GPS), are optimized for outdoor operation. Providing indoor location estimates with high accuracy up to the room or suite level is an ongoing challenge. Recently, smartphone sensors, especially accelerometers and gyroscopes, have provided attractive solutions to the indoor localization problem through Pedestrian Dead-Reckoning (PDR) frameworks, although these still suffer from several challenges. Sensor fusion algorithms can provide new and efficient solutions to the indoor localization problem at two different levels: fusion of measurements from different sensors in a smartphone, and fusion of measurements from several smartphones within a collaborative framework. / Ph. D. / Sensor fusion is an essential paradigm in many engineering fields. Information from different nodes, sensing various phenomena, is integrated to produce a general synthesis of the individual data. Sensor fusion provides a better understanding of the sensed phenomenon, improves the application or system performance, and helps overcome noise in individual measurements. In this dissertation we study sensor fusion applications in wireless communications: (i) cooperative modulation classification and (ii) indoor localization and mapping at different levels. In cooperative modulation classification, data from distributed wireless nodes are combined to generate a decision about the modulation scheme of an unknown wireless signal. For indoor localization and mapping, measurements from smartphone sensors are combined through Pedestrian Dead-Reckoning (PDR) to re-create the movement trajectories of indoor mobile users, providing high-accuracy estimates of users' locations. In addition, measurements from collaborating users inside buildings are combined to enhance the trajectory estimates and overcome the performance limitations of single-user systems. The results presented in both parts of this dissertation show that combining data from collaborative sources greatly enhances system performance and opens the door for new and smart applications of sensor fusion in various wireless communications areas.
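A bare-bones version of the PDR update underlying these frameworks is sketched below: each detected step advances the position along the current heading. Step detection and heading estimation are reduced to given inputs here; real systems derive them from accelerometer peaks and gyroscope/magnetometer fusion.

```python
import numpy as np

def pdr_track(start, headings, step_lengths):
    """Dead-reckon a 2-D walking trajectory.

    headings     : heading at each detected step [rad], e.g. from a
                   gyroscope-integrated, magnetometer-corrected estimate
    step_lengths : per-step stride length [m], e.g. from accelerometer
                   amplitude models (both assumed given here)
    """
    pos = np.asarray(start, dtype=float)
    track = [pos.copy()]
    for theta, length in zip(headings, step_lengths):
        pos += length * np.array([np.cos(theta), np.sin(theta)])
        track.append(pos.copy())
    return np.array(track)

# Ten steps of ~0.7 m, gently curving left.
trajectory = pdr_track([0, 0], np.linspace(0, np.pi / 4, 10), [0.7] * 10)
print(trajectory[-1])
```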
148.
Self-Powered Intelligent Traffic Monitoring Using IR Lidar and Camera. Tian, Yi, 06 February 2017.
This thesis presents a novel self-powered infrastructural traffic monitoring approach that estimates traffic information by combining three detection techniques. The traffic information obtained from the presented approach includes vehicle counts, speed estimates, and vehicle classification based on size. Two categories of sensors are used: IR Lidar and IR camera. With these two sensors, three detection techniques are applied: time-of-flight (ToF) based, vision based, and laser-spot-flow based. Each technique outputs an observation of vehicle location at each time step. By fusing the three observations in the framework of a Kalman filter, vehicle location is estimated, from which the remaining traffic information of interest, including vehicle counts, speed, and class, is derived. In this process, high reliability is achieved by combining the strengths of the individual techniques. To achieve self-powered operation, a dynamic power management strategy is developed that reduces the system's total energy cost and optimizes the power supply during traffic monitoring based on traffic pattern recognition. The power manager adjusts the power supply by reconfiguring the system setup according to its estimate of the current traffic condition. A system prototype has been built, and multiple field experiments and simulations were conducted to demonstrate traffic monitoring accuracy and power reduction efficacy. / Master of Science / This thesis presents a novel traffic monitoring system that does not require an external power source. The system collects traffic variables including count, speed, and vehicle type. It uses two types of sensors and implements three different measuring techniques; by combining the results from the three techniques, higher accuracy and reliability are achieved. A power management component is also developed to reduce energy usage: based on the current or predicted system power state, the power manager selectively deactivates or turns off certain parts of the system. A system prototype has been built, and multiple field experiments and simulations were conducted to demonstrate traffic monitoring accuracy and power reduction efficacy. The experiments show that the system achieves high accuracy in estimating each variable and that a large portion of energy is saved by adopting power management.
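A minimal sketch of the Kalman-filter fusion described above follows: a constant-velocity filter along the lane, with the ToF, vision, and laser-spot observations applied as sequential position updates each frame. All noise values and dimensions are invented.

```python
import numpy as np

def kf_fuse(x, P, z_list, r_list, dt=0.05, q=1.0):
    """One Kalman cycle for state x = [position, speed] along the lane.

    z_list/r_list: position measurements and their variances from the
    three techniques (ToF, vision, laser-spot flow); for independent
    noise, sequential scalar updates equal one stacked update.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity model
    H = np.array([[1.0, 0.0]])
    x = F @ x
    P = F @ P @ F.T + q * np.eye(2)            # crude process noise (assumed)
    for z, r in zip(z_list, r_list):
        y = z - (H @ x)[0]                     # innovation
        s = (H @ P @ H.T)[0, 0] + r
        K = (P @ H.T / s).ravel()
        x = x + K * y
        P = (np.eye(2) - np.outer(K, H)) @ P
    return x, P

x, P = np.array([0.0, 10.0]), np.eye(2)
x, P = kf_fuse(x, P, z_list=[0.52, 0.49, 0.55], r_list=[0.04, 0.09, 0.01])
print(x)   # fused position/speed estimate
```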
149.
Performance Enhancement of Intrusion Detection System Using Advances in Sensor Fusion. Thomas, Ciza, 04 1900.
The technique of sensor fusion addresses the issues relating to the optimality of decision-making in the multiple-sensor framework. Advances in sensor fusion make it possible to perform intrusion detection for both rare and new attacks. This thesis discusses this assertion in detail and describes the theoretical and experimental work done to show its validity.
The attack-detector relationship is initially modeled and validated to understand the detection scenario, and the different metrics available for the evaluation of intrusion detection systems are introduced. The usefulness of the data set used for experimental evaluation is demonstrated. The issues connected with intrusion detection systems are analyzed, and the need for incorporating multiple detectors and fusing their outputs is established. Sensor fusion provides advantages with respect to reliability and completeness, in addition to intuitive and meaningful results. The goal of this work is to investigate how to combine data from diverse intrusion detection systems in order to improve the detection rate and reduce the false-alarm rate. The primary objective is to develop a theoretical and practical basis for enhancing the performance of intrusion detection systems using advances in sensor fusion with easily available intrusion detection systems. The thesis introduces the mathematical basis for sensor fusion in order to support the acceptability of sensor fusion for performance enhancement of intrusion detection systems, shows the practical feasibility of such enhancement, and discusses various sensor fusion algorithms, their characteristics, and related design and implementation issues. We show that it is possible to enhance the performance of intrusion detection systems by setting proper threshold bounds and also by rule-based fusion. We introduce an architecture called data-dependent decision fusion as a framework for building intrusion detection systems using sensor fusion based on data dependency. Furthermore, we provide information about the types of data, the data skewness problems, and the most effective algorithm for detecting different types of attacks. The thesis also proposes and incorporates a modified evidence theory for the fusion unit, which performs very well for the intrusion detection application; future improvements in individual IDSs can also be easily incorporated into this technique to obtain better detection capabilities. Experimental evaluation shows that the proposed methods are capable of detecting a significant percentage of rare and new attacks. The improved performance of the IDS using the algorithms developed in this thesis, if deployed fully, would contribute to an enormous reduction of successful attacks over a period of time. This has been demonstrated in the thesis and is a step towards making cyberspace safer.
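For flavor, the sketch below applies the standard (unmodified) Dempster's rule over the frame {attack, normal} to combine two detectors' outputs; the thesis's modified evidence theory differs, and the mass values here are invented.

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments over {'attack', 'normal'},
    with 'theta' holding the ignorance mass. This is the classical rule;
    the thesis's modified variant is not reproduced here."""
    hyps = ["attack", "normal"]
    # Conflict: mass assigned to contradictory singletons.
    k = sum(m1[a] * m2[b] for a in hyps for b in hyps if a != b)
    combined = {}
    for h in hyps:
        num = m1[h] * m2[h] + m1[h] * m2["theta"] + m1["theta"] * m2[h]
        combined[h] = num / (1.0 - k)
    combined["theta"] = m1["theta"] * m2["theta"] / (1.0 - k)
    return combined

# Two IDSs moderately suspecting an attack reinforce each other.
ids_a = {"attack": 0.6, "normal": 0.1, "theta": 0.3}
ids_b = {"attack": 0.5, "normal": 0.2, "theta": 0.3}
print(dempster_combine(ids_a, ids_b))   # attack mass rises to ~0.76
```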
150.
Machine Learning for Radar in Health Applications: Using machine learning with multiple radars to enhance fall detection. Raskov, Kristoffer; Christiansson, Oliver, January 2022.
Two mm-wave frequency modulated continuous wave (FMCW) radars were combined with a recurrent neural network (RNN) to perform fall detection. The purpose was to find methods for implementing a multi-radar setup for healthcare monitoring and to study the resulting models' resilience to interference and other obstacles, such as rearranging the radars in the room. Single-board computers (SBCs) controlled the radars to record and transfer data over Ethernet to a PC. The Ethernet connection also allowed synchronization with the network time protocol (NTP), which was necessary to put the data from the two sensors in correspondence. The proposed RNN used two bidirectional long-short term memory (Bi-LSTM) layers with L2 regularization and dropout layers. It had an overall accuracy of 95.15% and 98.11% recall on a test set. Performance in live testing varied with the arrangement of the radars: 98% accuracy with the radars along the same wall, 94% with the radars placed diagonally, and 90% with an alternative arrangement that the RNN model had not seen during training. The latter arrangement, however, yielded a recall of 95.7%, with false alarms reducing the overall performance. In conclusion, the model performed adequately for fall detection, even with different radar arrangements, but could still be sensitive to interference.
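As a sketch of the described network (two Bi-LSTM layers with L2 regularization and dropout, ending in a binary fall / no-fall output), a Keras version follows; the units, dropout rate, regularization factor, window length, and feature dimension are guesses, since the exact hyperparameters are not listed in this abstract.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_fall_detector(timesteps=128, features=64, l2=1e-4, drop=0.3):
    """Two Bi-LSTM layers with L2 regularization and dropout, ending in a
    binary fall / no-fall output. Input: radar feature sequences from the
    two fused sensors (all sizes here are assumptions)."""
    model = tf.keras.Sequential([
        layers.Input(shape=(timesteps, features)),
        layers.Bidirectional(layers.LSTM(
            64, return_sequences=True,
            kernel_regularizer=regularizers.l2(l2))),
        layers.Dropout(drop),
        layers.Bidirectional(layers.LSTM(
            32, kernel_regularizer=regularizers.l2(l2))),
        layers.Dropout(drop),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.Recall()])
    return model

model = build_fall_detector()
model.summary()
```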