171 |
Likelihood as a Method of Multi Sensor Data Fusion for Target Tracking
Gallagher, Jonathan G. 08 September 2009 (has links)
No description available.
|
172 |
Efficient and Adaptive Decentralized Sparse Gaussian Process Regression for Environmental Sampling Using Autonomous Vehicles
Norton, Tanner A. 27 June 2022 (has links)
In this thesis, I present a decentralized sparse Gaussian process regression (DSGPR) model with event-triggered, adaptive inducing points. This DSGPR model brings the advantages of sparse Gaussian process regression to a decentralized implementation. Being decentralized and sparse provides advantages that are ideal for multi-agent systems (MASs) performing environmental modeling. In this case, MASs need to model large amounts of information while having potentially intermittent communication connections. Additionally, the model needs to correctly perform uncertainty propagation between autonomous agents and ensure high prediction accuracy. For the model to meet these requirements, a bounded and efficient real-time sparse Gaussian process regression (SGPR) model is needed. I improve real-time SGPR models in these regards by introducing an adaptation of the mean shift and fixed-width clustering algorithms called radial clustering. Radial clustering enables real-time SGPR models to have an adaptive number of inducing points through an efficient inducing point selection process. I show how this clustering approach scales better than other seminal Gaussian process regression (GPR) and SGPR models for real-time purposes while attaining similar prediction accuracy and uncertainty reduction performance. Furthermore, this thesis addresses common issues inherent in decentralized frameworks, such as high computation costs, inter-agent message bandwidth restrictions, and data fusion integrity. These challenges are addressed in part through performing maximum consensus between local agent models, which enables the MAS to gain the advantages of decentralization while keeping data fusion integrity. The inter-agent communication restrictions are addressed through the contribution of two message passing heuristics called the covariance reduction heuristic and the Bhattacharyya distance heuristic.
These heuristics enable the user to reduce message passing frequency and message size through the Bhattacharyya distance and properties of spatial kernels. The entire DSGPR framework is evaluated on multiple simulated random vector fields. The results show that this framework effectively estimates vector fields using multiple autonomous agents. The vector field is assumed to be a wind field; however, the framework may be applied to the estimation of other scalar or vector fields (e.g., fluids, magnetic fields, electricity).
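The Bhattacharyya distance heuristic is only named in the abstract. As a hedged sketch of the underlying idea (the univariate-Gaussian form, the threshold value, and the function names here are illustrative assumptions, not the thesis's actual implementation), an agent could suppress a message when its local belief has barely moved since the last transmission:

```python
import math

def bhattacharyya_gaussian(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two univariate Gaussians."""
    term_mean = 0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
    term_var = 0.25 * math.log(0.25 * (var1 / var2 + var2 / var1 + 2.0))
    return term_mean + term_var

def should_transmit(local, last_sent, threshold=0.05):
    """Send an update only when the local belief (mean, variance) has
    drifted far enough from the belief the neighbours last received."""
    d = bhattacharyya_gaussian(local[0], local[1], last_sent[0], last_sent[1])
    return d > threshold
```

The distance is zero when the two beliefs coincide and grows with both mean shift and variance mismatch, which is what makes it usable as an event trigger for message passing.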
|
173 |
Deep multi-modal U-net fusion methodology of infrared and ultrasonic images for porosity detection in additive manufacturing
Zamiela, Christian E 10 December 2021 (has links)
We developed a deep fusion methodology of non-destructive in-situ infrared and ex-situ ultrasonic images for localizing porosity without compromising the integrity of printed components, with the aim of improving the laser-based additive manufacturing (LBAM) process. A core challenge with LBAM is that lack of fusion between successive layers of printed metal can lead to porosity and abnormalities in the printed component. We developed a sensor fusion U-Net methodology that fills the gap in fusing in-situ thermal images with ex-situ ultrasonic images by employing a U-Net convolutional neural network (CNN) for feature extraction and two-dimensional object localization. We modify the U-Net framework with inception and LSTM block layers. We validate the models by comparing our single-modality models and fusion models with ground truth X-ray computed tomography images. The inception U-Net fusion model localized porosity with the highest mean intersection over union score of 0.557.
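The reported 0.557 is a mean intersection over union (IoU) score. For readers unfamiliar with the metric, a minimal sketch of how mean IoU over binary porosity masks is computed (pure-Python 0/1 masks here stand in for the actual segmentation outputs):

```python
def iou(pred, truth):
    """Intersection over union of two binary masks (lists of 0/1 rows)."""
    inter = sum(p & t for pr, tr in zip(pred, truth) for p, t in zip(pr, tr))
    union = sum(p | t for pr, tr in zip(pred, truth) for p, t in zip(pr, tr))
    # Convention: two empty masks agree perfectly.
    return inter / union if union else 1.0

def mean_iou(pairs):
    """Average IoU over (prediction, ground truth) mask pairs."""
    scores = [iou(p, t) for p, t in pairs]
    return sum(scores) / len(scores)
```

A score of 1.0 means predicted and ground-truth porosity regions overlap exactly; 0.557 means the overlap is a little over half the combined area.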
|
174 |
Development of a Low-Cost and Easy-to-Use Wearable Knee Joint Monitoring System / A Wearable Knee Joint Monitoring System
Faisal, Abu Ilius January 2020 (has links)
The loss of mobility among the elderly has become a significant health and socio-economic concern worldwide. Poor mobility due to gradual deterioration of the musculoskeletal system leaves older adults more vulnerable to serious health risks such as joint injuries, bone fractures and traumatic brain injury. The costs associated with the treatment and management of these injuries place a huge financial and social burden on governments, society and families. The knee is one of the key joints that bear most of the body weight, so its proper function is essential for good mobility. Further, continuous monitoring of the knee joint can potentially provide important quantitative information related to knee health and mobility that can be utilized for health assessment and early diagnosis of mobility-related problems.
In this research work, we developed an easy-to-use, low-cost, multi-sensor-based wearable device to monitor and assess the knee joint and proposed an analysis system to characterize and classify an individual’s knee joint features with respect to the baseline characteristics of his/her peer group. The system is composed of a set of different miniaturized sensors (inertial motion, temperature, pressure and galvanic skin response) to obtain linear acceleration, angular velocity, skin temperature, muscle pressure and sweat rate of a knee joint during different daily activities. A database is constructed from 70 healthy adults in the age range from 18 to 86 years using the combination of all signals from our knee joint monitoring system.
In order to extract relevant features from the datasets, we employed computationally efficient methods such as the complementary filter and wavelet packet decomposition. The minimum redundancy maximum relevance algorithm and principal component analysis were used to select key features and reduce the dimension of the feature vectors. The obtained features were classified using a support vector machine, forming two distinct clusters in the baseline knee joint characteristics corresponding to gender, age, body mass index and knee/leg health condition. Thus, this simple, easy-to-use, cost-effective, non-invasive and unobtrusive knee monitoring system can be used for real-time evaluation and early diagnosis of joint disorders, fall detection, mobility monitoring and rehabilitation. / Thesis / Master of Applied Science (MASc)
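The complementary filter mentioned above is a standard way to fuse gyroscope and accelerometer signals into a joint angle. A minimal sketch (the sampling interval, filter coefficient, and signal names are assumptions, not the thesis's tuned values):

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse gyroscope angular rate (deg/s) with an accelerometer-derived
    angle (deg): effectively high-pass the integrated gyro signal and
    low-pass the accelerometer signal."""
    angle = accel_angles[0]  # initialise from the absolute sensor
    out = []
    for w, a in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + w * dt) + (1 - alpha) * a
        out.append(angle)
    return out
```

A coefficient close to 1 trusts the drift-free short-term gyro integration, while the accelerometer slowly corrects the long-term drift.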
|
175 |
A Novel Highly Accurate Wireless Wearable Human Locomotion Tracking and Gait Analysis System via UWB Radios
Shaban, Heba Ahmed 09 June 2010 (has links)
Gait analysis is the systematic study of human walking. Clinical gait analysis is the process by which quantitative information is collected for the assessment and decision-making of any gait disorder. Although observational gait analysis is the therapist's primary clinical tool for describing the quality of a patient's walking pattern, it can be very unreliable. Modern gait analysis is facilitated through the use of specialized equipment. Currently, accurate gait analysis requires dedicated laboratories with complex settings and highly skilled operators. Wearable locomotion tracking systems are available, but they are not sufficiently accurate for clinical gait analysis. At the same time, wireless healthcare is evolving. Particularly, ultra wideband (UWB) is a promising technology that has the potential for accurate ranging and positioning in dense multi-path environments. Moreover, impulse-radio UWB (IR-UWB) is suitable for low-power and low-cost implementation, which makes it an attractive candidate for wearable, low-cost, and battery-powered health monitoring systems. The goal of this research is to propose and investigate a full-body wireless wearable human locomotion tracking system using UWB radios. Ultimately, the proposed system should be capable of distinguishing between normal and abnormal gait, making it suitable for accurate clinical gait analysis. / Ph. D.
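The ranging and positioning capability that the abstract attributes to UWB reduces to time-of-flight ranges plus multilateration. A hedged illustration of that geometry (three fixed anchors and noise-free one-way time-of-flight are simplifying assumptions; a body-worn system would involve many more practical details):

```python
C = 299_792_458.0  # speed of light, m/s

def toa_range(tof_seconds):
    """Range implied by a one-way time-of-flight measurement."""
    return C * tof_seconds

def trilaterate(anchors, ranges):
    """2D position from three anchor positions and measured ranges,
    by linearizing the circle equations (subtract the first from the
    others) and solving the resulting 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = ranges
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 + y2**2 - x1**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 + y3**2 - x1**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

A nanosecond of timing error corresponds to roughly 30 cm of range error, which is why UWB's fine time resolution matters for sub-metre body-segment tracking.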
|
176 |
Transformer enhanced affordance learning for autonomous driving
Sankar, Rajasekar 30 October 2024 (has links)
Most existing autonomous driving perception approaches rely on the direct perception method with camera sensors, yet they often overlook the valuable 3D spatial data provided by other sensors, such as LiDAR. This Master's thesis investigates enhancing affordance learning through a multimodal fusion transformer, aiming to refine autonomous vehicle (AV) perception and scene interpretation by effectively integrating multi-sensor data. Our approach introduces a two-stage network architecture: the first stage employs a backbone to fuse sensor data and to extract features, while the second stage employs a Taskblock MLP network to predict both classification affordances (junction, red light, pedestrian, and vehicle hazards) and regression affordances (relative angle, lateral distance, and target vehicle distance). We utilized the TransFuser backbone, based on Imitation Learning, to integrate image and LiDAR BEV data using a self-attention mechanism and to extract the feature map. Our results are compared against image-only architectures like Latent TransFuser and other sensor fusion backbones. Integration with the OmniOpt 2 tool, developed by ScaDS.AI, facilitates hyperparameter optimization, enhancing model performance. We assessed our model's effectiveness using the CARLA Town02 dataset as well as the real-world KITTI-360 dataset, demonstrating significant improvements in affordance prediction accuracy and reliability. This advancement underscores the potential of combining LiDAR and image data via transformer-based fusion to create safer and more efficient autonomous driving systems.
List of Figures
List of Tables
Abbreviations
1 Introduction
1.1 Autonomous Driving: Overview
1.1.1 From highly automated to autonomous
1.1.2 Autonomy levels
1.1.3 Perception systems
1.2 Three Paradigms for autonomous driving
1.3 Sensor Fusion: Global context capture
1.4 Research Questions and Methods
1.4.1 Research Questions (RQ)
1.4.2 Research Methods (RM)
1.5 Structure of the work
2 Research Background
2.1 Affordance Learning
2.2 Multi-Modal Autonomous Driving
2.3 Sensor Fusion Methods for Object Detection and Motion Forecasting
2.4 Attention for Autonomous Driving
3 Methodology
3.1 Problem Formulation
3.1.1 Problem setting A
3.1.2 Problem setting B
3.2 Input and Output parametrization
3.2.1 Input Representation
3.2.2 Output Representation
3.3 Definition of affordances
3.4 Proposed Methodology
3.5 Detailed overview of the Proposed Architecture
3.5.1 Stage 1: TransFuser Backbone - Multimodal fusion transformer
3.5.2 Fused Feature extraction
3.5.3 Annotations extraction
3.5.4 Stage 2: Task-Block MLP Network architecture
3.6 Loss Functions
3.6.1 Stage 1: Loss Function
3.6.2 Stage 2: Loss Function
3.6.3 Total Loss Function
3.7 Other Backbone Architectures
3.7.1 Latent TransFuser
3.7.2 Geometric Fusion
3.7.3 Late Fusion
3.8 Hyperparameter Optimization: OmniOpt 2
4 Training and Validation
4.1 Dataset definition
4.1.1 Types of Data
4.1.2 Overview of Dataset Distribution
4.2 Implementation Details
4.3 Training
4.3.1 Stage 1: Backbone architecture training
4.3.2 Stage 2: TaskBlock MLP training
4.3.3 Training Parameter Study
4.4 Loss curves
4.4.1 Stage 1 Loss curve
4.4.2 Stage 2 Loss curve
4.5 Validation
4.5.1 Preparation of an optimization project
5 Experimental Results
5.1 Quantitative Insights into Regression-Based Affordance Predictions
5.1.1 Comparative Analysis of Error Metrics against each Backbone
5.1.2 Graphical Analysis of error metrics performance for TransFuser
5.2 Quantitative Insights into Classification-Based Affordance Predictions
5.2.1 Comparative Analysis of Classification Performance Metrics against each Backbone
5.2.2 Graphical Analysis of classification performance for TransFuser
5.3 OmniOpt 2 Hyper-optimization results
5.4 Affordance Prediction Dashboard
6 Evaluation
6.1 Evaluation with CARLA Test dataset
6.1.1 Results
6.2 Evaluation with real world: The KITTI Dataset
6.2.1 Dataset
6.2.2 Results
7 Conclusion
Appendix
A Ablation Study
A.1 Latent Transfuser with MLP
A.2 Results
A.2.1 Comparative Analysis of Error Metrics in Latent Transfuser with Transformer and MLP
A.2.2 Comparative Analysis of Classification Performance Metrics in Latent Transfuser with Transformer and MLP
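The abstract describes a second stage that jointly predicts classification and regression affordances, and the outline lists a combined total loss. As a hedged sketch of that multi-task structure (the thesis's actual loss terms and weights are not given here, so binary cross-entropy, L1 error, and unit weights are assumptions):

```python
import math

def bce(p, y, eps=1e-7):
    """Binary cross-entropy for one predicted probability p and label y."""
    p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def total_loss(cls_pred, cls_true, reg_pred, reg_true, w_cls=1.0, w_reg=1.0):
    """Weighted sum of mean BCE over the classification affordances
    (e.g. hazard flags) and mean L1 error over the regression
    affordances (e.g. relative angle, lateral distance)."""
    l_cls = sum(bce(p, y) for p, y in zip(cls_pred, cls_true)) / len(cls_true)
    l_reg = sum(abs(p - y) for p, y in zip(reg_pred, reg_true)) / len(reg_true)
    return w_cls * l_cls + w_reg * l_reg
```

Tuning the relative weights of the two terms is exactly the kind of hyperparameter search a tool such as OmniOpt 2 can automate.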
|
177 |
Hybrid marker-less camera pose tracking with integrated sensor fusion
Moemeni, Armaghan January 2014 (has links)
This thesis presents a framework for a hybrid model-free marker-less inertial-visual camera pose tracking with an integrated sensor fusion mechanism. The proposed solution addresses the fundamental problem of pose recovery in computer vision and robotics and provides an improved solution for wide-area pose tracking that can be used on mobile platforms and in real-time applications. In order to arrive at a suitable pose tracking algorithm, an in-depth investigation was conducted into current methods and sensors used for pose tracking. Preliminary experiments were then carried out on hybrid GPS-Visual as well as wireless micro-location tracking in order to evaluate their suitability for camera tracking in wide-area or GPS-denied environments. As a result of this investigation a combination of an inertial measurement unit and a camera was chosen as the primary sensory inputs for a hybrid camera tracking system. After following a thorough modelling and mathematical formulation process, a novel and improved hybrid tracking framework was designed, developed and evaluated. The resulting system incorporates an inertial system, a vision-based system and a recursive particle filtering-based stochastic data fusion and state estimation algorithm. The core of the algorithm is a state-space model for motion kinematics which, combined with the principles of multi-view camera geometry and the properties of optical flow and focus of expansion, form the main components of the proposed framework. The proposed solution incorporates a monitoring system, which decides on the best method of tracking at any given time based on the reliability of the fresh vision data provided by the vision-based system, and automatically switches between visual and inertial tracking as and when necessary. The system also includes a novel and effective self-adjusting mechanism, which detects when the newly captured sensory data can be reliably used to correct the past pose estimates. 
The corrected state is then propagated through to the current time in order to prevent sudden pose estimation errors manifesting as a permanent drift in the tracking output. Following the design stage, the complete system was fully developed and then evaluated using both synthetic and real data. The outcome shows an improved performance compared to existing techniques, such as PTAM and SLAM. The low computational cost of the algorithm enables its application on mobile devices, while the integrated self-monitoring, self-adjusting mechanisms allow for its potential use in wide-area tracking applications.
|
178 |
Development of distributed control system for SSL soccer robots
Holtzhausen, David Schalk 03 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2013. / ENGLISH ABSTRACT: This thesis describes the development of a distributed control system for SSL soccer robots. The project continues work done to develop a robotics research platform at Stellenbosch University. The wireless communication system is implemented using Player middleware, which enables high-level programming of the robot drivers and communication clients and results in an easily modifiable system. The system is developed to be used as either a centralised or a decentralised control system. The software of the robot's motor controller unit is updated to ensure optimal movement, since slippage of the robot's wheels restricts its movement capabilities. Trajectory tracking software is developed to ensure that the robot follows the desired trajectory while operating within its physical limits.
The distributed control architecture reduces the robots' dependency on the wireless network and the off-field computer. The robots are given some autonomy by integrating navigation and control on the robot itself. Kalman filters are designed to estimate the robot's translational and rotational velocities. The Kalman filters fuse vision data from an overhead vision system with inertial measurements from an on-board IMU. This ensures reliable and accurate position, orientation and velocity information on the robot. Test results show an improvement in controller performance as a result of the proposed system.
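The Kalman filters described in this abstract fuse overhead vision with on-board IMU measurements. A minimal one-dimensional sketch of that predict-correct structure (a constant-velocity model with hand-picked noise variances, not the thesis's actual filter design):

```python
def kalman_fuse(accels, vision_pos, dt=0.02, q=0.05, r=0.01):
    """1D Kalman filter: IMU acceleration drives the prediction step,
    overhead-vision position measurements correct it.
    State x = [position, velocity]; returns per-step position estimates."""
    x = [vision_pos[0], 0.0]
    P = [[1.0, 0.0], [0.0, 1.0]]
    est = []
    for a, z in zip(accels, vision_pos):
        # predict with F = [[1, dt], [0, 1]] and acceleration input a
        x = [x[0] + dt * x[1] + 0.5 * dt * dt * a, x[1] + dt * a]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # correct with the vision measurement z of position (H = [1, 0])
        s = P[0][0] + r
        k0, k1 = P[0][0] / s, P[1][0] / s
        y = z - x[0]
        x = [x[0] + k0 * y, x[1] + k1 * y]
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        est.append(x[0])
    return est
```

The IMU fills in the fast dynamics between vision frames, while the absolute vision fixes keep the integrated inertial estimate from drifting.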
|
179 |
Sensor Fusion for Closed-loop Control of Upper-limb Prostheses
Markovic, Marko 18 April 2016 (has links)
No description available.
|
180 |
Stabilization, Sensor Fusion and Path Following for Autonomous Reversing of a Full-Scale Truck and Trailer System
Nyberg, Patrik January 2016 (has links)
This thesis investigates and implements the sensor fusion necessary to autonomously reverse a full-scale truck and trailer system. This is done using a LiDAR mounted on the rear of the truck along with an RTK-GPS. It is shown that the relative angles between truck and dolly and between dolly and trailer can be estimated, along with the global position and global heading of the trailer. This is then implemented in one of Scania's test vehicles, giving it the ability to continuously estimate these states. A controller is then implemented, showing that the full-scale system can be stabilised in reverse motion. The controller is tested both on a static reference path and on a reference path received from a motion planner. In these tests, the controller is able to stabilise the system well, allowing the truck to perform complex manoeuvres backwards. A small lateral tracking error is present, which needs to be investigated further.
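The instability that makes reversing a truck and trailer hard to control shows up already in a minimal kinematic model. The following sketch (a single on-axle trailer with assumed geometry, not Scania's vehicle model or the thesis's controller) lets a small hitch-angle offset grow under zero steering in reverse:

```python
import math

def simulate_reversing(steer, v=-1.0, l_truck=4.0, l_trailer=6.0,
                       dt=0.05, steps=100):
    """Euler-integrate a simple on-axle truck-trailer kinematic model.
    steer is the front-wheel steering angle (rad); v < 0 means reversing.
    Returns the hitch angle (truck heading minus trailer heading) over
    time, starting from a small initial offset."""
    psi_truck, psi_trailer = 0.0, 0.05
    hitch = []
    for _ in range(steps):
        beta = psi_truck - psi_trailer
        psi_truck += dt * (v / l_truck) * math.tan(steer)
        psi_trailer += dt * (v / l_trailer) * math.sin(beta)
        hitch.append(psi_truck - psi_trailer)
    return hitch
```

Driving forward, the same offset dies out on its own; in reverse it grows toward a jackknife, which is why reversing requires active stabilisation and, in turn, accurate estimates of the relative angles.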
|