271 |
Smartphone sensors are sufficient to measure smoothness of car driving / Smartphonesensorer är tillräckliga för att mäta mjukhet i bilkörning
Bränn, Jesper January 2017 (has links)
This study aims to determine whether smartphone sensors alone are sufficient to judge if someone driving a car is driving aggressively or smoothly. To determine this, data were first collected from the accelerometer, gyroscope, magnetometer and GPS sensors in the smartphone, along with values derived from these sensors by the iOS operating system. The data, together with synthesized data based on the collected data, were then used to train an artificial neural network. The results indicate that it is possible to give a binary judgment on aggressive or smooth driving with 97% accuracy and little model overfitting. The conclusion of this study is that smartphone sensors alone are sufficient to make a judgment on the drive. / This study aims to assess whether smartphone sensors are sufficient to determine whether someone is driving a car aggressively or smoothly. To determine this, data were first collected from the accelerometer, gyroscope, magnetometer and GPS sensors in a smartphone, together with values derived from these data by the iOS operating system. Once the data had been collected, an artificial neural network was trained on them. The results indicate that it is possible to give a binary verdict on aggressive versus smooth driving with 97% accuracy and with little overfitting. This means that smartphone sensors alone are sufficient to determine whether the driving was smooth or aggressive.
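As a rough illustration of the pipeline this abstract describes, the sketch below feeds windowed statistics from smartphone motion sensors into a small neural network for a binary smooth/aggressive label. The feature set, window length, network size, and the synthetic data are illustrative assumptions, not details taken from the thesis.

```python
# Hedged sketch: binary smooth/aggressive driving classifier on windowed
# smartphone-sensor features. Synthetic windows stand in for real recordings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def window_features(accel, gyro):
    """Summarize one fixed-length window of accelerometer/gyroscope samples."""
    return np.hstack([
        accel.mean(axis=0), accel.std(axis=0), np.abs(accel).max(axis=0),
        gyro.mean(axis=0), gyro.std(axis=0),
    ])

def make_window(aggressive):
    # Assumed toy model: aggressive windows show larger accelerations and turn rates.
    scale = 3.0 if aggressive else 1.0
    accel = rng.normal(0.0, 0.3 * scale, size=(100, 3))  # 100 samples, 3 axes
    gyro = rng.normal(0.0, 0.1 * scale, size=(100, 3))
    return window_features(accel, gyro)

labels = rng.random(2000) < 0.5
X = np.array([make_window(a) for a in labels])
y = labels.astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```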
|
272 |
Formation Control of UAVs for Positioning and Tracking of a Moving Target
Carsk, Robert, Jeremic, Alexander January 2023 (has links)
The potential of Unmanned Aerial Vehicles (UAVs) for surveillance and military applications is significant and continues to grow with technical advances in the field. The number of incidents where UAVs have intruded into unauthorized areas has increased in recent years, and armed drones are commonly used in modern warfare. It is therefore of great interest to investigate methods for UAVs to locate and track intruder drones, both to counter surveillance of unauthorized areas and to prevent attacks from intruder UAVs. This master's thesis studied how two autonomous seeker UAVs can be used cooperatively to track and pursue a target UAV. To locate the target UAV, simulated measurements of received Radio Frequency (RF) signals were used, from which bearing and Received Signal Strength (RSS) data were extracted. To track the target and predict its future position, the study employed an Extended Kalman Filter (EKF) on each seeker UAV; together the seekers acted as a Mobile Wireless Sensor Network (MWSN). The thesis explored two formation control methods to keep the seeker UAVs in formation while pursuing the target drone. The formation methods used the predicted position of the target to produce reference positions and/or reference distances for a controller to follow. A Distributed Model Predictive Controller (DMPC) was implemented on the seeker UAVs to pursue the target while maintaining formation and avoiding collisions. The EKF, DMPC, and formation methods were first evaluated individually in simulation to assess their performance and for parameter tuning. The respective modules were then combined into the complete system and tuned to achieve improved pursuit and formation in simulation. The results showed that, with the chosen parameters and a high level of measurement noise, the seeker UAVs were able to pursue the target with a combined average distance error of less than 2 m when the target drone flew in a square pattern at a velocity of 2 m/s. The quality of the pursuit was strongly affected by the velocity of the target and the initial positions of the seekers: a high target velocity and a large initial deviation from the reference positions/distances resulted in poorer pursuit.
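A hedged sketch of the estimation problem described above: an Extended Kalman Filter tracking a constant-velocity target from bearing and received-signal-strength measurements taken at one seeker. The motion model, noise levels, and path-loss parameters are assumptions for illustration, not the thesis's actual configuration (which also fuses two seekers and feeds a DMPC).

```python
# Hedged EKF sketch: 2D constant-velocity target tracked from bearing and
# RSS measurements taken at a (static, for simplicity) seeker position.
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)        # constant-velocity model
Q = np.diag([0.01, 0.01, 0.1, 0.1])               # process noise (assumed)
R = np.diag([np.deg2rad(3.0) ** 2, 2.0 ** 2])     # bearing [rad], RSS [dB]
seeker = np.array([0.0, 0.0])
P0_dbm, n_pl = -40.0, 2.0                         # assumed log-distance path-loss model

def h(x):
    dx, dy = x[0] - seeker[0], x[1] - seeker[1]
    d = np.hypot(dx, dy)
    return np.array([np.arctan2(dy, dx), P0_dbm - 10.0 * n_pl * np.log10(d)])

def H_jac(x, eps=1e-6):
    """Numerical Jacobian of h at x (keeps the sketch short)."""
    H = np.zeros((2, 4))
    for i in range(4):
        step = np.zeros(4)
        step[i] = eps
        H[:, i] = (h(x + step) - h(x - step)) / (2 * eps)
    return H

def ekf_step(x, P, z):
    x = F @ x                                      # predict
    P = F @ P @ F.T + Q
    H = H_jac(x)                                   # update
    y = z - h(x)
    y[0] = (y[0] + np.pi) % (2 * np.pi) - np.pi    # wrap bearing residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(4) - K @ H) @ P

x = np.array([50.0, 30.0, 0.0, 0.0])              # initial guess
P = np.diag([100.0, 100.0, 10.0, 10.0])
z = h(np.array([60.0, 40.0, 0.0, 0.0]))           # measurement of a target at (60, 40)
x, P = ekf_step(x, P, z)
print(x[:2])                                       # estimate moves toward (60, 40)
```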
|
273 |
Movement Estimation with SLAM through Multimodal Sensor Fusion
Cedervall Lamin, Jimmy January 2024 (has links)
In the field of robotics and autonomous navigation, Simultaneous Localization and Mapping (SLAM) is a technique for estimating poses while concurrently creating a map of the environment. Robotics applications often rely on various sensors for pose estimation, including cameras, inertial measurement units (IMUs), and more. Traditional discrete-time SLAM, utilizing stereo camera pairs and inertial measurement units, faces challenges such as time offsets between sensors. A solution to this issue is to use continuous-time models for pose estimation. This thesis explores and implements a continuous-time SLAM system, investigating the advantages of multi-modal sensor fusion over discrete stereo vision models. The findings indicate that incorporating an IMU into the system enhances pose estimation, providing greater robustness and accuracy compared to relying solely on visual SLAM. Furthermore, leveraging the continuous model's derivative and smoothness allows for decent pose estimation from fewer measurements, reducing both the amount of data and the computational resources required.
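One way to read "leveraging the continuous model's derivative and smoothness": fit a smooth curve to sparse pose samples and query both poses and their time derivatives at arbitrary timestamps, such as IMU times that fall between camera frames. The sketch below uses a cubic spline over positions only as an assumed stand-in; the thesis's actual continuous-time representation, and the rotation handling it would need, is not reproduced here.

```python
# Hedged sketch: a continuous-time position trajectory from sparse samples,
# queried together with its derivatives at arbitrary (e.g., IMU) timestamps.
import numpy as np
from scipy.interpolate import CubicSpline

t_key = np.linspace(0.0, 5.0, 11)                         # sparse keyframe times
p_key = np.c_[np.cos(t_key), np.sin(t_key), 0.1 * t_key]  # toy 3D positions

traj = CubicSpline(t_key, p_key, axis=0)                  # C2-continuous trajectory
vel = traj.derivative(1)                                   # analytic velocity
acc = traj.derivative(2)                                   # analytic acceleration

t_imu = np.arange(0.0, 5.0, 0.005)                         # 200 Hz query times
p, v, a = traj(t_imu), vel(t_imu), acc(t_imu)
# a could now be compared against bias/gravity-corrected accelerometer readings
# to form residuals at IMU rate without adding a discrete state per sample.
print(p.shape, v.shape, a.shape)
```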
|
274 |
Ultra-wideband location tracking of mobile devices
Hietarinta, Daniel January 2022 (has links)
Today's most widely used consumer tracking solutions rely on the Global Positioning System (GPS), which meets most needs when it comes to rough estimation of location. GPS, however, is limited in accuracy, with a horizontal error of around 5 meters, and cannot be used in areas where the satellites cannot provide a strong enough signal, e.g. indoors or near mountains and other sources of blockage. Ultra-wideband (UWB) addresses both of these issues, providing centimeter-level accuracy and working well indoors. This thesis presents the theory behind tracking devices with UWB, includes an implementation of the tracking, and covers noteworthy issues, shortcomings, and future work. The app developed within this thesis runs on Android mobile devices and can locate and track another Android mobile device running the same app. The results clearly show that the concept works, but more filtering is needed to remove the remaining noise.
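A small sketch of the geometric core of UWB tracking: estimating a 2D position from noisy ranges to anchors at known positions by nonlinear least squares. The anchor layout, noise level, and solver are assumptions for illustration, not the thesis's Android implementation; in practice the per-fix estimates would still be smoothed, which matches the note above that further filtering is needed.

```python
# Hedged sketch: 2D position from noisy UWB ranges to known anchors,
# solved as a nonlinear least-squares problem.
import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.0, 0.0], [8.0, 0.0], [8.0, 6.0], [0.0, 6.0]])  # assumed layout
true_pos = np.array([3.0, 2.5])
rng = np.random.default_rng(0)
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0.0, 0.05, 4)

def residuals(p):
    return np.linalg.norm(anchors - p, axis=1) - ranges

sol = least_squares(residuals, x0=np.array([4.0, 3.0]))
print("estimated position:", sol.x)   # close to true_pos for cm-level range noise
```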
|
275 |
Extended Kalman Filter as Observer for a Hydrofoiling Watercraft : Modelling of a new hydrofoiling concept, based on the Spherical Inverted Pendulum Model
Thålin, Adam January 2022 (has links)
Hydrofoiling has the potential to revolutionize watercraft, since it allows smoother and faster transport on water with lower energy consumption than traditional planing hulls. Although the concept of hydrofoiling has been around since the last century, developments in control theory and materials science, together with increased computing power, have led to growing interest in the technology, especially in water sports such as speed sailing and surfing, where it is superior in speed and comfort. Researchers and students at the Engineering Mechanics Department at KTH Royal Institute of Technology, Stockholm, are working on a new type of watercraft that uses a single hydrofoil, with the intention of minimizing drag for faster and smoother rides in various wave and weather conditions. The difficulty lies in understanding the relationship between the actuators and the mechanics. This thesis continues a previous thesis that designed a control strategy based on a model with 4 degrees of freedom (DOF). Due to simplifications and linearizations, the 4 DOF model was not rich enough to meet the performance requirements. This thesis presents a 6 DOF model by deriving the mechanical equations for the spherical inverted pendulum and the actuation from the hydrofoiling module. The inverted pendulum is a well-known control problem that can be solved with different strategies; by showing that the hydrofoiling concept can be modelled as an inverted pendulum, it is also shown that it can be controlled as one. The derived model is used together with an Extended Kalman Filter to create an observer. The observer is validated with a spherical inverted pendulum model in Matlab and the block-diagram environment Simulink. Simulation results show that the 6 DOF model is able to produce accurate state estimates of the watercraft even in the presence of stochastic measurement noise. It is also concluded that the viscous forces, which arise from the watercraft being partly surrounded by water and partly by air, need further investigation. / The principle of a hydrofoil is to generate lift from the water in the same way that aircraft wings generate lift from the air, lifting the hull of the craft out of the water. This reduces the frictional drag between hull and water, enabling faster and smoother transport on water with lower energy consumption than traditional planing hulls. In recent years the technology has seen a resurgence thanks to advances in fluid mechanics, control theory, and materials science. Together with the increasing computational power of computers, this has allowed hydrofoil designs to demonstrate superior speed and comfort in water sports such as competitive sailing and surfing. Researchers and students at the division of vehicle engineering and solid mechanics at KTH Royal Institute of Technology, Stockholm, are developing a new type of watercraft with a minimal hydrofoil design, FoilCart. Its design means that its mechanical behavior can be likened to an inverted pendulum, a well-known nonlinear control problem that can be solved in several ways. This thesis builds on a four-degree-of-freedom model from a previous thesis within the FoilCart project. Owing to simplifications and linearization of the system dynamics, the four-degree-of-freedom model was deficient and could not guarantee robust balancing of the craft except at the linearization point.
The model presented in this thesis has six degrees of freedom. The mechanics and system dynamics are derived from the spherical inverted pendulum together with the actuation from the hydrofoil module, without simplifications or linearization. The model is used in a Kalman filter to construct an observer for state reconstruction. The derived model is validated against a FoilCart model in Simulink. The results show that the observer can provide accurate state reconstruction even with simulated measurement noise in the measurement signal. The thesis aims to show how the inverted pendulum model can be used in a future implementation of state feedback based on the reconstructed states. Owing to the delimitations of the thesis, some fluid-mechanical aspects were not included in the derivation of this model. Since the craft is partly surrounded by water and partly by air, it would be interesting to investigate whether the accuracy of the state reconstruction could be improved by using advanced fluid mechanics.
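For reference, the unforced spherical inverted pendulum named in the title, a point mass m on a massless rod of length l with polar angle θ measured from the upright vertical and azimuth φ about a fixed pivot, obeys the equations sketched below; the thesis's 6 DOF model additionally includes the pivot motion and hydrofoil actuation, which are not shown here.

```latex
% Lagrangian (theta from the upright vertical, fixed pivot):
%   L = (1/2) m l^2 (\dot\theta^2 + \sin^2\theta\,\dot\varphi^2) - m g l \cos\theta
% Euler-Lagrange equations of the unforced spherical inverted pendulum:
\begin{aligned}
  \ddot{\theta}  &= \sin\theta\,\cos\theta\,\dot{\varphi}^{2} + \frac{g}{l}\,\sin\theta,\\
  \ddot{\varphi} &= -\,\frac{2\,\dot{\theta}\,\dot{\varphi}\,\cos\theta}{\sin\theta}.
\end{aligned}
```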
|
276 |
Sensor fusion between positioning system and mixed reality / Sensorfusion mellan positioneringssystem och mixed reality
Lifwergren, Anton, Jonsson, Jonatan January 2022 (has links)
In situations where we want to use mixed reality systems over larger areas, it is necessary for these systems to maintain a correct orientation with respect to the real world. A solution for synchronizing the mixed reality and the real world over time is therefore essential for a good user experience. This thesis proposes such a solution, utilizing both a local positioning system named WISPR, based on Ultra Wide Band technology, and an internal positioning system based on Google ARCore, which uses feature tracking. This is done by presenting a prototype mobile application that uses the positions from these two positioning systems to align the physical environment with a corresponding virtual 3D model. This enables increased environmental awareness by displaying virtual objects at accurately placed locations in the environment that are otherwise difficult or impossible to observe. Two transformation algorithms were implemented to align the physical environment with the corresponding virtual 3D model: Singular Value Decomposition and Orthonormal Matrices. The choice of algorithm showed minimal effect on both positional accuracy and computational cost. The most significant factor influencing the positional accuracy was found to be the quality of the sampled position pairs from the two positioning systems. The parameters used to ensure high quality for the sampled position pairs were the LPS accuracy threshold, sampling frequency, sampling distance, and sample limit. A fine-tuning process for these parameters is presented and resulted in a mean Euclidean distance error of less than 10 cm from a predetermined path in a sub-optimal environment. The aim of this thesis was not only to achieve high positional accuracy but also to make the application usable in environments such as mines, which are prone to worse conditions than those that could be evaluated in the available test environment. The design of the application therefore focuses on robustness and on handling connection losses from either positioning system. The resulting implementation can detect a connection loss, determine whether the loss is severe enough by quality-checking the transformation, and based on this apply the necessary recovery actions or recognize when such a recovery is unnecessary.
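A minimal sketch of the SVD-based (Kabsch-style) rigid alignment the abstract refers to: given corresponding position pairs from the two positioning systems, recover the rotation and translation that best map one frame onto the other. Weighting, the quality checks, and the ARCore/WISPR specifics are not reproduced; this is only the core least-squares step, with illustrative toy data.

```python
# Hedged sketch: least-squares rigid alignment (rotation + translation)
# between corresponding points from two positioning systems, via SVD.
import numpy as np

def rigid_align(A, B):
    """Find R, t minimizing sum ||R @ A[i] + t - B[i]||^2 (Kabsch, no scale)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

# Toy check: points in a local "ARCore" frame vs. a rotated/translated "LPS" frame.
rng = np.random.default_rng(0)
A = rng.uniform(-5, 5, size=(20, 3))
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
B = A @ R_true.T + np.array([1.0, -2.0, 0.5]) + rng.normal(0.0, 0.02, (20, 3))
R_est, t_est = rigid_align(A, B)
print(np.allclose(R_est, R_true, atol=0.05))                   # True
```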
|
277 |
Machine Learning for Spatial Positioning for XR Environments
Alraas, Khaled January 2024 (has links)
This bachelor's thesis explores the integration of machine learning (ML) with sensor fusion techniques to enhance spatial data accuracy in Extended Reality (XR) environments. With XR's revolutionary impact across various sectors, accurate localization in virtual environments becomes imperative. The thesis conducts a comprehensive literature review, highlighting advancements in indoor positioning technologies and the pivotal role of machine learning in refining sensor fusion for precise localization. It underscores the challenges in the XR field, such as signal interference, device heterogeneity, and data processing complexities. Through critical analysis, the study aims to bridge the gap in the practical application of ML, offering insights into developing scalable solutions for immersive virtual productions and into integrating advanced machine learning techniques in XR applications, with implications for technology development and user experience. The contribution is not merely theoretical: it also covers practical applications and advancements in real-time processing and adaptability in complex environments, aligning with existing research and extending it by addressing scalability and practical implementation challenges in XR environments. The study identifies key themes in the integration of ML with sensor fusion for XR, such as the enhancement of spatial data accuracy, challenges in real-time processing, and the need for scalable solutions. It concludes that the fusion of ML and sensor technologies not only enhances the accuracy of XR environments but also paves the way for more immersive and realistic virtual experiences.
|
278 |
Deep Learning for Sensor Fusion
Howard, Shaun Michael 30 August 2017 (has links)
No description available.
|
279 |
Benchmarking Visual-Inertial Odometry Filter-based Methods for Vehicles
Zahid, Muhammad January 2021 (has links)
Autonomous navigation has the opportunity to make roads safer and help perform search and rescue missions by reducing human error. Odometry methods are essential for autonomous navigation because they estimate how the robot moves based on the available sensors. This thesis aims to compare and evaluate a Cubature Kalman Filter (CKF) based approach for visual-inertial odometry (VIO) against traditional Extended Kalman Filter (EKF) based methods on criteria such as the accuracy of the results. VIO methods use camera and IMU sensors for the predictions. The Multi-State Constraint Kalman Filter (MSCKF) was used as the foundational VIO approach to evaluate the underlying filter, EKF versus CKF, while keeping the background conditions, such as the visual tracking pipeline, IMU model, and measurement model, constant. The evaluation metrics of absolute trajectory error (ATE) and relative error (RE) were used after tuning the filters on the EuRoC and KAIST datasets. It is shown that, based on the existing implementation, the filters have no statistically significant difference in performance when predicting motion estimates, even though the absolute trajectory error of position is lower for the EKF estimates. It is further shown that as the length of the trajectory increases, the estimation error for both filters grows without bound. Within the visual-inertial framework of the MSCKF, the CKF, which does not linearize the system, works as well as the well-established EKF and has the potential to perform better with more accurate nonlinear system and measurement models. / Autonomous navigation can make roads safer and help carry out rescue missions by reducing human error. Odometry methods are important for enabling autonomous navigation because they estimate how the robot moves based on the available sensors. This thesis aims to evaluate the Cubature Kalman Filter (CKF) for visual-inertial odometry (VIO) and compare it with the traditional Extended Kalman Filter (EKF) with respect to, among other things, accuracy. VIO methods use camera and IMU sensors for the estimates. The Multi-State Constraint Kalman Filter (MSCKF) was used as the base VIO method for evaluating the filter algorithms EKF and CKF, while the VIO-specific parts, such as the IMU model and measurement model, remained the same. The evaluation was based on absolute trajectory error (ATE) and relative error (RE) on the EuRoC and KAIST datasets. It is shown that, based on the existing implementation, the filters show no statistically significant difference in performance when predicting the motion, even though the absolute trajectory error of the position is lower for the EKF estimate. It is further shown that as the length of the trajectory increases, the estimation error of both filters grows without bound. Within the visual-inertial framework of the MSCKF, the CKF, which does not linearize the system, works as well as the well-established EKF and has the potential to perform better with more accurate nonlinear system and measurement models.
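To make the EKF/CKF distinction concrete: instead of linearizing the dynamics with a Jacobian, a CKF propagates 2n cubature points through the full nonlinear function. Below is a hedged sketch of that time-update step with a toy motion model; it is not the thesis's MSCKF implementation, and the measurement update with visual features is omitted.

```python
# Hedged sketch: Cubature Kalman Filter time update. 2n cubature points drawn
# from the current Gaussian are pushed through the nonlinear dynamics f, and
# the predicted mean/covariance are recovered from the propagated points.
import numpy as np

def ckf_predict(x, P, f, Q):
    n = x.size
    S = np.linalg.cholesky(P)                                 # P = S S^T
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])      # unit cubature points
    pts = x[:, None] + S @ xi                                 # shape (n, 2n)
    prop = np.column_stack([f(pts[:, i]) for i in range(2 * n)])
    x_pred = prop.mean(axis=1)                                # equal weights 1/(2n)
    dev = prop - x_pred[:, None]
    return x_pred, dev @ dev.T / (2 * n) + Q

def f(x, dt=0.1):
    # Toy planar dynamics (position, speed, heading), only for illustration.
    px, py, v, yaw = x
    return np.array([px + dt * v * np.cos(yaw),
                     py + dt * v * np.sin(yaw),
                     v,
                     yaw + 0.05 * dt])

x = np.array([0.0, 0.0, 1.0, 0.3])
P = np.diag([0.1, 0.1, 0.05, 0.02])
Q = 1e-3 * np.eye(4)
x_pred, P_pred = ckf_predict(x, P, f, Q)
print(x_pred)
```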
|
280 |
Enhanced 3D Object Detection And Tracking In Autonomous Vehicles: An Efficient Multi-modal Deep Fusion Approach
Priyank Kalgaonkar (10911822) 03 September 2024
This dissertation delves into a significant challenge for Autonomous Vehicles (AVs): achieving efficient and robust perception under adverse weather and lighting conditions. Systems that rely solely on cameras face difficulties with visibility over long distances, while radar-only systems struggle to recognize features like stop signs, which are crucial for safe navigation in such scenarios.

To overcome this limitation, this research introduces a novel deep camera-radar fusion approach using neural networks. This method ensures reliable AV perception regardless of weather or lighting conditions. Cameras, similar to human vision, are adept at capturing rich semantic information, whereas radars can penetrate obstacles like fog and darkness, similar to X-ray vision.

The thesis presents NeXtFusion, an innovative and efficient camera-radar fusion network designed specifically for robust AV perception. Building on the efficient single-sensor NeXtDet neural network, NeXtFusion significantly enhances object detection accuracy and tracking. A notable feature of NeXtFusion is its attention module, which refines critical feature representation for object detection, minimizing information loss when processing data from both cameras and radars.

Extensive experiments conducted on large-scale datasets such as Argoverse, Microsoft COCO, and nuScenes thoroughly evaluate the capabilities of NeXtDet and NeXtFusion. The results show that NeXtFusion excels in detecting small and distant objects compared to existing methods. Notably, NeXtFusion achieves a state-of-the-art mAP score of 0.473 on the nuScenes validation set, outperforming competitors like OFT by 35.1% and MonoDIS by 9.5%.

NeXtFusion's excellence extends beyond mAP scores. It also performs well in other crucial metrics, including mATE (0.449) and mAOE (0.534), highlighting its overall effectiveness in 3D object detection. Visualizations of real-world scenarios from the nuScenes dataset processed by NeXtFusion provide compelling evidence of its capability to handle diverse and challenging environments.
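The abstract does not spell out the internals of the NeXtFusion attention module, so the sketch below only illustrates the general pattern of attention-weighted fusion of camera and radar feature maps; the layer sizes, the channel-gating scheme, and the tensor shapes are assumptions for the example, not the dissertation's architecture.

```python
# Hedged sketch: a generic channel-attention fusion block for camera and
# radar feature maps (aligned on a common grid). Not the NeXtFusion module.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, cam_ch=128, radar_ch=32, out_ch=128):
        super().__init__()
        self.radar_proj = nn.Conv2d(radar_ch, cam_ch, kernel_size=1)
        self.gate = nn.Sequential(                      # per-channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * cam_ch, cam_ch, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(cam_ch, 2 * cam_ch, kernel_size=1),
            nn.Sigmoid(),
        )
        self.out = nn.Conv2d(2 * cam_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, cam_feat, radar_feat):
        radar_feat = self.radar_proj(radar_feat)        # match channel count
        fused = torch.cat([cam_feat, radar_feat], dim=1)
        fused = fused * self.gate(fused)                # re-weight channels
        return self.out(fused)

# Example shapes: batch of 2, 64x64 feature maps.
cam = torch.randn(2, 128, 64, 64)
radar = torch.randn(2, 32, 64, 64)
print(AttentionFusion()(cam, radar).shape)              # torch.Size([2, 128, 64, 64])
```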
|