51

A Deep-learning based Approach for Foot Placement Prediction

Lee, Sung-Wook 24 May 2023 (has links)
Foot placement prediction can be important for exoskeleton and prosthesis controllers, human-robot interaction, and body-worn systems that prevent slips or trips. Previous studies investigating foot placement prediction have been limited to predicting foot placement during the swing phase, and do not fully consider contextual information such as the preceding step or the stance phase before push-off. In this study, a deep learning-based foot placement prediction approach was proposed, where the deep learning models were designed to sequentially process data from three IMU sensors mounted on the pelvis and feet. The raw sensor data are pre-processed to generate multi-variable time-series data for training two deep learning models, where the first model estimates the gait progression and the second model subsequently predicts the next foot placement. The ground-truth gait phase and foot placement data were acquired from a motion capture system. Ten healthy subjects were invited to walk naturally at different speeds on a treadmill. In cross-subject learning, the trained models had a mean distance error of 5.93 cm for foot placement prediction. In single-subject learning, the prediction accuracy improved with additional training data, and a mean distance error of 2.60 cm was achieved by fine-tuning the cross-subject validated models with the target subject's data. Even when predicting from 25-81% of the gait cycle, the mean distance errors were only 6.99 cm and 3.22 cm for cross-subject and single-subject learning, respectively. / Master of Science / This study proposes a new approach for predicting where a person's foot will land during walking, which could be useful in controlling robots and wearable devices that work with humans to prevent events such as slips and falls and to allow smoother human-robot interaction. Although foot placement prediction has great potential in various domains, current work in this area is limited in terms of practicality and accuracy. The proposed approach uses data from inertial sensors attached to the pelvis and feet, and two deep learning models are trained to estimate the person's walking pattern and predict their next foot placement. The approach was tested on ten healthy individuals walking at different speeds on a treadmill, and achieved state-of-the-art results. The results suggest that this approach could be a promising method when sufficient data from multiple people are available.
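A minimal sketch of the two-stage pipeline this abstract describes, assuming PyTorch; the layer sizes, window length, and channel count (three 6-axis IMUs) are invented for illustration and are not specified by the thesis:

```python
import torch
import torch.nn as nn

class GaitPhaseEstimator(nn.Module):
    """Stage 1: estimate gait progression (0-1) from an IMU window."""
    def __init__(self, n_channels=18, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):               # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1]))  # phase in [0, 1]

class FootPlacementPredictor(nn.Module):
    """Stage 2: predict the next foot placement (x, y) from the same
    IMU window plus the estimated gait phase from stage 1."""
    def __init__(self, n_channels=18, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden + 1, 2)

    def forward(self, x, phase):
        out, _ = self.lstm(x)
        return self.head(torch.cat([out[:, -1], phase], dim=1))

# Three IMUs (pelvis, left foot, right foot) x 6 channels = 18 inputs.
window = torch.randn(8, 100, 18)        # batch of 100-sample windows
phase = GaitPhaseEstimator()(window)
placement = FootPlacementPredictor()(window, phase)  # (8, 2), in metres
```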
52

Multi-Sensor, Fused Airspace Monitoring Systems for Automated Collision Avoidance between UAS and Crewed Aircraft

Post, Alberto Martin 07 January 2022 (has links)
The autonomous operation of Uncrewed Aircraft Systems (UAS) beyond the pilot in command's visual line of sight is currently restricted due to a lack of cost-effective surveillance sensors robust enough to operate in low-level airspace. The sensors currently available either locate targets with high accuracy but at too short a range to be usable, or have long range but suffer gaps in coverage due to varying terrain. Sensor fusion is one possible method of combining the strengths of different sensors to increase overall airspace surveillance quality and allow for robust detect and avoid (DAA) capabilities, enabling beyond-visual-line-of-sight operations. This thesis explores some of the current techniques and challenges in using sensor fusion for collision avoidance between crewed aircraft and UAS. It demonstrates an example method of sensor fusion using data from two radars and an ADS-B receiver. In this thesis, a test bed for ground-based airspace monitoring surveillance is proposed as a low-cost method of long-term sensor evaluation. Lastly, a potential method of heterogeneous, score-based sensor fusion is presented and simulated. / Master of Science / Long-range operations of Uncrewed Aircraft Systems (UAS) are currently restricted due to a lack of cost-effective surveillance sensors that work well enough near the ground in the presence of changing terrain. The sensors currently available either locate targets with high accuracy but at too short a range to be usable, or have long range but suffer gaps in coverage due to varying terrain. Sensor fusion addresses this problem by combining the strengths of different sensors to allow for better collision avoidance capabilities, enabling these long-range operations. This thesis explores some of the current techniques and challenges in using sensor fusion for collision avoidance between crewed aircraft and UAS. It demonstrates an example method of sensor fusion using data from two radars and an ADS-B receiver. In this thesis, a test bed for ground-based airspace monitoring surveillance is proposed for long-term sensor testing. Lastly, a potential method of sensor fusion using different types of sensors is presented and simulated.
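As an illustration of the score-based fusion idea, the sketch below fuses position reports from two radars and an ADS-B receiver by confidence-weighted averaging; the scores and the weighting scheme are assumptions for illustration, not the method evaluated in the thesis:

```python
import numpy as np

# Illustrative per-sensor confidence scores (assumed, not thesis values);
# in practice such scores could depend on range, geometry, or signal quality.
SENSOR_SCORES = {"radar_a": 0.6, "radar_b": 0.5, "adsb": 0.9}

def fuse_tracks(detections):
    """Score-weighted fusion of position reports for one aircraft.

    detections: list of (sensor_name, np.array([x, y, z])) pairs.
    Returns the fused position estimate.
    """
    weights = np.array([SENSOR_SCORES[name] for name, _ in detections])
    positions = np.stack([pos for _, pos in detections])
    return (weights[:, None] * positions).sum(axis=0) / weights.sum()

fused = fuse_tracks([
    ("radar_a", np.array([1200.0, 350.0, 90.0])),
    ("radar_b", np.array([1215.0, 340.0, 95.0])),
    ("adsb",    np.array([1208.0, 346.0, 92.0])),
])
print(fused)  # score-weighted position, dominated by the ADS-B report
```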
53

Fusion of Laser Range-Finding and Computer Vision Data for Traffic Detection by Autonomous Vehicles

Cacciola, Stephen J. 21 January 2008 (has links)
The DARPA Challenges were created in response to a Congressional and Department of Defense (DoD) mandate that one-third of US operational ground combat vehicles be unmanned by the year 2015. The Urban Challenge is the latest competition that tasks industry, academia, and inventors with designing an autonomous vehicle that can safely operate in an urban environment. A basic and important capability for a successful competition vehicle is the ability to detect and classify objects. The most important objects to classify are other vehicles on the road. Navigating traffic, which includes other autonomous vehicles, is critical to the obstacle avoidance and decision-making processes. This thesis provides an overview of the algorithms and software designed to detect and locate these vehicles. By combining the individual strengths of laser range-finding and vision processing, the two sensors are able to detect and locate vehicles more accurately than either sensor acting alone. The range-finding module uses the built-in object detection capabilities of IBEO Alasca laser rangefinders to detect the location, size, and velocity of nearby objects. The Alasca units are designed for automotive use, and so they alone are able to identify nearby obstacles as vehicles with a high level of certainty. After some basic filtering, an object detected by the Alasca scanner is given an initial classification based on its location, size, and velocity. The vision module uses the location of these objects as determined by the range finder to extract regions of interest from large images through perspective transformation. These regions of the image are then examined for distinct characteristics common to all vehicles, such as tail lights and tires. Checking multiple characteristics helps reduce the number of false-negative detections. Since the entire image is never processed, the image size and resolution can be maximized to ensure the characteristics are as clear as possible. The existence of these characteristics is then used to modify the certainty level from the IBEO and determine if a given object is a vehicle. / Master of Science
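The certainty-update step can be pictured as a naive Bayes combination of vision cues with the laser scanner's prior classification; a sketch with illustrative likelihood values (the thesis's actual update rule may differ):

```python
def update_vehicle_certainty(prior, features_found, n_features_checked,
                             p_feature_given_vehicle=0.8,
                             p_feature_given_other=0.2):
    """Bayesian-style update of the laser scanner's vehicle classification
    using vision cues (e.g. tail lights, tires) found in the image ROI.

    The likelihood values are illustrative assumptions, not thesis values.
    """
    p_v, p_o = prior, 1.0 - prior
    for i in range(n_features_checked):
        found = i < features_found
        # Factor in each checked characteristic, whether present or absent.
        l_v = p_feature_given_vehicle if found else 1 - p_feature_given_vehicle
        l_o = p_feature_given_other if found else 1 - p_feature_given_other
        p_v, p_o = p_v * l_v, p_o * l_o
    return p_v / (p_v + p_o)

# IBEO gives an initial 0.7 certainty; vision confirms 2 of 3 characteristics.
print(update_vehicle_certainty(0.7, features_found=2, n_features_checked=3))
```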
54

Environment Mapping in Larger Spaces

Ciambrone, Andrew James 09 February 2017 (has links)
Spatial mapping, or environment mapping, is the process of exploring a real-world environment and creating its digital representation. To create convincing mixed reality programs, an environment mapping device must be able to detect a user's position and map the user's environment. Currently available commercial spatial mapping devices mostly use an infrared camera to obtain a depth map, which is effective only for short to medium distances (3-4 meters). This work describes an extension to existing environment mapping devices and techniques to enable mapping of larger architectural environments using a combination of camera, Inertial Measurement Unit (IMU), and Light Detection and Ranging (LIDAR) devices, supported by sensor fusion and computer vision techniques. There are three main parts to the proposed system: the first is data collection and data fusion using embedded hardware, the second is data processing (segmentation), and the third is creating a geometry mesh of the environment. The developed system was evaluated on its ability to determine the dimensions of the room and of objects within the room. This low-cost system can significantly expand the mapping range of existing mixed reality devices such as the Microsoft HoloLens. / Master of Science
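As a toy version of the evaluation task (recovering room dimensions), the sketch below fits an axis-aligned bounding box to a single horizontal LIDAR sweep; the actual pipeline, with camera/IMU fusion, segmentation, and meshing, is far more involved:

```python
import numpy as np

def room_dimensions(scan_angles, scan_ranges):
    """Estimate the footprint of a rectangular room from one horizontal
    LIDAR sweep (angles in radians, ranges in metres).

    A deliberately simple axis-aligned bounding box, assuming the scanner
    sits inside the room and the walls are axis-aligned.
    """
    xs = scan_ranges * np.cos(scan_angles)
    ys = scan_ranges * np.sin(scan_angles)
    return xs.max() - xs.min(), ys.max() - ys.min()

angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
ranges = np.full(360, 4.0)          # stand-in for a real sweep of a room
print(room_dimensions(angles, ranges))  # ~ (8.0, 8.0) metres
```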
55

Perception and Planning of Connected and Automated Vehicles

Mangette, Clayton John 09 June 2020 (has links)
Connected and Automated Vehicles (CAVs) represent a growing area of study in robotics and automotive research. Their potential benefits of increased traffic flow, reduced on-road accidents, and improved fuel economy make them an attractive option. While some autonomous features such as Adaptive Cruise Control and Lane Keep Assist are already integrated into consumer vehicles, they are limited in scope, and innovation is required to realize fully autonomous vehicles. This work addresses the design problems of perception and planning in CAVs. A decentralized sensor fusion system is designed using multi-target tracking to identify targets within a vehicle's field of view, label each target with the lane it occupies, and highlight the most important object (MIO) for Adaptive Cruise Control. Its performance is tested using the Optimal Sub-Pattern Assignment (OSPA) metric and the rate of correct MIO assignment; the system assigns the MIO with an average accuracy of 98%. The rest of this work considers the coordination of multiple CAVs from a multi-agent motion planning perspective. A centralized planning algorithm is applied to a space similar to a traffic intersection and is demonstrated empirically to be twice as fast as existing multi-agent planners, making it suitable for real-time planning environments. / Master of Science / Connected and Automated Vehicles are an emerging area of research that involves integrating computational components to enable autonomous driving. This work considers two of the major challenges in this area. The first half of this thesis considers how to design a perception system in the vehicle that can correctly track other vehicles and assess their relative importance in the environment. A sensor fusion system is designed that incorporates information from different sensor types to form a list of relevant target objects. The rest of this work considers the high-level problem of coordination between autonomous vehicles. A planning algorithm that plans the paths of multiple autonomous vehicles, is guaranteed to prevent collisions, and is empirically faster than existing planning methods is demonstrated.
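The OSPA metric used to evaluate the tracker penalises both localisation error (up to a cutoff) and cardinality mismatch between the track set and the target set; a compact implementation, with cutoff and order chosen arbitrarily here:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=10.0, p=2):
    """Optimal Sub-Pattern Assignment distance between two point sets
    (rows of X and Y are positions), with cutoff c and order p."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m > n:                      # keep X as the smaller set
        X, Y, m, n = Y, X, n, m
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    d = np.minimum(d, c) ** p      # cutoff-limited pairwise costs
    row, col = linear_sum_assignment(d)   # optimal assignment
    return ((d[row, col].sum() + c**p * (n - m)) / n) ** (1 / p)

tracks  = np.array([[0.0, 0.0], [5.0, 1.0]])
targets = np.array([[0.2, 0.1], [5.3, 0.8], [20.0, 4.0]])  # one missed target
print(ospa(tracks, targets))  # penalises position error and the cardinality gap
```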
56

  • Investigation of integrated water level sensor solution for submersible pumps : A study of how sensors can be combined to withstand build-up materials and improve reliability in harsh environments

Abelin, Sarah January 2017 (has links)
Monitoring the water level in harsh environments in order to handle the start and stop function of drainage pumps has been a major issue. Several environmental factors are present that affect and disturb sensor measurements. Current solutions with mechanical float switches, mounted outside the pumps, wear out, get entangled, and account for more than half of all emergency call-outs to pumping stations. Since pumps are frequently moved around, a new sensor solution is needed that can be integrated within the pump housing and can continuously monitor the water level to optimize the operation of the pump and to decrease wear, cost, and energy consumption. This thesis presents an investigation of how different sensor techniques can be combined to improve the reliability of water level monitoring and handle the start and stop function of drainage pumps in harsh environments. The main focus has been to identify suitable water level sensing techniques and to investigate how sensors are affected by materials building up on the pump surface and covering the sensor probes. A support vector machine algorithm is implemented to fuse sensor data in order to increase the reliability of the sensor solution in contaminated conditions. Results show that a combination of a pressure sensor and a capacitive sensor is the most suitable combination for withstanding build-up materials. For operating conditions where sensors are covered with soft or viscous build-ups, the sensors were able to monitor the water level through the build-up materials. No solution was found that could satisfactorily monitor the water level through solidified build-up materials.
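A minimal sketch of the fusion step, assuming scikit-learn and synthetic two-channel features (pressure, capacitance); the thesis trains on real measurements, including fouled-sensor conditions:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy feature vectors [pressure_reading, capacitive_reading]; labels are
# water level states (0 = below the sensor, 1 = submerged). The data here
# are synthetic stand-ins, not measurements from the thesis.
rng = np.random.default_rng(0)
dry = np.column_stack([rng.normal(1.0, 0.1, 200), rng.normal(5.0, 0.5, 200)])
wet = np.column_stack([rng.normal(2.5, 0.1, 200), rng.normal(9.0, 0.5, 200)])
X = np.vstack([dry, wet])
y = np.array([0] * 200 + [1] * 200)

# SVM fuses both channels into one start/stop decision.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict([[2.4, 8.7]]))   # fused decision from both sensors
```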
57

Template-basierte Klassifikation planarer Gesten

Schmidt, Michael 09 July 2014 (has links) (PDF)
Pervasion of mobile devices has led to a growing interest in touch-based interactions. However, multi-touch input is still restricted to direct manipulations: in current applications, gestural commands - if used at all - exploit only single-touch. The underlying motive for the work at hand is the conviction that realizing advanced interaction techniques requires handy tools to support their interpretation. Barriers to implementing one's own procedures are dismantled by providing proof of concept for manifold interactions, thereby making the benefits calculable to developers. Within this thesis, a recognition routine for planar, symbolic gestures is developed that can be trained by specifying templates and does not restrict the versatility of input. To provide a flexible tool, the interpretation of a gesture is independent of its natural variances, i.e., translation, scale, rotation, and speed. Additionally, the number of templates that must be specified per class is required to be small, and classifications are subject to the real-time criteria common to typical user interactions. The gesture recognizer is based on integrating a nearest-neighbor approach into a Bayesian classification method. Gestures are split into meaningful, elementary tokens to retrieve a set of local features that are merged by a sensor fusion process to form a global maximum-likelihood representation. The flexibility and high accuracy of the approach are empirically proven in thorough tests. Retaining all requirements, the method is extended to support the prediction of partially entered gestures. Besides more efficient input, the possibility of specifying direct manipulation interactions by templates is beneficial. The practical suitability of all provided concepts is demonstrated on the basis of two applications developed for this purpose, which provide versatile options for multi-finger input. In addition to a trainable recognizer for domain-independent sketches, a multi-touch text input system is created and tested with users. It is established that multi-touch input is utilized in sketching if it is available as an alternative. Furthermore, a constructed multi-touch gesture alphabet allows for more efficient text input in comparison to its single-touch pendant. The concepts presented in this work can be of equal benefit to UI designers, usability experts, and developers of feedforward mechanisms for dynamically teaching gestural interactions. Likewise, the decomposition of input into tokens and its interpretation by maximum-likelihood matching with templates is transferable to other application areas, such as the offline recognition of symbols.
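The thesis embeds a nearest-neighbour match in a Bayesian, token-based framework; the simplified sketch below illustrates only the invariance handling and nearest-neighbour matching against templates (arc-length resampling plus translation and scale normalisation; rotation invariance and the token decomposition are omitted for brevity):

```python
import numpy as np

def normalise(stroke, n=32):
    """Resample a stroke to n points and remove translation and scale,
    so matching is invariant to those variances."""
    stroke = np.asarray(stroke, dtype=float)
    # Arc-length parameterisation, then resampling to n equidistant points.
    seg = np.r_[0, np.cumsum(np.linalg.norm(np.diff(stroke, axis=0), axis=1))]
    t = np.linspace(0, seg[-1], n)
    pts = np.column_stack([np.interp(t, seg, stroke[:, i]) for i in (0, 1)])
    pts -= pts.mean(axis=0)                       # remove translation
    scale = np.linalg.norm(pts, axis=1).max()
    return pts / scale if scale > 0 else pts      # remove scale

def classify(stroke, templates):
    """Nearest-neighbour match of a normalised stroke against
    labelled (name, normalised_points) templates."""
    q = normalise(stroke)
    return min(templates, key=lambda t: np.linalg.norm(q - t[1]))[0]

templates = [("line", normalise([(0, 0), (10, 0)])),
             ("vee",  normalise([(0, 10), (5, 0), (10, 10)]))]
print(classify([(1, 1), (4, 1), (9, 1)], templates))  # -> "line"
```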
59

  • Detection and Analysis of Anomalies in Tactical Sensor Systems through Structured Hypothesis Testing

Ohlson, Fredrik January 2023 (has links)
The project explores the domain of tactical sensor systems, focusing on SAAB Gripen's sensor technologies such as radar, RWR (Radar Warning Receiver), and IRST (InfraRed Search and Track). The study employs structured hypothesis testing and model-based diagnostics to examine the effectiveness of identifying and isolating deviations within these systems. The central question addressed is whether structured hypothesis testing reliably detects and isolates anomalies in a tactical sensor system. The research employs a framework involving sensor modeling of radar, RWR, and IRST, alongside a sensor fusion model, applied to a linear target tracking model as well as a real target flight track obtained from SAAB Test Flight and Verification. Test quantities are derived from the modeled data, and synthetic faults are intentionally introduced into the system. These test quantities are then compared to predefined thresholds, thereby enabling structured hypothesis testing. The robustness and reliability of the diagnostics model are established through a series of simulations. Multiple scenarios with varied fault introductions across different sensor measurements are examined. Key results include the successful creation of a tactical sensor model and sensor fusion environment, showcasing the ability to introduce and detect faults. The thesis provides arguments supporting the advantages of model-based diagnosis through structured hypothesis testing for assessing sensor fusion data. The results of this research are applicable beyond this specific context, facilitating improved sensor data analysis across diverse tracking scenarios, including applications beyond SAAB Gripen. As sensor technologies continue to evolve, the insights gained from this thesis could offer guidance for refining sensor models and hypothesis testing techniques, ultimately enhancing the efficiency and accuracy of sensor data analysis in various domains.
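The core mechanics of structured hypothesis testing, thresholded test quantities combined through a decision structure to isolate faults, can be sketched as follows; the tests, thresholds, and decision structure here are invented for illustration and are not the thesis's actual design:

```python
import numpy as np

# Illustrative decision structure: rows are test quantities, columns are
# the faults each test is sensitive to (1 = the test reacts to that fault).
TESTS = ["T_radar_vs_fusion", "T_rwr_vs_fusion", "T_irst_vs_fusion"]
FAULTS = ["radar", "rwr", "irst"]
DECISION = np.array([[1, 0, 0],
                     [0, 1, 0],
                     [0, 0, 1]])

def diagnose(test_values, thresholds):
    """Compare each test quantity to its threshold and return the fault
    hypotheses consistent with the alarms: a fault stays a candidate only
    if every alarmed test is sensitive to it."""
    alarms = np.asarray(test_values) > np.asarray(thresholds)
    candidates = [f for j, f in enumerate(FAULTS)
                  if all(DECISION[i, j] for i in np.flatnonzero(alarms))]
    return candidates if alarms.any() else ["no fault"]

# The radar residual exceeds its threshold; the RWR and IRST tests stay quiet.
print(diagnose([4.2, 0.3, 0.5], thresholds=[2.0, 2.0, 2.0]))  # -> ['radar']
```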
60

Investigation of Increased Mapping Quality Generated by a Neural Network for Camera-LiDAR Sensor Fusion

Correa Silva, Joan Li Guisell; Jönsson, Sofia January 2021
This study's aim was to investigate the mapping part of Simultaneous Localisation And Mapping (SLAM) in indoor environments containing error sources relevant to two types of sensors. The sensors used were an Intel Realsense depth camera and an RPlidar Light Detection and Ranging (LiDAR) sensor. Both cameras and LiDARs are frequently used as exteroceptive sensors in SLAM. Cameras typically struggle with strong light in the environment, and LiDARs struggle with reflective surfaces. Therefore, this study investigated the possibility of using a neural network to detect errors in either sensor's data caused by the mentioned error sources. The network identified which sensor produced erroneous data, and the sensor fusion algorithm momentarily excluded that sensor's data, improving the mapping quality when possible. The quantitative results showed no significant difference in the measured mean squared error and structural similarity between the final maps generated with and without the network when compared to the ground truth. However, the qualitative analysis showed some advantages of using the network: many of the camera's errors were filtered out, leading to more accurate continuous mapping than without the network. The conclusion was that a neural network can, to a limited extent, recognise errors in the sensors' data, but only the camera data benefited from the proposed solution. The study also produced important implementation findings, which are presented. Recommendations for future work include neural network optimisation, sensor selection, and sensor fusion implementation.
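The gating logic the study describes, dropping a sensor's data for a time step when the network flags it, reduces to a simple per-step selection; the class labels below are assumptions for illustration:

```python
def fuse_scan(camera_scan, lidar_scan, error_class):
    """Select which sensor data enters the map update for this time step,
    based on the network's error classification. The labels are assumed
    for illustration (the study's network targets strong-light and
    reflective-surface failure cases)."""
    if error_class == "camera_error":
        return [lidar_scan]            # exclude the camera this step
    if error_class == "lidar_error":
        return [camera_scan]           # exclude the LiDAR this step
    return [camera_scan, lidar_scan]   # no fault detected: fuse both

for cls in ("ok", "camera_error", "lidar_error"):
    print(cls, "->", len(fuse_scan("cam", "lidar", cls)), "source(s)")
```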
