51.
Perception and Planning of Connected and Automated Vehicles. Mangette, Clayton John, 09 June 2020.
Connected and Automated Vehicles (CAVs) represent a growing area of study in robotics and automotive research. Their potential benefits of increased traffic flow, reduced on-road accidents, and improved fuel economy make them an attractive option. While some autonomous features such as Adaptive Cruise Control and Lane Keep Assist are already integrated into consumer vehicles, they are limited in scope, and further innovation is required to realize fully autonomous vehicles. This work addresses the design problems of perception and planning in CAVs. A decentralized sensor fusion system is designed using multi-target tracking to identify targets within a vehicle's field of view, label each target with the lane it occupies, and highlight the most important object (MIO) for Adaptive Cruise Control. Its performance is tested using the Optimal Sub-pattern Assignment (OSPA) metric and the rate of correct MIO assignment; the system assigns the MIO with an average accuracy of 98%. The rest of this work considers the coordination of multiple CAVs from a multi-agent motion planning perspective. A centralized planning algorithm is applied to a space similar to a traffic intersection and is demonstrated empirically to be twice as fast as existing multi-agent planners, making it suitable for real-time planning environments. / Master of Science / Connected and Automated Vehicles are an emerging area of research that involves integrating computational components to enable autonomous driving. This work considers two of the major challenges in this area. The first half of this thesis considers how to design a perception system in the vehicle that can correctly track other vehicles and assess their relative importance in the environment. A sensor fusion system is designed which incorporates information from different sensor types to form a list of relevant target objects. The rest of this work considers the high-level problem of coordination between autonomous vehicles. A planning algorithm is demonstrated which plans the paths of multiple autonomous vehicles, is guaranteed to prevent collisions, and is empirically faster than existing planning methods.
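For illustration, the following is a minimal sketch (not the thesis code) of the OSPA metric named above, assuming Euclidean position errors, a cutoff c, and order p:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=10.0, p=2):
    """Optimal Sub-pattern Assignment distance between two point sets.

    X, Y: arrays of shape (m, d) and (n, d) holding target positions.
    c: cutoff penalty for cardinality mismatches; p: metric order.
    """
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m > n:                                  # convention: X is the smaller set
        X, Y, m, n = Y, X, n, m
    if m == 0:
        return float(c)                        # nothing matched: pure cardinality error
    # pairwise distances, clipped at the cutoff c
    D = np.minimum(np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2), c)
    row, col = linear_sum_assignment(D ** p)   # optimal assignment
    loc_err = (D[row, col] ** p).sum()
    card_err = (c ** p) * (n - m)              # penalty for unmatched targets
    return ((loc_err + card_err) / n) ** (1.0 / p)

# e.g. two estimated tracks vs. three ground-truth targets
est = np.array([[0.0, 0.0], [5.0, 5.0]])
truth = np.array([[0.2, 0.1], [5.1, 4.8], [20.0, 20.0]])
print(ospa(est, truth))
```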
52.
Multi-Sensor, Fused Airspace Monitoring Systems for Automated Collision Avoidance between UAS and Crewed Aircraft. Post, Alberto Martin, 07 January 2022.
The autonomous operation of Uncrewed Aircraft Systems (UAS) beyond the pilot in command's visual line of sight is currently restricted due to a lack of cost-effective surveillance sensors robust enough to operate in low-level airspace. The current sensors available either locate targets with high accuracy but over too short a range to be usable, or have long ranges but gaps in coverage due to varying terrain. Sensor fusion is one possible method of combining the strengths of different sensors to increase overall airspace surveillance quality and allow for robust detect and avoid (DAA) capabilities, enabling beyond-visual-line-of-sight operations.
This thesis explores some of the current techniques and challenges in using sensor fusion for collision avoidance between crewed aircraft and UAS. It demonstrates an example method of sensor fusion using data from two radars and an ADS-B receiver. In this thesis, a test bed for ground-based airspace monitoring is proposed as a low-cost method of long-term sensor evaluation. Lastly, a potential method of heterogeneous, score-based sensor fusion is presented and simulated. / Master of Science / Long-range operations of Uncrewed Aircraft Systems (UAS) are currently restricted due to a lack of cost-effective surveillance sensors that work well enough near the ground in the presence of changing terrain. The current sensors available either locate targets with high accuracy but over too short a range to be usable, or have long ranges but gaps in coverage due to varying terrain. Sensor fusion addresses this problem by combining the strengths of different sensors to allow for better collision avoidance capabilities, enabling these long-range operations.
This thesis explores some of the current techniques and challenges in using sensor fusion for collision avoidance between crewed aircraft and UAS. It demonstrates an example method of sensor fusion using data from two radars and an ADS-B receiver. In this thesis, a test bed for ground-based airspace monitoring is proposed for long-term sensor testing. Lastly, a potential method of sensor fusion using different types of sensors is presented and simulated.
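As a rough illustration of what score-based fusion of heterogeneous sensors can look like, the sketch below weights each sensor's position report by a quality score. The sensor set (two radars and ADS-B) follows the abstract, but the weighting scheme, scores, and numbers are toy assumptions, not the method evaluated in the thesis:

```python
import numpy as np

def fuse_tracks(reports):
    """Score-weighted fusion of position reports for one aircraft track.

    reports: list of (position, covariance, score) tuples, one per sensor.
    Scores (e.g., derived from SNR, track age, or sensor trust) scale each
    sensor's inverse-covariance contribution before combining.
    """
    info = np.zeros((2, 2))
    info_state = np.zeros(2)
    for pos, cov, score in reports:
        W = score * np.linalg.inv(cov)   # score-weighted information matrix
        info += W
        info_state += W @ pos
    fused_cov = np.linalg.inv(info)
    return fused_cov @ info_state, fused_cov

radar_a = (np.array([1000.0, 2000.0]), np.diag([50.0, 50.0]), 0.9)
radar_b = (np.array([1020.0, 1990.0]), np.diag([80.0, 80.0]), 0.6)
adsb    = (np.array([1005.0, 2003.0]), np.diag([10.0, 10.0]), 1.0)
pos, cov = fuse_tracks([radar_a, radar_b, adsb])
print(pos)   # fused estimate dominated by the most trusted, most precise sensor
```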
53.
Fusion of Laser Range-Finding and Computer Vision Data for Traffic Detection by Autonomous Vehicles. Cacciola, Stephen J., 21 January 2008.
The DARPA Challenges were created in response to a Congressional and Department of Defense (DoD) mandate that one-third of US operational ground combat vehicles be unmanned by the year 2015. The Urban Challenge is the latest competition that tasks industry, academia, and inventors with designing an autonomous vehicle that can safely operate in an urban environment.
A basic and important capability needed in a successful competition vehicle is the ability to detect and classify objects. The most important objects to classify are other vehicles on the road. Navigating traffic, which includes other autonomous vehicles, is critical in the obstacle avoidance and decision making processes. This thesis provides an overview of the algorithms and software designed to detect and locate these vehicles. By combining the individual strengths of laser range-finding and vision processing, the two sensors are able to more accurately detect and locate vehicles than either sensor acting alone.
The range-finding module uses the built-in object detection capabilities of IBEO Alasca laser rangefinders to detect the location, size, and velocity of nearby objects. The Alasca units are designed for automotive use, and so they alone are able to identify nearby obstacles as vehicles with a high level of certainty. After some basic filtering, an object detected by the Alasca scanner is given an initial classification based on its location, size, and velocity. The vision module uses the locations of these objects, as determined by the range finder, to extract regions of interest from large images through perspective transformation. These regions of the image are then examined for distinct characteristics common to all vehicles, such as tail lights and tires. Checking multiple characteristics helps reduce the number of false-negative detections. Since the entire image is never processed, the image size and resolution can be maximized to ensure the characteristics are as clear as possible. The existence of these characteristics is then used to modify the certainty level from the IBEO and determine whether a given object is a vehicle. / Master of Science
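A simplified sketch of this fusion step might look as follows: project the laser-detected object into the camera image to get a region of interest, then nudge the classification certainty based on the vision cues. The projection matrix, object sizes, and certainty multipliers are hypothetical placeholders, not values from the actual system:

```python
import numpy as np

# Hypothetical calibration: 3x4 matrix projecting camera-frame points
# (x right, y down, z forward, homogeneous) to pixels; values are placeholders.
P = np.array([[700.0,   0.0, 640.0, 0.0],
              [  0.0, 700.0, 360.0, 0.0],
              [  0.0,   0.0,   1.0, 0.0]])

def object_roi(center, width_m, height_m):
    """Project a laser-detected object's bounding box into the image.

    center: (x, y, z) in the camera frame, z being the forward distance.
    Returns a pixel ROI (u_min, v_min, u_max, v_max).
    """
    x, y, z = center
    corners = np.array([[x - width_m / 2, y - height_m / 2, z, 1.0],
                        [x + width_m / 2, y + height_m / 2, z, 1.0]])
    uvw = (P @ corners.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]        # perspective divide
    u, v = uv[:, 0], uv[:, 1]
    return int(u.min()), int(v.min()), int(u.max()), int(v.max())

def update_certainty(prior, tail_lights_found, tires_found):
    """Raise or lower the laser-based vehicle certainty using vision cues."""
    for found in (tail_lights_found, tires_found):
        prior = min(prior * 1.3, 1.0) if found else prior * 0.7
    return prior

roi = object_roi((2.0, 0.0, 15.0), width_m=1.8, height_m=1.5)
print(roi, update_certainty(0.6, True, True))
```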
54.
Sensor-fusion of hydraulic data for burst detection and location in a treated water distribution system. Mounce, Steve R., Khan, Asar, Day, Andrew J., Wood, Alastair S., Widdop, Peter D., Machell, James, January 2003.
55.
Deep Learning Using Vision And LiDAR For Global Robot Localization. Gowling, Brett E, 01 May 2024.
As the field of mobile robotics rapidly expands, precise understanding of a robot's position and orientation becomes critical for autonomous navigation and efficient task performance. In this thesis, we present a snapshot-based global localization machine learning model for a mobile robot, the e-puck, in a simulated environment. Our model uses multimodal data to predict both position and orientation using the robot's on-board cameras and LiDAR sensor. In an effort to minimize localization error, we explore different sensor configurations by varying the number of cameras and LiDAR layers used. Additionally, we investigate the performance benefits of different multimodal fusion strategies while leveraging the EfficientNet CNN architecture as our model's foundation. Data collection and testing are conducted using Webots simulation software, and our results show that, when tested in a 12m x 12m simulated apartment environment, our model is able to achieve positional accuracy within 0.2m for each of the x and y coordinates and orientation accuracy within 2°, all without the need for sequential data history. Our results demonstrate the potential for accurate global localization of mobile robots in simulated environments without the need for existing maps or temporal data.
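A minimal sketch of one such late-fusion architecture is shown below, using torchvision's EfficientNet-B0 as the image backbone. The LiDAR branch, feature sizes, and the sin/cos orientation encoding are illustrative assumptions, not the exact model from the thesis:

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class PoseNet(nn.Module):
    """Late fusion: EfficientNet image features concatenated with LiDAR scan
    features, regressed to (x, y, sin(theta), cos(theta))."""
    def __init__(self, lidar_points=360):
        super().__init__()
        backbone = efficientnet_b0(weights=None)
        backbone.classifier = nn.Identity()       # keep the 1280-d features
        self.cnn = backbone
        self.lidar = nn.Sequential(
            nn.Linear(lidar_points, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU())
        self.head = nn.Linear(1280 + 128, 4)      # x, y, sin, cos

    def forward(self, image, scan):
        feats = torch.cat([self.cnn(image), self.lidar(scan)], dim=1)
        return self.head(feats)

model = PoseNet()
out = model(torch.randn(1, 3, 224, 224), torch.randn(1, 360))
print(out.shape)   # torch.Size([1, 4])
```

Encoding the heading as (sin, cos) is a common trick to avoid the discontinuity at ±180°; the predicted angle is recovered with atan2.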
56.
Environment Mapping in Larger Spaces. Ciambrone, Andrew James, 09 February 2017.
Spatial mapping or environment mapping is the process of exploring a real-world environment and creating its digital representation. To create convincing mixed reality programs, an environment mapping device must be able to detect a user's position and map the user's environment. Currently available commercial spatial mapping devices mostly use infrared cameras to obtain a depth map, which is effective only at short to medium distances (3-4 meters).
This work describes an extension to the existing environment mapping devices and techniques to enable mapping of larger architectural environments using a combination of a camera, Inertial Measurement Unit (IMU), and Light Detection and Ranging (LIDAR) devices supported by sensor fusion and computer vision techniques.
There are three main parts to the proposed system.
The first part is data collection and data fusion using embedded hardware; the second part is data processing (segmentation); and the third part is creating a geometry mesh of the environment. The developed system was evaluated on its ability to determine the dimensions of the room and of objects within the room. This low-cost system can significantly expand the mapping range of existing mixed reality devices such as the Microsoft HoloLens. / Master of Science / Mixed reality is the mixing of computer-generated graphics and real-world objects to create an augmented view of a space. Environment mapping, the process of creating a digital representation of an environment, is used in mixed reality applications so that virtual objects can logically interact with the physical environment. Most current approaches to this problem work only at short to medium distances. This work describes an extension to existing devices and techniques to enable mapping of larger architectural spaces. The developed system was evaluated on its ability to determine the dimensions of the room and of objects within the room. Under certain conditions the system was able to determine the dimensions of a room with an error of less than twenty percent, and it is capable of determining the dimensions of objects with an error of less than five percent under adequate conditions. This low-cost system can significantly expand the mapping range of existing mixed reality devices such as the Microsoft HoloLens, allowing for more diverse mixed reality applications to be developed and used.
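As one example of what the segmentation step can involve, a dominant plane (wall or floor) can be extracted from a LIDAR point cloud with a simple RANSAC fit. This is an illustrative sketch, not the system's actual pipeline:

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.03, rng=np.random.default_rng(0)):
    """Segment the dominant plane from a point cloud.

    points: (N, 3) array. Returns (normal, d, inlier_mask) for n.x + d = 0.
    """
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                       # degenerate (collinear) sample
        n /= norm
        d = -n @ p1
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model[0], best_model[1], best_inliers

# Synthetic cloud: a flat floor at z = 0 plus random clutter
pts = np.r_[np.c_[np.random.rand(200, 2) * 5, np.zeros(200)],
            np.random.rand(40, 3) * 5]
n, d, mask = ransac_plane(pts)
print(n, d, mask.sum())
```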
57.
Investigation of integrated water level sensor solution for submersible pumps: A study of how sensors can be combined to withstand build-up materials and improve reliability in harsh environments. Abelin, Sarah, January 2017.
Monitoring water level in harsh environments in order to handle the start and stop function of drainage pumps has been a major issue. Several environmental factors are present which affect and disturb sensor measurements. Current solutions with mechanical float switches, mounted outside the pumps, wear out, get entangled, and account for more than half of all emergency call-outs to pumping stations. Since pumps are frequently moved around, a new sensor solution is needed that can be integrated within the pump housing and can continuously monitor water level to optimize the operation of the pump and to decrease wear, cost, and energy consumption. This thesis presents an investigation of how different sensor techniques can be combined to improve reliability when monitoring water level and handling the start and stop function of drainage pumps in harsh environments. The main focus has been to identify suitable water level sensing techniques and to investigate how sensors are affected by materials building up on the pump surface and covering the sensor probes. A support vector machine algorithm is implemented to fuse sensor data in order to increase the reliability of the sensor solution in contaminated conditions. Results show that a combination of a pressure sensor and a capacitive sensor is the most suitable combination for withstanding build-up materials. For operating conditions in which sensors are covered with soft or viscous build-ups, the sensors were able to monitor water level through the build-up materials. No solution was found that could satisfactorily monitor water level through solidified build-up materials.
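A minimal sketch of the fusion idea, using scikit-learn's SVC on toy pressure and capacitance readings; the data, feature choices, and class labels are invented for illustration:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training data: rows are [pressure_reading, capacitive_reading]
# captured at known water levels, some with build-up covering the probes.
X = np.array([[10.2, 0.81], [10.5, 0.15], [4.9, 0.78], [5.1, 0.12],
              [10.1, 0.80], [5.0, 0.79], [10.4, 0.14], [4.8, 0.13]])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # 1 = above start level, 0 = below

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)

# Start/stop decision for a new fused reading
print(clf.predict([[9.8, 0.16]]))        # -> [1]: start the pump
```

The point of the fused decision is that a disagreeing sensor (here, a capacitive probe fouled by build-up) can be outvoted by the pressure channel instead of triggering a false stop.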
58.
Template-basierte Klassifikation planarer Gesten. Schmidt, Michael, 09 July 2014.
Pervasion of mobile devices has led to a growing interest in touch-based interaction. However, multi-touch input is still largely restricted to direct manipulation; in current applications, gestural commands - if used at all - exploit only single-touch. The underlying motive for the work at hand is the conviction that realizing advanced interaction techniques requires handy tools to support their interpretation. Barriers to implementing one's own procedures are lowered by providing proof of concept for manifold interactions, thereby making the benefits calculable to developers. Within this thesis, a recognition routine for planar, symbolic gestures is developed that can be trained by specifying templates and does not restrict the versatility of input. To provide a flexible tool, the interpretation of a gesture is independent of its natural variances, i.e., translation, scale, rotation, and speed. Additionally, the number of templates that must be specified per class is kept small, and classifications are subject to real-time criteria common in the context of typical user interactions. The gesture recognizer is based on the integration of a nearest-neighbor approach into a Bayesian classification method.
Gestures are split into meaningful, elementary tokens to retrieve a set of local features that are merged by a sensor fusion process to form a global maximum-likelihood representation. The flexibility and high accuracy of the approach are empirically proven in thorough tests. Retaining all requirements, the method is extended to support the prediction of partially entered gestures. Besides more efficient input, this enables the specification of direct-manipulation interactions by templates. The practical suitability of all provided concepts is demonstrated on the basis of two applications developed for this purpose, each offering versatile options for multi-finger input. In addition to a trainable recognizer for domain-independent sketches, a multi-touch text input system is created and tested with users. It is established that multi-touch input is utilized in sketching when it is available as an alternative. Furthermore, a constructed multi-touch gesture alphabet allows for more efficient text input than its single-touch pendant. The concepts presented in this work can be of equal benefit to UI designers, usability experts, and developers of feedforward mechanisms for dynamically teaching gestural interactions. Likewise, the decomposition of input into tokens and its interpretation by maximum-likelihood matching with templates is transferable to other application areas, such as the offline recognition of symbols.
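A much-simplified sketch of template matching in this spirit: strokes are resampled and normalized (removing speed, translation, and scale; rotation handling is omitted here), and the nearest-neighbor distance per class is wrapped in a Gaussian likelihood. This is an illustration of the general idea, not the thesis's tokenization-based recognizer:

```python
import numpy as np

def resample(stroke, n=32):
    """Resample a stroke to n equidistant points (removes speed variation)."""
    d = np.cumsum(np.r_[0, np.linalg.norm(np.diff(stroke, axis=0), axis=1)])
    t = np.linspace(0, d[-1], n)
    return np.column_stack([np.interp(t, d, stroke[:, i]) for i in range(2)])

def normalize(stroke):
    """Remove translation and scale."""
    s = stroke - stroke.mean(axis=0)
    return s / max(np.abs(s).max(), 1e-9)

def classify(candidate, templates, sigma=0.05):
    """1-NN template distance per class, converted to a likelihood score."""
    x = normalize(resample(np.asarray(candidate, float)))
    scores = {}
    for label, strokes in templates.items():
        dists = [np.mean(np.linalg.norm(
                     x - normalize(resample(np.asarray(t, float))), axis=1))
                 for t in strokes]
        scores[label] = np.exp(-min(dists) ** 2 / (2 * sigma ** 2))
    total = sum(scores.values())
    return max(scores, key=scores.get), {k: v / total for k, v in scores.items()}

templates = {"line": [[(0, 0), (1, 0)]], "vee": [[(0, 1), (0.5, 0), (1, 1)]]}
label, probs = classify([(0, 0), (0.5, 0.02), (1, 0.05)], templates)
print(label, probs)   # -> 'line' with high relative score
```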
59.
Template-basierte Klassifikation planarer Gesten. Schmidt, Michael, 25 April 2014.
60.
Detection and Analysis of Anomalies in Tactical Sensor Systems through Structured Hypothesis Testing. Ohlson, Fredrik, January 2023.
The project explores the domain of tactical sensor systems, focusing on SAAB Gripen's sensor technologies such as radar, RWR (Radar Warning Receiver), and IRST (InfraRed Search and Track). The study employs structured hypothesis testing and model-based diagnostics to examine the effectiveness of identifying and isolating deviations within these systems. The central question addressed is whether structured hypothesis testing reliably detects and isolates anomalies in a tactical sensor system. The research employs a framework involving sensor models of the radar, RWR, and IRST, alongside a sensor fusion model, applied to a linear target tracking model as well as a real target flight track obtained from SAAB Test Flight and Verification. Test quantities are derived from the modeled data, and synthetic faults are intentionally introduced into the system. These test quantities are then compared to predefined thresholds, thereby facilitating structured hypothesis testing. The robustness and reliability of the diagnostics model are established through a series of simulations examining multiple scenarios with varied fault introductions across different sensor measurements. Key results include the successful creation of a tactical sensor model and sensor fusion environment, showcasing the ability to introduce and detect faults. The thesis provides arguments supporting the advantages of model-based diagnosis through structured hypothesis testing for assessing sensor fusion data. The results are applicable beyond this specific context, facilitating improved sensor data analysis across diverse tracking scenarios, including applications beyond SAAB Gripen. As sensor technologies continue to evolve, the insights gained from this thesis could offer guidance for refining sensor models and hypothesis testing techniques, ultimately enhancing the efficiency and accuracy of sensor data analysis in various domains.
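A toy sketch of the structured-hypothesis-testing idea: pairwise residuals between sensor tracks act as test quantities, and a decision structure maps fired tests to the fault hypotheses consistent with them. The structure, threshold, and synthetic fault below are invented for illustration, not taken from the thesis:

```python
import numpy as np

# Decision structure: rows = test quantities, columns = faults.
# A 1 means the test is expected to react to that fault (a hypothetical
# structure for three sensors observing the same target position).
FAULTS = ["radar", "RWR", "IRST"]
STRUCTURE = np.array([[1, 1, 0],    # T1: radar-vs-RWR residual
                      [1, 0, 1],    # T2: radar-vs-IRST residual
                      [0, 1, 1]])   # T3: RWR-vs-IRST residual

def diagnose(radar, rwr, irst, threshold=3.0):
    """Compare pairwise residuals to a threshold and isolate the faulty sensor."""
    tests = np.array([np.linalg.norm(radar - rwr),
                      np.linalg.norm(radar - irst),
                      np.linalg.norm(rwr - irst)]) > threshold
    # A fault hypothesis survives only if it explains every fired test.
    candidates = [f for j, f in enumerate(FAULTS)
                  if all(STRUCTURE[i, j] for i in np.where(tests)[0])]
    return candidates if tests.any() else []

# Synthetic fault injected into the RWR measurement
truth = np.array([100.0, 50.0])
print(diagnose(truth, truth + np.array([8.0, 0.0]), truth + 0.1))  # ['RWR']
```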