51 |
Multi-sources fusion based vehicle localization in urban environments under a loosely coupled probabilistic framework / Localisation de véhicules intelligents par fusion de données multi-capteurs en milieu urbain. Wei, Lijun, 17 July 2013 (has links)
Afin d’améliorer la précision des systèmes de navigation ainsi que de garantir la sécurité et la continuité du service, il est essentiel de connaître la position et l’orientation du véhicule en tout temps. La localisation absolue utilisant des systèmes satellitaires tels que le GPS est souvent utilisée à cette fin. Cependant, en environnement urbain, la localisation à l’aide d’un récepteur GPS peut s’avérer peu précise voire même indisponible à cause des phénomènes de réflexion des signaux, de multi-trajet ou de la faible visibilité satellitaire. Afin d’assurer une estimation précise et robuste du positionnement, d’autres capteurs et méthodes doivent compléter la mesure. Dans cette thèse, des méthodes de localisation de véhicules sont proposées afin d’améliorer l’estimation de la pose en prenant en compte la redondance et la complémentarité des informations du système multi-capteurs utilisé. Tout d’abord, les mesures GPS sont fusionnées avec des estimations de la localisation relative du véhicule obtenues à l’aide d’un capteur proprioceptif (gyromètre), d’un système stéréoscopique (odométrie visuelle) et d’un télémètre laser (recalage de scans télémétriques). Une étape de sélection des capteurs est intégrée pour valider la cohérence des observations provenant des différents capteurs. Seules les informations validées sont combinées dans un formalisme de couplage lâche avec un filtre informationnel. Si l’information GPS est indisponible pendant une longue période, la trajectoire estimée uniquement par les approches relatives tend à diverger, en raison de l’accumulation de l’erreur. Pour ces raisons, les informations d’une carte numérique (route + bâtiment) ont été intégrées et couplées aux mesures télémétriques de deux télémètres laser montés sur le toit du véhicule (l’un horizontalement, l’autre verticalement).
Les façades des immeubles détectées par les télémètres laser sont associées avec les informations « bâtiment » de la carte afin de corriger la position du véhicule. Les approches proposées sont testées et évaluées sur des données réelles. Les résultats expérimentaux obtenus montrent que la fusion du système stéréoscopique et du télémètre laser avec le GPS permet d’assurer le service de localisation lors des courtes absences de mesures GPS et de corriger les erreurs GPS de type saut. Par ailleurs, la prise en compte des informations de la carte numérique routière permet d’obtenir une approximation de la position du véhicule en projetant la position du véhicule sur le tronçon de route correspondant et enfin l’intégration de la carte numérique des bâtiments couplée aux données télémétriques permet d’affiner cette estimation, en particulier la position latérale. / In some dense urban environments (e.g., a street with tall buildings around), the vehicle localization result provided by a Global Positioning System (GPS) receiver might not be accurate or even available due to signal reflection (multi-path) or poor satellite visibility. In order to improve the accuracy and robustness of assisted navigation systems so as to guarantee driving security and service continuity on road, a vehicle localization approach is presented in this thesis by making use of the redundancy and complementarity of multiple sensors. At first, the GPS localization method is complemented by an onboard dead-reckoning (DR) method (inertial measurement unit, odometer, gyroscope), a stereovision based visual odometry method, a horizontal laser range finder (LRF) based scan alignment method, and a 2D GIS road network map based map-matching method to provide a coarse vehicle pose estimation.
A sensor selection step is applied to validate the coherence of the observations from multiple sensors; only information provided by the validated sensors is combined under a loosely coupled probabilistic framework with an information filter. Then, if GPS receivers encounter long-term outages, the accumulated localization error of the DR-only method is bounded by adding a GIS building map layer. Two onboard LRF systems (a horizontal LRF and a vertical LRF) are mounted on the roof of the vehicle and used to detect building facades in urban environments. The detected building facades are projected onto the 2D ground plane and associated with the GIS building map layer to correct the vehicle pose error, especially the lateral error. The extracted facade landmarks from the vertical LRF scan are stored in a new GIS map layer. The proposed approach is tested and evaluated with real data sequences. Experimental results with real data show that fusion of the stereoscopic system and LRF can continue to localize the vehicle during short GPS outages and correct GPS positioning errors such as GPS jumps; the road map can help to obtain an approximate estimation of the vehicle position by projecting the vehicle position on the corresponding road segment; and the integration of the building information can help to refine the initial pose estimation when GPS signals are lost for a long time.
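The loosely coupled information-filter fusion described in this record has a convenient property: each validated sensor contributes its observation additively in information (inverse-covariance) form, so a sensor rejected by the selection step simply contributes nothing. The following is a minimal illustrative sketch of that update, with an invented 2-D position state and invented noise values, not the thesis's actual filter:

```python
import numpy as np

def information_update(Y, y, z, H, R):
    """Fuse one sensor observation into the information-form state.

    Y : information matrix (inverse covariance)
    y : information vector (Y @ x_hat)
    z : observation, H : observation model, R : observation noise covariance
    """
    Rinv = np.linalg.inv(R)
    Y_new = Y + H.T @ Rinv @ H   # contributions from validated sensors simply add up
    y_new = y + H.T @ Rinv @ z
    return Y_new, y_new

# Diffuse prior on a 2-D position.
Y = np.eye(2) * 1e-6
y = np.zeros(2)

# Hypothetical GPS fix (sigma = 2 m) and visual-odometry position (sigma = 1 m).
H = np.eye(2)
Y, y = information_update(Y, y, np.array([10.0, 5.0]), H, np.eye(2) * 4.0)  # GPS
Y, y = information_update(Y, y, np.array([10.4, 5.2]), H, np.eye(2) * 1.0)  # odometry

x_fused = np.linalg.solve(Y, y)  # recover the state estimate
```

The fused estimate lands between the two fixes, weighted by each sensor's precision, which is exactly why the coherence-validation step can drop an inconsistent sensor without restructuring the filter.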
|
52 |
Multisensor Segmentation-based Noise Suppression for Intelligibility Improvement in MELP Coders. Demiroglu, Cenk, 18 January 2006 (has links)
This thesis investigates the use of an auxiliary sensor, the GEMS device, for improving the quality of noisy speech and designing noise preprocessors for MELP speech coders. Use of auxiliary sensors for noise-robust
ASR applications is also investigated to develop speech enhancement algorithms that use acoustic-phonetic
properties of the speech signal.
A Bayesian risk minimization framework is developed that can incorporate the acoustic-phonetic properties
of speech sounds and knowledge of human auditory perception into the speech enhancement framework. Two noise suppression
systems are presented using the ideas developed in the mathematical framework. In the first system, an aharmonic
comb filter is proposed for voiced speech where low-energy frequencies are severely suppressed while
high-energy frequencies are suppressed mildly. The proposed
system outperformed an MMSE estimator in subjective listening tests and DRT intelligibility test for MELP-coded noisy speech.
The effect of aharmonic
comb filtering on the linear predictive coding (LPC) parameters is analyzed using a missing data approach.
Suppressing the low-energy frequencies without any modification of the high-energy frequencies is shown to
improve the LPC spectrum using the Itakura-Saito distance measure.
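The suppression principle described above, severe attenuation of low-energy frequencies and mild attenuation of high-energy ones, can be illustrated with a toy spectral gain function. The median threshold and the attenuation factors below are invented for illustration; this is not the thesis's actual aharmonic comb filter design:

```python
import numpy as np

def energy_dependent_gain(spectrum, strong_att=0.1, mild_att=0.8):
    """Attenuate low-energy bins severely and high-energy bins mildly
    (toy stand-in for the energy-dependent suppression idea)."""
    power = np.abs(spectrum) ** 2
    threshold = np.median(power)          # invented threshold rule
    gain = np.where(power >= threshold, mild_att, strong_att)
    return spectrum * gain

# Two strong (speech-dominated) bins and two weak (noise-dominated) bins.
filtered = energy_dependent_gain(np.array([10.0, 0.1, 8.0, 0.2]))
```

The weak bins are cut to 10% while the strong bins keep 80% of their amplitude, mirroring the asymmetry that, per the analysis above, improves the LPC spectrum.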
The second system combines the aharmonic comb filter with the acoustic-phonetic properties of speech
to improve the intelligibility of the MELP-coded noisy speech.
The noisy speech signal is segmented into broad-level sound classes using a multi-sensor automatic
segmentation/classification tool, and each sound class is enhanced differently based on its
acoustic-phonetic properties. The proposed system is shown to outperform both the MELPe noise preprocessor
and the aharmonic comb filter in intelligibility tests when used in concatenation with the MELP coder.
Since the second noise suppression system uses an automatic segmentation/classification algorithm, exploiting the GEMS signal in an automatic
segmentation/classification task is also addressed using an ASR
approach. Current ASR engines can segment and classify speech utterances
in a single pass; however, they are sensitive to ambient noise.
Features that are extracted from the GEMS signal can be fused with the noisy MFCC features
to improve the noise-robustness of the ASR system. In the first phase, a voicing
feature is extracted from the clean speech signal and fused with the MFCC features.
The actual GEMS signal could not be used in this phase because of insufficient sensor data to train the ASR system.
Tests are done using the Aurora2 noisy digits database. The speech-based voicing
feature is found to be effective at around 10 dB but, below 10 dB, the effectiveness rapidly drops with decreasing SNR
because of the severe distortions in the speech-based features at these SNRs. Hence, a novel system is proposed that treats the
MFCC features in a speech frame as missing data if the global SNR is below 10 dB and the speech frame is
unvoiced. If the global SNR is above 10 dB or the speech frame is voiced, both the MFCC features and the voicing feature are used. The proposed
system is shown to outperform some of the popular noise-robust techniques at all SNRs.
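The frame-level missing-data rule just described can be sketched as a small decision function. The function name and the dictionary-based feature representation are illustrative assumptions, not the thesis's implementation; only the 10 dB threshold and the voiced/unvoiced logic come from the text above:

```python
SNR_THRESHOLD_DB = 10.0   # threshold reported in the experiments above

def select_features(mfcc, voicing, global_snr_db, is_voiced):
    """Missing-data rule: below 10 dB, unvoiced frames drop their MFCCs
    and keep only the voicing feature."""
    if global_snr_db < SNR_THRESHOLD_DB and not is_voiced:
        return {"voicing": voicing}               # MFCCs treated as missing
    return {"mfcc": mfcc, "voicing": voicing}     # otherwise use both

low_snr_unvoiced = select_features([12.1, -3.4], 0.1, 5.0, is_voiced=False)
high_snr_frame = select_features([12.1, -3.4], 0.9, 15.0, is_voiced=False)
```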
In the second phase, a new isolated monosyllable database is prepared that contains both speech and GEMS data. ASR experiments conducted
for clean speech showed that the GEMS-based feature, when fused with the MFCC features, decreases the performance.
The reason for this unexpected result is found to be partly related to some of the GEMS data that is severely noisy.
The non-acoustic sensor noise exists in all GEMS data but the severe noise happens rarely. A missing
data technique is proposed to alleviate the effects of severely noisy sensor data. The GEMS-based feature is treated as missing data
when it is detected to be severely noisy. The combined features are shown to outperform the MFCC features for clean
speech when the missing data technique is applied.
|
53 |
Using dynamic time warping for multi-sensor fusion. Ko, Ming Hsiao, January 2009 (has links)
Fusion is a fundamental human process that occurs in some form at all levels, from the sense organs, such as the visual and auditory information received through the eyes and ears, to the highest levels of decision making, where the brain fuses visual and auditory information to reach decisions. Multi-sensor data fusion is concerned with gaining information from multiple sensors by fusing across raw data, features or decisions. Traditional frameworks for multi-sensor data fusion only concern fusion at specific points in time. However, many real world situations change over time. When a multi-sensor system is used for situation awareness, it is useful not only to know the state or event of the situation at a point in time, but also, more importantly, to understand the causalities of those states or events changing over time. / Hence, we proposed a multi-agent framework for temporal fusion, which emphasises the time dimension of the fusion process, that is, fusion of the multi-sensor data or events derived over a period of time. The proposed multi-agent framework has three major layers: hardware, agents, and users. There are three different fusion architectures for organising the group of agents: centralized, hierarchical, and distributed. The temporal fusion process of the proposed framework is elaborated by using the information graph. Finally, the core of the proposed temporal fusion framework, the Dynamic Time Warping (DTW) temporal fusion agent, is described in detail. / Fusing multi-sensor data over a period of time is a challenging task, since the data to be fused consist of complex sequences that are multi-dimensional, multimodal, interacting, and time-varying in nature. Additionally, performing temporal fusion efficiently in real time is another challenge due to the large amount of data to be fused.
To address these issues, we proposed the DTW temporal fusion agent, which includes four major modules: data pre-processing, DTW recogniser, class templates, and decision making. The DTW recogniser is extended in various ways to deal with the variability of multimodal sequences acquired from multiple heterogeneous sensors, the problem of unknown start and end points, multimodal sequences of the same class that hence have different lengths locally and/or globally, and the challenges of online temporal fusion. / We evaluate the performance of the proposed DTW temporal fusion agent on two real world datasets: 1) accelerometer data acquired from performing two hand gestures, and 2) a benchmark dataset acquired from carrying a mobile device and performing pre-defined user scenarios. Performance results of the DTW based system are compared with those of a Hidden Markov Model (HMM) based system. The experimental results from both datasets demonstrate that the proposed DTW temporal fusion agent outperforms HMM based systems, and has the capability to perform online temporal fusion efficiently and accurately in real time.
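As a concrete reference point, the core recurrence inside such a DTW recogniser (before the thesis's extensions for unknown endpoints and multimodal sequences) is the textbook dynamic program below; the gesture templates and query values are invented to show nearest-template classification against class templates:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # match, insertion, or deletion: keep the cheapest warping path
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical class templates and a time-warped query sequence.
templates = {"gesture_a": [0, 1, 2, 1, 0], "gesture_b": [0, 2, 4, 2, 0]}
query = [0, 1, 1, 2, 1, 0]
best = min(templates, key=lambda c: dtw_distance(query, templates[c]))
```

Because DTW aligns sequences non-linearly in time, the query matches its template exactly despite being one sample longer, which is the property the agent exploits for sequences of the same class with different lengths.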
|
54 |
Multiple sensor fusion for detection, classification and tracking of moving objects in driving environments / Fusion multi-capteur pour la détection, classification et suivi d'objets mobiles en environnement routier. Chavez Garcia, Ricardo Omar, 25 September 2014 (has links)
Les systèmes avancés d'assistance au conducteur (ADAS) aident les conducteurs à effectuer des tâches de conduite complexes et à éviter ou atténuer les situations dangereuses. Le véhicule détecte le monde extérieur au moyen de capteurs, et ensuite construit et met à jour un modèle interne de la configuration de l'environnement. La perception de véhicule consiste à établir des relations spatiales et temporelles entre le véhicule et les obstacles statiques et mobiles dans l'environnement. Cette perception se compose de deux tâches principales : la localisation et cartographie simultanées (SLAM) traite de la modélisation des parties statiques ; et la détection et le suivi d'objets en mouvement (DATMO) est responsable de la modélisation des parties mobiles de l'environnement. Afin de réaliser un bon raisonnement et contrôle, le système doit modéliser correctement l'environnement. La détection précise et la classification des objets en mouvement sont un aspect essentiel d'un système de suivi d'objets. La classification des objets en mouvement est nécessaire pour déterminer le comportement possible des objets entourant le véhicule, et elle est généralement réalisée au niveau du suivi des objets. La connaissance de la classe des objets en mouvement au niveau de la détection peut aider à améliorer leur suivi. La plupart des solutions de perception actuelles considèrent les informations de classification seulement comme information additionnelle pour la sortie finale de la perception. Aussi, la gestion de l'information incomplète est une exigence importante pour les systèmes de perception. Une information incomplète peut provenir de raisons liées à la détection, telles que les problèmes de calibrage et les dysfonctionnements des capteurs, ou des perturbations de la scène, comme des occlusions, des problèmes météorologiques et le déplacement des objets. Les principales contributions de cette thèse se concentrent sur l'étape DATMO.
Précisément, nous pensons que l'inclusion de la classe de l'objet comme élément clé de la représentation de l'objet et la gestion de l'incertitude des détections de plusieurs capteurs peuvent améliorer les résultats de la tâche de perception. Par conséquent, nous abordons les problèmes de l'association de données, de la fusion de capteurs, de la classification et du suivi à différents niveaux au sein de la phase DATMO. Même si nous nous concentrons sur un ensemble de trois capteurs principaux (radar, lidar et caméra), nous proposons une architecture modifiable pour inclure d'autres types ou nombres de capteurs. Premièrement, nous définissons une représentation composite de l'objet pour inclure les informations de classe et d'état de l'objet depuis le début de la tâche de perception. Deuxièmement, nous proposons, mettons en œuvre et comparons deux architectures de perception afin de résoudre le problème DATMO selon le niveau où l'association des objets, la fusion et la classification des informations sont incluses et appliquées. Nos méthodes de fusion de données sont basées sur la théorie de l'évidence, qui est utilisée pour gérer et inclure l'incertitude de la détection des capteurs et de la classification des objets. Troisièmement, nous proposons une approche d'association de données basée sur la théorie de l'évidence pour établir une relation entre deux listes de détections d'objets. Quatrièmement, nous intégrons nos approches de fusion dans le cadre d'une application véhicule en temps réel. Cette intégration a été réalisée dans un véhicule démonstrateur réel du projet européen interactIVe. Finalement, nous avons analysé et évalué expérimentalement les performances des méthodes proposées. Nous avons comparé nos approches de fusion entre elles et à une méthode de l'état de l'art en utilisant des données réelles de différents scénarios de conduite.
Ces comparaisons se concentrent sur la détection, la classification et le suivi de différents objets en mouvement : piétons, vélos, voitures et camions. / Advanced driver assistance systems (ADAS) help drivers to perform complex driving tasks and to avoid or mitigate dangerous situations. The vehicle senses the external world using sensors and then builds and updates an internal model of the environment configuration. Vehicle perception consists of establishing the spatial and temporal relationships between the vehicle and the static and moving obstacles in the environment. Vehicle perception is composed of two main tasks: simultaneous localization and mapping (SLAM) deals with modelling static parts; and detection and tracking of moving objects (DATMO) is responsible for modelling moving parts in the environment. In order to perform good reasoning and control, the system has to correctly model the surrounding environment. The accurate detection and classification of moving objects is a critical aspect of a moving object tracking system. Therefore, many sensors are part of a common intelligent vehicle system. Classification of moving objects is needed to determine the possible behaviour of the objects surrounding the vehicle, and it is usually performed at tracking level. Knowledge about the class of moving objects at detection level can help improve their tracking. Most of the current perception solutions consider classification information only as aggregate information for the final perception output. Also, management of incomplete information is an important requirement for perception systems. Incomplete information can originate from sensor-related reasons, such as calibration issues and hardware malfunctions, or from scene perturbations, like occlusions, weather issues and object shifting. It is important to manage these situations by taking them into account in the perception process.
The main contributions in this dissertation focus on the DATMO stage of the perception problem. Precisely, we believe that by including the object's class as a key element of the object's representation and managing the uncertainty from multiple sensor detections, we can improve the results of the perception task, i.e., obtain a more reliable list of moving objects of interest represented by their dynamic state and appearance information. Therefore, we address the problems of sensor data association and sensor fusion for object detection, classification, and tracking at different levels within the DATMO stage. Although we focus on a set of three main sensors (radar, lidar, and camera), we propose a modifiable architecture to include other types or numbers of sensors. First, we define a composite object representation to include class information as a part of the object state from early stages to the final output of the perception task. Second, we propose, implement, and compare two different perception architectures to solve the DATMO problem according to the level where object association, fusion, and classification information is included and performed. Our data fusion approaches are based on the evidential framework, which is used to manage and include the uncertainty from sensor detections and object classifications. Third, we propose an evidential data association approach to establish a relationship between two sources of evidence from object detections. We observe how the class information improves the final result of the DATMO component. Fourth, we integrate the proposed fusion approaches as a part of a real-time vehicle application. This integration has been performed in a real vehicle demonstrator from the interactIVe European project. Finally, we analysed and experimentally evaluated the performance of the proposed methods.
We compared our evidential fusion approaches against each other and against a state-of-the-art method using real data from different driving scenarios. These comparisons focused on the detection, classification and tracking of different moving objects: pedestrian, bike, car and truck.
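A minimal sketch of the combination operation underlying such an evidential fusion scheme is Dempster's rule over a frame of discernment of object classes. The class names and mass values below are invented for illustration and do not reproduce the thesis's fusion architecture:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions whose focal
    elements are frozensets of class labels."""
    combined, conflict = {}, 0.0
    for A, mA in m1.items():
        for B, mB in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mA * mB
            else:
                conflict += mA * mB          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # renormalize by the non-conflicting mass
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

# Hypothetical detections: lidar sees "car or truck", camera leans toward "car".
omega = frozenset({"car", "truck", "bike"})          # frame of discernment
m_lidar = {frozenset({"car", "truck"}): 0.7, omega: 0.3}
m_camera = {frozenset({"car"}): 0.6, omega: 0.4}
fused = dempster_combine(m_lidar, m_camera)
```

Note how mass placed on the whole frame (omega) encodes each sensor's ignorance explicitly, which is what lets the framework manage uncertain class evidence from detection level onwards.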
|
55 |
Suivi et classification d'objets multiples : contributions avec la théorie des fonctions de croyance / Multi-object tracking and classification : contributions with belief functions theory. Hachour, Samir, 05 June 2015 (has links)
Cette thèse aborde le problème du suivi et de la classification de plusieurs objets simultanément. Il est montré dans la thèse que les fonctions de croyance permettent d'améliorer les résultats fournis par des méthodes classiques à base d'approches bayésiennes. En particulier, une précédente approche développée dans le cas d'un seul objet est étendue au cas de plusieurs objets. Il est montré que dans toutes les approches multi-objets, la phase d'association entre observations et objets connus est fondamentale. Cette thèse propose également de nouvelles méthodes d'association crédales qui apparaissent plus robustes que celles trouvées dans la littérature. Enfin, est abordée la question de la classification multi-capteurs qui nécessite une seconde phase d'association. Dans ce dernier cas, deux architectures de fusion des données capteurs sont proposées, une dite centralisée et une autre dite distribuée. De nombreuses comparaisons illustrent l'intérêt de ces travaux, que les classes des objets soient constantes ou variantes dans le temps. / This thesis deals with the multi-object tracking and classification problem. It is shown that belief functions allow the results of classical Bayesian methods to be improved. In particular, a recent approach dedicated to single-object classification is extended to the multi-object framework. It is shown that the assignment of detected observations to known objects is a fundamental issue in multi-object tracking and classification solutions. New assignment solutions based on belief functions are proposed in this thesis; they are shown to be more robust than the other credal solutions from the recent literature. Finally, the issue of multi-sensor classification, which requires a second assignment phase, is addressed. In the latter case, two different multi-sensor architectures are proposed, a so-called centralized one and a distributed one.
Many comparisons illustrate the importance of this work, in both situations of constant and changing object classes.
|
56 |
Impact of information fusion in complex decision making. Aziz, Tariq, January 2011 (has links)
In the military battlefield domain, decision making plays a very important part because safety and protection depend upon the accurate decisions made by commanders in complex situations. In military and defense applications, there is a need for technology that helps leaders take good decisions in critical situations with information overload. With the help of multi-sensor information fusion, both the amount of information and the uncertainty in that information can be reduced when making decisions for identifying and tracking targets in the military area. Information fusion refers to the process of obtaining information from different sources and fusing it to supply enhanced decision support. Decision making is at the very core of information fusion, and better decisions can be obtained by understanding how situation awareness can be enhanced. Situation awareness is about understanding the elements of the situation, i.e., the circumstances of the surrounding environment, their relations and their future impacts, for better decision making. Efficient situation awareness can be achieved with the effective use of sensors. Sensors play a very useful role in multi-sensor fusion technology, collecting data about, for instance, enemy movements across the border and finding relationships between different objects in the battlefield that help decision makers enhance situation awareness. The purpose of this thesis is to understand and analyze the critical issue of uncertainty, which results in information overload in the military battlefield domain, and the benefits of using multi-sensor information fusion technology to reduce uncertainty, by comparing the uncertainty-management methods of Bayesian and Dempster-Shafer theories to enhance decision making and situation awareness for identifying targets in the battlefield domain.
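One concrete difference between the two uncertainty-management methods compared in this record: Dempster-Shafer theory yields an interval [belief, plausibility] per hypothesis rather than the single probability a Bayesian model must commit to, so ignorance can be represented explicitly. A small sketch, with invented target labels and mass values:

```python
def belief_plausibility(m, hypothesis):
    """Belief sums mass committed to subsets of the hypothesis;
    plausibility sums mass of everything consistent with it."""
    bel = sum(v for A, v in m.items() if A <= hypothesis)
    pl = sum(v for A, v in m.items() if A & hypothesis)
    return bel, pl

# Mass function from a hypothetical sensor report; part of the
# evidence is ambiguous and part is pure ignorance.
m = {frozenset({"tank"}): 0.5,
     frozenset({"tank", "truck"}): 0.3,
     frozenset({"tank", "truck", "decoy"}): 0.2}  # ignorance share
bel, pl = belief_plausibility(m, frozenset({"tank"}))
```

Here the evidence supports "tank" with belief 0.5 but plausibility 1.0; a Bayesian model would have to pick a single number inside that interval, hiding how much of the support is actually ambiguity.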
|
57 |
Autonomous road vehicles localization using satellites, lane markings and vision / Localisation de véhicules routiers autonomes en utilisant des mesures de satellites et de caméra sur des marquages au sol. Tao, Zui, 29 February 2016 (has links)
L'estimation de la pose (position et attitude) en temps réel est une fonction clé pour les véhicules autonomes routiers. Cette thèse vise à étudier des systèmes de localisation pour ces véhicules en utilisant des capteurs automobiles à faible coût. Trois types de capteurs sont considérés : des capteurs à l'estime qui existent déjà dans les automobiles modernes, des récepteurs GNSS mono-fréquence avec antenne patch et une caméra de détection de la voie regardant vers l’avant. Les cartes très précises sont également des composants clés pour la navigation des véhicules autonomes. Dans ce travail, une carte de marquage de voies avec une précision de l’ordre du décimètre est considérée. Le problème de la localisation est étudié dans un repère de travail local Est-Nord-Haut. En effet, les sorties du système de localisation sont utilisées en temps réel comme entrées dans un planificateur de trajectoire et un contrôleur de mouvement pour faire en sorte qu’un véhicule soit capable d'évoluer au volant de façon autonome à faible vitesse avec personne à bord. Ceci permet de développer des applications de voiturier autonome aussi appelées « valet de parking ». L'utilisation d'une caméra de détection de voie rend possible l’exploitation des informations de marquage de voie stockées dans une carte géoréférencée. Un module de détection de marquage détecte la voie hôte du véhicule et fournit la distance latérale entre le marquage de voie détecté et le véhicule. La caméra est également capable d'identifier le type des marquages détectés au sol (par exemple, de type continu ou pointillé). Comme la caméra donne des mesures relatives, une étape importante consiste à relier les mesures à l'état du véhicule. Un modèle d'observation raffiné de la caméra est proposé. Il exprime les mesures métriques de la caméra en fonction du vecteur d'état du véhicule et des paramètres des marquages au sol détectés. Cependant, l'utilisation seule d'une caméra a des limites.
Par exemple, les marquages des voies peuvent être absents dans certaines parties de la zone de navigation et la caméra ne parvient pas toujours à détecter les marquages au sol, en particulier, dans les zones d’intersection. Un récepteur GNSS, qui est obligatoire pour le démarrage à froid, peut également être utilisé en continu dans le système de localisation multi-capteur du fait qu’il permet de compenser la dérive de l’estime. Les erreurs de positionnement GNSS ne peuvent pas être modélisées simplement comme des bruits blancs, en particulier avec des récepteurs mono-fréquence à faible coût travaillant de manière autonome, en raison des perturbations atmosphériques sur les signaux des satellites et les erreurs d’orbites. Un récepteur GNSS peut également être affecté par de fortes perturbations locales qui sont principalement dues aux multi-trajets. Cette thèse étudie des modèles formeurs de biais d’erreur GNSS qui sont utilisés dans le solveur de localisation en augmentant le vecteur d'état. Une variation brutale due à multi-trajet est considérée comme une valeur aberrante qui doit être rejetée par le filtre. Selon le flux d'informations entre le récepteur GNSS et les autres composants du système de localisation, les architectures de fusion de données sont communément appelées « couplage lâche » (positions et vitesses GNSS) ou « couplage serré » (pseudo-distance et Doppler sur les satellites en vue). Cette thèse étudie les deux approches. En particulier, une approche invariante selon la route est proposée pour gérer une modélisation raffinée de l'erreur GNSS dans l'approche par couplage lâche puisque la caméra ne peut améliorer la performance de localisation que dans la direction latérale de la route. / Estimating the pose (position and attitude) in real-time is a key function for road autonomous vehicles. This thesis aims at studying vehicle localization performance using low cost automotive sensors. 
Three kinds of sensors are considered: dead reckoning (DR) sensors that already exist in modern vehicles, mono-frequency GNSS (Global Navigation Satellite System) receivers with patch antennas and a front-looking lane detection camera. Highly accurate maps enhanced with road features are also key components for autonomous vehicle navigation. In this work, a lane marking map with decimeter-level accuracy is considered. The localization problem is studied in a local East-North-Up (ENU) working frame. Indeed, the localization outputs are used in real-time as inputs to a path planner and a motion generator to make a valet vehicle able to drive autonomously at low speed with nobody on-board the car. The use of a lane detection camera makes it possible to exploit lane marking information stored in the georeferenced map. A lane marking detection module detects the vehicle’s host lane and provides the lateral distance between the detected lane marking and the vehicle. The camera is also able to identify the type of the detected lane markings (e.g., solid or dashed). Since the camera gives relative measurements, an important step is to link the measurements with the vehicle’s state. A refined camera observation model is proposed. It expresses the camera metric measurements as a function of the vehicle’s state vector and the parameters of the detected lane markings. However, the use of a camera alone has some limitations. For example, lane markings can be missing in some parts of the navigation area and the camera sometimes fails to detect the lane markings, in particular at crossroads. GNSS, which is mandatory for cold start initialization, can also be used continuously in the multi-sensor localization system, as is often done, since GNSS compensates for the DR drift.
GNSS positioning errors cannot be modeled as white noise, in particular with low-cost mono-frequency receivers working in a standalone way, due to the unknown delays when the satellite signals cross the atmosphere and real-time satellite orbit errors. GNSS can also be affected by strong biases which are mainly due to the multipath effect. This thesis studies GNSS bias shaping models that are used in the localization solver by augmenting the state vector. An abrupt bias due to multipath is seen as an outlier that has to be rejected by the filter. Depending on the information flows between the GNSS receiver and the other components of the localization system, data-fusion architectures are commonly referred to as loosely coupled (GNSS fixes and velocities) or tightly coupled (raw pseudoranges and Dopplers for the satellites in view). This thesis investigates both approaches. In particular, a road-invariant approach is proposed to handle a refined modeling of the GNSS error in the loosely coupled approach, since the camera can only improve the localization performance in the lateral direction of the road. Finally, this research discusses some map-matching issues, for instance when the uncertainty domain of the vehicle state becomes large if the camera is blind. It is challenging in this case to distinguish between different lanes when the camera retrieves lane marking measurements. As many outdoor experiments have been carried out with equipped vehicles, every problem addressed in this thesis is evaluated with real data. The different studied approaches that perform the data fusion of DR, GNSS, camera and lane marking map are compared and several conclusions are drawn on the fusion architecture choice.
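The state-augmentation idea for GNSS bias shaping described above can be sketched with a tiny Kalman filter whose state is [position, bias], the bias following a first-order Gauss-Markov (exponentially correlated) model. The 1-D setup and all numeric values are invented for illustration and do not reproduce the thesis's filter:

```python
import numpy as np

dt, tau = 0.1, 100.0                 # sample period [s], bias correlation time [s]
phi = np.exp(-dt / tau)              # first-order Gauss-Markov transition

# Augmented state: [position, gnss_bias]
F = np.array([[1.0, 0.0],
              [0.0, phi]])
H = np.array([[1.0, 1.0]])           # GNSS observes position + slowly varying bias
Q = np.diag([0.01, 0.001])           # process noise (invented values)
R = np.array([[4.0]])                # GNSS measurement noise, sigma = 2 m

def kf_step(x, P, z, u=0.0):
    """One predict/update cycle; u is the dead-reckoned displacement."""
    x = F @ x + np.array([u, 0.0])
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Stationary vehicle whose GNSS fixes carry a constant 3 m bias:
x, P = np.zeros(2), np.diag([1.0, 10.0])
for _ in range(100):
    x, P = kf_step(x, P, np.array([3.0]))
```

Because dead reckoning constrains the position while GNSS observes position plus bias, the filter attributes the persistent offset mostly to the bias state, which is how the shaping model keeps slowly varying GNSS errors from corrupting the pose, while an abrupt multipath jump would instead show up as a large innovation to be rejected as an outlier.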
|
58 |
A Versatile Sensor Data Processing Framework for Resource TechnologyKaever, Peter, Oertel, Wolfgang, Renno, Axel, Seidel, Peter, Meyer, Markus, Reuter, Markus, König, Stefan 28 June 2021 (has links)
Extending experimental infrastructures with novel sensors opens up the possibility of gaining qualitatively new insights. To exploit this information fully, the entire processing chain, from detector readout to application-level analysis, must be covered. Extending an existing scientific instrument involves the structural and temporal integration of the new sensor data into the existing system. Thanks to its flexible approach, the framework presented here has the potential to integrate different sensor types into different high-performance platforms. Two different integration setups demonstrate this flexibility: one aims at increasing the sensitivity of a secondary ion mass spectrometry device, the other at providing a prototype for the analysis of recyclates. The very different hardware prerequisites and application requirements formed the basis for the development of a flexible software framework. To provide complex and efficient application building blocks, a software technology was developed that combines modular pipeline structures with sensor and output interfaces and a knowledge base with corresponding configuration and processing modules.:1. Introduction
2. Hardware Architecture and Application Background
3. Software Concept
4. Experimental Results
5. Conclusion and Outlook / Novel sensors with the ability to collect qualitatively new information offer the potential to improve experimental infrastructure and methods in the field of research technology. In order to get full access to this information, the entire chain, from detector readout and data transfer through proper data and knowledge models up to complex application functions, has to be covered. The extension of existing scientific instruments comprises the integration of diverse sensor information into existing hardware, based on the expansion of pivotal event schemes and data models. Due to its flexible approach, the proposed framework has the potential to integrate additional sensor types and offers migration capabilities to high-performance computing platforms. Two different implementation setups prove the flexibility of this approach: one extends the material-analysis capabilities of a secondary ion mass spectrometry device, the other implements a functional prototype for the online analysis of recyclates. Both setups can be regarded as two complementary parts of a highly topical and unique scientific application field. The requirements and possibilities resulting from different hardware concepts on the one hand and diverse application fields on the other are the basis for the development of a versatile software framework. In order to support complex and efficient application functions under heterogeneous and flexible technical conditions, a software technology is proposed that offers modular processing-pipeline structures with internal and external data interfaces, backed by a knowledge base with respective configuration and conclusion mechanisms.:1. Introduction
2. Hardware Architecture and Application Background
3. Software Concept
4. Experimental Results
5. Conclusion and Outlook
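The pipeline-plus-knowledge-base architecture described in this abstract can be illustrated with a minimal sketch. All class and key names here are invented for illustration; the actual framework is far richer (sensor interfaces, migration to high-performance platforms, conclusion mechanisms).

```python
from typing import Any, Callable

class Pipeline:
    """Minimal sketch of a modular processing pipeline: stages are chained
    callables, each configured from a (here trivial) knowledge base."""

    def __init__(self, knowledge_base: dict):
        self.kb = knowledge_base
        self.stages: list[Callable[[Any, dict], Any]] = []

    def add_stage(self, stage: Callable[[Any, dict], Any]) -> "Pipeline":
        self.stages.append(stage)
        return self

    def run(self, sensor_frame: Any) -> Any:
        # Pass the data through every stage in order; each stage may
        # consult the knowledge base for its configuration.
        data = sensor_frame
        for stage in self.stages:
            data = stage(data, self.kb)
        return data

# Usage: scale raw detector counts by a calibration gain from the knowledge base.
kb = {"gain": 0.5}
p = Pipeline(kb).add_stage(lambda d, kb: [x * kb["gain"] for x in d])
print(p.run([2, 4, 6]))  # -> [1.0, 2.0, 3.0]
```

Swapping stages in and out of such a chain is what allows the same framework skeleton to serve both the mass-spectrometry and the recyclate-analysis setups.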
|
59 |
Optical Sensor Uncertainties and Variable Repositioning Times in the Single and Multi-Sensor Tasking ProblemMichael James Rose (9750503) 14 December 2020 (has links)
<div>As the number of resident space objects around Earth continues to increase, the need for an optimal sensor tasking strategy, specifically with ground-based optical sensors, continues to be of great importance. This thesis focuses on the single- and multi-sensor tasking problem with realistic optical sensor modeling for the observation of objects in the geosynchronous Earth orbit regime. In this work, sensor tasking refers to assigning the specific observation times and viewing directions of a single- or multi-sensor framework to either survey for or track new or existing objects. Specifically, the sensor tasking problem here seeks to maximize the total number of geosynchronous Earth-orbiting objects observed from a catalog of existing objects, within a single- and multi-sensor optical tasking framework. This research focuses on the physical assumptions and limitations of an optical sensor, and on how these assumptions affect the single- and multi-sensor tasking scenario. First, the probability of detection of a resident space object is calculated based on its viewing geometry. This probability of detection is then compared against a system that avoids the computation by applying a classical heuristic minimum-elevation constraint to an electro-optical charge-coupled-device sensor. It is shown that in the single- and multi-sensor tasking scenario, if the probability of detection is not considered in the tasking framework, a rigid elevation constraint of around 25°-35° is recommended for tasking geosynchronous objects. Secondly, the topic of complete GEO coverage within a single night is explored. A sensor network proposed by Ackermann et al. (2018) is studied with and without probability-of-detection considerations, and with and without uncertainties in the resident space objects' states.
For the multi-sensor system, it is shown that, with the covariance model assumed in this work, the framework developed by Ackermann et al. (2018) does not meet the design requirements for the geosynchronous objects cataloged on March 19th, 2019. Finally, the concept of a variable repositioning time for the slewing of the ground-based sensors is introduced and compared to a constant repositioning-time model. A model for the variable repositioning time is derived from data retrieved from the Purdue Optical Ground Station and applied to a single-sensor scenario. Optimizers are developed using the two repositioning-time functions derived in this work. It is shown that constant repositioning times greater than the maximum variable repositioning time produce results close to the variable-repositioning solution. When the optimizers are tested, a small performance increase is observed only when the maximum repositioning time is significant.</div>
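The constant versus variable repositioning-time comparison can be sketched with a toy greedy scheduler. The slew model below (settle time plus angle over slew rate, capped at a maximum) and all numeric parameters are invented for illustration; the thesis derives its actual model from Purdue Optical Ground Station data.

```python
def constant_reposition_time(_angle_deg: float, t_const: float = 8.0) -> float:
    """Constant model: every slew costs the same, regardless of angle."""
    return t_const

def variable_reposition_time(angle_deg: float, settle: float = 2.0,
                             rate_deg_s: float = 6.0, t_max: float = 12.0) -> float:
    """Hypothetical variable model: settle time plus angle / slew rate,
    capped at the mount's maximum repositioning time."""
    return min(t_max, settle + angle_deg / rate_deg_s)

def greedy_schedule(targets, reposition, horizon=60.0, dwell=5.0):
    """Greedy tasking: always slew to the angularly nearest unobserved
    target until the observation horizon is exhausted."""
    t, pointing, observed = 0.0, 0.0, []
    remaining = list(targets)
    while remaining:
        nxt = min(remaining, key=lambda a: abs(a - pointing))
        cost = reposition(abs(nxt - pointing)) + dwell
        if t + cost > horizon:
            break
        t += cost
        pointing = nxt
        observed.append(nxt)
        remaining.remove(nxt)
    return observed
```

Running both models over the same target set shows the effect reported above: a pessimistic constant model wastes time budget on short slews, while the variable model recovers it.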
|
60 |
In-process deformation measurements of translucent high speed fibre-reinforced disc rotorsPhilipp, Katrin, Filippatos, Angelos, Koukourakis, Nektarios, Kuschmierz, Robert, Leithold, Christoph, Langkamp, Albert, Fischer, Andreas, Czarske, Jürgen 06 September 2019 (has links)
The high stiffness-to-weight ratio of glass fibre-reinforced polymers (GFRP) makes them an attractive material for rotors, e.g. in the aerospace industry. We report on recent developments towards non-contact, in-situ deformation measurements with a temporal resolution of up to 200 µs and micron-level measurement uncertainty. We determine the starting point of damage evolution inside the rotor material through radial expansion measurements. This leads to a better understanding of dynamic material behaviour regarding damage evolution and to the prediction of damage initiation and propagation. The measurements are conducted using a novel multi-sensor system consisting of four laser Doppler distance (LDD) sensors. The LDD sensor, a two-wavelength Mach-Zehnder interferometer, has already been successfully applied for dynamic deformation measurements on metallic rotors. While the translucency of the GFRP rotor material limits the applicability of most optical measurement techniques, due to speckles from both the surface and the volume of the rotor, the LDD sensor profits from speckles and is not disturbed by laser light backscattered from the rotor volume: it evaluates only signals from the rotor surface. The anisotropic glass fibre reinforcement results in a rotationally asymmetric dynamic deformation. A novel signal processing algorithm is applied to combine the single-sensor signals and obtain the shape of the investigated rotors. In conclusion, the applied multi-sensor system allows dynamic deformation measurements with high temporal resolution. First investigations of damage evolution inside GFRP are presented as an important step towards a fundamental understanding of the material behaviour and the prediction of damage initiation and propagation.
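One ingredient of combining several distance sensors around a rotor is separating common-mode radial expansion from rigid-body centre motion. The sketch below assumes an idealized geometry (four sensors at 0°, 90°, 180°, 270°, small displacements) and is only a simplified illustration, not the thesis's actual signal processing algorithm.

```python
import numpy as np

def decouple_radius_and_centre(distances, standoff):
    """Combine four opposing distance signals into radial expansion and
    centre displacement (small-displacement approximation).

    distances : (d0, d90, d180, d270) measured sensor-to-surface distances
                from sensors at 0, 90, 180, 270 degrees around the rotor
    standoff  : nominal sensor-to-rotor-centre distance
    """
    d0, d90, d180, d270 = distances
    # Common mode of all four signals: radial expansion of the rotor.
    radius = standoff - np.mean(distances)
    # Differential mode of opposing pairs: rigid-body centre shift.
    cx = (d180 - d0) / 2.0
    cy = (d270 - d90) / 2.0
    return radius, cx, cy
```

With this decoupling, an unbalance-induced whirl of the rotor centre does not masquerade as the radial expansion used to detect the onset of damage.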
|