1 |
Ground Target Tracking with Multi-Lane Constraint
Chen, Yangsheng, 15 May 2009
Knowledge of the lane in which a target is located is of particular interest in on-road surveillance and target tracking systems. We formulate the problem and propose two approaches for on-road target estimation with lane tracking. The first approach is lane identification based on a Hidden Markov Model (HMM) framework. Two identifiers are developed according to different optimality goals: optimality over the whole lane sequence, and optimality of the current lane of the target given the whole observation sequence. The second approach is on-road target tracking with lane estimation, for which we propose a 2D road representation that additionally allows the lateral motion of the target to be modelled. For fusion of the radar- and image-sensor-based measurement data we develop three IMM-based estimators that use different fusion schemes: centralized, distributed, and sequential. Simulation results show that the two proposed methods provide new capabilities and achieve improved estimation accuracy for on-road target tracking.
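As a rough illustration of the two optimality criteria mentioned above (a minimal sketch with an assumed three-lane transition model and Gaussian lateral-offset likelihoods, not the models developed in the thesis): the Viterbi recursion returns the jointly optimal lane sequence, while forward-backward smoothing returns, for the current time, the lane that is most probable given the whole observation sequence.

```python
import numpy as np

def viterbi(pi, A, B):
    """Jointly optimal lane sequence for observation likelihoods B[t, i]."""
    T, N = B.shape
    delta = np.zeros((T, N)); psi = np.zeros((T, N), dtype=int)
    delta[0] = pi * B[0]
    for t in range(1, T):
        trans = delta[t - 1][:, None] * A          # trans[i, j]: best path into lane j via lane i
        psi[t] = trans.argmax(axis=0)
        delta[t] = trans.max(axis=0) * B[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):                 # backtrack the stored predecessors
        path[t] = psi[t + 1, path[t + 1]]
    return path

def smoothed_marginals(pi, A, B):
    """P(lane_t | all observations) via scaled forward-backward smoothing."""
    T, N = B.shape
    alpha = np.zeros((T, N)); beta = np.ones((T, N))
    alpha[0] = pi * B[0]; alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[t]
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

# Illustrative 3-lane example (all numbers assumed): sticky lane transitions,
# Gaussian likelihoods of the measured lateral offset around each lane centre.
lanes = np.array([-3.5, 0.0, 3.5])                 # lane centre offsets in metres
A = np.array([[0.90, 0.10, 0.00], [0.05, 0.90, 0.05], [0.00, 0.10, 0.90]])
pi = np.full(3, 1.0 / 3.0)
z = np.array([-3.2, -2.8, -1.0, 0.3, 0.1])         # noisy lateral measurements
B = np.exp(-0.5 * ((z[:, None] - lanes[None, :]) / 1.0) ** 2)
print("MAP lane sequence:", viterbi(pi, A, B))
print("current-lane posterior:", smoothed_marginals(pi, A, B)[-1].round(3))
```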
|
2 |
Context- and Physiology-aware Machine Learning for Upper-Limb Myocontrol
Patel, Gauravkumar K., 03 May 2018
No description available.
|
3 |
Multi-Sensor Data Fusion for Vehicular Navigation Applications
Iqbal, Umar, 08 August 2012
The global positioning system (GPS) is widely used in land vehicles but suffers accuracy deterioration in urban canyons, mostly due to satellite signal blockage and multipath. To obtain accurate, reliable, and continuous positioning solutions, GPS is usually augmented with inertial sensors, including accelerometers and gyroscopes, to monitor both the translational and rotational motion of a moving vehicle. Owing to space and cost requirements, micro-electro-mechanical-system (MEMS) inertial sensors, which are typically inexpensive, are presently installed in land vehicles for various purposes and can be integrated with GPS for navigation. Kalman filtering (KF) is usually used to perform this integration. However, the complex error characteristics of these MEMS-based sensors lead to divergence of the positioning solution. Furthermore, the residual correlated errors in the GPS pseudoranges are usually ignored, reducing the overall GPS positioning accuracy. This thesis targets enhancing the performance of integrated MEMS-based INS/GPS navigation systems by exploring new non-linear modelling approaches that can deal with the non-linear and correlated parts of the INS and GPS errors. The research approach relies on a reduced inertial sensor system (RISS) incorporating a single-axis gyroscope, the vehicle odometer, and accelerometers, integrated with GPS in one of two schemes: loosely coupled, where GPS position and velocity are used, or tightly coupled, where GPS pseudoranges and pseudorange rates are utilized. A new method based on parallel cascade identification (PCI) is developed to enhance the performance of the KF by modelling azimuth errors for the loosely coupled RISS/GPS integration scheme. PCI is also utilized to model the residual correlated errors in the GPS pseudoranges, and a method is developed to augment a PCI-based model of these errors to a tightly coupled KF. To take full advantage of the PCI-based models, the thesis explores the particle filter (PF) as a non-linear integration scheme capable of accommodating arbitrary sensor characteristics, motion dynamics, and noise distributions. The performance of the proposed methods is examined through several road-test experiments in land vehicles involving different types of inertial sensors and GPS receivers. / Thesis (Ph.D., Electrical & Computer Engineering) -- Queen's University, 2012-07-31
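The loosely coupled scheme can be pictured with a toy 2D example (a sketch under assumed noise values and a simplified RISS-style mechanization, not the thesis's filter): odometer speed and a single-axis gyro propagate an east/north/azimuth state, and a GPS position fix corrects it through a standard Kalman update.

```python
import numpy as np

dt = 1.0
x = np.array([0.0, 0.0, 0.0])        # state: east (m), north (m), azimuth (rad)
P = np.diag([10.0, 10.0, 0.1])       # initial uncertainty (assumed)
Q = np.diag([0.5, 0.5, 0.01])        # process noise (assumed)
R = np.diag([5.0, 5.0])              # GPS position noise, m^2 (assumed)
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])      # loosely coupled: GPS contributes position only here

def predict(x, P, speed, gyro_rate):
    """RISS-style mechanization: odometer speed and a single-axis gyro drive the state."""
    az = x[2] + gyro_rate * dt
    x_pred = np.array([x[0] + speed * dt * np.sin(az),
                       x[1] + speed * dt * np.cos(az),
                       az])
    F = np.array([[1.0, 0.0,  speed * dt * np.cos(az)],
                  [0.0, 1.0, -speed * dt * np.sin(az)],
                  [0.0, 0.0, 1.0]])   # Jacobian of the motion model
    return x_pred, F @ P @ F.T + Q

def gps_update(x, P, z):
    """Standard Kalman correction with a GPS position fix z = [east, north]."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(3) - K @ H) @ P

# One predict/update cycle with made-up sensor readings.
x, P = predict(x, P, speed=10.0, gyro_rate=0.02)
x, P = gps_update(x, P, z=np.array([0.3, 9.8]))
print(x.round(3))
```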
|
4 |
Managing trust and reliability for indoor tracking systems
Rybarczyk, Ryan Thomas, January 2016
Indiana University-Purdue University Indianapolis (IUPUI) / Indoor tracking is a challenging problem. The level of accepted error is on a much smaller scale than that of its outdoor counterpart. While the global positioning system has become omnipresent and a widely accepted outdoor tracking system, it has limitations in indoor environments due to loss or degradation of signal. Many attempts have been made to address this challenge, but none has yet proven to be the de facto standard. In this thesis, we introduce the concept of opportunistic tracking, in which tracking takes place with whatever sensing infrastructure is present, static or mobile, within a given indoor environment. In this approach, many of the challenges (e.g., high cost, infeasible infrastructure deployment) that prohibit the use of existing systems in typical application domains (e.g., asset tracking, emergency rescue) are eliminated. Challenges still exist when it comes to providing an accurate positional estimate of an entity's location in an indoor environment, namely sensor classification, sensor selection, and multi-sensor data fusion. We propose an enhanced tracking framework in which the infusion of the QoS-based selection criteria of trust and reliability improves the overall accuracy of the tracking estimate. This improvement is predicated on the introduction of learning techniques to classify sensors that are dynamically discovered as part of the opportunistic tracking approach. This classification allows sensors to be properly identified and evaluated, through performance evaluation, based on their specific behavioral characteristics, and this in-depth evaluation of sensors provides the basis for improving the sensor selection process. A side effect of obtaining this improved accuracy is cost, in the form of system runtime. This thesis addresses the tradeoff between accuracy and cost through an optimization function that analyzes the tradeoff to find the optimal subset of sensors for tracking an object as it moves indoors. We demonstrate that through this improved sensor classification, selection, data fusion, and tradeoff optimization we provide an improvement, in terms of accuracy, over other existing indoor tracking systems.
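The accuracy-versus-cost tradeoff can be sketched as a small subset-selection problem (all sensor names, variances, trust values, and costs below are assumed for illustration; the thesis's classifiers and objective are richer): fused accuracy is approximated by trust-discounted inverse-variance fusion, and an exhaustive search scores every subset against a weighted accuracy/runtime objective.

```python
from itertools import combinations

# Hypothetical discovered sensors: (name, position variance in m^2, trust in [0,1], runtime cost).
sensors = [("wifi",   4.0, 0.7, 1.0),
           ("ble",    2.5, 0.9, 1.5),
           ("camera", 1.0, 0.6, 4.0),
           ("imu",    3.0, 0.8, 0.5)]

def fused_error(subset):
    """Approximate fused positional error: inverse-variance fusion discounted by trust."""
    info = sum(trust / var for _, var, trust, _ in subset)
    return float("inf") if info == 0 else 1.0 / info

def runtime(subset):
    """Total runtime cost of querying and fusing the selected sensors."""
    return sum(cost for *_rest, cost in subset)

def best_subset(sensors, lam=0.2):
    """Exhaustively score every non-empty subset; low error and low runtime are both rewarded."""
    best, best_score = None, float("inf")
    for k in range(1, len(sensors) + 1):
        for subset in combinations(sensors, k):
            score = fused_error(subset) + lam * runtime(subset)
            if score < best_score:
                best, best_score = subset, score
    return best, best_score

subset, score = best_subset(sensors)
print([name for name, *_ in subset], round(score, 3))
```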
|
5 |
Arranjos de sensores orientados à missão para a geração automática de mapas temáticos em VANTs / Mission oriented sensor arrays to generate thematic maps in UAVs
Figueira, Nina Machado, 03 February 2016
The use of unmanned aerial vehicles (UAVs) has become increasingly common, particularly in civilian applications. In the military scenario, the use of UAVs has focused on specific missions that can be divided into two broad categories: remote sensing and transport of military materiel. This work concentrates on the remote sensing category. It addresses the definition of a model and a reference architecture for the development of smart sensors oriented to specific missions, whose main objective is the generation of thematic maps. The work investigates processes and mechanisms that enable the automatic generation of this category of maps, and in this context the concept of a MOSA (Mission Oriented Sensor Array) is proposed and modeled. As case studies of the presented concepts, two systems for the automatic mapping of sound sources generated on the ground are proposed, one for the civilian case and one for the military case. These sources may originate from the noise generated by large animals (including humans), by internal-combustion-engine vehicles, or by artillery activity (including hunters). The MOSAs modeled for this application are based on the integration of data from a thermal imaging sensor and a network of acoustic sensors on the ground. The integration of the positioning information provided by these sensors into a single cartographic base is one of the key aspects addressed in this work. The main contributions are the proposed MOSA systems, including concepts, models, and architecture, and the reference implementation represented by the system for automatic mapping of sound sources.
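As an illustration of the kind of integration a MOSA performs (a simplified sketch with assumed geometry and noise figures, not the architecture proposed in the thesis), the snippet below georeferences a thermal detection from an assumed nadir camera model and fuses it with the ground acoustic network's position estimate by covariance weighting, placing both on one cartographic base.

```python
import numpy as np

def georeference_thermal(uav_pos, yaw, px_offset, metres_per_px=0.2):
    """Project a thermal-image detection (pixel offset from the image centre) to ground
    coordinates, assuming a nadir-pointing camera at the given UAV position and heading."""
    dx, dy = np.array(px_offset) * metres_per_px
    c, s = np.cos(yaw), np.sin(yaw)
    offset = np.array([c * dx - s * dy, s * dx + c * dy])   # rotate into the map frame
    return np.array(uav_pos[:2]) + offset

def fuse(z1, P1, z2, P2):
    """Covariance-weighted fusion of two position estimates of the same sound source."""
    W1, W2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(W1 + W2)
    return P @ (W1 @ z1 + W2 @ z2), P

# Assumed example: thermal detection vs. acoustic-network localization of one source.
thermal_xy = georeference_thermal(uav_pos=(500.0, 800.0, 120.0), yaw=0.3, px_offset=(40, -25))
P_thermal  = np.diag([4.0, 4.0])        # thermal georeferencing uncertainty, m^2 (assumed)
acoustic_xy = np.array([506.5, 795.0])  # estimate from the ground acoustic sensor network
P_acoustic  = np.diag([9.0, 9.0])       # acoustic localization uncertainty, m^2 (assumed)

fused_xy, fused_P = fuse(thermal_xy, P_thermal, acoustic_xy, P_acoustic)
print(fused_xy.round(2), np.diag(fused_P).round(2))
```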
|
6 |
Sensordatafusion av IR- och radarbilder / Sensor data fusion of IR and radar images
Schultz, Johan, January 2004
This report describes and evaluates a number of algorithms for multi-sensor data fusion of radar and IR/TV data at the raw-data level, that is, before attribute or object extraction. Attribute extraction can cause information to be lost that could otherwise have improved the fusion; if the fusion is performed at the raw-data level, more information is available, which could lead to better attribute extraction in a later step. Two approaches are presented. The first method projects the radar image into the IR view and vice versa, and the fusion is then performed on the pairs of images sharing the same dimensions. The second method fuses the two original images into a volume spanned by the three dimensions represented in the original images. This method is also extended to exploit stereo vision. The results show that exploiting stereo vision can be worthwhile, since the extra information facilitates the fusion and gives a more general solution to the problem.
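A minimal sketch of the first approach (projection followed by fusion), assuming the radar-to-IR mapping is a known homography and that a per-pixel weighted sum is an adequate raw-data combination; the report evaluates more elaborate fusion rules.

```python
import numpy as np

def warp_to_ir(radar_img, H, ir_shape):
    """Resample the radar image into the IR view using an assumed homography H
    (IR pixel -> radar pixel), with nearest-neighbour lookup for brevity."""
    out = np.zeros(ir_shape, dtype=radar_img.dtype)
    rows, cols = np.indices(ir_shape)
    pts = np.stack([cols.ravel(), rows.ravel(), np.ones(rows.size)])
    src = H @ pts
    src = (src[:2] / src[2]).round().astype(int)          # dehomogenize and round
    u, v = src
    ok = (u >= 0) & (u < radar_img.shape[1]) & (v >= 0) & (v < radar_img.shape[0])
    out[rows.ravel()[ok], cols.ravel()[ok]] = radar_img[v[ok], u[ok]]
    return out

def fuse_raw(ir_img, radar_in_ir_view, w_ir=0.6):
    """Raw-data-level fusion: weighted per-pixel combination before any attribute extraction."""
    return w_ir * ir_img + (1.0 - w_ir) * radar_in_ir_view

# Toy example with random images and an assumed scaling homography.
ir = np.random.rand(120, 160)
radar = np.random.rand(100, 140)
H = np.array([[140 / 160, 0, 0], [0, 100 / 120, 0], [0, 0, 1.0]])  # assumed IR->radar mapping
fused = fuse_raw(ir, warp_to_ir(radar, H, ir.shape))
print(fused.shape, float(fused.mean()))
```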
|
7 |
Analýza a zefektivnění distribuovaných systémů / Analysis and Improvement of Distributed Systems
Kenyeres, Martin, January 2018
Significant progress in the evolution of computer systems and their interconnection over the past 70 years has made it possible to replace the frequently used centralized architectures with highly distributed ones, formed by independent entities that fulfil specific functionalities as a single unit whose internal structure is hidden from the user. This has resulted in intense scientific interest in distributed algorithms and their frequent implementation in real systems. In particular, distributed algorithms for multi-sensor data fusion, which ensure an enhanced QoS of the executed applications, are widely used. This doctoral thesis addresses the optimization and analysis of distributed systems, namely distributed consensus-based algorithms for estimating aggregate functions (primarily, the mean). The first section covers the theoretical background of distributed systems, their evolution, their architectures, and a comparison with centralized systems (i.e., their advantages and disadvantages). The second chapter deals with multi-sensor data fusion, its applications, the classification of distributed estimation techniques, their mathematical modelling, and frequently cited algorithms for distributed averaging (e.g., the Push-Sum protocol, Metropolis-Hastings weights, Best Constant weights). The practical part focuses on mechanisms for optimizing distributed systems, the proposal of novel algorithms and complements for distributed systems, their analysis, and comparative studies in terms of, for example, convergence rate, estimation precision, robustness, and applicability to real systems.
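The distributed averaging the thesis analyzes can be illustrated with the Metropolis-Hastings weights named above (a minimal sketch on an assumed five-node topology): each node repeatedly mixes its value with its neighbours' values, and all local values converge to the network-wide mean.

```python
import numpy as np

# Assumed undirected topology: adjacency list of a 5-node network.
neighbours = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
x = np.array([10.0, 4.0, 7.0, 1.0, 3.0])    # initial local measurements

def metropolis_hastings_weights(neighbours, n):
    """W[i, j] = 1 / (1 + max(d_i, d_j)) for neighbouring nodes; the self-weight
    absorbs the remainder so that each row sums to one (doubly stochastic by symmetry)."""
    W = np.zeros((n, n))
    deg = {i: len(neighbours[i]) for i in neighbours}
    for i in neighbours:
        for j in neighbours[i]:
            W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

W = metropolis_hastings_weights(neighbours, len(x))
for _ in range(50):                          # synchronous consensus iterations
    x = W @ x
print(x.round(4), "true mean:", np.mean([10.0, 4.0, 7.0, 1.0, 3.0]))
```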
|
8 |
Recherche linéaire et fusion de données par ajustement de faisceaux : application à la localisation par vision / Line search and data fusion by bundle adjustment: application to vision-based localization
Michot, Julien, 09 December 2010
The work presented in this manuscript is in the field of computer vision and tackles the problem of real-time vision-based localization and 3D reconstruction. In this context, the trajectory of a camera and the 3D structure of the filmed scene are initially estimated by linear algorithms and then optimized by a non-linear algorithm, bundle adjustment. The thesis first presents a new line-search technique dedicated to the non-linear minimization algorithms used in Structure-from-Motion. The proposed technique is not iterative and can be quickly integrated into traditional bundle adjustment frameworks. This technique, called Global Algebraic Line Search (G-ALS), and its two-dimensional variant (Two-way ALS), accelerate the convergence of the bundle adjustment algorithm. Approximating the reprojection error by an algebraic distance enables the analytical calculation of an effective step amplitude (or of two amplitudes for the Two-way ALS variant) by solving a degree-3 (G-ALS) or degree-5 (Two-way ALS) polynomial. Our experiments, conducted on simulated and real data, show that this amplitude, which is optimal for the algebraic distance, is also efficient for the Euclidean distance and reduces the convergence time of the minimizations. One difficulty of real-time vision-based localization algorithms (monocular SLAM) is that the estimated trajectory is often affected by drifts in absolute orientation, position, and scale. Since these algorithms are incremental, errors and approximations accumulate along the trajectory and cause a drift in the global localization. In addition, a vision-based localization system can always be dazzled, or used under conditions that temporarily prevent the localization from being computed. To solve these problems, we propose to use an additional sensor measuring the displacement of the camera; the type of sensor varies with the targeted application (an odometer for vehicle localization, a lightweight inertial measurement unit or an inertial navigation system for localizing a person). Our approach integrates this complementary information directly into an extended bundle adjustment by adding a weighted constraint term to the cost function. We evaluate three methods (based on machine learning or regularization) that dynamically select the weight associated with the constraint, and show that they can be used in a real-time multi-sensor SLAM with different types of constraint, on the orientation or on the norm of the camera displacement; the method is applicable to any other least-squares term. Experiments conducted on real video sequences show that this constrained bundle adjustment technique reduces the drifts observed with classical vision algorithms and improves the global accuracy of the localization system.
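To convey the flavour of the algebraic line search (a schematic analogue only: the thesis derives the polynomial analytically from an algebraic approximation of the reprojection error, whereas this toy sketch recovers the exact quartic along the search direction by sampling), the step amplitude is obtained by differentiating the quartic and solving the resulting cubic in closed form, with no iterative backtracking.

```python
import numpy as np

def residual(x):
    """Toy residual, quadratic in the parameters (assumed for illustration), so the cost
    along any search direction is a quartic polynomial in the step amplitude."""
    return np.array([x[0] ** 2 + x[1] - 1.0, x[0] + x[1] ** 2 - 2.0])

def cost(x):
    r = residual(x)
    return r @ r

def analytic_line_search(x, d):
    """Fit the exact quartic c(l) = cost(x + l d) from five samples, differentiate it,
    solve the resulting cubic with np.roots, and keep the best real root."""
    ls = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
    quartic = np.polyfit(ls, [cost(x + l * d) for l in ls], deg=4)
    roots = np.roots(np.polyder(quartic))                 # cubic -> up to three roots
    real = roots[np.abs(roots.imag) < 1e-12].real
    return real[np.argmin([cost(x + l * d) for l in real])]

x = np.array([2.0, 2.0])
for _ in range(5):                                        # Gauss-Newton-like iterations
    J = np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])      # Jacobian of the toy residual
    d = np.linalg.solve(J.T @ J, -J.T @ residual(x))      # descent direction
    x = x + analytic_line_search(x, d) * d
print(x.round(4), "cost:", round(cost(x), 8))
```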
|