1. Bayesian-based techniques for tracking multiple humans in an enclosed environment. ur-Rehman, Ata, January 2014.
This thesis deals with the problem of online visual tracking of multiple humans in an enclosed environment. The focus is to develop techniques to deal with the challenges of a varying number of targets, inter-target occlusions and interactions when every target gives rise to multiple measurements (pixels) in every video frame. This thesis contains three contributions to research in multi-target tracking.

Firstly, a multiple target tracking algorithm is proposed which focuses on mitigating the inter-target occlusion problem during complex interactions. This is achieved with the help of a particle filter, multiple video cues and a new interaction model. A Markov chain Monte Carlo particle filter (MCMC-PF) is used along with a new interaction model which helps in modelling the interactions of multiple targets, and hence in overcoming tracking failures due to occlusions. A new weighted Markov chain Monte Carlo (WMCMC) sampling technique is also proposed which helps to reduce the tracking error. Although effective, this technique aggregates the multiple measurements (pixels) produced by every target into features, which results in information loss.

In the second contribution, a novel variational Bayesian clustering-based multi-target tracking framework is proposed which can associate multiple measurements to every target without aggregating them into features. It copes with complex inter-target occlusions by maintaining the identity of targets during their close physical interactions and efficiently handles a time-varying number of targets. The proposed multi-target tracking framework consists of background subtraction, clustering, data association and particle filtering. A variational Bayesian clustering technique groups the extracted foreground measurements, while an improved feature-based joint probabilistic data association filter (JPDAF) is developed to associate clusters of measurements to every target. The data association information is used within the particle filter to track multiple targets, and the clustering results are further utilised to estimate the number of targets. The proposed technique improves the tracking accuracy; however, the feature-based JPDAF causes the computational complexity of the overall framework to grow exponentially with the number of targets.

In the final work, a novel data association technique for multi-target tracking is proposed which assigns multiple measurements to every target more efficiently, with reduced computational complexity. A belief propagation (BP) based cluster-to-target association method is proposed which exploits inter-cluster dependency information. Both the location and features of clusters are used to re-identify targets when they emerge from occlusions. The proposed techniques are evaluated on benchmark data sets and their performance is compared with state-of-the-art techniques using quantitative and global performance measures.
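To make the clustering-plus-association idea above concrete, here is a minimal, self-contained sketch (not code from the thesis): it softly assigns measurement clusters to targets under an assumed Gaussian likelihood and uses the resulting association probabilities to reweight one target's particles. The function names, the likelihood model and the parameter `sigma` are illustrative assumptions.

```python
import numpy as np

def soft_associate(cluster_centers, target_means, sigma=5.0):
    """Association probabilities p[c, t] between measurement clusters and targets."""
    d2 = ((cluster_centers[:, None, :] - target_means[None, :, :]) ** 2).sum(-1)
    lik = np.exp(-0.5 * d2 / sigma ** 2)                  # assumed Gaussian likelihood
    return lik / (lik.sum(axis=1, keepdims=True) + 1e-12)

def reweight_particles(particles, weights, cluster_centers, assoc, t_idx, sigma=5.0):
    """Update one target's particle weights using its softly assigned clusters."""
    lik = np.zeros(len(particles))
    for c, z in enumerate(cluster_centers):
        d2 = ((particles - z) ** 2).sum(-1)
        lik += assoc[c, t_idx] * np.exp(-0.5 * d2 / sigma ** 2)
    w = weights * (lik + 1e-12)
    return w / w.sum()

# Toy example: 3 clusters of foreground pixels, 2 targets, 100 particles for target 0.
clusters = np.array([[10.0, 12.0], [11.0, 14.0], [40.0, 41.0]])
targets = np.array([[10.5, 13.0], [39.0, 40.0]])
assoc = soft_associate(clusters, targets)
particles = targets[0] + np.random.randn(100, 2)
weights = np.full(100, 1.0 / 100)
weights = reweight_particles(particles, weights, clusters, assoc, t_idx=0)
```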
2. On Computationally Efficient Frameworks For Data Association In Multi-Target Tracking. Krishnaswamy, Sriram, January 2019.
No description available.
3. Uncertainty Quantification of Tightly Integrated LiDAR/IMU Localization Algorithms. Hassani, Ali, 01 June 2023.
Safety risk evaluation is critical in autonomous vehicle applications. This research aims to develop, implement, and validate new safety monitoring methods for navigation in Global Navigation Satellite System (GNSS)-denied environments. The methods quantify uncertainty in sensors and algorithms that exploit the complementary properties of light detection and ranging (LiDAR) and inertial measuring units (IMU). This dissertation describes the following four contributions.
First, we focus on sensor augmentation for landmark-based localization. We develop new IMU/LiDAR integration methods that guarantee a bound on the integrity risk, which is the probability that the navigation error exceeds predefined acceptability limits. IMU data improves LiDAR position and orientation (pose) prediction, and LiDAR limits the IMU error drift over time. In addition, LiDAR return-light intensity measurements improve landmark recognition. Compared to using the sensors individually, tightly-coupled IMU/LiDAR not only increases pose estimation accuracy but also reduces the risk of incorrectly associating perceived features with mapped landmarks.
Second, we consider algorithm improvements. We derive and analyze a new data association method that provides a tight bound on the risk of incorrect association for LiDAR feature-based localization. The new data association criterion uses projections of the extended Kalman filter's (EKF) innovation vector rather than more conventional innovation vector norms. This method decreases the integrity risk by improving our ability to predict the risk of incorrect association.
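For context, the following is a minimal sketch of the conventional innovation-norm gating that the new criterion improves upon; it is not the dissertation's method or code. The 2-D relative-position measurement model, the linearization H, and the chi-square gate value are assumptions.

```python
import numpy as np

def associate_landmark(z, landmarks, x_pred, P_pred, R, gate=9.21):
    """Return the index of the mapped landmark that best explains measurement z."""
    n = P_pred.shape[0]
    H = np.hstack([-np.eye(2), np.zeros((2, n - 2))])   # Jacobian of z = landmark - position
    S = H @ P_pred @ H.T + R                            # innovation covariance
    S_inv = np.linalg.inv(S)
    best, best_d2 = None, np.inf
    for i, m in enumerate(landmarks):
        nu = z - (m - x_pred[:2])                       # innovation for candidate i
        d2 = float(nu @ S_inv @ nu)                     # squared Mahalanobis norm
        if d2 < gate and d2 < best_d2:                  # 99% chi-square gate, 2 DOF
            best, best_d2 = i, d2
    return best

# Toy usage: vehicle state [x, y, heading], two mapped landmarks.
x_pred = np.array([0.0, 0.0, 0.1])
P_pred = np.diag([0.5, 0.5, 0.05])
landmarks = [np.array([5.0, 1.0]), np.array([5.5, 4.0])]
z = np.array([5.1, 0.8])                                # sensed relative position
print(associate_landmark(z, landmarks, x_pred, P_pred, R=0.2 * np.eye(2)))
```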
Third, we depart from landmark-based approaches. We develop a spherical grid-based localization method that leverages quantization theory to bound navigation uncertainty. This method is integrated with an iterative EKF to establish an analytical bound on the vehicle's pose estimation error. Unlike landmark-based localization, which requires feature extraction and data association, this method uses the entire LiDAR point cloud and is robust to extraction and association failures.
Fourth, to validate these methods, we designed and built two testbeds for indoor and outdoor experiments. The indoor testbed includes a sensor platform installed on a rover moving on a figure-eight track in a controlled lab environment. The repeated figure-eight trajectory provides empirical pose estimation error distributions that can be directly compared with analytical error bounds. The outdoor testbed required another set of navigation sensors for reference truth trajectory generation. Sensors were mounted on a car to validate our algorithms in a realistic automotive driving environment. / Doctor of Philosophy / Advances in computing and sensing technologies have enabled large-scale demonstrations of autonomous vehicle operations, including pilot programs for self-driving cars on public roads. However, a key question that has yet to be answered is how safe these vehicles really are. "Autonomously" driving millions of miles (with a trained safety driver taking over control to prevent potential collisions) is insufficient to prove fatality rates matching human performance, i.e., lower than 1 per 100,000,000 miles driven.
The safety of an autonomous vehicle depends on the safety of its individual subsystems, components, connected infrastructure, etc. In this research, we evaluate the safety of the navigation subsystem which uses sensor information to determine the vehicle's location and orientation. We focus on light detection and ranging (LiDAR) and inertial measuring units (IMU). A LiDAR provides a point cloud representation of the environment by measuring distances to surrounding objects using beams of infrared light (laser beams) sent at regular angular intervals. An IMU measures the acceleration and angular velocity of the vehicle.
We assume that a map of the environment is available.
In the first part of this research, we extract recognizable objects from the LiDAR point cloud and match them with those in the map: this process helps estimate the vehicle's position and orientation.
We identify the process' limitations that include incorrectly matching sensed and mapped landmarks.
We develop new methods to quantify their impacts on localization errors, which we then reduce by incorporating additional IMU data.
In the second part of this dissertation, we design and evaluate a new approach specifically aimed at provably increasing confidence in landmark matching, thereby improving vehicle navigation safety.
Third, instead of isolating individual landmarks, we use the LiDAR point cloud as a whole and match it directly with the map. The challenge with this approach was in efficiently and accurately quantifying the confidence that can be placed in the vehicle's navigation solution.
We tested these navigation methods using experimental data collected in a controlled lab environment and in a real-world scenario.
4. Lane Tracking Using Dependent Extended Target Models. Akbari, Behzad, January 2021.
Detection of multiple lane markings (lane-lines) on road surfaces is an essential aspect of autonomous vehicles. Although several approaches have been proposed to detect lanes, detecting multiple lane-lines consistently, particularly across a stream of frames and under varying lighting conditions, is still a challenging problem. Since road markings are designed to be smooth and parallel, sampled lane-line features tend to be spatially and temporally correlated within and between frames. In this thesis, we develop novel methods to model these spatial and temporal dependencies in the form of a target tracking problem. Instead of resorting to the conventional approach of processing each frame to detect lanes only in the space domain, we treat the overall problem as a Multiple Extended Target Tracking (METT) problem.

In the first step, we modelled lane-lines as multiple "independent" extended targets and developed a spline mathematical model for the shape of the targets. We showed that extending the estimation across the time domain improves the estimation results. We identify a set of control points for each spline, which are tracked over time. To overcome the clutter problem, we developed an integrated probabilistic data association filter (IPDAF) as our basis and formulated a METT algorithm to track multiple splines corresponding to each lane-line.

In the second part of our work, we investigated the coupling between multiple extended targets. We considered the non-parametric case and modelled target dependency using the Multi-Output Gaussian Process. We showed that considering the dependency between extended targets improves the shape estimation results. We exploit this dependency by proposing a novel recursive approach called the Multi-Output Spatio-Temporal Gaussian Process Kalman Filter (MO-STGP-KF). We used the MO-STGP-KF to estimate and track multiple dependent lane markings that are possibly degraded or obscured by traffic. Our method was tested for tracking multiple lane-lines but can be employed to track multiple dependent rigid-shape targets by using the measurement model in the radial space.

In the third section, we developed a Spatio-Temporal Joint Probabilistic Data Association Filter (ST-JPDAF). In multiple extended target tracking problems with clutter, extended targets sometimes share measurements: for example, in lane-line detection, when two lane markings pass or merge together. In single-point target tracking, this problem can be solved using the well-known Joint Probabilistic Data Association (JPDA) filter. In the single-point case, even when measurements are dependent, we can stack them in the coupled form of JPDA. In this last chapter, we extended JPDA for tracking multiple dependent extended targets using an approach called ST-JPDAF. We managed the dependency of measurements in space (inside a frame) and time (between frames) using different kernel functions, which can be learned from training data. This extension can be used to track the shape and dynamics of dependent extended targets within clutter when targets share measurements.

The performance of the proposed methods in all three chapters is quantified on real data scenarios and compared against well-known model-based, semi-supervised, and fully-supervised methods. The proposed methods offer very promising results. / Thesis / Doctor of Philosophy (PhD)
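As an illustration of the spatio-temporal Gaussian-process idea (not the thesis implementation), the sketch below smooths noisy lane-line samples from several frames with a separable RBF kernel over arc-length s and time t; the kernel length-scales, noise level and synthetic data are assumed example values.

```python
import numpy as np

def st_kernel(X1, X2, ls_s=5.0, ls_t=2.0, var=1.0):
    """Separable RBF kernel on inputs [s, t] (arc-length, frame time)."""
    ds = (X1[:, None, 0] - X2[None, :, 0]) ** 2
    dt = (X1[:, None, 1] - X2[None, :, 1]) ** 2
    return var * np.exp(-0.5 * ds / ls_s ** 2) * np.exp(-0.5 * dt / ls_t ** 2)

def gp_predict(X_train, y_train, X_test, noise=0.25):
    """Standard GP posterior mean for the lateral lane offset y(s, t)."""
    K = st_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_star = st_kernel(X_test, X_train)
    return K_star @ np.linalg.solve(K, y_train)

# Example: smooth noisy lane detections from three frames, then query frame t = 3.
rng = np.random.default_rng(0)
s = np.tile(np.linspace(0, 30, 10), 3)
t = np.repeat([1.0, 2.0, 3.0], 10)
y = 0.02 * s ** 2 + 0.1 * rng.standard_normal(s.size)   # curved lane plus noise
X = np.column_stack([s, t])
X_query = np.column_stack([np.linspace(0, 30, 50), np.full(50, 3.0)])
y_hat = gp_predict(X, y, X_query)
```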
5. Recursive-RANSAC: A Novel Algorithm for Tracking Multiple Targets in Clutter. Niedfeldt, Peter C., 02 July 2014.
Multiple target tracking (MTT) is the process of identifying the number of targets present in a surveillance region and the state estimate, or track, of each target. MTT remains a challenging problem due to the NP-hard data association step, where unlabeled measurements are identified as either a measurement of an existing target, a new target, or a spurious measurement called clutter. Existing techniques suffer from at least one of the following drawbacks: divergence in clutter, underlying assumptions on the number of targets, high computational complexity, time-consuming implementation, poor performance at low detection rates, and/or poor track continuity. Our goal is to develop an efficient MTT algorithm that is simple yet effective and that maintains track continuity, enabling persistent tracking of an unknown number of targets.

A field related to tracking is regression analysis, where the parameters of static signals are estimated from a batch or a sequence of data. The random sample consensus (RANSAC) algorithm was developed to mitigate the effects of spurious measurements and has since found wide application within the computer vision community due to its robustness and efficiency. The main concept of RANSAC is to form numerous simple hypotheses from a batch of data and identify the hypothesis with the most supporting measurements. Unfortunately, RANSAC is not designed to track multiple targets using sequential measurements.

To this end, we have developed the recursive-RANSAC (R-RANSAC) algorithm, which tracks multiple signals in clutter without requiring prior knowledge of the number of existing signals. The basic premise of the R-RANSAC algorithm is to store a set of RANSAC hypotheses between time steps. New measurements are used to either update existing hypotheses or generate new hypotheses using RANSAC. Storing multiple hypotheses enables R-RANSAC to track multiple targets, and good tracks are identified when a sufficient number of measurements support a hypothesis track. The complexity of R-RANSAC is shown to be quadratic in the number of measurements and stored tracks, and under moderate assumptions R-RANSAC converges in mean to the true states. We apply R-RANSAC to a variety of simulation, camera, and radar tracking examples.
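A toy sketch of the hypothesis-management loop described above, for a scalar nearly constant signal; it is not the authors' implementation, and the gate, window length, inlier threshold and smoothing gain are assumed values.

```python
import random

class RRansac1D:
    def __init__(self, gate=1.0, window=50, min_inliers=15):
        self.gate, self.window, self.min_inliers = gate, window, min_inliers
        self.recent = []            # sliding window of recent measurements
        self.hypotheses = []        # list of dicts: {'state': x, 'support': n}

    def step(self, z):
        self.recent = (self.recent + [z])[-self.window:]
        # 1) Try to update an existing hypothesis with this measurement.
        for h in self.hypotheses:
            if abs(z - h['state']) < self.gate:
                h['state'] += 0.2 * (z - h['state'])   # simple recursive update
                h['support'] += 1
                break
        else:
            # 2) Otherwise, spawn a new hypothesis from a one-point RANSAC fit.
            seed = random.choice(self.recent)
            inliers = [m for m in self.recent if abs(m - seed) < self.gate]
            self.hypotheses.append({'state': sum(inliers) / len(inliers),
                                    'support': len(inliers)})
        # 3) "Good tracks" are hypotheses with enough supporting measurements.
        return [h for h in self.hypotheses if h['support'] >= self.min_inliers]

# Usage: two targets near 0 and 7, plus uniform clutter.
tracker = RRansac1D()
for _ in range(200):
    z = random.choice([random.gauss(0.0, 0.2), random.gauss(7.0, 0.2),
                       random.uniform(-20, 20)])
    tracks = tracker.step(z)
```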
6. Vers le suivi d'objets dans un cadre évidentiel : représentation, filtrage dynamique et association / Toward object tracking using an evidential framework: representation, dynamic filtering and data association. Rekik, Wafa, 23 March 2015.
Intelligent systems are more and more present in our society, such as systems for the surveillance and protection of civilian or military sites. Their purpose is to detect intruders and report an alarm or a threat to a distant operator. In our work, we are interested in such systems, with the aim of better handling the quality of the information presented to the operator in terms of reliability and precision. We focus on the image modality and have to handle detections that are both uncertain and imprecise, so as to present reliable objects to the operator.

To specify our problem, we consider the following constraints. The first is that the system is modular; one subpart of the system is the detection of fragments potentially corresponding to objects. Our second constraint is then to use only information derived from the geometry of these fragmentary detections: spatial location in the image and size of the detections. A threat is then considered all the more important as the detections are large and temporally persistent.

The chosen formal framework is the theory of belief functions, which allows modelling data that are both imprecise and uncertain. The contributions of this thesis concern the representation of objects in terms of imprecise and uncertain location, and the filtering of objects.

A pertinent representation of information is a key point for estimation problems and decision making. A representation is good when simple and efficient criteria for the resolution of sub-problems can be derived from it. The proposed representation allowed us to define, in a simple and rigorous way, an association criterion between new detections (fragments) and objects under construction. This association is a key step in many problems involving unlabelled data, which extends our contribution beyond the considered application.

Data filtering is used in many methods and algorithms to make results more robust, relying on the expected redundancy of the data as opposed to the inconsistency of noise. We formulated this problem as the dynamic estimation of a frame of discernment containing the 'true hypotheses'. This frame is estimated dynamically as new data (or observations) are taken into account, which allows us to detect two main types of errors: the duplication of some hypotheses (objects in our application) and the presence of false alarms (due to noise or false detections in our case).

Finally, we show the possibility of coupling our object-construction and object-filtering sub-functions with a tracking process that uses higher-level information, such as classical image-processing tracking algorithms. Keywords: belief functions theory, data association, filtering.
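To illustrate the belief-function machinery this thesis builds on, here is a small sketch of Dempster's rule of combination on a two-object frame of discernment; the masses and the frame are invented examples, not data or code from the thesis.

```python
def combine(m1, m2):
    """Dempster's rule: conjunctive combination with conflict normalization."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two sources assign mass to "detection belongs to object A", "B", or "A or B".
A, B, AB = frozenset('A'), frozenset('B'), frozenset('AB')
m1 = {A: 0.6, AB: 0.4}
m2 = {A: 0.3, B: 0.3, AB: 0.4}
print(combine(m1, m2))   # combined masses after normalizing out the conflict
```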
7. Applications of Cost Function-Based Particle Filters for Maneuvering Target Tracking. Wang, Sung-chieh, 23 August 2007.
For target tracking environments with highly non-linear models and non-Gaussian noise, the tracking performance of the particle filter is better than that of the extended Kalman filter; in addition, the design of the particle filter is simpler, so it is well suited to realistic environments. However, the particle filter depends on the probability model of the noise. If the knowledge of the noise is incorrect, the tracking performance of the particle filter degrades severely. To tackle this problem, cost function-based particle filters have been studied. Though they suffer a minor performance degradation, cost function-based particle filters do not need probabilistic assumptions about the noise, which makes them more robust in realistic environments. They therefore make maneuvering multiple-target tracking suitable for a wide range of environments, because they do not depend on a noise model. The difficulty lies in the link between the estimator and data association: the likelihood function is generally obtained from the data association algorithm, while in the cost function-based particle filter cost functions are used to move the particles and update the corresponding weights without probabilistic assumptions about the noise. This thesis focuses on the combination of data association and cost function-based particle filters, in order to make the multiple-target tracking algorithm more robust in noisy environments.
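A minimal sketch of the cost-function-based weight update described above (not the thesis code): particle weights come from an exponentiated cost rather than an explicit noise likelihood. The absolute-error cost, the scale lam and the process-noise level are assumptions.

```python
import numpy as np

def cost_pf_step(particles, weights, z, f, h,
                 cost=np.abs, lam=1.0, process_std=0.5,
                 rng=np.random.default_rng()):
    """One predict / weight / resample cycle driven by a cost function."""
    # Propagate particles through the (possibly non-linear) motion model.
    particles = f(particles) + process_std * rng.standard_normal(particles.shape)
    # Score each particle with a cost instead of a probability density.
    w = weights * np.exp(-lam * cost(z - h(particles)))
    w = w / w.sum()
    # Systematic resampling keeps the particle set from degenerating.
    u = (rng.random() + np.arange(len(w))) / len(w)
    idx = np.clip(np.searchsorted(np.cumsum(w), u), 0, len(w) - 1)
    return particles[idx], np.full(len(w), 1.0 / len(w))

# Toy usage: scalar state, identity motion and measurement models.
particles = np.random.default_rng(1).normal(0.0, 2.0, size=500)
weights = np.full(500, 1.0 / 500)
for z in [0.3, 0.5, 0.4, 0.6]:                       # noisy observations near 0.5
    particles, weights = cost_pf_step(particles, weights, z,
                                      f=lambda x: x, h=lambda x: x)
```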
8. Ultra WideBand Impulse Radio in Multiple Access Wireless Communications. Lai, Weei-Shehng, 25 July 2004.
Ultra-wideband impulse radio (UWB-IR) is an attractive technology for multi-user, high-data-rate transmission. In this thesis, we use ultra-wideband (UWB) signals modulated with the time-hopping spread-spectrum technique in a wireless multiple-access environment and discuss the influence of multiple-access interference. The discussion has two parts. First, we analyze the effect of multiple-access interference on the conventional correlation receiver and discuss the influence of the time-hopping code in different multiple-access structures. Second, because the conventional correlation receiver degrades user-detection performance and system capacity in multiple-access channels, we apply probabilistic data association (PDA) multi-user detection to eliminate multiple-access interference. We verify the system performance through computer simulations and compare PDA multi-user detection with other multi-user detectors and the conventional correlation receiver. Finally, the simulation results show that the PDA multi-user detector improves performance when the system is fully loaded.
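For illustration only, a short sketch of the time-hopping, pulse-position-modulated UWB signal structure discussed above; the frame time, chip time, PPM shift, sampling rate and Gaussian monocycle are assumed example values rather than parameters from the thesis.

```python
import numpy as np

def gaussian_monocycle(t, tau=0.5e-9):
    """A simple UWB pulse shape (assumed; other monocycles are common)."""
    x = t / tau
    return x * np.exp(-x ** 2)

def th_ppm_signal(bits, th_code, Tf=100e-9, Tc=10e-9, delta=1e-9, fs=20e9):
    """One pulse per frame; offset from the hopping code, extra shift from the bit."""
    t = np.arange(0, Tf * len(bits), 1.0 / fs)
    s = np.zeros_like(t)
    for j, b in enumerate(bits):
        t0 = j * Tf + th_code[j % len(th_code)] * Tc + b * delta
        s += gaussian_monocycle(t - t0)
    return t, s

t, s = th_ppm_signal(bits=[0, 1, 1, 0], th_code=[3, 1, 6, 2])
```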
9. Video content analysis for automated detection and tracking of humans in CCTV surveillance applications. Tawiah, Thomas Andzi-Quainoo, January 2010.
This thesis addresses the problems of achieving a high detection rate with a low false alarm rate for human detection and tracking in video sequences, performance scalability, and improving response time. The underlying challenges are the effects of scene complexity, human-to-human interactions, scale changes, and background-human interactions. A two-stage processing solution, namely human detection followed by human tracking, with two novel pattern classifiers is presented. Scale-independent human detection is achieved by processing in the wavelet domain using square wavelet features. These features, used to characterise human silhouettes at different scales, are similar to the rectangular features used in [Viola 2001]. At the detection stage, two detectors are combined to improve the detection rate. The first detector is based on the shape outline of humans extracted from the scene using a reduced-complexity outline extraction algorithm; a shape mismatch measure is used to differentiate between the human and background classes. The second detector uses rectangular features as primitives for silhouette description in the wavelet domain. The marginal distribution of features collocated at a particular position on a candidate human (a patch of the image) is used to statistically describe the silhouette, and similarity measures computed between a candidate human and the model histograms of the human and non-human classes are used to discriminate between the two classes. At the tracking stage, a tracker based on the joint probabilistic data association filter (JPDAF) for data association and motion correspondence is presented. Track clustering is used to reduce hypothesis enumeration complexity. To improve response time as frame dimensions, scene complexity, and the number of channels increase, a scalable algorithmic architecture and an operating-accuracy prediction technique are presented. A scheduling strategy for improving response time and throughput through parallel processing is also presented.
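A brief sketch of evaluating a rectangular (Haar-like) feature with an integral image, the type of silhouette primitive mentioned above; the specific two-rectangle layout and window sizes are assumed examples, not the thesis' feature set.

```python
import numpy as np

def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using the integral image."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0: total -= ii[r0 - 1, c1 - 1]
    if c0 > 0: total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

def two_rect_feature(ii, r, c, h, w):
    """Left half minus right half: responds to vertical intensity edges."""
    left = box_sum(ii, r, c, r + h, c + w // 2)
    right = box_sum(ii, r, c + w // 2, r + h, c + w)
    return left - right

img = np.random.rand(64, 32)          # stand-in for a candidate human patch
ii = integral_image(img)
print(two_rect_feature(ii, r=8, c=4, h=16, w=8))
```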
10. Bayesian Data Association for Temporal Scene Understanding. Brau Avila, Ernesto, January 2013.
Understanding the content of a video sequence is not a particularly difficult problem for humans. We can easily identify objects, such as people, and track their position and pose within the 3D world. A computer system that could understand the world through videos would be extremely beneficial in applications such as surveillance, robotics, and biology. Despite significant advances in areas like tracking and, more recently, 3D static scene understanding, such a vision system does not yet exist. In this work, I present progress on this problem, restricted to videos of objects that move smoothly and are relatively easily detected, such as people. Our goal is to identify all the moving objects in the scene and track their physical state (e.g., their 3D position or pose) in the world throughout the video. We develop a Bayesian generative model of a temporal scene, where we separately model data association, the 3D scene and imaging system, and the likelihood function. Under this model, the video data is the result of capturing the scene with the imaging system and noisily detecting video features. This formulation is very general and can be used to model a wide variety of scenarios, including videos of people walking and time-lapse images of pollen tubes growing in vitro. Importantly, we model the scene in world coordinates and units, as opposed to pixels, allowing us to reason about the world in a natural way, e.g., explaining occlusion and perspective distortion. We use Gaussian processes to model motion, and propose that this is a general and effective way to characterize smooth, but otherwise arbitrary, trajectories. We perform inference using MCMC sampling, where we fit our model of the temporal scene to data extracted from the videos. We address the problem of variable dimensionality by estimating data association and integrating out all scene variables. Our experiments show our approach is competitive, producing results which are comparable to state-of-the-art methods.
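As an illustrative aside (not the author's code), the sketch below scores a candidate track under a Gaussian-process motion prior via the log marginal likelihood, one simple way a smooth-trajectory model can inform data association; the RBF kernel and noise variance are assumptions.

```python
import numpy as np

def rbf(t1, t2, ls=1.0, var=4.0):
    return var * np.exp(-0.5 * (t1[:, None] - t2[None, :]) ** 2 / ls ** 2)

def gp_track_score(times, positions, noise=0.1):
    """Higher score means the detections look like one smooth trajectory."""
    K = rbf(times, times) + noise * np.eye(len(times))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, positions))
    return float(-0.5 * positions @ alpha
                 - np.log(np.diag(L)).sum()
                 - 0.5 * len(times) * np.log(2 * np.pi))

t = np.linspace(0, 5, 20)
smooth = np.sin(t)                               # plausible single-person trajectory
jumpy = np.sin(t) + 3.0 * (np.arange(20) % 2)    # two interleaved tracks mixed together
print(gp_track_score(t, smooth), gp_track_score(t, jumpy))
```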