1.
Bias Estimation and Sensor Registration for Target Tracking / Taghavi, Ehsan, January 2016
The main idea of this thesis is to define and formulate the role of bias estimation
in multitarget-multisensor scenarios as a general framework for various measurement
types. After a brief introduction to the work done in this thesis, the three main
contributions that develop these novel ideas are explained in detail.
Starting with radar measurements, a new bias estimation method is proposed that can
estimate offset and scaling biases in a large network of radars. Further, the
Cramér–Rao lower bound is calculated for the bias estimation algorithm to show
the theoretical accuracy that the proposed method can achieve. In practice,
communication loss is also part of distributed systems and sometimes cannot
be avoided. A novel technique based on tracklets is therefore developed to accompany
the proposed bias estimation method and compensate for communication loss at
different rates.
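As a rough illustration of the kind of bias model involved (a generic sketch with assumed offset and scaling terms, not necessarily the exact parametrization derived in the thesis), a range-bearing measurement reported by radar s for a target at true range r and bearing theta can be written as

r^{s} = (1 + \epsilon^{s})\, r + b_{r}^{s} + w_{r}, \qquad \theta^{s} = \theta + b_{\theta}^{s} + w_{\theta},

where \epsilon^{s} is the scaling bias, b_{r}^{s} and b_{\theta}^{s} are the offset biases of radar s, and w_{r}, w_{\theta} are zero-mean measurement noises. Stacking such measurements from all radars and targets into one estimation problem is what allows the biases to be estimated jointly and a Cramér–Rao lower bound to be evaluated for them.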
Next, bearing-only measurements are considered. Biases in this type of measurement
can be difficult to tackle because the measurement noise and systematic biases
are normally larger than in radar measurements. In addition, target observability
is sensitive to sensor-target alignment and can vary over time. In a multitarget-
multisensor bearing-only scenario with biases, a new model is proposed in which
the biases are decoupled from the bearing-only measurements. These decoupled bias
measurements are then used in a maximum likelihood batch estimator to estimate the
biases, which are then used for compensation.
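For concreteness, the following is a minimal sketch of a batch maximum likelihood bias estimator for two bearing-only sensors, which under Gaussian noise reduces to nonlinear least squares. The sensor-target geometry, the additive constant-bias model, and the use of known reference target positions are assumptions made only for this illustration; the decoupled bias model developed in the thesis is more general.

import numpy as np
from scipy.optimize import least_squares

# Illustrative geometry and bias values (assumptions for this sketch).
sensors = np.array([[0.0, 0.0], [1000.0, 0.0]])        # sensor positions (m)
targets = np.array([[400.0, 900.0], [800.0, 1500.0]])  # reference target positions (m)
true_bias = np.array([0.02, -0.03])                    # per-sensor bearing biases (rad)
sigma = 0.005                                          # bearing noise std (rad)

def predicted_bearings(bias):
    # bearing from each sensor to each target, plus that sensor's constant bias
    d = targets[None, :, :] - sensors[:, None, :]
    return np.arctan2(d[..., 1], d[..., 0]) + bias[:, None]

rng = np.random.default_rng(0)
z = predicted_bearings(true_bias) + sigma * rng.standard_normal((len(sensors), len(targets)))

def residuals(bias):
    # under Gaussian noise, maximizing the batch likelihood = weighted least squares
    return ((z - predicted_bearings(bias)) / sigma).ravel()

b_hat = least_squares(residuals, x0=np.zeros(2)).x
print("estimated bearing biases (rad):", b_hat)

In the real bearing-only problem the target positions are not known a priori, which makes bias estimation considerably harder than this sketch suggests.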
The thesis is then expanded by applying bias estimation algorithms to video
sensor measurements. Video sensors are increasingly used in distributed systems
because of their economic benefits; however, geo-location and geo-registration of
the targets must be considered in such systems. In the last part of the thesis, a
new approach is proposed for modeling and estimating the biases in a two-video-sensor
platform, which can be used as a standalone algorithm. The proposed algorithm can
estimate the gimbal elevation and azimuth biases effectively.
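As a rough sketch of how such angular biases enter geo-location (a generic gimbal pointing model, with an assumed parametrization that may differ from the one used in the thesis), azimuth and elevation biases b_{az} and b_{el} perturb the line-of-sight unit vector

u = [ \cos(el + b_{el})\cos(az + b_{az}),\; \cos(el + b_{el})\sin(az + b_{az}),\; -\sin(el + b_{el}) ]^{T},

and the target's geo-location is obtained by intersecting this ray, expressed in the navigation frame via the platform pose, with the terrain. Small angular biases therefore translate into ground-position errors that grow with range, which is why estimating and removing them matters for geo-registration.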
It is worth noting that in all parts of the thesis, simulation results for various
scenarios with different parameter settings are presented to support the ideas, the
accuracy, the mathematical models and the proposed algorithms. These results show that
the bias estimation methods developed in this thesis are viable and can handle larger
biases and measurement errors than previously proposed methods. Finally, the thesis
concludes with suggestions for future research in three main directions. / Thesis / Doctor of Philosophy (PhD)
2.
Detailed Simulation of Signal-Level Sensor Data Using Monte Carlo Path Tracing and Photon Mapping / Schonborn, David, January 2018
Simulated sensor data from active and passive sensors has numerous applications in target detection and tracking. Simulated data is particularly useful in the performance evaluation of target tracking algorithms, where the ground truth of a scenario must be known; for real sensor data the ground truth is generally unavailable, so simulated data must be used.
This thesis discusses existing methods for the simulation of data from active sensors and proposes a method that builds on established techniques from the field of computer graphics. An extension to existing methods is proposed to accommodate the simulation of active sensor data, for which timing and frequency information is required in addition to intensity. Results from an existing method of active sensor data simulation are compared to the results of the proposed method. Additionally, a cloud computing framework is proposed, and its scalability is evaluated to address the large computational load of such a simulation. / Thesis / Master of Applied Science (MASc)
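To make the kind of extension described in this record concrete, here is a minimal sketch of carrying timing information along Monte Carlo ray paths when simulating an active sensor such as a lidar. The single-plane scene, the narrow-cone sampling, and the crude Lambertian return model are assumptions chosen only to keep the example short; they are not the scene model or renderer used in the thesis.

import numpy as np

C = 3.0e8                      # speed of light (m/s)
N_RAYS = 10_000
PLANE_Z = 0.0                  # flat Lambertian ground plane at z = 0 (assumed scene)
ALBEDO = 0.6
SENSOR = np.array([0.0, 0.0, 100.0])   # sensor 100 m above the plane, looking down
BIN_SIZE = 1.0                 # range-bin width (m)
bins = np.zeros(200)           # range profile: returned intensity vs. range

rng = np.random.default_rng(1)
for _ in range(N_RAYS):
    # sample a ray direction in a narrow cone around nadir
    theta = np.deg2rad(2.0) * np.sqrt(rng.random())
    phi = 2.0 * np.pi * rng.random()
    d = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  -np.cos(theta)])
    # intersect the ray with the ground plane
    t = (PLANE_Z - SENSOR[2]) / d[2]
    hit = SENSOR + t * d
    one_way = np.linalg.norm(hit - SENSOR)
    # crude Lambertian return toward the sensor with inverse-square spreading
    power = ALBEDO * (-d[2]) / (np.pi * one_way ** 2)
    # key point: bin by range (i.e. time of flight), not just total intensity
    b = int(one_way / BIN_SIZE)
    if b < len(bins):
        bins[b] += power / N_RAYS

peak = int(np.argmax(bins))
print(f"peak return at ~{peak * BIN_SIZE:.0f} m, "
      f"round-trip delay ~{2.0 * peak * BIN_SIZE / C * 1e6:.3f} us")

The only change relative to a passive intensity render is that each path also contributes to a time (range) bin; frequency information could be carried along each path in the same way.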
3.
[en] COLLABORATIVE FACE TRACKING: A FRAMEWORK FOR THE LONG-TERM FACE TRACKING / [pt] RASTREAMENTO DE FACES COLABORATIVO: UMA METODOLOGIA PARA O RASTREAMENTO DE FACES AO LONGO PRAZO / Victor Hugo Ayma Quirita, 22 March 2021
[en] Visual tracking is fundamental in several computer vision applications.
In particular, face tracking is challenging because of the variations in facial
appearance, due to age, ethnicity, gender, facial hair, and cosmetics, as well
as appearance variations in long video sequences caused by facial deformations,
lighting conditions, abrupt movements, and occlusions. Generally,
trackers are robust to some of these factors but do not achieve satisfactory
results when dealing with combined occurrences. An alternative is to combine
the results of different trackers to achieve more robust outcomes. This
work fits into this context and proposes a new method for scalable, robust
and accurate tracker fusion that can combine trackers regardless of their models.
The method also integrates face detectors into the fusion model to increase
tracking accuracy. The proposed method was implemented for validation purposes
and was tested in different configurations that combined up to five different
trackers and one face detector. In tests on four video sequences presenting
diverse imaging conditions, the method outperformed each of the trackers used
individually.
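As a deliberately simple illustration of the interface a model-agnostic fusion layer needs (not the collaborative fusion method developed in this work, which is more elaborate and also integrates a face detector), each tracker can report a bounding box and a confidence, and a fused box can be formed from them:

import numpy as np

def fuse_boxes(boxes, scores):
    # boxes: (n_trackers, 4) array of (x, y, w, h); scores: per-tracker confidences
    boxes = np.asarray(boxes, dtype=float)
    wts = np.asarray(scores, dtype=float)
    wts = wts / wts.sum()                  # normalize confidences into weights
    return tuple(boxes.T @ wts)            # confidence-weighted mean of each coordinate

# three hypothetical trackers reporting slightly different boxes
boxes  = [(100, 80, 64, 64), (104, 78, 60, 66), (96, 84, 70, 62)]
scores = [0.9, 0.7, 0.4]
print(fuse_boxes(boxes, scores))

The point of such a layer is that the individual trackers remain black boxes: only their output boxes and confidences are consumed, so trackers with different internal models can be added or removed freely.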