1 |
[en] SCENE TRACKING WITH AUTOMATIC CAMERA CALIBRATION / [pt] ACOMPANHAMENTO DE CENAS COM CALIBRAÇÃO AUTOMÁTICA DE CÂMERAS. Flavio Szenberg, 01 June 2005.
[en] In the television broadcasting of sports events, it has become very common to insert synthetic elements into the images in real time, such as ads, marks on the field, etc. Usually, this insertion is done using special cameras, previously calibrated and equipped with devices that record their movement and parameter changes. With such information, inserting new elements into the scene with the proper projection is a simple task.

In the present work, we introduce an algorithm to retrieve, in real time and using no additional information, the position and parameters of the camera in a sequence of images containing the visualization of previously known models. To this end, the method exploits the existence, in these images, of straight-line segments that compose the visualization of the model and whose positions are known in the three-dimensional world. In the case of a soccer match, for example, the model is composed of the set of field lines determined by the rules that define their geometry and dimensions.

First, methods are developed to extract long straight-line segments from the first image. Then an image of the model is located within the set of such segments by means of an interpretation tree. Given this recognition, the segments that compose the visualization of the model are readjusted, yielding interest points that are passed to a procedure able to find the camera responsible for the model's visualization. From the second image of the sequence onward, only part of the algorithm is used, taking frame-to-frame coherence into account in order to improve performance and make real-time processing possible.

Among the several applications that can be employed to demonstrate the performance and validity of the proposed algorithm, one captures images with a camera to show the algorithm working on line. The use of image capture makes it possible to test the algorithm on numerous cases, including different models and environments.
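
The abstract describes the extraction of long straight-line segments as the first stage but gives no implementation here. As a rough, minimal sketch of that stage, the following uses OpenCV's probabilistic Hough transform as a stand-in for the thesis's own extraction method; all threshold values are illustrative assumptions, not values from the work.

```python
# Minimal sketch: extract long straight-line segments from the first frame,
# as in the first stage described above. OpenCV's probabilistic Hough
# transform stands in for the thesis's own extraction method; the
# thresholds below are illustrative assumptions.
import cv2
import numpy as np

def extract_long_segments(frame_bgr, min_length=120):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                      # edge map
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=80,
                               minLineLength=min_length,  # keep long segments only
                               maxLineGap=10)
    return [] if segments is None else segments.reshape(-1, 4)

# Usage: segments = extract_long_segments(cv2.imread("frame0.png"))
# Each row is (x1, y1, x2, y2) in pixel coordinates.
```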
|
2 |
[en] MULTIPLE CAMERA CALIBRATION BASED ON INVARIANT PATTERN / [pt] CALIBRAÇÃO DE MÚLTIPLAS CÂMERAS BASEADO EM UM PADRÃO INVARIANTE. Manuel Eduardo Loaiza Fernandez, 11 January 2010.
[en] The calibration of multiple cameras is an important step in the installation of optical tracking systems: the accuracy of a tracking system is directly related to the quality of the calibration process. Several calibration methods have been proposed in the literature in conjunction with the use of synthetic artifacts called calibration patterns. These patterns, of known shape and size, allow the acquisition of reference points used to compute the camera parameters. To minimize errors, these points must be acquired over the whole tracking area, and easy identification of the reference points makes the acquisition process efficient; the quantity and quality of the geometric relations among the pattern's features directly influence the precision of the resulting calibration parameters. In this context, this thesis proposes a new multiple-camera calibration method that is efficient and yields results as precise as or more precise than the methods currently available in the literature. It also proposes a new type of calibration pattern, based on perspective-invariant and other useful geometric properties, that makes the capture and recognition of calibration points more robust and efficient and yields more precise tracking. The thesis revisits the multiple-camera calibration process and structures it as a framework in which the main proposals in the literature can be compared with the proposed method; this framework also supports a flexible implementation that allows the numerical reproduction and evaluation of different proposals, demonstrating the benefits of the proposed method. Finally, numerical results are presented from which conclusions are drawn, together with suggestions for further work.
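
The invariant behind the pattern is not specified in this abstract. The classic perspective invariant for collinear points is the cross-ratio, and the sketch below (our assumption, for illustration only) shows how it survives an arbitrary homography, which is what makes such a pattern easy to identify in any view.

```python
# Minimal sketch of the classic perspective invariant for collinear points:
# the cross-ratio. Four collinear markers keep the same cross-ratio under
# any perspective projection, so they can identify a pattern in the image.
# That this particular invariant is the one used by the pattern above is
# our assumption for illustration.
import numpy as np

def cross_ratio(p1, p2, p3, p4):
    """Cross-ratio of four collinear 2D points (order matters)."""
    d = lambda a, b: np.linalg.norm(np.asarray(a, float) - np.asarray(b, float))
    return (d(p1, p3) * d(p2, p4)) / (d(p1, p4) * d(p2, p3))

# The value is preserved (up to rounding) under an arbitrary homography:
H = np.array([[1.1, 0.2, 5.0], [0.1, 0.9, -3.0], [1e-3, 2e-3, 1.0]])
pts = [np.array([x, 0.0, 1.0]) for x in (0.0, 1.0, 3.0, 6.0)]
proj = [(H @ p)[:2] / (H @ p)[2] for p in pts]
print(cross_ratio(*[p[:2] for p in pts]), cross_ratio(*proj))  # nearly equal
```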
|
3 |
[en] CAMERA CALIBRATION AND POSITIONING USING PHOTOGRAPHS AND MODELS OF BUILDINGS / [pt] CALIBRAÇÃO E POSICIONAMENTO DE CÂMERA UTILIZANDO FOTOS E MODELOS DE EDIFICAÇÕES. Pablo Carneiro Elias, 11 November 2009.
[en] Camera reconstruction is one of the fundamental problems of computer vision. Software systems in this field use mathematical camera models, or virtual cameras, for example, to interpret and reconstruct the three-dimensional structure of a real scene from digital photos and videos, or to produce synthetic images with a realistic appearance. Camera reconstruction techniques from computer vision are applied together with virtual reality techniques to give rise to a new class of applications, called augmented reality applications, which use virtual cameras to combine real and synthetic images in a single digital picture. Among the many uses of such applications, this work is particularly interested in augmented visits to buildings. In these cases, photos of buildings, typically old structures or ruins, are augmented with virtual models inserted into the digital photos, making it possible to see how these buildings looked in their original form. In this context, this work proposes an efficient semi-automatic method to perform such reconstruction and to register virtual cameras from real photos and computational models of buildings, allowing photo and model to be compared by direct superposition and providing a new way to navigate in three dimensions among the various registered photos. The method requires user interaction, but it is designed to be simple and productive.
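
The registration step is not detailed in this abstract. One standard way to register a virtual camera from user-marked correspondences between photo pixels and model vertices is perspective-n-point estimation, sketched below with OpenCV; the point values and intrinsics are placeholder assumptions, not data from the work.

```python
# Minimal sketch: register a virtual camera from user-marked 2D-3D
# correspondences between a photo and a building model, using standard
# PnP estimation. This is a generic stand-in for the thesis's method;
# the point values and intrinsics below are placeholder assumptions.
import cv2
import numpy as np

object_pts = np.array([[0, 0, 0], [10, 0, 0], [10, 0, 8], [0, 0, 8],
                       [0, 6, 0], [10, 6, 0]], dtype=np.float64)  # model vertices (m)
image_pts = np.array([[320, 400], [620, 410], [610, 150], [330, 160],
                      [250, 420], [700, 430]], dtype=np.float64)  # clicked pixels

f, cx, cy = 800.0, 512.0, 384.0                  # assumed pinhole intrinsics
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, distCoeffs=None)
R, _ = cv2.Rodrigues(rvec)                       # rotation matrix of the camera
print("camera position in model coordinates:", (-R.T @ tvec).ravel())
```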
|
4 |
Color characterization of a new laser printing system / Caractérisation des couleurs d'un nouveau système d'impression laser. Juan Manuel Martinez Garcia, 16 September 2016.
Recent progress in nanotechnology has made it possible to color glass plates coated with silver-containing titanium dioxide by laser irradiation. The colored samples display very different colors when viewed in reflection or in transmission, in specular or off-specular directions, and with or without polarizing filters. This new laser printing technology, which we call PICSLUP (Photo-Induced Colored Silver LUster Printing system), enables the production of gonioapparent color images.

The goal of this study is to perform a multi-geometry photometric and color characterization of this complex system. The task poses technical challenges because the system is still under development, with low availability of the printing material, and because of the photometric properties of the prints: high translucency, high specularity, and strong goniochromaticity. To overcome these constraints, our first approach was color characterization by microscope imaging. The data set consisted of an exhaustive number of micrometric color patches, printed by varying the laser printing parameters: exposure time, laser wavelength, laser power, and laser focusing distance. To achieve accurate color measurements on samples produced with the PICSLUP system, we developed a color calibration method tailored to highly specular materials, whose accuracy compares well with previous studies on camera color calibration in the literature. From the colors obtained, we could estimate the color gamut in the 0°:0° specular reflection geometry and study the influence of the different printing parameters as well as of polarization.

Although the microscope measurements in the 0°:0° specular geometry were very useful for studying the colors produced by the PICSLUP technology, they were not sufficient to fully characterize the system, since the samples exhibit very different colors depending on the respective positions of the viewer and the light source. With this in mind, we assembled a geometry-adjustable hyperspectral imaging system, which allowed us to characterize a representative subset of the colors the system can produce. The samples were measured from both the recto and verso faces, in the 0°:0° transmission, 15°:15° specular reflection, and 45°:0° off-specular reflection illumination/observation geometries. From these measurements, the color gamuts of the system were estimated in the different geometries. The volumes delimited by the colors obtained were concave and contained many sparse regions with very few samples. To obtain more continuous, dense, and convex color gamut volumes, we successfully tested the generation of new colors by juxtaposing printed lines of different primaries with halftoning techniques.

To avoid physically characterizing every color that halftoning can produce with the numerous available primaries, we also tested and fitted existing halftone prediction models and obtained satisfactory accuracy. Halftoning not only increased the number of colors the system can produce in the different geometries, but also increased the number of distinct primaries obtained when the set of colors produced by the same printed patch is considered across multiple geometries. Finally, based on the properties demonstrated by the samples produced by the PICSLUP system, we explored imaging and security features using colors obtained from our characterization, and we propose further potential applications for this new goniochromatic laser printing technology.
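
The calibration method itself is not given in the abstract. A common baseline for camera color calibration, against which such methods are usually compared, is a least-squares polynomial mapping from camera RGB to a device-independent space such as XYZ; the sketch below illustrates that baseline under that assumption, with placeholder data.

```python
# Minimal sketch of a common camera color calibration baseline: fit a
# least-squares polynomial mapping from captured RGB to reference XYZ
# using a set of measured patches. The thesis's own method is tailored
# to highly specular materials; this generic regression is only an
# illustrative stand-in, and the arrays below are placeholders.
import numpy as np

def poly_features(rgb):
    r, g, b = rgb.T
    return np.stack([r, g, b, r*g, r*b, g*b, r*r, g*g, b*b,
                     np.ones_like(r)], axis=1)

def fit_rgb_to_xyz(rgb_patches, xyz_patches):
    """Least-squares fit; rgb_patches (N,3) camera values, xyz_patches (N,3)."""
    A = poly_features(rgb_patches)
    M, *_ = np.linalg.lstsq(A, xyz_patches, rcond=None)
    return M                                  # (10, 3) mapping matrix

def apply_calibration(M, rgb):
    return poly_features(np.atleast_2d(rgb)) @ M

# Usage with placeholder data: 24 patches measured by camera and
# spectrophotometer respectively.
rgb = np.random.rand(24, 3); xyz = np.random.rand(24, 3)
M = fit_rgb_to_xyz(rgb, xyz)
print(apply_calibration(M, rgb[:2]))
```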
|
5 |
[pt] CALIBRAÇÃO DE CÂMERA USANDO PROJEÇÃO FRONTAL-PARALELA E COLINEARIDADE DOS PONTOS DE CONTROLE / [en] CAMERA CALIBRATION USING FRONTO PARALLEL PROJECTION AND COLLINEARITY OF CONTROL POINTS. Sasha Nicolas da Rocha Pinheiro, 17 November 2016.
[en] Essential for any computer vision or augmented reality application, camera calibration is the process of obtaining the intrinsic and extrinsic parameters of a camera, such as the focal length, the principal point, and the values that quantify lens distortion. Currently, the most widely used calibration method takes images of a planar pattern in different perspectives, extracts control points from them to set up a system of linear equations whose solution represents the camera parameters, and then optimizes those parameters based on the 2D reprojection error. In this work, the ring calibration pattern was chosen because it offers higher accuracy in the detection of control points. By applying techniques such as fronto-parallel transformation, iterative refinement of the control points, and adaptive segmentation of ellipses, our approach improves the results of the calibration process. Furthermore, we propose extending the optimization model by redefining the objective function to consider not only the 2D reprojection error but also the 2D collinearity error.
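
The extended objective is described above only in words. A minimal sketch of such a combined cost is given below, with cv2.projectPoints for the reprojection term and a point-to-fitted-line distance for the collinearity term; the grouping of control points into pattern rows and the weight w are our assumptions, not the thesis's formulation.

```python
# Minimal sketch of the extended objective described above: 2D reprojection
# error plus a 2D collinearity error for control points that lie on the
# same pattern line. The relative weight and the row grouping are our
# assumptions; the thesis defines its own formulation.
import cv2
import numpy as np

def reprojection_error(obj_pts, img_pts, K, dist, rvec, tvec):
    proj, _ = cv2.projectPoints(obj_pts, rvec, tvec, K, dist)
    return np.linalg.norm(proj.reshape(-1, 2) - img_pts, axis=1).sum()

def collinearity_error(img_pts, rows):
    """Sum of distances from detected points to the best-fit line of their row."""
    err = 0.0
    for idx in rows:                      # rows: lists of indices on one pattern line
        pts = img_pts[idx]
        c = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - c)
        n = vt[-1]                        # normal of the fitted 2D line
        err += np.abs((pts - c) @ n).sum()
    return err

def objective(obj_pts, img_pts, rows, K, dist, rvec, tvec, w=1.0):
    return (reprojection_error(obj_pts, img_pts, K, dist, rvec, tvec)
            + w * collinearity_error(img_pts, rows))
```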
|
6 |
[pt] IDENTIFICAÇÃO E MAPEAMENTO DAS PROPRIEDADES DAS ONDAS ATRAVÉS DE SENSOR REMOTO DE VÍDEO / [en] IDENTIFYING AND MAPPING WAVES PROPERTIES USING REMOTE SENSING VIDEO. Lauro Henriko Garcia Alves de Souza, 26 April 2021.
[en] Evaluating sea conditions in the surf zone with in situ instruments is very challenging. This environment is exposed to breaking waves and to the presence of bathers: wave breaking dissipates a great deal of energy, which can damage the instruments and possibly cause collisions between an instrument and bathers. A solution that measures sea conditions by remote sensing therefore offers great advantages. This work presents a traditional computer vision method, since there is no public dataset of wave images with which to train neural networks. We use low-cost conventional network cameras, already widely installed at the main surfing spots in Brazil and around the world, making our method more accessible to everyone. With it, we can extract wave properties such as distance, frequency, direction, world position, path, speed, interval between sets, and wave-face height, and provide a quantitative analysis of sea conditions. These data should serve Oceanography, Coastal Engineering, water safety, and the new Olympic sport: surfing.
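
The abstract does not say how frequency is extracted. A common approach in camera-based wave sensing, sketched here under that assumption, is to take the dominant peak of the Fourier spectrum of an image location's intensity over time; the frame rate and single-pixel probe are placeholders.

```python
# Minimal sketch: estimate the dominant wave frequency from the intensity
# time series of one image location, a common trick in camera-based wave
# sensing. That this is how the method above measures frequency is our
# assumption; fps and the probe pixel are placeholders.
import numpy as np

def dominant_frequency(intensity_series, fps):
    """Return the strongest frequency (Hz) of a detrended pixel time series."""
    x = np.asarray(intensity_series, float)
    x = x - x.mean()                          # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1] # skip the zero-frequency bin

# Usage: a synthetic 0.1 Hz swell sampled at 2 fps for 10 minutes.
t = np.arange(0, 600, 0.5)
series = np.sin(2 * np.pi * 0.1 * t) + 0.2 * np.random.randn(t.size)
print(dominant_frequency(series, fps=2.0))    # ~0.1 Hz
```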
|
7 |
Structureless Camera Motion Estimation of Unordered Omnidirectional Images. Mark Sastuba, 08 August 2022.
This work provides a novel camera motion estimation pipeline for large collections of unordered omnidirectional images. In order to keep the pipeline as general and flexible as possible, cameras are modelled as unit spheres, making it possible to incorporate any central camera type. For each camera, an unprojection lookup is generated from the intrinsics, called a P2S-map (Pixel-to-Sphere map), which maps pixels to their corresponding positions on the unit sphere. The camera geometry thus becomes independent of the underlying projection model. The pipeline also generates P2S-maps from world map projections with fewer distortion effects, as known from cartography. Using P2S-maps from camera calibration and from world map projection makes it possible to convert omnidirectional camera images to an appropriate world map projection, so that standard feature extraction and matching algorithms can be applied for data association.

The proposed estimation pipeline combines the flexibility of SfM (Structure from Motion), which handles unordered image collections, with the efficiency of PGO (Pose Graph Optimization), which is used as the back-end in graph-based Visual SLAM (Simultaneous Localization and Mapping) approaches to optimize camera poses from large image sequences. SfM uses BA (Bundle Adjustment) to jointly optimize camera poses (motion) and 3D feature locations (structure), which becomes computationally expensive for large-scale scenarios. PGO, on the contrary, solves for camera poses (motion) from measured transformations between cameras, keeping the optimization manageable. The proposed estimation algorithm combines both worlds: it obtains up-to-scale transformations between image pairs using two-view constraints, which are jointly scaled using trifocal constraints. A pose graph is generated from the scaled two-view transformations and solved by PGO to obtain the camera motion efficiently, even for large image collections. The results can be used as initial pose estimates for further 3D reconstruction, e.g. to build a sparse structure from feature correspondences in an SfM or SLAM framework, with further refinement via BA.
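
Since the P2S-map is central to the pipeline just described, a minimal sketch of the idea may help: precompute, from the intrinsics, a per-pixel lookup of unit-sphere directions. A simple pinhole model is used here as one admissible central camera; the intrinsic values are placeholder assumptions, not those of the thesis.

```python
# Minimal sketch of a P2S-map as described above: a per-pixel lookup from
# image coordinates to directions on the unit sphere, precomputed from the
# intrinsics so that later stages are independent of the projection model.
# A pinhole model serves here as one admissible central camera; the
# intrinsics are placeholder assumptions.
import numpy as np

def pinhole_p2s_map(width, height, f, cx, cy):
    """Return an (H, W, 3) array of unit-sphere directions, one per pixel."""
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    rays = np.stack([(u - cx) / f, (v - cy) / f, np.ones_like(u, float)], axis=-1)
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True)

p2s = pinhole_p2s_map(640, 480, f=400.0, cx=320.0, cy=240.0)
print(p2s[240, 320])   # principal ray, approximately (0, 0, 1)
```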
The pipeline also incorporates fixed extrinsic constraints from multi-camera setups as well as depth information provided by RGBD sensors. Since the entire camera motion estimation pipeline does not need to generate a sparse 3D structure of the captured environment, it is called SCME (Structureless Camera Motion Estimation).

1 Introduction
1.1 Motivation
1.1.1 Increasing Interest of Image-Based 3D Reconstruction
1.1.2 Underground Environments as Challenging Scenario
1.1.3 Improved Mobile Camera Systems for Full Omnidirectional Imaging
1.2 Issues
1.2.1 Directional versus Omnidirectional Image Acquisition
1.2.2 Structure from Motion versus Visual Simultaneous Localization and Mapping
1.3 Contribution
1.4 Structure of this Work
2 Related Work
2.1 Visual Simultaneous Localization and Mapping
2.1.1 Visual Odometry
2.1.2 Pose Graph Optimization
2.2 Structure from Motion
2.2.1 Bundle Adjustment
2.2.2 Structureless Bundle Adjustment
2.3 Corresponding Issues
2.4 Proposed Reconstruction Pipeline
3 Cameras and Pixel-to-Sphere Mappings with P2S-Maps
3.1 Types
3.2 Models
3.2.1 Unified Camera Model
3.2.2 Polynomial Camera Model
3.2.3 Spherical Camera Model
3.3 P2S-Maps - Mapping onto Unit Sphere via Lookup Table
3.3.1 Lookup Table as Color Image
3.3.2 Lookup Interpolation
3.3.3 Depth Data Conversion
4 Calibration
4.1 Overview of Proposed Calibration Pipeline
4.2 Target Detection
4.3 Intrinsic Calibration
4.3.1 Selected Examples
4.4 Extrinsic Calibration
4.4.1 3D-2D Pose Estimation
4.4.2 2D-2D Pose Estimation
4.4.3 Pose Optimization
4.4.4 Uncertainty Estimation
4.4.5 Pose Graph Representation
4.4.6 Bundle Adjustment
4.4.7 Selected Examples
5 Full Omnidirectional Image Projections
5.1 Panoramic Image Stitching
5.2 World Map Projections
5.3 World Map Projection Generator for P2S-Maps
5.4 Conversion between Projections based on P2S-Maps
5.4.1 Proposed Workflow
5.4.2 Data Storage Format
5.4.3 Real World Example
6 Relations between Two Camera Spheres
6.1 Forward and Backward Projection
6.2 Triangulation
6.2.1 Linear Least Squares Method
6.2.2 Alternative Midpoint Method
6.3 Epipolar Geometry
6.4 Transformation Recovery from Essential Matrix
6.4.1 Cheirality
6.4.2 Standard Procedure
6.4.3 Simplified Procedure
6.4.4 Improved Procedure
6.5 Two-View Estimation
6.5.1 Evaluation Strategy
6.5.2 Error Metric
6.5.3 Evaluation of Estimation Algorithms
6.5.4 Concluding Remarks
6.6 Two-View Optimization
6.6.1 Epipolar-Based Error Distances
6.6.2 Projection-Based Error Distances
6.6.3 Comparison between Error Distances
6.7 Two-View Translation Scaling
6.7.1 Linear Least Squares Estimation
6.7.2 Non-Linear Least Squares Optimization
6.7.3 Comparison between Initial and Optimized Scaling Factor
6.8 Homography to Identify Degeneracies
6.8.1 Homography for Spherical Cameras
6.8.2 Homography Estimation
6.8.3 Homography Optimization
6.8.4 Homography and Pure Rotation
6.8.5 Homography in Epipolar Geometry
7 Relations between Three Camera Spheres
7.1 Three View Geometry
7.2 Crossing Epipolar Planes Geometry
7.3 Trifocal Geometry
7.4 Relation between Trifocal, Three-View and Crossing Epipolar Planes
7.5 Translation Ratio between Up-To-Scale Two-View Transformations
7.5.1 Structureless Determination Approaches
7.5.2 Structure-Based Determination Approaches
7.5.3 Comparison between Proposed Approaches
8 Pose Graphs
8.1 Optimization Principle
8.2 Solvers
8.2.1 Additional Graph Solvers
8.2.2 False Loop Closure Detection
8.3 Pose Graph Generation
8.3.1 Generation of Synthetic Pose Graph Data
8.3.2 Optimization of Synthetic Pose Graph Data
9 Structureless Camera Motion Estimation
9.1 SCME Pipeline
9.2 Determination of Two-View Translation Scale Factors
9.3 Integration of Depth Data
9.4 Integration of Extrinsic Camera Constraints
10 Camera Motion Estimation Results
10.1 Directional Camera Images
10.2 Omnidirectional Camera Images
11 Conclusion
11.1 Summary
11.2 Outlook and Future Work
Appendices
A.1 Additional Extrinsic Calibration Results
A.2 Linear Least Squares Scaling
A.3 Proof Rank Deficiency
A.4 Alternative Derivation Midpoint Method
A.5 Simplification of Depth Calculation
A.6 Relation between Epipolar and Circumferential Constraint
A.7 Covariance Estimation
A.8 Uncertainty Estimation from Epipolar Geometry
A.9 Two-View Scaling Factor Estimation: Uncertainty Estimation
A.10 Two-View Scaling Factor Optimization: Uncertainty Estimation
A.11 Depth from Adjoining Two-View Geometries
A.12 Alternative Three-View Derivation
A.12.1 Second Derivation Approach
A.12.2 Third Derivation Approach
A.13 Relation between Trifocal Geometry and Alternative Midpoint Method
A.14 Additional Pose Graph Generation Examples
A.15 Pose Graph Solver Settings
A.16 Additional Pose Graph Optimization Examples
Bibliography
|