311

As oportunidades para o uso da mediação nos procedimentos para formalização de acordos de fomento, colaboração e de cooperação, definidos pela Lei 13.019/14, no âmbito das relações entre administração pública e as organizações da sociedade civil / Opportunities for using mediation in the procedures for formalizing the fomento, collaboration and cooperation agreements defined by Law 13.019/14 in relations between the public administration and civil society organizations

Miguel, Samira de Vasconcellos 08 November 2017 (has links)
Mediation is a method for the adequate resolution of complex conflicts in which multiple underlying interests and needs must be harmonized through creative solutions that maximize the possible outcomes of a given relationship. It presupposes a neutral third party who assists and supports those involved in reaching a considered solution, improving interpersonal communication, and it empowers the participants and makes them accountable for the results achieved. It is therefore well suited to complex contracts of a continuing nature, since it addresses the parties' underlying interests and prepares them to settle disputes that may arise during the performance of those contracts.
Given the legal definition of partnerships between the public administration and civil society organizations under Law 13.019/14, this work proposes the use of mediation in the pre-agreement phase and, later, during the course of the relationship that is established, in order to strengthen the principles of transparency, efficiency and economy by addressing the information asymmetry and the power imbalance between the parties during both negotiation and execution, thereby preserving the public and reciprocal interest that motivates these partnerships. Starting from a description of a scenario of challenges and opportunities, the work analyzes whether the intended use is compatible with existing legislation. It then builds a practical framework for a pilot program establishing Chambers of Conflict Prevention and Resolution to handle demands of this kind, drawing on existing, successful experiences with alternative dispute resolution methods within the Public Administration. Finally, it defines indicators for verifying the success of the proposed pilot program, identifying needed improvements, and assessing whether the program can be replicated in the private and public spheres.
312

Descripteurs d'images pour les systèmes de vision routiers en situations atmosphériques dégradées et caractérisation des hydrométéores / Image descriptors for road computer vision systems in adverse weather conditions and hydrometeor characterisation

Duthon, Pierre 01 December 2017 (has links)
Computer vision systems are increasingly used on roads. They are installed along the infrastructure for traffic monitoring, or mounted in vehicles to provide driver assistance; in both cases they aim to improve road safety and streamline travel. A literature review retraces the introduction and development of computer vision algorithms in road environments and demonstrates the importance of image descriptors in the processing chains of these algorithms. It continues with a review of image descriptors from a novel angle, considering them alongside their final applications, which opens up numerous lines of analysis, and concludes by identifying the image descriptors most representative of the road context.
Several databases containing images and the associated meteorological data (e.g. rain, fog) are then presented. These databases are original in that image acquisition and weather measurement were carried out at the same time and in the same place, using calibrated meteorological sensors. Each database contains different scenes (e.g. a black-and-white target, a pedestrian) and different kinds of weather conditions (rain, fog, daytime, night-time), covering natural, artificially reproduced and digitally simulated conditions.
Seven of the most representative image descriptors in the road context were then selected and their robustness in rainy conditions evaluated. Image descriptors based on pixel intensity or on vertical edges are sensitive to rain; conversely, the Harris feature and features that combine several edge orientations remain robust for rainfall rates from 0 to 30 mm/h. The robustness of image descriptors in rainy conditions decreases as the rainfall rate increases, and the descriptors most sensitive to rain are potential candidates for camera-based rain detection applications.
The behaviour of an image descriptor in adverse weather is not necessarily related to that of the final function built on it. Two pedestrian detectors were therefore assessed in degraded weather conditions (rain, fog, daytime, night-time); night-time and fog are the conditions with the greatest impact on pedestrian detection. The methodology developed and the associated database can be reused to assess other final functions (e.g. vehicle detection, traffic-sign detection).
In road environments, real-time knowledge of local weather conditions is essential for addressing the twin challenges of improving safety and streamlining travel. Currently, the only way to measure these conditions along a road network is to install meteorological stations, which are costly and require dedicated maintenance, whereas large numbers of cameras are already installed at the roadside. A new method is therefore proposed that detects weather conditions from traffic-surveillance cameras, combining image descriptors with a neural network. It meets a clearly defined set of constraints: it runs in real time, detects the full range of meteorological conditions, and grades them by intensity, distinguishing normal daytime, night-time, rain and fog. After several optimisation steps, the proposed method obtains better results than those reported in the literature for comparable algorithms.
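The robustness evaluation described above can be sketched in a few lines: compute a descriptor on a clear image and on a degraded version of the same scene, and compare the two. This is only an illustration of the idea, not the thesis's actual protocol; the orientation-histogram descriptor, the stripe image and the additive-noise "rain" below are hypothetical stand-ins.

```python
import numpy as np

def orientation_histogram(img, bins=8):
    """Global histogram of gradient orientations, weighted by gradient
    magnitude -- an example of a descriptor that combines several edge
    orientations (the family reported as rain-robust)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi            # orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-12)

def robustness(img_clear, img_degraded, descriptor):
    """Robustness score: correlation between the descriptor on the clear
    image and on the degraded one (1.0 = unaffected by the degradation)."""
    d1, d2 = descriptor(img_clear), descriptor(img_degraded)
    return float(np.corrcoef(d1, d2)[0, 1])

# toy example: vertical stripes as the "scene", light noise as "rain"
img = np.zeros((64, 64))
img[:, ::8] = 1.0
rng = np.random.default_rng(1)
rainy = img + 0.05 * rng.standard_normal(img.shape)
print(robustness(img, rainy, orientation_histogram))
```

Sweeping the noise level in this toy setup mimics the thesis's evaluation of robustness against increasing rainfall rates.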
313

Large-scale high-performance video surveillance

Sutor, S. R. (Stephan R.) 07 October 2014 (has links)
Abstract The last decade was marked by a series of harmful events ranging from economic crises to organized crime, acts of terror and natural catastrophes, which led to a paradigm shift in security. Millions of surveillance cameras have been deployed, creating new challenges because the systems and operations behind those cameras could not cope with the rapid growth in the number of video cameras and systems. In today's control rooms, hundreds or even thousands of cameras are displayed, overloading security officers with irrelevant information. The purpose of this research was the creation of a novel video surveillance system with automated analysis mechanisms that enable security authorities and their operators to cope with this information flood; by automating the process, video surveillance is transformed into a proactive information system. Progress in technology and the ever-increasing demand for security have proven to be enormous drivers for security technology research such as this study, which aims to contribute to the protection of our personal freedom, our lives, our property and our society by aiding the prevention of crime and terrorist attacks. In this study, design science research methodology was used to ensure scientific rigor while constructing and evaluating artifacts. The requirements were established in close cooperation with high-level security authorities, and prior research was studied in detail. The created construct, the "Intelligent Video Surveillance System", is a distributed, highly scalable software framework that can serve as a basis for any kind of high-performance video surveillance system, from installations focused on high availability to flexible cloud-based installations that scale across multiple locations and tens of thousands of cameras.
First, to provide a strong foundation, a modular, distributed system architecture was created and then augmented with a multi-sensor analysis process, enabling data from video and other sensors to be combined in order to detect critical events automatically. Next, an intelligent mobile client, the video surveillance local control, was created to address remote-access applications. Finally, a wireless self-contained surveillance system was introduced: a novel smart-camera concept enabling ad hoc and mobile surveillance. The value of the created artifacts was demonstrated by evaluation at two real-world sites: an international airport with a large-scale, high-security installation, and a security service provider offering a multitude of video-based services from a video control center with thousands of connected cameras.
314

Pozice objektu ze soustavy kamer / Object Position from Multiple Cameras

Dostál, Radek January 2011 (has links)
This thesis deals with the reconstruction of a golf ball's position using multiple cameras; the reconstruction is intended for a golf-simulator project. The system uses photogrammetric calibration and a triangulation algorithm to obtain the point coordinates. The thesis also discusses criteria for camera selection. The result is a prototype of the simulator.
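The calibration-plus-triangulation step described above can be sketched with standard linear (DLT) triangulation from two calibrated views. This is a minimal sketch, not the thesis's actual implementation; the projection matrices and the test point below are made up for illustration.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices (from photogrammetric calibration)
    x1, x2 : (u, v) image coordinates of the same point in each view
    Returns the 3D point in world coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous point X; the solution is the null vector of A.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# hypothetical setup: two cameras observing a point at (0, 0, 5)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])     # first camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # 1 m baseline
X_true = np.array([0.0, 0.0, 5.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_dlt(P1, P2, x1, x2))   # ≈ [0, 0, 5]
```

With more than two cameras, the same construction simply appends two rows per extra view to `A`, which is one reason multi-camera setups improve the reconstruction.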
315

Video anatomy : spatial-temporal video profile

Cai, Hongyuan 31 July 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / As massive numbers of videos are uploaded to video websites, smooth video browsing, editing, retrieval, and summarization are in demand. Most videos employ several types of camera operation to expand the field of view, emphasize events, or create cinematic effects. To digest the heterogeneous videos on video websites and in databases, video clips are profiled into a 2D image scroll containing both spatial and temporal information for video preview. The video profile is visually continuous, compact, scalable, and indexed to each frame. This work analyzes camera kinematics, including zoom, translation, and rotation, and categorizes camera actions as combinations of these. An automatic video summarization framework is proposed and developed. After conventional video clip segmentation, and further segmentation by smooth camera operation, the global flow field under all camera actions is investigated for profiling various types of video. A new algorithm extracts the major flow direction and a convergence factor using condensed images. The work then proposes a uniform scheme to segment video clips and sections, sample the video volume across the major flow, and compute the flow convergence factor, in order to obtain an intrinsic scene space less influenced by camera ego-motion. A motion-blur technique is also used to render dynamic targets in the profile. The resulting video profile can be displayed in a video track to guide access to individual frames, help video editing, and support applications such as surveillance, visual archiving of environments, video retrieval, and online video preview.
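The abstract describes extracting a major flow direction only at a high level. One simple, classical stand-in for estimating dominant inter-frame camera motion is phase correlation, which recovers the global translation between two frames from their Fourier spectra. This is an assumed illustration of the general idea, not the algorithm developed in the thesis.

```python
import numpy as np

def dominant_translation(f1, f2):
    """Estimate the dominant global shift between two grayscale frames by
    phase correlation: returns (dy, dx) such that f2 is approximately
    f1 cyclically shifted by (dy, dx)."""
    cross = np.fft.fft2(f2) * np.conj(np.fft.fft2(f1))
    cross /= np.abs(cross) + 1e-12        # keep only phase information
    corr = np.fft.ifft2(cross).real       # impulse at the shift location
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped peak indices to signed shifts
    if dy > f1.shape[0] // 2:
        dy -= f1.shape[0]
    if dx > f1.shape[1] // 2:
        dx -= f1.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
f1 = rng.standard_normal((64, 64))
f2 = np.roll(f1, shift=(5, -3), axis=(0, 1))   # simulated camera pan
print(dominant_translation(f1, f2))   # (5, -3)
```

Applied to consecutive frames, the sequence of recovered shifts traces the camera's translational ego-motion, which is the kind of signal a profile must compensate for.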
316

Made for America: Japanese Consumer Exports and the Postwar U.S.-Japanese Relationship

Chou, William January 2021 (has links)
No description available.
317

Structureless Camera Motion Estimation of Unordered Omnidirectional Images

Sastuba, Mark 08 August 2022 (has links)
This work aims to provide a novel camera motion estimation pipeline for large collections of unordered omnidirectional images. To keep the pipeline as general and flexible as possible, cameras are modelled as unit spheres, allowing any central camera type to be incorporated. For each camera an unprojection lookup, called a P2S-map (Pixel-to-Sphere map), is generated from the intrinsics, mapping pixels to their corresponding positions on the unit sphere; the camera geometry thus becomes independent of the underlying projection model. The pipeline also generates P2S-maps from world map projections with the reduced distortion effects known from cartography. Using P2S-maps from camera calibration and from world map projections, omnidirectional camera images can be converted to an appropriate world map projection so that standard feature extraction and matching algorithms can be applied for data association. The proposed estimation pipeline combines the flexibility of SfM (Structure from Motion), which handles unordered image collections, with the efficiency of PGO (Pose Graph Optimization), which is used as the back-end in graph-based Visual SLAM (Simultaneous Localization and Mapping) approaches to optimize camera poses over large image sequences. SfM uses BA (Bundle Adjustment) to jointly optimize camera poses (motion) and 3D feature locations (structure), which becomes computationally expensive for large-scale scenarios; PGO, by contrast, solves only for camera poses from measured transformations between cameras, keeping the optimization manageable. The proposed estimation algorithm combines both worlds: it obtains up-to-scale transformations between image pairs using two-view constraints and jointly scales them using trifocal constraints. A pose graph is generated from the scaled two-view transformations and solved by PGO to obtain camera motion efficiently, even for large image collections.
The results can be used as initial pose estimates for further 3D reconstruction purposes, e.g. to build a sparse structure from feature correspondences in an SfM or SLAM framework with further refinement via BA. The pipeline also incorporates fixed extrinsic constraints from multi-camera setups as well as depth information provided by RGB-D sensors. Since the entire camera motion estimation pipeline does not need to generate a sparse 3D structure of the captured environment, it is called SCME (Structureless Camera Motion Estimation).

Contents:
1 Introduction 1.1 Motivation 1.1.1 Increasing Interest of Image-Based 3D Reconstruction 1.1.2 Underground Environments as Challenging Scenario 1.1.3 Improved Mobile Camera Systems for Full Omnidirectional Imaging 1.2 Issues 1.2.1 Directional versus Omnidirectional Image Acquisition 1.2.2 Structure from Motion versus Visual Simultaneous Localization and Mapping 1.3 Contribution 1.4 Structure of this Work
2 Related Work 2.1 Visual Simultaneous Localization and Mapping 2.1.1 Visual Odometry 2.1.2 Pose Graph Optimization 2.2 Structure from Motion 2.2.1 Bundle Adjustment 2.2.2 Structureless Bundle Adjustment 2.3 Corresponding Issues 2.4 Proposed Reconstruction Pipeline
3 Cameras and Pixel-to-Sphere Mappings with P2S-Maps 3.1 Types 3.2 Models 3.2.1 Unified Camera Model 3.2.2 Polynomial Camera Model 3.2.3 Spherical Camera Model 3.3 P2S-Maps - Mapping onto Unit Sphere via Lookup Table 3.3.1 Lookup Table as Color Image 3.3.2 Lookup Interpolation 3.3.3 Depth Data Conversion
4 Calibration 4.1 Overview of Proposed Calibration Pipeline 4.2 Target Detection 4.3 Intrinsic Calibration 4.3.1 Selected Examples 4.4 Extrinsic Calibration 4.4.1 3D-2D Pose Estimation 4.4.2 2D-2D Pose Estimation 4.4.3 Pose Optimization 4.4.4 Uncertainty Estimation 4.4.5 Pose Graph Representation 4.4.6 Bundle Adjustment 4.4.7 Selected Examples
5 Full Omnidirectional Image Projections 5.1 Panoramic Image Stitching 5.2 World Map Projections 5.3 World Map Projection Generator for P2S-Maps 5.4 Conversion between Projections based on P2S-Maps 5.4.1 Proposed Workflow 5.4.2 Data Storage Format 5.4.3 Real World Example
6 Relations between Two Camera Spheres 6.1 Forward and Backward Projection 6.2 Triangulation 6.2.1 Linear Least Squares Method 6.2.2 Alternative Midpoint Method 6.3 Epipolar Geometry 6.4 Transformation Recovery from Essential Matrix 6.4.1 Cheirality 6.4.2 Standard Procedure 6.4.3 Simplified Procedure 6.4.4 Improved Procedure 6.5 Two-View Estimation 6.5.1 Evaluation Strategy 6.5.2 Error Metric 6.5.3 Evaluation of Estimation Algorithms 6.5.4 Concluding Remarks 6.6 Two-View Optimization 6.6.1 Epipolar-Based Error Distances 6.6.2 Projection-Based Error Distances 6.6.3 Comparison between Error Distances 6.7 Two-View Translation Scaling 6.7.1 Linear Least Squares Estimation 6.7.2 Non-Linear Least Squares Optimization 6.7.3 Comparison between Initial and Optimized Scaling Factor 6.8 Homography to Identify Degeneracies 6.8.1 Homography for Spherical Cameras 6.8.2 Homography Estimation 6.8.3 Homography Optimization 6.8.4 Homography and Pure Rotation 6.8.5 Homography in Epipolar Geometry
7 Relations between Three Camera Spheres 7.1 Three View Geometry 7.2 Crossing Epipolar Planes Geometry 7.3 Trifocal Geometry 7.4 Relation between Trifocal, Three-View and Crossing Epipolar Planes 7.5 Translation Ratio between Up-To-Scale Two-View Transformations 7.5.1 Structureless Determination Approaches 7.5.2 Structure-Based Determination Approaches 7.5.3 Comparison between Proposed Approaches
8 Pose Graphs 8.1 Optimization Principle 8.2 Solvers 8.2.1 Additional Graph Solvers 8.2.2 False Loop Closure Detection 8.3 Pose Graph Generation 8.3.1 Generation of Synthetic Pose Graph Data 8.3.2 Optimization of Synthetic Pose Graph Data
9 Structureless Camera Motion Estimation 9.1 SCME Pipeline 9.2 Determination of Two-View Translation Scale Factors 9.3 Integration of Depth Data 9.4 Integration of Extrinsic Camera Constraints
10 Camera Motion Estimation Results 10.1 Directional Camera Images 10.2 Omnidirectional Camera Images
11 Conclusion 11.1 Summary 11.2 Outlook and Future Work
Appendices A.1 Additional Extrinsic Calibration Results A.2 Linear Least Squares Scaling A.3 Proof Rank Deficiency A.4 Alternative Derivation Midpoint Method A.5 Simplification of Depth Calculation A.6 Relation between Epipolar and Circumferential Constraint A.7 Covariance Estimation A.8 Uncertainty Estimation from Epipolar Geometry A.9 Two-View Scaling Factor Estimation: Uncertainty Estimation A.10 Two-View Scaling Factor Optimization: Uncertainty Estimation A.11 Depth from Adjoining Two-View Geometries A.12 Alternative Three-View Derivation A.12.1 Second Derivation Approach A.12.2 Third Derivation Approach A.13 Relation between Trifocal Geometry and Alternative Midpoint Method A.14 Additional Pose Graph Generation Examples A.15 Pose Graph Solver Settings A.16 Additional Pose Graph Optimization Examples
Bibliography
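The P2S-map idea in the abstract above can be illustrated with the simplest case: building a pixel-to-sphere lookup for an ideal equirectangular image. Real P2S-maps are generated from calibrated camera intrinsics; the equirectangular model here is only an assumed example of the lookup's structure.

```python
import numpy as np

def p2s_map_equirectangular(height, width):
    """Build a P2S-map (pixel -> unit-sphere lookup) for an ideal
    equirectangular camera: each pixel (u, v) maps to a bearing vector
    on the unit sphere, so downstream geometry no longer depends on
    the projection model."""
    v, u = np.mgrid[0:height, 0:width]
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi     # longitude in (-pi, pi)
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi    # latitude in (-pi/2, pi/2)
    x = np.cos(lat) * np.cos(lon)
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)
    return np.stack([x, y, z], axis=-1)   # shape (H, W, 3), unit vectors

m = p2s_map_equirectangular(512, 1024)
print(np.allclose(np.linalg.norm(m, axis=-1), 1.0))   # True
```

For a calibrated fisheye or catadioptric camera the per-pixel bearings would come from the fitted camera model instead of this closed form, but the resulting lookup table has exactly the same (H, W, 3) shape.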
