About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
201

Reconstrução tridimensional de baixo custo a partir de par de imagens estéreo. / Low cost three-dimensional reconstruction using a stereo image pair.

José, Marcelo Archanjo 30 May 2008
The acquisition and reconstruction of the three-dimensional (3D) geometry of objects and environments is increasingly important in areas such as computer vision and computer graphics. Current methods for acquiring and reconstructing 3D data require sophisticated equipment and setups, and are therefore expensive and limited in application. This work critically reviews the main algorithms for 3D reconstruction from a stereo image pair and identifies those most viable for use with conventional equipment. By implementing some of these algorithms, comparing their results, and comparing against results reported in the literature, the main limitations were identified. This work proposes adjustments to the existing algorithms; in particular, it proposes the stripping technique, which drastically reduces memory usage for 3D geometry processing and offers better computational performance than traditional approaches. A prototype 3D reconstruction system was implemented that supports reconstruction with the different techniques studied and proposed, and allows interactive visualization of the reconstructed scene from different viewpoints.
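The triangulation step at the heart of stereo-pair reconstruction can be sketched in a few lines: once a pixel's disparity d between the two images is known, its depth follows from Z = f·B/d, with focal length f (in pixels) and baseline B (in metres). A minimal sketch; the calibration values below are illustrative, not taken from the thesis:

```python
import numpy as np

def depth_from_disparity(disparity, f_px, baseline_m):
    """Triangulate per-pixel depth Z = f*B/d; zero disparity maps to infinity."""
    d = np.asarray(disparity, dtype=float)
    depth = np.full(d.shape, np.inf)
    valid = d > 0
    depth[valid] = f_px * baseline_m / d[valid]
    return depth

# Illustrative calibration: 700 px focal length, 12 cm baseline.
disparities = np.array([[70.0, 35.0],
                        [ 0.0, 14.0]])
depths = depth_from_disparity(disparities, f_px=700.0, baseline_m=0.12)
```

Close objects produce large disparities and small depths; pixels with no valid match (zero disparity) are left at infinity rather than given a spurious depth.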
202

Image matching for 3D reconstruction using complementary optical and geometric information / Appariement d'images pour la reconstruction 3D en utilisant l'information optique et géométrique

Galindo, Patricio A. 20 January 2015
Image matching is a central research topic in computer vision, and research on it has mainly focused on optical aspects. This thesis promotes the direct use of geometry to complement optical information in 2D matching tasks. First, we focus on global methods based on the calculus of variations, for which occlusions and sharp features raise difficult challenges: in these scenarios, the result depends heavily on the contribution of the regularization term. Based on a geometric characterization of this behaviour, we formulate a variational matching method that steers grid lines away from problematic regions. While variational methods provide well-behaved results, local methods based on match propagation adapt more closely to varied 3D structures, although their output is choppy and less globally coherent. We therefore present a novel method that propagates matches guided by local surface reconstructions, correcting 3D positions along with the corresponding 2D matches.
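As a rough illustration of the match-propagation family discussed above (this is the classic best-first scheme in the spirit of Lhuillier and Quan, not the thesis's surface-guided variant), disparities can be grown outward from seed matches, always extending the cheapest candidate first:

```python
import heapq
import numpy as np

def propagate_matches(I1, I2, seeds, tol=1e-6):
    """Best-first match propagation: grow a sparse disparity map outward from
    seed matches (x, y, d), extending the lowest-cost candidate first and
    searching disparities near each parent's disparity."""
    h, w = I1.shape
    disp = {}                       # (x, y) -> accepted disparity
    heap = [(0.0, x, y, d) for (x, y, d) in seeds]
    heapq.heapify(heap)
    while heap:
        cost, x, y, d = heapq.heappop(heap)
        if (x, y) in disp:
            continue                # already matched by a cheaper candidate
        disp[(x, y)] = d
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in disp:
                for nd in range(d - 1, d + 2):   # disparities near the parent's
                    if 0 <= nx - nd < w:
                        c = abs(float(I1[ny, nx]) - float(I2[ny, nx - nd]))
                        if c <= tol:
                            heapq.heappush(heap, (c, nx, ny, nd))
    return disp

# Toy pair: the second image is the first shifted left by two pixels.
I1 = np.tile(np.arange(16.0), (4, 1))
I2 = np.roll(I1, -2, axis=1)
disp = propagate_matches(I1, I2, seeds=[(8, 2, 2)])
```

On this toy pair a single seed is enough to fill every pixel where disparity 2 is photometrically consistent; the choppiness the abstract mentions comes from the fact that each match is accepted purely locally, with no global surface model.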
203

Contribution à la reconstruction 3D de bâtiments à partir de nuage de points de scanner laser terrestre / A contribution to 3D building reconstruction from terrestrial laser scanner points cloud

Bennis, Abdelhamid 02 October 2015
The renovation and the improvement of the energy efficiency of the existing housing stock is one of the big challenges identified for the coming decades. Faced with this imperative, timber-based elements for building renovation are used more and more frequently because they combine improved thermal insulation, aesthetic renewal, and functional additions. One difficulty in improving and automating renovation projects is knowledge of the geometry of the existing building: plans of the existing state are not always available and, when they are, they may be inaccurate because of undocumented modifications to the building or initial discrepancies between the plans and the construction. After a literature review of existing methods in the first chapter, the work carried out within a collaboration between CRITT Bois and CRAN led to an automatic method for reconstructing a 3D CAD model of buildings from point clouds acquired with a terrestrial LASER scanner. The proposed method has three main phases. The first, detailed in the second chapter, segments the point cloud into planar patches representing the building facades. To reduce the complexity of the geometric segmentation, colorimetric information is exploited: the point cloud is first classified using the colour information of each point, and then segmented geometrically with a robust segmentation algorithm (RANSAC). The third chapter presents the second phase, which models the surface sampling step and derives from it the threshold used to extract boundary points; the aim is to improve the reliability of boundary-point extraction as well as the approximation of the model error. The fourth chapter details the main steps of the boundary model reconstruction. First, the regions defined by boundary points are classified into Irregularity Regions (IR), Architectural Element Regions (AER) such as windows, and Facade Regions (FR) defined by the outer boundaries of the facade. Second, these regions are modelled with a Delaunay triangulation for the IR and polyhedral models for the AER and FR. The last step computes an approximation of the error in the model. The reliability of the method was tested on real projects conducted by construction and renovation professionals. The tests show that the quality of the 3D reconstruction remains strongly dependent on the acquisition factors and on the scanned surface, and that the approximation of the modelling error makes it possible to predict the errors on the CAD model in advance.
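The robust geometric segmentation the abstract mentions is RANSAC: repeatedly fit a plane to three random points and keep the model that gathers the most inliers. A minimal sketch of RANSAC plane fitting (iteration count and inlier threshold are illustrative):

```python
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.01, rng=None):
    """Fit a plane n.x + d = 0 to a 3D point cloud with RANSAC.
    Returns (unit normal, offset d, boolean inlier mask)."""
    rng = np.random.default_rng(rng)
    pts = np.asarray(points, dtype=float)
    best_inliers = np.zeros(pts.shape[0], dtype=bool)
    best_model = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iter):
        sample = pts[rng.choice(pts.shape[0], 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:
            continue                     # degenerate (nearly collinear) sample
        normal = normal / norm
        d = -normal @ sample[0]
        inliers = np.abs(pts @ normal + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers
```

In a facade-segmentation pipeline this would be applied repeatedly, removing each plane's inliers before searching for the next plane; the colorimetric pre-classification described in the abstract shrinks the point set each RANSAC run has to consider.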
204

SINGLE VIEW RECONSTRUCTION FOR FOOD PORTION ESTIMATION

Shaobo Fang 10 June 2019
3D scene reconstruction from single-view images is an ill-posed problem, since most 3D information is lost during the projection from 3D world coordinates to 2D pixel coordinates. Estimating the portion of an object from a single view requires either a priori information, such as the geometric shape of the object, or training-based techniques that learn from the distribution of existing portion sizes. In this thesis, we present a single-view technique for food portion size estimation.

Dietary assessment, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs to prevent many chronic diseases such as cancer, diabetes, and heart disease. Measuring dietary intake accurately is considered an open research problem in the nutrition and health fields. We have developed a mobile dietary assessment system, the Technology Assisted Dietary Assessment™ (TADA™) system, to automatically determine the food types and energy consumed by a user using image analysis techniques.

In this thesis we focus on using a single image for food portion size estimation, to spare the user from having to take multiple images of a meal. We define portion size estimation as the process of determining how much food (or food energy/nutrient) is present in a food image. Besides food energy/nutrients, food portion estimation can also target food volumes (in cm³) or weights (in grams), as these are directly related to food energy/nutrients. Food portion estimation is a challenging problem because food preparation and consumption introduce large variations in food shapes and appearances.

As single-view 3D reconstruction is in general ill-posed, we investigate the use of geometric models, such as the shape of a container, to partially recover the 3D parameters of food items in the scene. We compare the performance of portion estimation based on 3D geometric models against techniques using depth maps, and show that more accurate estimates are obtained with geometric models for objects whose 3D shape is well defined. To further improve estimation accuracy, we investigate food portion co-occurrence patterns, which can be estimated from the food image dataset we collected in dietary studies using the mobile Food Record™ (mFR™) system we developed. The co-occurrence patterns are used as prior knowledge to refine portion estimates, and we show that portion estimation accuracy improves when this contextual information is incorporated.

In addition to geometric-model-based techniques, we also investigate a deep learning approach. The geometric-model approach focuses on estimating food volumes, but volumes are not the final result that directly expresses the food energy/nutrients consumed. Instead of developing techniques that stop at an intermediate result (food volume), we present a method that directly estimates food energy (kilocalories) from food images using Generative Adversarial Networks (GANs). We introduce the concept of an "energy distribution" for each food image. To train the GAN, we build a food image dataset with ground truth food labels, segmentation masks, and the energy information associated with each image. Our goal is to learn the mapping from a food image to its energy distribution; based on the estimated energy distribution image, a Convolutional Neural Network (CNN) then estimates the numeric food energy present in the eating scene.
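As a toy illustration of the geometric-model idea (assumed for illustration only, not the TADA pipeline), an object of known shape plus a reference of known physical size turn pixel measurements into a volume: here the food is modelled as a cylinder and a plate of known diameter fixes the pixel-to-centimetre scale.

```python
import math

def estimate_cylinder_volume_cm3(food_diameter_px, food_height_px,
                                 plate_diameter_px, plate_diameter_cm=26.0):
    """Single-view volume estimate under a cylinder model: a plate of known
    diameter serves as the scale reference converting pixels to centimetres."""
    cm_per_px = plate_diameter_cm / plate_diameter_px
    radius_cm = 0.5 * food_diameter_px * cm_per_px
    height_cm = food_height_px * cm_per_px
    return math.pi * radius_cm ** 2 * height_cm

# A 100 px wide, 20 px tall food item next to a 260 px plate (26 cm).
volume = estimate_cylinder_volume_cm3(100, 20, 260)
```

The volume would then be mapped to energy through a food-specific density and calorie table, which is exactly the intermediate step the GAN-based approach in the abstract bypasses.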
205

Adaptive registration using 2D and 3D features for indoor scene reconstruction. / Registro adaptativo usando características 2D e 3D para reconstrução de cenas em ambientes internos.

Perafán Villota, Juan Carlos 27 October 2016
Pairwise alignment of point clouds is an important task when building 3D maps of indoor environments from partial information. Combining 2D local features with the depth information provided by RGB-D cameras is often used to improve such alignment. However, under varying lighting or low visual texture, indoor pairwise frame registration with sparse 2D local features is not particularly robust: in these conditions features are hard to detect, leading to misalignment between consecutive pairs of frames. Using 3D local features can be a solution, since such features come from the 3D points themselves and are resistant to variations in visual texture and illumination. Because varying conditions in real indoor scenes are unavoidable, we propose a new framework that improves pairwise frame alignment with an adaptive combination of sparse 2D and 3D features, based on the levels of geometric structure and visual texture contained in each scene. Experiments with datasets including unrestricted RGB-D camera motion and natural changes in illumination show that the proposed framework convincingly outperforms methods using 2D or 3D features alone, achieving higher alignment accuracy.
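One simple way to picture an adaptive 2D/3D combination (a heuristic sketch, not the framework proposed in the thesis) is to weight each feature type by how much visual texture or geometric structure the frame actually contains, measured here by gradient energy in the intensity and depth images:

```python
import numpy as np

def adaptive_feature_weights(gray, depth, eps=1e-9):
    """Heuristic: weight 2D features by visual texture (intensity gradient
    energy) and 3D features by geometric structure (depth gradient energy),
    normalized so the two weights sum to one."""
    gy, gx = np.gradient(gray.astype(float))
    texture = float(np.mean(gx ** 2 + gy ** 2))
    dy, dx = np.gradient(depth.astype(float))
    structure = float(np.mean(dx ** 2 + dy ** 2))
    w2d = (texture + eps) / (texture + structure + 2 * eps)
    return w2d, 1.0 - w2d

# A textured image over a flat wall favors 2D features almost entirely.
ramp = np.tile(np.arange(16.0), (16, 1))
w2d, w3d = adaptive_feature_weights(ramp * 10.0, np.ones((16, 16)))
```

In a low-texture corridor the intensity gradients collapse while depth gradients survive, so the weight shifts to 3D features, which is the qualitative behaviour the abstract's adaptive combination is designed to capture.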
206

Annotation of the human genome through the unsupervised analysis of high-dimensional genomic data / Annotation du génome humain grâce à l'analyse non supervisée de données de séquençage haut débit

Morlot, Jean-Baptiste 12 December 2017
The human body has more than 200 different cell types, each containing an identical copy of the genome but expressing a different set of genes. Gene expression is controlled by a set of regulatory mechanisms acting at different scales of time and space. Several diseases, notably some cancers, are caused by a disturbance of this system, and many therapeutic applications, such as regenerative medicine, rely on understanding the mechanisms of gene regulation. In a first part, this thesis proposes an annotation algorithm (GABI) to identify recurrent patterns in high-throughput sequencing data. The particularity of this algorithm is that it accounts for the variability observed across experimental replicates by optimizing the false positive and false negative rates, significantly increasing the reliability of the annotation compared to the state of the art. The annotation provides simplified and robust information from a large dataset. Applied to a database of regulator activity in hematopoiesis, we obtain original results in agreement with previous studies. The second part of this work focuses on the 3D organization of the genome, which is intimately linked to gene expression and is accessible through 3D reconstruction algorithms operating on contact data between chromosomes. We propose improvements to the currently most efficient algorithm in the field, ShRec3D, allowing the reconstruction to be adjusted to the user's needs.
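ShRec3D's published pipeline first converts contact frequencies into distances and completes the missing entries with shortest paths, before embedding the distance matrix by multidimensional scaling. A sketch of that first step (the contact-to-distance exponent is a free parameter, here left at 1):

```python
import numpy as np

def contacts_to_distances(contacts, alpha=1.0):
    """Turn a symmetric contact-frequency matrix into a complete distance
    matrix: d_ij = 1 / c_ij**alpha where contacts exist, then shortest-path
    completion (Floyd-Warshall) fills in the missing entries."""
    C = np.asarray(contacts, dtype=float)
    n = C.shape[0]
    D = np.full((n, n), np.inf)
    mask = C > 0
    D[mask] = 1.0 / C[mask] ** alpha
    np.fill_diagonal(D, 0.0)
    for k in range(n):                          # Floyd-Warshall relaxation
        D = np.minimum(D, D[:, k:k + 1] + D[k:k + 1, :])
    return D

# Three loci: 0-1 and 1-2 are in contact, 0-2 is not observed directly.
C = np.array([[0.0, 2.0, 0.0],
              [2.0, 0.0, 4.0],
              [0.0, 4.0, 0.0]])
D = contacts_to_distances(C)
```

The completed matrix is what a multidimensional-scaling step would embed into 3D coordinates; the tunable exponent is one of the knobs a user-adjustable reconstruction can expose.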
207

Téléprésence, immersion et interactions pour la reconstruction 3D temps-réel / Telepresence, Immersion and Interactions for Real-time 3D Reconstruction

Petit, Benjamin 21 February 2011
Online immersive and collaborative 3D environments are emerging very fast. They raise the issues of the sense of presence within virtual worlds, of immersion, and of interaction capabilities. Multi-camera 3D systems make it possible to extract geometric information (a 3D model) of the observed scene from photometric information: a textured digital model can be computed in real time and used to embody the user within the digital space. In this thesis we studied how to couple the presence afforded by such a system with visual immersion and co-located interaction. This led to an application that combines a head-mounted display, an optical tracking system, and a multi-camera system, so that the user can see his 3D model correctly aligned with his own body and mixed with virtual objects. We also ran a telepresence experiment across three sites (Bordeaux, Grenoble, Orleans) that allows several users to meet in 3D and collaborate remotely. The textured 3D model gives a very strong sense of the other's presence and reinforces physical interaction through body language and facial expressions. Finally, we studied how to extract velocity information from the camera images: using optical flow and 2D and 3D correspondences, we can estimate the dense displacement of the 3D model. This data extends the interaction capabilities by enriching the 3D model.
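The dense displacement estimation described above pairs 2D correspondences with depth; per point, the computation reduces to back-projecting the match in both frames through the pinhole model and subtracting. A sketch with illustrative intrinsics (not the thesis's multi-camera calibration):

```python
import numpy as np

def backproject(u, v, z, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) at depth z to a 3D point."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def displacement_3d(p0, p1, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """3D displacement of a tracked surface point between two frames, given
    its pixel position and depth (u, v, z) in each frame."""
    return backproject(*p1, fx, fy, cx, cy) - backproject(*p0, fx, fy, cx, cy)

# A point at 2 m depth moving 60 px to the right between frames.
d = displacement_3d((320.0, 240.0, 2.0), (380.0, 240.0, 2.0))
```

Applying this to every correspondence produced by optical flow yields the dense 3D velocity field that enriches the reconstructed model for interaction.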
208

Recherche linéaire et fusion de données par ajustement de faisceaux : application à la localisation par vision / Line search and data fusion by bundle adjustment: application to vision-based localization

Michot, Julien 09 December 2010
The work presented in this manuscript lies in the field of computer vision and tackles the problem of real-time vision-based localization and 3D reconstruction. In this context, the trajectory of a camera and the 3D structure of the filmed scene are initially estimated by linear algorithms and then optimized by a nonlinear algorithm, bundle adjustment. The thesis first presents a new line search technique for the nonlinear minimization algorithms used in Structure-from-Motion. The proposed technique is non-iterative and can be quickly integrated into a traditional bundle adjustment framework. This technique, called Global Algebraic Line Search (G-ALS), and its two-dimensional variant (Two way-ALS), accelerate the convergence of the bundle adjustment algorithm. Approximating the reprojection error by an algebraic distance enables the analytical computation of an effective step length (or two step lengths for the Two way-ALS variant) by solving a degree-3 (G-ALS) or degree-5 (Two way-ALS) polynomial. Our experiments, conducted on simulated and real data, show that this step length, optimal for the algebraic distance, also performs well for the Euclidean distance and reduces the convergence time of the minimizations. One difficulty of real-time vision-based localization algorithms (monocular SLAM) is that the estimated trajectory is often affected by drifts in orientation, position, and scale. Since these algorithms are incremental, errors and approximations accumulate along the trajectory and produce a drift in the global localization. In addition, a vision-based localization system can be dazzled, or used in conditions that temporarily prevent it from computing its location. To address these problems, we propose to use an additional sensor measuring the displacements of the camera; the type of sensor depends on the targeted application (an odometer for vehicle localization, a lightweight inertial measurement unit or an inertial navigation system for localizing a person). Our approach integrates this complementary information directly into the bundle adjustment by adding a weighted constraint term to the cost function. We evaluate three methods (based on machine learning or regularization) that dynamically select the weighting coefficient, and show that they can be used in a real-time multi-sensor SLAM with different types of constraints on the orientation or on the norm of the camera displacement; the method is applicable to any other least-squares term. Experiments on real video sequences show that this constrained bundle adjustment reduces the drifts observed with classical vision algorithms and thus improves the accuracy of the system's global localization.
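The idea of a non-iterative line search can be illustrated generically (this sketch fits a low-degree polynomial to the sampled cost rather than using the algebraic reprojection distance of G-ALS): the step length is taken at a stationary point of a polynomial model of the cost along the update direction, so no inner iteration is needed.

```python
import numpy as np

def line_search_poly(cost, x, dx, ts=(0.0, 0.5, 1.0, 1.5)):
    """Non-iterative line search sketch: fit a cubic to the cost sampled along
    the update direction dx, then take the best stationary point of the fit
    (falling back to the full step t = 1 if it is cheaper)."""
    samples = [cost(x + t * dx) for t in ts]
    coeffs = np.polyfit(ts, samples, 3)          # cubic model c(t) of the cost
    crit = np.roots(np.polyder(coeffs))          # stationary points of c(t)
    crit = crit[np.isreal(crit)].real
    candidates = [float(t) for t in crit] + [1.0]
    return min(candidates, key=lambda t: cost(x + t * dx))
```

For a quadratic cost the cubic fit is exact and the search lands on the true minimizer in one shot; in bundle adjustment the same role is played by the closed-form roots of the degree-3 or degree-5 polynomial derived from the algebraic distance.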
209

Real-time image-based RGB-D camera motion tracking and environment mapping

Tykkälä, Tommi 04 September 2013
In this work, image-based estimation methods, also known as direct methods, are studied which avoid feature extraction and matching completely. The cost functions use raw pixels as measurements, and the goal is to produce precise 3D pose and structure estimates. The cost functions presented minimize the sensor error, because measurements are not transformed or modified.
In photometric camera pose estimation, 3D rotation and translation parameters are estimated by minimizing a sequence of image-based cost functions, which are non-linear due to perspective projection and lens distortion. In image-based structure refinement, on the other hand, the 3D structure is refined using a number of additional views and an image-based cost metric. Image-based estimation methods are usable whenever the Lambertian illumination assumption holds, i.e. 3D points have constant color regardless of viewing angle. The main application domains in this work are indoor 3D reconstruction, robotics and augmented reality. The overall project goal is to improve image-based estimation methods and to produce computationally efficient methods that can be accommodated in real applications. The main questions for this work are: What is an efficient formulation for an image-based 3D pose estimation and structure refinement task? How should computation be organized to enable an efficient real-time implementation? What are the practical considerations of using image-based estimation methods in applications such as augmented reality and 3D reconstruction?
210

Low cost three-dimensional reconstruction using a stereo image pair

Marcelo Archanjo José 30 May 2008 (has links)
The acquisition and reconstruction of the three-dimensional (3D) geometry of objects and environments are of growing importance in areas such as computer vision and computer graphics. Current methods for 3D acquisition and reconstruction require sophisticated equipment and setups, which makes them expensive and of limited applicability. This work critically reviews the main algorithms for 3D reconstruction from a stereo image pair and identifies those most viable for use with conventional equipment. By implementing some of these algorithms, comparing the results obtained, and comparing them with results reported in the literature, the main limitations are identified.
Adjustments to the existing algorithms are proposed; in particular, the stripe technique is introduced, which drastically reduces memory consumption during 3D geometry processing and offers better computational performance than traditional approaches. A prototype 3D reconstruction system was implemented that supports reconstruction with the different techniques studied and proposed, and allows interactive visualization of the reconstructed scene from different viewpoints.
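As a rough illustration of the stripe idea (the abstract gives no implementation details, so this structure is an assumption): processing the stereo pair in horizontal bands bounds the working set to one band at a time instead of the full-resolution pair, which is where the memory saving would come from.

```python
import numpy as np

def process_in_stripes(left, right, stripe_height, match_fn):
    """Hypothetical stripe-based stereo processing sketch.

    The stereo pair is consumed in horizontal bands of `stripe_height`
    rows, so only one band of each image needs to be resident while
    `match_fn` (e.g. a block-matching step) runs on it.
    """
    height = left.shape[0]
    results = []
    for top in range(0, height, stripe_height):
        bottom = min(top + stripe_height, height)
        # Only this band of each image is touched in this iteration.
        results.append(match_fn(left[top:bottom], right[top:bottom]))
    return np.vstack(results)
```

With a toy `match_fn` such as `lambda l, r: l - r`, the output has the same shape as the inputs while the per-iteration working set stays at one stripe.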
