1

Dynamic-object-aware simultaneous localization and mapping for augmented reality applications

Oliveira, Douglas Coelho Braga de, 19 September 2018
Funded by CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior.

Augmented Reality (AR) is a technology that combines three-dimensional virtual objects with a predominantly real environment to build a new environment in which real and virtual objects can interact with each other in real time. To do this, the pose of the observer (camera, HMD, smart glasses, etc.) must be estimated with respect to a global coordinate system. Commonly, some known physical object, called a marker, is used to define the reference frame for both the virtual objects and the observer's position.
The Simultaneous Localization and Mapping (SLAM) problem originates in the robotics community as a necessary condition for building truly autonomous robots, able to localize themselves in an unknown environment while building a map of the observed scene from the data captured by their sensors. SLAM-based Augmented Reality is an active and evolving research line. The main contribution of SLAM to AR is to allow applications in unprepared environments, i.e., without markers. However, by eliminating the marker we lose the reference frame for projecting virtual objects and the main source of interaction between real and virtual elements. Although the generated map can be processed to find a known structure, e.g. a dominant plane, and use it as the reference frame, this still does not solve the interaction problem. In the recent literature, we find works that integrate an object recognition system into SLAM so that recognized objects are incorporated into the map. The SLAM map is frequently assumed to be static, due to limitations of the techniques involved, so in these works the objects are used only to provide semantic information about the scene. In this work, we propose a new framework that simultaneously estimates the camera pose and the object poses for each video frame in real time. In this way, each object is independent and can move freely through the map, as in marker-based methods, while keeping the advantages that SLAM provides. We implement the proposed framework on top of a state-of-the-art SLAM system to validate our proposal and demonstrate its potential application in Augmented Reality.
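To make the abstract's central idea concrete, the sketch below shows, in Python/NumPy, how camera and object poses estimated independently per frame in the same SLAM world frame combine into the transform needed to render virtual content anchored to a moving real object. It is a minimal illustration under assumed conventions (T_wc meaning camera-to-world, T_wo meaning object-to-world); it is not the thesis's implementation.

```python
# Minimal sketch (not the thesis implementation): independent camera and
# object poses, both expressed in the SLAM world frame, give the transform
# needed to render a virtual object anchored to a moving real object.
# All names (T_wc, T_wo, etc.) are illustrative assumptions.
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def inverse(T):
    """Invert a rigid-body transform without a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Per frame, the SLAM front end gives the camera pose T_wc (camera -> world)
# and, for each recognized object, an independently tracked pose T_wo
# (object -> world).  A virtual element defined in the object's local frame
# is brought into the camera frame through the chain below, so it follows
# the real object even when the object moves through the map.
T_wc = se3(np.eye(3), np.array([0.0, 0.0, 1.0]))   # camera 1 m along z in world
T_wo = se3(np.eye(3), np.array([0.5, 0.0, 0.0]))   # object 0.5 m along x

p_object = np.array([0.0, 0.1, 0.0, 1.0])          # point of the virtual model
T_cw = inverse(T_wc)
p_camera = T_cw @ T_wo @ p_object                   # ready for projection
print(p_camera[:3])
```

Because T_wo is re-estimated every frame, the virtual content keeps following the real object, which is exactly what a static-map SLAM system cannot provide.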
2

Agrégation d'information pour la localisation d'un robot mobile sur une carte imparfaite / Information aggregation for the localization of a mobile robot using a non-perfect map

Delobel, Laurent, 04 May 2018
Most large modern cities suffer from the consequences of pollution and traffic jams. One solution would be to regulate access to city centers for personal cars in favor of a public transport system of autonomous shuttles powered by a pollution-free energy source. These shuttles could serve users on demand, being rerouted according to their requests, and could also serve large industrial sites or restricted-access critical infrastructure.
To perform such a task, a vehicle must be able to localize itself in its area of operation. Most popular localization methods in such an environment are based on so-called "Simultaneous Localization and Mapping" (SLAM) methods. They dynamically build a map of the environment and locate the vehicle inside this map. Although these methods have demonstrated their robustness, in most implementations sharing a common map between several vehicles is problematic (map size, structure, etc.), and these methods frequently use no prior information, such as an existing city map, and instead build the map of the environment from scratch. To go beyond these limitations, we propose to use pre-existing semantic maps, such as OpenStreetMap, as the base map for localization. Such maps can contain the position of roads, traffic signs, traffic lights, building walls, etc. They almost always come with some imprecision in position and can contain errors: real landmarks may be missing from the map data, and elements stored in the map may no longer exist. To manage such imperfections in the map data and allow an autonomous vehicle to localize with it, we propose a new strategy. First, to handle the classical over-convergence (data incest) problem of data fusion in the presence of strong correlations, together with the map scalability problem, we manage the whole map with a Split Covariance Intersection filter. We also remove landmarks that no longer exist from the map data by estimating their probability of existence, computed from the vehicle's sensor detections, and deleting those whose probability is too low. Finally, we periodically scan the full sensor data for potential new landmarks that the map does not yet include and add them to the map data. Experiments show the feasibility of such a dynamic high-level map that is updated on the fly.
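As an illustration of the fusion and map-maintenance steps described above, the following Python/NumPy sketch performs one Split Covariance Intersection update on a 2-D landmark and a toy log-odds existence score. The grid search over omega, the variable names, and the thresholds are illustrative assumptions, not the thesis's actual implementation.

```python
# Minimal sketch (assumptions, not the thesis code): one fusion step of the
# Split Covariance Intersection (SCI) filter, combining a map landmark
# estimate with a vehicle observation whose error may be correlated with the
# map, without the over-convergence of a naive Kalman update.
import numpy as np

def sci_fuse(x1, P1d, P1i, x2, P2d, P2i, steps=99):
    """Fuse two estimates whose covariances are split into dependent (P*d)
    and independent (P*i) parts.  omega is chosen by a simple grid search
    minimizing the trace of the fused covariance."""
    best = None
    for w in np.linspace(0.01, 0.99, steps):
        P1 = P1d / w + P1i
        P2 = P2d / (1.0 - w) + P2i
        I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
        P = np.linalg.inv(I1 + I2)
        x = P @ (I1 @ x1 + I2 @ x2)
        if best is None or np.trace(P) < best[0]:
            best = (np.trace(P), x, P)
    return best[1], best[2]

# 2-D landmark position from the prior map (large uncertainty, partly
# correlated with the rest of the map) fused with a sensor detection.
x_map = np.array([12.0, 5.0]); Pd_map = np.eye(2) * 0.8; Pi_map = np.eye(2) * 0.2
x_obs = np.array([12.4, 4.7]); Pd_obs = np.eye(2) * 0.1; Pi_obs = np.eye(2) * 0.3
x_fused, P_fused = sci_fuse(x_map, Pd_map, Pi_map, x_obs, Pd_obs, Pi_obs)

# Landmark existence: a simple log-odds counter, raised on a detection and
# lowered on a missed detection; landmarks below a threshold would be removed.
log_odds, hit, miss, threshold = 0.0, +0.9, -0.4, -2.0
for detected in [True, True, False, True]:
    log_odds += hit if detected else miss
keep_landmark = log_odds > threshold
print(x_fused, np.trace(P_fused), keep_landmark)
```

Minimizing the trace of the fused covariance is one common criterion for choosing omega; minimizing its determinant is another, and either keeps the update consistent when the cross-correlation between map and observation is unknown.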
