
Room layout estimation on mobile devices / Création de plans d’intérieur avec une tablette

Angladon, Vincent 27 April 2018 (has links)
L’objectif de cette thèse CIFRE est d’étudier et de tirer parti des derniers appareils mobiles du marché pour générer des modèles 3D des pièces observées. De nos jours, ces appareils intègrent un grand nombre de capteurs, tels que des capteurs inertiels, des caméras RGB et, depuis peu, des capteurs de profondeur, sans compter l’écran tactile qui offre une interface pour interagir avec l’utilisateur. Un cas d’usage typique de ces modèles 3D est la génération de plans d’intérieur ou de fichiers CAO 3D (conception assistée par ordinateur) appliqués à l’industrie du bâtiment. Le modèle permet d’esquisser les travaux de rénovation d’un appartement, ou d’évaluer la fidélité d’un chantier en cours avec le modèle initial. Pour le secteur de l’immobilier, la génération automatique de plans et de modèles 3D peut faciliter le calcul de la surface habitable et permet de proposer des visites virtuelles à d’éventuels acquéreurs. Concernant le grand public, ces modèles 3D peuvent être intégrés à des jeux en réalité mixte afin d’offrir une expérience encore plus immersive, ou à des applications de réalité augmentée, telles que la décoration d’intérieur. La thèse a trois contributions principales. Nous commençons par montrer comment le problème classique de détection des points de fuite dans une image peut être revisité pour tirer parti des données inertielles. Nous proposons un algorithme simple et efficace de détection de points de fuite reposant sur le vecteur gravité obtenu via ces données. Un nouveau jeu de données contenant des photos avec des données inertielles est présenté pour l’évaluation d’algorithmes d’estimation de points de fuite et pour encourager les travaux ultérieurs dans cette direction. Dans une deuxième contribution, nous explorons les approches d’odométrie visuelle de l’état de l’art qui exploitent des capteurs de profondeur. 
Localiser l’appareil mobile en temps réel est fondamental pour envisager des applications reposant sur la réalité augmentée. Nous proposons une comparaison d’algorithmes existants développés en grande partie pour ordinateur de bureau, afin d’étudier si leur utilisation sur un appareil mobile est envisageable. Pour chaque approche considérée, nous évaluons la précision de la localisation et les performances en temps de calcul sur mobile. Enfin, nous présentons une preuve de concept d’application permettant de générer le plan d’une pièce, en utilisant une tablette du projet Tango, équipée d’un capteur RGB-D. Notre algorithme effectue un traitement incrémental des données 3D acquises au cours de l’observation de la pièce considérée. Nous montrons comment notre approche utilise les indications de l’utilisateur pour corriger pendant la capture le modèle de la pièce. / Room layout generation is the problem of generating a drawing or a digital model of an existing room from a set of measurements such as laser data or images. The generation of floor plans can find application in the building industry to assess the quality and the correctness of an ongoing construction w.r.t. the initial model, or to quickly sketch the renovation of an apartment. The real estate industry can rely on automatic generation of floor plans to ease the process of checking the livable surface and to propose virtual visits to prospective customers. As for the general public, the room layout can be integrated into mixed reality games to provide a more immersive experience, or used in other related augmented reality applications such as room redecoration. The goal of this industrial thesis (CIFRE) is to investigate and take advantage of state-of-the-art mobile devices in order to automate the process of generating room layouts. Nowadays, modern mobile devices usually come with a wide range of sensors, such as an inertial measurement unit (IMU), RGB cameras and, more recently, depth cameras. 
Moreover, tactile touchscreens offer a natural and simple way to interact with the user, thus favoring the development of interactive applications in which the user can be part of the processing loop. This work aims at exploiting the richness of such devices to address the room layout generation problem. The thesis has three major contributions. We first show how the classic problem of detecting vanishing points in an image can benefit from a prior given by the IMU sensor. We propose a simple and effective algorithm for detecting vanishing points relying on the gravity vector estimated by the IMU. A new public dataset containing images and the relevant IMU data is introduced to help assess vanishing point algorithms and foster further studies in the field. As a second contribution, we explore the state of the art in real-time localization and map optimization algorithms for RGB-D sensors. Real-time localization is a fundamental task for enabling augmented reality applications, and thus a critical component when designing interactive applications. We evaluate existing algorithms, developed mostly for the common desktop set-up, in order to determine whether they can be employed on a mobile device. For each considered method, we assess the accuracy of the localization as well as the computational performance when ported to a mobile device. Finally, we present a proof-of-concept application able to generate the room layout relying on a Project Tango tablet equipped with an RGB-D sensor. In particular, we propose an algorithm that incrementally processes and fuses the 3D data provided by the sensor in order to obtain the layout of the room. We show how our algorithm can rely on user interactions in order to correct the generated 3D model during the acquisition process.
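The gravity-based vanishing point idea in the first contribution can be illustrated with a short sketch: under a pinhole model, the image of the 3D gravity direction is the vertical vanishing point, obtained by projecting the gravity vector (expressed in the camera frame) through the intrinsics. This is a minimal illustration of the general principle, not the thesis's actual algorithm; the intrinsic matrix and gravity vector below are hypothetical values.

```python
import numpy as np

def vertical_vanishing_point(K, g_cam):
    """Project the IMU gravity direction (expressed in the camera frame)
    through pinhole intrinsics K: its image is the vertical vanishing point."""
    v = K @ g_cam              # homogeneous image point of the 3D direction
    return v / v[2]            # normalize to pixel coordinates

# Hypothetical intrinsics and a gravity direction tilted toward the optical axis.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
g = np.array([0.0, 0.8, 0.6])  # unit vector from the IMU, rotated into camera frame
vp = vertical_vanishing_point(K, g)
```

Knowing this point pins down one of the three orthogonal vanishing directions, which is how the inertial prior simplifies the classic detection problem.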

3D Semantic SLAM of Indoor Environment with Single Depth Sensor / SLAM sémantique 3D de l'environnement intérieur avec capteur de profondeur simple

Ghorpade, Vijaya Kumar 20 December 2017 (has links)
Pour agir de manière autonome et intelligente dans un environnement, un robot mobile doit disposer de cartes. Une carte contient les informations spatiales sur l’environnement. La géométrie 3D ainsi connue par le robot est utilisée non seulement pour éviter la collision avec des obstacles, mais aussi pour se localiser et pour planifier des déplacements. Les robots de prochaine génération ont besoin de davantage de capacités que de simples cartographies et d’une localisation pour coexister avec nous. La quintessence du robot humanoïde de service devra disposer de la capacité de voir comme les humains, de reconnaître, classer, interpréter la scène et exécuter les tâches de manière quasi-anthropomorphique. Par conséquent, augmenter les caractéristiques des cartes du robot à l’aide d’attributs sémiologiques à la façon des humains, afin de préciser les types de pièces, d’objets et leur aménagement spatial, est considéré comme un plus pour la robotique d’industrie et de services à venir. Une carte sémantique enrichit une carte générale avec les informations sur les entités, les fonctionnalités ou les événements qui sont situés dans l’espace. Quelques approches ont été proposées pour résoudre le problème de la cartographie sémantique en exploitant des scanners lasers ou des capteurs de temps de vol RGB-D, mais ce sujet est encore dans sa phase naissante. Dans cette thèse, une tentative de reconstruction sémantisée d’environnement d’intérieur en utilisant une caméra temps de vol qui ne délivre que des informations de profondeur est proposée. Les caméras temps de vol ont modifié le domaine de l’imagerie tridimensionnelle discrète. Elles ont dépassé les scanners traditionnels en termes de rapidité d’acquisition des données, de simplicité de fonctionnement et de prix. Ces capteurs de profondeur sont destinés à occuper plus d’importance dans les futures applications robotiques. 
Après un bref aperçu des approches les plus récentes en cartographie sémantique, en particulier en environnement intérieur, la calibration de la caméra a été étudiée ainsi que la nature de ses bruits, et la suppression du bruit dans les données issues du capteur est menée. L’acquisition d’une collection d’images de points 3D en environnement intérieur a été réalisée. La séquence d’images ainsi acquise a alimenté un algorithme de SLAM pour reconstruire l’environnement visité. La performance du système SLAM est évaluée à partir des poses estimées en utilisant une nouvelle métrique qui est basée sur la prise en compte du contexte. L’extraction des surfaces planes est réalisée sur la carte reconstruite à partir des nuages de points en utilisant la transformation de Hough. Une interprétation sémantique de l’environnement reconstruit est réalisée. L’annotation de la scène avec informations sémantiques se déroule sur deux niveaux : l’un effectue la détection de grandes surfaces planes et procède ensuite en les classant en tant que porte, mur ou plafond ; l’autre niveau de sémantisation opère au niveau des objets et traite de la reconnaissance des objets dans une scène donnée. À partir de l’élaboration d’une signature de forme invariante à la pose et en passant par une phase d’apprentissage exploitant cette signature, une interprétation de la scène contenant des objets connus et inconnus, en présence ou non d’occultations, est obtenue. Les jeux de données ont été mis à la disposition du public de la recherche universitaire. / Intelligent autonomous actions in an ordinary environment by a mobile robot require maps. A map holds spatial information about the environment and gives the robot the 3D geometry of its surroundings, used not only to avoid collisions with complex obstacles but also for self-localization and task planning. 
However, in the future, service and personal robots will prevail, and the robot will need to interact with the environment in addition to localizing and navigating. This interaction demands that the next generation of robots understand and interpret their environment and perform tasks in a human-centric way. A simple map of the environment is far from sufficient for robots to co-exist with and assist humans in the future. Human beings effortlessly build maps and interact with their environment; these tasks are trivial for them, but for robots they are complex problems. Layering semantic information on regular geometric maps is the leap that helps an ordinary mobile robot become a more intelligent autonomous system. A semantic map augments a general map with information about entities, i.e., objects, functionalities, or events, that are located in the space. The inclusion of semantics in the map enhances the robot’s spatial knowledge representation and improves its performance in managing complex tasks and human interaction. Many approaches have been proposed to address the semantic SLAM problem with laser scanners and RGB-D time-of-flight sensors, but the field is still in its nascent phase. In this thesis, an endeavour to solve semantic SLAM using a time-of-flight sensor which gives only depth information is proposed. Time-of-flight cameras have dramatically changed the field of range imaging, and surpassed traditional scanners in terms of rapid acquisition of data, simplicity and price. It is believed that these depth sensors will be ubiquitous in future robotic applications. Starting with a brief motivation for adding semantics to regular maps in the first chapter, the state-of-the-art methods are discussed in the second chapter. 
Before using the camera for data acquisition, its noise characteristics have been studied meticulously and the camera properly calibrated. The novel noise-filtering algorithm developed in the process helps obtain clean data for better scan matching and SLAM. The quality of the SLAM process is evaluated using a context-based similarity score metric, which has been specifically designed for the type of acquisition parameters and the data which have been used. Abstracting a semantic layer over the reconstructed point cloud from SLAM has been done in two stages. In large-scale, higher-level semantic interpretation, the prominent surfaces in the indoor environment are extracted and recognized; they include surfaces such as walls, doors, ceilings, and clutter. In indoor single-scene, object-level semantic interpretation, a single 2.5D scene from the camera is parsed and the objects and surfaces are recognized. The object recognition is achieved using a novel shape signature based on the probability distribution of 3D keypoints that are most stable and repeatable. The classification of prominent surfaces and the single-scene semantic interpretation are performed using supervised machine learning and deep learning systems. To this end, the object dataset and SLAM data are also made publicly available for academic research.
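The French abstract notes that the planar surfaces are extracted from the reconstructed point cloud with a Hough transform. A toy version of that voting scheme, assuming nothing about the thesis's actual parameterization or resolutions, can be sketched as follows: each point votes for candidate plane parameters (θ, φ, ρ) with normal n = (sin θ cos φ, sin θ sin φ, cos θ) and offset ρ = n·x, and the accumulator maximum gives the dominant plane.

```python
import numpy as np

def hough_dominant_plane(points, n_theta=18, n_phi=36, rho_res=0.05, rho_max=5.0):
    """Toy 3D Hough transform for plane detection: every point votes for
    (theta, phi, rho) cells; the accumulator maximum is the dominant plane."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    n_rho = int(round(2.0 * rho_max / rho_res))
    acc = np.zeros((n_theta, n_phi, n_rho), dtype=int)
    for i, t in enumerate(thetas):
        for j, p in enumerate(phis):
            n = np.array([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)])
            rho = points @ n                                  # signed plane offsets
            k = np.clip(((rho + rho_max) / rho_res).astype(int), 0, n_rho - 1)
            np.add.at(acc, (i, j, k), 1)                      # one vote per point
    i, j, k = np.unravel_index(acc.argmax(), acc.shape)
    t, p = thetas[i], phis[j]
    normal = np.array([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)])
    return normal, -rho_max + (k + 0.5) * rho_res

# Synthetic horizontal plane z = 1 (normal (0, 0, 1), rho = 1).
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 200),
                       rng.uniform(-1, 1, 200),
                       np.ones(200)])
normal, rho = hough_dominant_plane(pts)
```

A production system would iterate this (remove inliers, re-vote) to extract walls, floor and ceiling one plane at a time.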

Automatic Volume Estimation Using Structure-from-Motion Fused with a Cellphone's Inertial Sensors

Fallqvist, Marcus January 2017 (has links)
The thesis work evaluates a method to estimate the volume of stone and gravel piles using only a cellphone to collect video and sensor data from the gyroscopes and accelerometers. The project is commissioned by Escenda Engineering with the motivation to replace more complex and resource-demanding systems with a cheaper and easy-to-use handheld device. The implementation features popular computer vision methods such as KLT tracking, Structure-from-Motion and Space Carving, together with some sensor fusion. The results imply that it is possible to estimate volumes up to a certain accuracy, which is limited by the sensor quality and subject to a bias. / I rapporten framgår hur volymen av storskaliga objekt, nämligen grus- och stenhögar, kan bestämmas i utomhusmiljö med hjälp av en mobiltelefons kamera samt interna sensorer som gyroskop och accelerometer. Projektet är beställt av Escenda Engineering med motivering att ersätta mer komplexa och resurskrävande system med ett enkelt handhållet instrument. Implementationen använder bland annat de vanligt förekommande datorseendemetoderna Kanade-Lucas-Tomasi-punktspårning, Struktur-från-rörelse och 3D-karvning tillsammans med enklare sensorfusion. I rapporten framgår att volymestimering är möjlig men noggrannheten begränsas av sensorkvalitet och en bias.
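The thesis combines Structure-from-Motion with space carving; as a much simpler illustration of the final step, turning a reconstructed 3D point cloud into a volume figure, one can rasterize the points onto a ground-plane grid and integrate cell heights. This sketch is not the report's method: it assumes a known ground plane at z = 0 and a reasonably dense reconstruction.

```python
import numpy as np

def pile_volume(points, cell=0.1):
    """Rasterize a point cloud onto a ground-plane grid (z = 0 assumed)
    and integrate: volume ~ sum over cells of (max height in cell) * cell area."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)                        # shift grid indices to start at 0
    grid = np.zeros(tuple(ij.max(axis=0) + 1))
    for (i, j), z in zip(ij, points[:, 2]):
        grid[i, j] = max(grid[i, j], z)         # keep the highest sample per cell
    return grid.sum() * cell * cell

# Hypothetical "pile": a flat 1 m x 1 m slab of points at height 2 m.
xs = np.arange(0.05, 1.0, 0.1)
gx, gy = np.meshgrid(xs, xs)
pts = np.column_stack([gx.ravel(), gy.ravel(), np.full(gx.size, 2.0)])
vol = pile_volume(pts)   # expect about 1 * 1 * 2 = 2 cubic metres
```

Empty cells contribute zero volume, which is one source of the bias the report mentions when the reconstruction is sparse.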

Underwater 3D Surface Scanning using Structured Light

Törnblom, Nils January 2010 (has links)
In this thesis project, an underwater 3D scanner based on structured light has been constructed and developed. Two other scanners, based on stereoscopy and a line-swept laser, were also tested. The target application is to examine objects inside the water-filled reactor vessel of nuclear power plants. Structured light systems (SLS) use a projector to illuminate the surface of the scanned object, and a camera to capture the surface's reflection. By projecting a series of specific line patterns, the pixel columns of the digital projector can be identified on the scanned surface. 3D points can then be triangulated using ray-plane intersection, and these points form the basis of the final 3D model. To construct an accurate 3D model of the scanned surface, both the projector and the camera need to be calibrated; in the implemented 3D scanner, this was done using the Camera Calibration Toolbox for Matlab. The codebase of this scanner comes from the Matlab implementation by Lanman & Taubin at Brown University; the code has been modified and extended to meet the needs of this project. An examination of the effects of the underwater environment has been performed, both theoretically and experimentally. The performance of the scanner has been analyzed, and different 3D model visualization methods have been tested. In the constructed scanner, a small pico projector was used together with a high-pixel-count DSLR camera. Because these are both consumer-level products, the cost of this system is just a fraction of that of commercial counterparts, which use professional components. Yet, thanks to the use of a high-pixel-count camera, the measurement resolution of the scanner is comparable to the high end of industrial structured light scanners.
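The ray-plane triangulation at the core of such a scanner can be sketched as follows: a decoded pixel is back-projected to a viewing ray through the inverse intrinsics, then intersected with the calibrated light plane cast by the identified projector column. The intrinsics and plane below are hypothetical stand-ins, not the project's calibrated values, and refraction at the underwater housing is ignored.

```python
import numpy as np

def pixel_ray(K, u, v):
    """Back-project pixel (u, v) to a viewing ray direction in camera coordinates."""
    return np.linalg.inv(K) @ np.array([u, v, 1.0])

def ray_plane_intersect(ray_dir, plane_n, plane_d, ray_origin=np.zeros(3)):
    """Intersect the camera ray origin + t * ray_dir with the projector
    light plane {x : n . x + d = 0} and return the 3D surface point."""
    t = -(plane_n @ ray_origin + plane_d) / (plane_n @ ray_dir)
    return ray_origin + t * ray_dir

# Hypothetical intrinsics; a real scanner uses calibrated values.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
ray = pixel_ray(K, 420.0, 240.0)                       # 100 px right of the principal point
plane_n, plane_d = np.array([1.0, 0.0, 0.0]), -0.2     # light plane x = 0.2 m
point = ray_plane_intersect(ray, plane_n, plane_d)
```

Repeating this for every decoded pixel yields the raw point cloud that the final 3D model is built from.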

Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces

Macknojia, Rizwan January 2013 (has links)
This thesis presents an approach for configuring and calibrating a network of RGB-D sensors used to guide a robotic arm to interact with objects that get rapidly modeled in 3D. The system is based on Microsoft Kinect sensors for 3D data acquisition. The work presented here also details an analysis and experimental study of the Kinect’s depth sensor capabilities and performance. The study comprises examination of the resolution, quantization error, and random distribution of depth data. In addition, the effects of color and reflectance characteristics of an object are also analyzed. The study examines two versions of Kinect sensors: one dedicated to operate with the Xbox 360 video game console and the more recent Microsoft Kinect for Windows version. The study of the Kinect sensor is extended to the design of a rapid acquisition system dedicated to large workspaces by the linkage of multiple Kinect units to collect 3D data over a large object, such as an automotive vehicle. A customized calibration method for this large workspace is proposed which takes advantage of the rapid 3D measurement technology embedded in the Kinect sensor and provides registration accuracy between local sections of point clouds that is within the range of the depth measurement accuracy permitted by the Kinect technology. The method is developed to calibrate all Kinect units with respect to a reference Kinect. The internal calibration of the sensor between the color and depth measurements is also performed to optimize the alignment between the modalities. The calibration of the 3D vision system is also extended to formally estimate its configuration with respect to the base of a manipulator robot, therefore allowing for seamless integration between the proposed vision platform and the kinematic control of the robot. The resulting vision-robotic system defines the comprehensive calibration of the reference Kinect with the robot. 
The latter can then be used to interact under visual guidance with large objects, such as vehicles, that are positioned within a significantly enlarged field of view created by the network of RGB-D sensors. The proposed design and calibration method is validated in a real-world scenario where five Kinect sensors operate collaboratively to rapidly and accurately reconstruct 180-degree coverage of the surface shape of various types of vehicles from a set of individual acquisitions performed in a semi-controlled environment, that is, an underground parking garage. The vehicle geometrical properties generated from the acquired 3D data are compared with the original dimensions of the vehicle.
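At the heart of registering each Kinect to the reference unit is the estimation of a rigid transform from corresponding 3D points. A standard least-squares solution (the Kabsch algorithm) is sketched below; it stands in for, and is much simpler than, the thesis's customized calibration, which additionally exploits the Kinect's rapid 3D measurements. The point set and transform are hypothetical.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) with Q ~ P @ R.T + t, for two
    (N, 3) arrays of corresponding 3D points (Kabsch / orthogonal Procrustes)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.linalg.det(Vt.T @ U.T)                  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

# Hypothetical check: rotate a small point set 90 degrees about z and shift it.
P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t = np.array([1.0, 2.0, 3.0])
R_est, t_est = kabsch(P, P @ Rz.T + t)             # recovers Rz and t
```

Chaining such pairwise transforms expresses every sensor, and ultimately the robot base, in the reference Kinect's frame.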

Detekce a vizualizace specifických rysů v mračnu bodů / Detection and Visualization of Features in a Point Cloud

Kratochvíl, Jiří Jaroslav January 2018 (has links)
The point cloud is an unorganized set of points with 3D coordinates (x, y, z) which represents a real object. These point clouds are acquired by a technology called 3D scanning. This scanning can be done by various methods, such as LIDAR (Light Detection And Ranging), or by utilizing recently developed 3D scanners. Point clouds can therefore be used in various applications, such as mechanical or reverse engineering, rapid prototyping, biology, nuclear physics or virtual reality. In this doctoral thesis, I focus on feature detection and visualization in a point cloud. These features represent parts of the object that can be described by well-known mathematical models (lines, planes, helices, etc.). The points on sharp edges are especially problematic for commonly used methods, so I focus on the detection of these problematic points. This thesis presents a new algorithm for their precise detection. Visualization of these points is done by a modified curve-fitting algorithm with a new weight function that leads to better results. Each of the proposed methods was tested on real data sets and compared with contemporary published methods.
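The thesis's specific weight function is not reproduced here, but the weighted least-squares curve-fitting machinery it plugs into can be sketched as follows: each sample's residual is scaled by its weight, so points the weight function distrusts (e.g. near a sharp edge) pull less on the fitted curve. The data and weights below are illustrative.

```python
import numpy as np

def weighted_polyfit(x, y, w, deg):
    """Weighted least-squares polynomial fit: minimizes
    sum_i w_i * (p(x_i) - y_i)^2 by scaling the Vandermonde system."""
    A = np.vander(x, deg + 1)          # columns x^deg ... x^0
    s = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * s[:, None], y * s, rcond=None)
    return coef                        # highest-degree coefficient first

# Exact line y = 2x + 1; any positive weights recover it exactly.
x = np.array([0.0, 1.0, 2.0, 3.0])
coef = weighted_polyfit(x, 2.0 * x + 1.0, np.array([1.0, 2.0, 3.0, 4.0]), 1)
```

Designing the weight function, rather than the solver, is where the thesis's contribution lies.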

Kalibrace robotického pracoviště / Calibration of Robotic Workspace

Uhlíř, Jan January 2019 (has links)
This work is concerned with the calibration of a robotic workplace, including the localization of a calibration object for the purpose of calibrating a 2D or 3D camera, a robotic arm and the scene of the robotic workplace. First, the problems related to the calibration of the aforementioned elements were studied, followed by an analysis of suitable methods for performing these calibrations. The result of this work is an application for the ROS robotic system providing three different types of calibration programs, whose functionality is experimentally verified at the end of this work.

Lepší vymezení herního prostoru pro VR pomocí 3D sensorů / Better Chaperone Bounds Using 3D Sensors

Tinka, Jan January 2018 (has links)
Room-scale tracking encourages users to move more freely and even walk. Even though there has been much research on making the limited physical workspace feel larger in VR, these approaches have their limitations and require certain conditions to be met. This thesis proposes an alternative to the conventional play-area boundaries of high-end VR products such as the HTC Vive and Oculus Rift, which are set by the user in a 2D fashion, as a means of enhancing workspace utilization. A 3D scanner is used to build a 3D point-cloud model of the play area's surroundings. This model is then used to detect collisions and provide feedback to the user. Evaluation based on user tests showed that this approach can be useful, is well accepted by users, and might be worth further research.
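The collision feedback described above reduces to a proximity query: the distance from the tracked position to the nearest point of the scanned room model, compared against a warning threshold. A brute-force sketch follows (a KD-tree would be the practical choice for large clouds; the tiny cloud and threshold are illustrative, not from the thesis).

```python
import numpy as np

def collision_warning(cloud, position, threshold=0.5):
    """Distance from the tracked position to the nearest point of the
    scanned room model; warn when it drops below the threshold.
    (Brute force; a KD-tree would replace this for large clouds.)"""
    d = np.linalg.norm(cloud - position, axis=1).min()
    return d, bool(d < threshold)

# Hypothetical two-point "room model" and a user standing at the origin.
cloud = np.array([[1.0, 0.0, 0.0], [2.0, 2.0, 0.0]])
dist, warn = collision_warning(cloud, np.zeros(3), threshold=0.5)
```

Running this per tracked frame against the headset and controller positions yields the 3D analogue of the 2D chaperone boundary.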

Využití fotogrammetrie pro realitní praxi / The Use of Photogrammetry in the Real Estate Practice

Viktora, Jakub January 2014 (has links)
This diploma thesis aims to explore the possibilities of using IT in real estate and building design practice. It deals with the simplification of otherwise difficult or insoluble tasks. It focuses on the rapidly developing field of photogrammetry and the further processing of its output in software. The work presents methods for measuring existing properties without standard tools (tape measure, laser); instead, the author uses photographic data and the program PhotoModeler for surveying and creating a 3D model of the facade. The thesis verifies the further usability of the obtained data up to the rendering phase of the surveyed building facade.

Lokalizace mobilního robota v prostředí / Localisation of Mobile Robot in the Environment

Němec, Lukáš January 2016 (has links)
This paper addresses the problem of mobile robot localization based on current 2D and 3D data and previous records, focusing on practical loop detection in the trajectory of a robot. The objective of this work was to evaluate current image- and depth-data processing methods for the problem of localization in an environment. This work uses Bag of Words for 2D data and, for 3D data, a point-cloud representation of the environment with the Viewpoint Feature Histogram. The designed system was implemented and evaluated.
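Loop detection with Bag of Words compares a histogram descriptor of the current view against those of earlier views; a high score flags a loop-closure candidate. A minimal similarity measure (cosine, one of several options — the abstract does not specify the scoring used) can be sketched as:

```python
import numpy as np

def bow_similarity(h1, h2):
    """Cosine similarity between two bag-of-visual-words histograms;
    a score near 1 between the current frame and a past frame flags
    a loop-closure candidate."""
    h1, h2 = np.asarray(h1, dtype=float), np.asarray(h2, dtype=float)
    return float(h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2)))

# Hypothetical 5-word vocabulary histograms.
same = bow_similarity([0, 3, 1, 0, 2], [0, 3, 1, 0, 2])    # identical views
diff = bow_similarity([1, 0, 0, 0, 0], [0, 1, 0, 0, 0])    # no shared words
```

The 3D Viewpoint Feature Histogram descriptors can be compared the same way, with a geometric check confirming the candidate loop.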
