  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Untersuchungen zum mobilen 3D-Scannen unter Tage bei K+S

Fischer, Andreas, Schäfer, Andreas 29 July 2016 (has links) (PDF)
Im Rahmen einer Diplomarbeit an der TU Bergakademie Freiberg wurden in 2014 die Grundlagen für die Auswertung von 3D-Punktwolken zur automatisierten Nachtragung des Risswerks gelegt. Um die dafür notwendigen 3D-Punktwolken möglichst wirtschaftlich zu erstellen, laufen seit 2015 Untersuchungen und Testmessungen zur Machbarkeit des untertägigen Einsatzes von mobil messenden Laserscannern. Im Folgenden werden verschiedene technische Ansätze sowie die Ergebnisse der Testmessungen und die weiteren geplanten Schritte vorgestellt. / As part of a diploma thesis at the TU Bergakademie Freiberg, the groundwork for evaluating 3D point clouds to update the mine map automatically was laid in 2014. Since 2015, studies and test measurements have been under way to determine how the necessary 3D point clouds can be created as economically as possible using mobile laser scanners underground. In the following, the different technical approaches are presented, together with the results of the test measurements and the next planned steps.

SLAM collaboratif dans des environnements extérieurs / Collaborative SLAM for outdoor environments

Contreras Samamé, Luis Federico 10 April 2019 (has links)
Cette thèse propose des modèles cartographiques à grande échelle d'environnements urbains et ruraux à l'aide de données en 3D acquises par plusieurs robots. Le mémoire contribue de deux manières principales au domaine de recherche de la cartographie. La première contribution est la création d'une nouvelle structure, CoMapping, qui permet de générer des cartes 3D de façon collaborative. Cette structure s'applique aux environnements extérieurs en ayant une approche décentralisée. La fonctionnalité de CoMapping comprend les éléments suivants : tout d'abord, chaque robot réalise la construction d'une carte de son environnement sous forme de nuage de points. Pour cela, le système de cartographie a été mis en place sur des ordinateurs dédiés à chaque voiture, en traitant les mesures de distance à partir d'un LiDAR 3D se déplaçant en six degrés de liberté (6-DOF). Ensuite, les robots partagent leurs cartes locales et fusionnent individuellement les nuages de points afin d'améliorer l'estimation de leur cartographie locale. La deuxième contribution clé est le groupe de métriques qui permettent d'analyser les processus de fusion et de partage de cartes entre les robots. Nous présentons des résultats expérimentaux en vue de valider la structure CoMapping et ses métriques. Tous les tests ont été réalisés dans des environnements extérieurs urbains du campus de l'École Centrale de Nantes ainsi que dans des milieux ruraux. / This thesis proposes large-scale mapping models of urban and rural environments using 3D data acquired by several robots. The work contributes to the research field of mapping in two main ways. The first contribution is the creation of a new framework, CoMapping, which makes it possible to generate 3D maps cooperatively. This framework applies to outdoor environments with a decentralized approach. CoMapping's functionality includes the following elements: First, each robot builds a map of its environment in point cloud format. To do this, the mapping system was set up on computers dedicated to each vehicle, processing distance measurements from a 3D LiDAR moving in six degrees of freedom (6-DOF). Then, the robots share their local maps and individually merge the point clouds to improve their local map estimates. The second key contribution is the group of metrics that allow the map merging and map sharing processes between the robots to be analyzed. We present experimental results to validate the CoMapping framework and its metrics. All tests were carried out in urban outdoor environments on the campus of the École Centrale de Nantes as well as in rural areas.
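The map fusion step described above, in which each robot transforms a peer's local point cloud into its own frame and concatenates the clouds, can be sketched in a few lines. This is an illustrative NumPy sketch, not the CoMapping implementation; the relative transform (R, t) between the robots' frames is assumed to be already known, whereas in practice it must be estimated.

```python
import numpy as np

def merge_local_maps(cloud_a, cloud_b, R, t):
    """Merge robot B's local point cloud into robot A's frame.

    cloud_a, cloud_b: (N, 3) arrays of 3D points.
    R (3x3 rotation) and t (3,) translation map B's frame into A's frame.
    Returns the fused (N_a + N_b, 3) cloud expressed in A's frame.
    """
    cloud_b_in_a = cloud_b @ R.T + t  # rigid transform of B's points
    return np.vstack([cloud_a, cloud_b_in_a])

# Toy example: B's frame is A's frame rotated 90 degrees about z,
# shifted by (1, 0, 0).
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.0, 0.0])
map_a = np.array([[0.0, 0.0, 0.0]])
map_b = np.array([[1.0, 0.0, 0.0]])  # in B's frame
fused = merge_local_maps(map_a, map_b, R, t)
print(fused.shape)  # (2, 3)
```

In a decentralized setting each robot would run this fusion locally on the maps it receives, which matches the abstract's description of robots merging the shared clouds individually.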

Untersuchung von additiv gefertigten Prägeformen mit graduellen Eigenschaften hinsichtlich ihres Prägeverhaltens

Mohrich, Maximilian 16 July 2021 (has links)
Ziel dieser Masterarbeit ist die Untersuchung neuartiger Prägeformkonzepte hinsichtlich ihres Prägeverhaltens. Die Konzepte weisen lokal unterschiedliche Materialeigenschaften auf, die zu einer verbesserten Ausprägung von Karton führen sollen. Die Konzepte sollen anhand der Prägeergebnisse und der Abformgenauigkeit evaluiert werden. Dabei ist ein weiteres Ziel der Arbeit, Methoden zur Quantifizierung der Abformgenauigkeit zu finden. Die Herstellung der Konzepte erfolgt mithilfe eines additiven Fertigungssystems, welches mehrere Materialien in einem Bauvorgang verarbeiten kann. Zur Datengewinnung werden Oberflächenscans der geprägten Kartonproben und Werkzeuge durchgeführt. Auf Grundlage dieser Scans werden drei Methoden zur Ermittlung der Abformgenauigkeit vorgeschlagen. Abschließend werden die Werkzeuge anhand der Prägeergebnisse und der ermittelten Abformgenauigkeit bewertet. Weiterhin werden die vorgeschlagenen Methoden miteinander verglichen und deren Vor- und Nachteile diskutiert. Dies gibt Auskunft darüber, unter welchen Bedingungen der Einsatz welcher Methode sinnvoll erscheint. / The aim of this master thesis is to investigate novel embossing die concepts with regard to their embossing behavior. The concepts have locally varying material properties, which are intended to lead to improved embossing of cardboard. The concepts are evaluated on the basis of the embossing results and the impression accuracy. A further aim of the work is to find methods for quantifying the impression accuracy. The concepts are manufactured using an additive manufacturing system that can process multiple materials in a single build process. Surface scans of the embossed cardboard samples and the tools are performed to obtain data. Based on these scans, three methods for determining the impression accuracy are proposed. Finally, the tools are evaluated based on the embossing results and the determined impression accuracy. Furthermore, the proposed methods are compared with each other and their advantages and disadvantages are discussed. This indicates under which conditions each method is the sensible choice.
Gliederung: 1. Einleitung 2. Theoretische Grundlagen 2.1 Umformprozesse 2.1.1 Prägen von Faserwerkstoffen 2.1.2 Einsatz von Niederhaltern beim Umformen von Blechen 2.1.2.1 Tiefziehen 2.1.2.2 Tiefen 2.1.3 Einsatz von Niederhaltern beim Umformen von Karton 2.1.3.1 Ziehen und Pressformen 2.1.3.2 Hydroformen 2.2 Multi-Material-Verarbeitung in der additiven Fertigung 2.2.1 Materialextrusion 2.2.2 Badbasierte Photopolymerisation 2.2.3 Material Jetting 2.2.4 Pulverbettbasiertes Schmelzen 2.2.5 Workflow und Datenvorbereitung 2.2.6 Geeignete Dateiformate 2.3 Soll-Ist-Vergleich von 2.5D-Oberflächendaten 2.3.1 Berechnung von Flächeninhalten und Volumen 2.3.2 Registrierung und Abstandsberechnung von Punktwolken 3. Versuche und Messungen 3.1 Herstellung der Prägeformkonzepte 3.1.1 Beschreibung der Konzepte 3.1.2 Fertigungstechnologie und Materialwahl 3.1.3 Datenvorbereitung für die Polyjet-Fertigung 3.2 Prägeversuche und Datenverarbeitung 3.2.1 Prägeversuche 3.2.2 Oberflächenscan am Keyence 3D-Makroskop 3.3 Ermittlung der Abformgenauigkeit 3.3.1 Flächen- und Volumenberechnung in MatLab & CloudCompare 3.3.2 Abformgenauigkeit nach ICP-Algorithmus & Abstandsberechnung 4. Ergebnisse und Diskussion 4.1 Betrachtung der Prägewerkzeuge 4.2 Betrachtung der Kartonprägungen 4.2.1 Prägeergebnisse nach Flächeninhalt der Profilschnitte 4.2.2 Einfluss der Faserlaufrichtung auf Kartonprägungen 4.2.3 Prägeergebnisse nach Volumen der Punktwolken 4.3 Betrachtung der Abformgenauigkeit 4.3.1 Abformgenauigkeit nach Flächeninhalt & Volumen 4.3.2 Abformgenauigkeit nach ICP-Algorithmus & Abstandsberechnung 4.4 Bewertung der Methoden zur Ermittlung der Abformgenauigkeit 4.5 Beurteilung des Bedienereinflusses bei der Datenverarbeitung am Keyence 5. Zusammenfassung und Ausblick Literaturverzeichnis Eidesstattliche Erklärung
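One family of the proposed impression-accuracy measures compares surface areas and volumes derived from the 2.5D surface scans. A minimal, hypothetical sketch of a volume measure on a gridded height map follows; the thesis itself works with MATLAB and CloudCompare, so this is an illustration of the idea, not that pipeline.

```python
import numpy as np

def embossing_volume(height_map, cell_area):
    """Approximate the embossed volume of a 2.5D surface scan.

    Sums all heights above the base plane (z = 0) and multiplies by the
    area of one grid cell. Comparing this value between the embossing
    tool and the embossed cardboard gives a crude accuracy measure.
    """
    return np.clip(height_map, 0.0, None).sum() * cell_area

# Toy 2x2 scan on a 1 mm grid: a 0.5 mm relief over half the cells.
scan = np.array([[0.5, 0.5],
                 [0.0, 0.0]])
print(embossing_volume(scan, cell_area=1.0))  # 1.0
```

A ratio of cardboard volume to tool volume close to 1 would then indicate a faithful impression under this (simplified) measure.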

Semantic Segmentation of Point Clouds Using Deep Learning / Semantisk Segmentering av Punktmoln med Deep Learning

Tosteberg, Patrik January 2017 (has links)
In computer vision, it has in recent years become more popular to use point clouds to represent 3D data. To understand what a point cloud contains, methods like semantic segmentation can be used. Semantic segmentation is the problem of segmenting images or point clouds and understanding what the different segments are. One application of semantic segmentation of point clouds is autonomous driving, where the car needs information about the objects in its surroundings. Our approach to the problem is to project the point clouds into 2D virtual images using the Katz projection. We then use pre-trained convolutional neural networks to semantically segment the images. To obtain the semantically segmented point clouds, we project the segmentation scores back into the point cloud. Our approach is evaluated on the Semantic3D dataset. We find our method is comparable to the state of the art, without any fine-tuning on the Semantic3D dataset.
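The projection pipeline sketched in this abstract (project the 3D points into a virtual image, segment the image, then carry the per-pixel labels back to the points) can be illustrated with a plain pinhole camera model. This is a hypothetical minimal sketch, not the Katz projection or the pre-trained networks of the thesis; the label image below stands in for a CNN's segmentation output, and the intrinsics `f`, `cx`, `cy` are made-up values.

```python
import numpy as np

def project_points(points, f, cx, cy):
    """Pinhole projection of (N, 3) camera-frame points to pixel coords."""
    z = points[:, 2]
    u = f * points[:, 0] / z + cx
    v = f * points[:, 1] / z + cy
    return np.stack([u, v], axis=1)

def backproject_labels(points, label_image, f, cx, cy):
    """Look up each 3D point's semantic label in a 2D segmentation map."""
    uv = project_points(points, f, cx, cy)
    cols = np.clip(np.round(uv[:, 0]).astype(int), 0, label_image.shape[1] - 1)
    rows = np.clip(np.round(uv[:, 1]).astype(int), 0, label_image.shape[0] - 1)
    return label_image[rows, cols]

# Toy 4x4 "segmentation result": left half class 0, right half class 1.
labels_2d = np.zeros((4, 4), dtype=int)
labels_2d[:, 2:] = 1
pts = np.array([[-0.6, 0.1, 1.0],   # projects into the left half
                [ 0.6, 0.1, 1.0]])  # projects into the right half
point_labels = backproject_labels(pts, labels_2d, f=2.0, cx=1.5, cy=1.5)
print(point_labels)  # [0 1]
```

The real pipeline additionally has to handle occlusion (which is what the Katz projection addresses) and aggregates class scores rather than hard labels, but the back-projection step has this shape.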

Feature Extraction Based Iterative Closest Point Registration for Large Scale Aerial LiDAR Point Clouds

Graehling, Quinn R. January 2020 (has links)
No description available.
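No abstract is available, but the title names iterative closest point (ICP) registration. For orientation, one iteration of plain point-to-point ICP (nearest-neighbor matching followed by a Kabsch/SVD rigid fit) can be sketched; this is generic ICP, not the feature-extraction-based variant the thesis studies.

```python
import numpy as np

def icp_step(source, target):
    """One iteration of point-to-point ICP.

    Matches each source point to its nearest target point (brute force),
    then solves for the best rigid transform via the Kabsch/SVD method
    and applies it to the source cloud.
    """
    # Nearest target point for each source point.
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    matched = target[d.argmin(axis=1)]
    # Kabsch: optimal rotation and translation between matched sets.
    sc, tc = source.mean(axis=0), matched.mean(axis=0)
    H = (source - sc).T @ (matched - tc)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = tc - R @ sc
    return source @ R.T + t

# Target is the source shifted by (0.1, 0, 0); the shift is small enough
# that every point matches its own translated copy, so one step recovers it.
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
tgt = src + np.array([0.1, 0.0, 0.0])
aligned = icp_step(src, tgt)
print(np.allclose(aligned, tgt))  # True
```

For large aerial LiDAR clouds the brute-force matching would be replaced by a spatial index (e.g. a k-d tree), and a feature-based variant would match descriptors rather than raw positions.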

Automated Tree Crown Discrimination Using Three-Dimensional Shape Signatures Derived from LiDAR Point Clouds

Sadeghinaeenifard, Fariba 05 1900 (has links)
Discrimination of different tree crowns based on their 3D shapes is essential for a wide range of forestry applications and, due to its complexity, is a significant challenge. This study presents a modified 3D shape descriptor for the perception of different tree crown shapes in discrete-return LiDAR point clouds. The proposed methodology comprises five main components: definition of a local coordinate system, learning salient points, generation of simulated LiDAR point clouds with geometrical shapes, shape signature generation (from simulated LiDAR points as the reference shape signature and actual LiDAR point clouds as the evaluated shape signature), and finally similarity assessment of the shape signatures in order to extract the shape of a real tree. The first component is a strategy for defining a local coordinate system for each tree in order to normalize the 3D point clouds. In the second component, a learning approach is used to categorize all 3D points into two ranks to identify interesting, or salient, points on each tree. The third component covers the generation of simulated LiDAR point clouds for two geometrical shapes, a hemisphere and a half-ellipsoid; the operator then extracts 3D LiDAR point clouds of actual trees, either deciduous or evergreen. In the fourth component, a longitude-latitude transformation is applied to the simulated and actual LiDAR point clouds to generate 3D shape signatures of tree crowns. A critical step is the transformation of LiDAR points from their exact positions to longitude-latitude positions (distinct from geographic longitude and latitude coordinates), labeled with their pre-assigned ranks. Natural neighbor interpolation then converts the point maps to raster datasets. The shape signatures generated from simulated and actual LiDAR points are called the reference and evaluated shape signatures, respectively.
Lastly, the fifth component determines the similarity between the evaluated and reference shape signatures to extract the shape of each examined tree. The entire process is automated with ArcGIS toolboxes through Python programming for further evaluation using more tree crowns in different study areas. Results from LiDAR points captured for 43 trees in the City of Surrey, British Columbia (Canada) suggest that the modified shape descriptor is a promising method for discriminating different shapes of tree crowns using LiDAR point cloud data. Experimental results also indicate that the modified longitude-latitude shape descriptor fulfills all desired properties of a suitable shape descriptor proposed in computer science, along with leaf-off/leaf-on invariance, which makes the process independent of the acquisition date of the LiDAR data.
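A longitude-latitude style transformation of crown points about a local center can be sketched as a spherical coordinate mapping. This is one plausible reading, shown for illustration only; the abstract stresses that the thesis's transformation differs from geographic longitude-latitude, and its exact definition may differ from this sketch.

```python
import numpy as np

def longitude_latitude(points, center):
    """Map (N, 3) points to (longitude, latitude) about a local center.

    longitude: azimuth angle in the x-y plane around the crown center,
    latitude: elevation angle above the horizontal plane through it.
    Both are returned in degrees.
    """
    d = points - center
    lon = np.arctan2(d[:, 1], d[:, 0])
    lat = np.arctan2(d[:, 2], np.hypot(d[:, 0], d[:, 1]))
    return np.degrees(lon), np.degrees(lat)

# A point "east" of the center with equal horizontal and vertical offset.
pts = np.array([[1.0, 0.0, 1.0]])
lon, lat = longitude_latitude(pts, center=np.zeros(3))
print(lon[0], lat[0])
```

Rasterizing salience ranks over this (longitude, latitude) grid, as the abstract describes with natural neighbor interpolation, then yields a 2D signature image per crown.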

Recovering dense 3D point clouds from single endoscopic image

Xi, L., Zhao, Y., Chen, L., Gao, Q.H., Tang, W., Wan, Tao Ruan, Xue, T. 26 March 2022 (has links)
Recovering high-quality 3D point clouds from monocular endoscopic images is a challenging task. This paper proposes a novel deep learning-based computational framework for 3D point cloud reconstruction from single monocular endoscopic images. An unsupervised mono-depth learning network is used to generate depth information from monocular images: given a single endoscopic image, the network produces a depth map, which is then used to recover a dense 3D point cloud. A generative Endo-AE network based on an auto-encoder is trained to repair defects in the dense point cloud by generating the best representation from the incomplete data. The performance of the proposed framework is evaluated against state-of-the-art learning-based methods, and the results are also compared with non-learning-based stereo 3D reconstruction algorithms. Our proposed methods outperform both for 3D point cloud reconstruction. The Endo-AE model for point cloud completion can generate high-quality, dense 3D endoscopic point clouds from incomplete point clouds with holes, and the framework is able to recover complete 3D point clouds with up to 60% of the information missing. Five large medical in-vivo databases of 3D point clouds of real endoscopic scenes have been generated, and two synthetic 3D medical datasets have been created; we have made these datasets publicly available for researchers free of charge. The proposed computational framework can produce high-quality and dense 3D point clouds from single mono-endoscopy images for augmented reality, virtual reality and other computer-mediated medical applications.
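The depth-map-to-point-cloud step described above (a predicted depth image back-projected through the camera model into a dense 3D cloud) can be sketched with a pinhole model. The intrinsics `f`, `cx`, `cy` are placeholder values, and this is a generic back-projection, not the paper's Endo-AE pipeline.

```python
import numpy as np

def depth_to_point_cloud(depth, f, cx, cy):
    """Back-project a depth map (H, W) into an (H*W, 3) camera-frame cloud."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]      # pixel coordinate grid
    z = depth
    x = (u - cx) * z / f           # inverse pinhole projection
    y = (v - cy) * z / f
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy 2x2 depth map with every pixel at depth 2.
depth = np.full((2, 2), 2.0)
cloud = depth_to_point_cloud(depth, f=1.0, cx=0.5, cy=0.5)
print(cloud.shape)  # (4, 3)
```

In the paper's setting the depth map would come from the unsupervised mono-depth network, and the resulting cloud would then be fed to the completion model to fill holes.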

Visual Analysis of High-Dimensional Point Clouds using Topological Abstraction

Oesterling, Patrick 17 May 2016 (has links) (PDF)
This thesis is about visualizing a kind of data that is trivial to process by computers but difficult to imagine by humans because nature does not allow for intuition with this type of information: high-dimensional data. Such data often result from representing observations of objects under various aspects or with different properties. In many applications, a typical, laborious task is to find related objects or to group those that are similar to each other. One classic solution for this task is to imagine the data as vectors in a Euclidean space with object variables as dimensions. Utilizing Euclidean distance as a measure of similarity, objects with similar properties and values accumulate to groups, so-called clusters, that are exposed by cluster analysis on the high-dimensional point cloud. Because similar vectors can be thought of as objects that are alike in terms of their attributes, the point cloud's structure and individual cluster properties, like their size or compactness, summarize data categories and their relative importance. The contribution of this thesis is a novel analysis approach for visual exploration of high-dimensional point clouds without suffering from structural occlusion. The work is based on implementing two key concepts: The first idea is to discard those geometric properties that cannot be preserved and, thus, lead to the typical artifacts. Topological concepts are used instead to shift away the focus from a point-centered view on the data to a more structure-centered perspective. The advantage is that topology-driven clustering information can be extracted in the data's original domain and be preserved without loss in low dimensions. The second idea is to split the analysis into a topology-based global overview and a subsequent geometric local refinement.
The occlusion-free overview enables the analyst to identify features and to link them to other visualizations that permit analysis of those properties not captured by the topological abstraction, e.g. cluster shape or value distributions in particular dimensions or subspaces. The advantage of separating structure from data point analysis is that restricting local analysis to data subsets significantly reduces artifacts and the visual complexity of standard techniques. That is, the additional topological layer enables the analyst to identify structure that was hidden before and to focus on particular features by suppressing irrelevant points during local feature analysis. This thesis addresses the topology-based visual analysis of high-dimensional point clouds for both the time-invariant and the time-varying case. Time-invariant means that the points change neither in number nor in position; the analyst explores the clustering of a fixed and constant set of points. The extension to the time-varying case implies the analysis of a varying clustering, where clusters appear, merge, split, or vanish. Especially for high-dimensional data, both tracking, that is, relating features over time, and visualizing the changing structure are difficult problems to solve.
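The 0-dimensional part of such a topological abstraction, clusters seen as connected components of the point cloud at a given distance scale, can be mimicked with a union-find over a radius graph. This is a crude illustrative stand-in for the topological machinery of the thesis, not its actual method; note that it works in the data's original (here 5-dimensional) domain, which is the point the abstract makes.

```python
import numpy as np

def cluster_count(points, radius):
    """Count connected components of the radius graph over a point cloud.

    Two points are linked if their distance is at most `radius`; the
    number of components is the cluster count at that scale.
    """
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= radius:
                parent[find(i)] = find(j)

    return len({find(i) for i in range(n)})

# Two well-separated blobs in a 5-dimensional space.
rng = np.random.default_rng(0)
blob_a = rng.normal(0.0, 0.1, size=(20, 5))
blob_b = rng.normal(5.0, 0.1, size=(20, 5))
pts = np.vstack([blob_a, blob_b])
print(cluster_count(pts, radius=1.0))  # 2
```

Sweeping the radius and recording when components merge gives exactly the kind of scale-dependent, occlusion-free cluster summary that a low-dimensional scatter plot of the same data could not show reliably.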

Inverse geometry : from the raw point cloud to the 3d surface : theory and algorithms / Géométrie inverse : du nuage de points brut à la surface 3D : théorie et algorithmes

Digne, Julie 23 November 2010 (has links)
De nombreux scanners laser permettent d'obtenir la surface 3D à partir d'un objet. Néanmoins, la surface reconstruite est souvent lisse, ce qui est dû au débruitage interne du scanner et aux décalages entre les scans. Cette thèse utilise des scans haute précision et choisit de ne pas perdre ni altérer les échantillons initiaux au cours du traitement afin de les visualiser. C'est en effet la seule façon de découvrir les imperfections (trous, décalages de scans). De plus, comme les données haute précision capturent même le plus léger détail, tout débruitage ou sous-échantillonnage peut amener à perdre ces détails. La thèse s'attache à prouver que l'on peut trianguler le nuage de points initial en ne perdant presque aucun échantillon. Le problème de la visualisation exacte sur des données de plus de 35 millions de points et de 300 scans différents est ainsi résolu. Deux problèmes majeurs sont traités : le premier est l'orientation du nuage de points brut complet et la création d'un maillage. Le second est la correction des petits décalages entre les scans qui peuvent créer un très fort aliasing et compromettre la visualisation de la surface. Le second développement de la thèse est une décomposition des nuages de points en hautes/basses fréquences. Ainsi, des méthodes classiques pour l'analyse d'image, l'arbre des ensembles de niveaux et la représentation MSER, sont étendues aux maillages, ce qui donne une méthode intrinsèque de segmentation de maillages. Une analyse mathématique d'opérateurs différentiels discrets, proposés dans la littérature et opérant sur des nuages de points, est réalisée. En considérant les développements asymptotiques de ces opérateurs sur une surface régulière, ces opérateurs peuvent être classifiés. Cette analyse amène au développement d'un opérateur discret consistant avec le mouvement par courbure moyenne (l'équation de la chaleur intrinsèque), définissant ainsi un espace-échelle numérique simple et remarquablement robuste. Cet espace-échelle permet de résoudre de manière unifiée tous les problèmes mentionnés auparavant (orientation et triangulation du nuage de points, fusion de scans, segmentation de maillages), qui sont ordinairement traités avec des techniques distinctes. / Many laser devices directly acquire 3D objects and reconstruct their surface. Nevertheless, the final reconstructed surface is usually smoothed out as a result of the scanner's internal de-noising process and the offsets between different scans. This thesis, working on results from high-precision scans, adopts the somewhat extreme conservative position of not losing or altering any raw sample throughout the whole processing pipeline, and of attempting to visualize them all. Indeed, this is the only way to discover all surface imperfections (holes, offsets). Furthermore, since high-precision data can capture the slightest surface variation, any smoothing and any sub-sampling can incur the loss of textural detail. The thesis sets out to prove that one can triangulate the raw point cloud with almost no sample loss. It solves the exact visualization problem on large data sets of up to 35 million points made of 300 different scan sweeps and more. Two major problems are addressed. The first is the orientation of the complete raw point set and the building of a high-precision mesh. The second is the correction of the tiny scan misalignments which can cause strong high-frequency aliasing and completely hamper direct visualization. The second development of the thesis is a general low-high frequency decomposition algorithm for any point cloud. Classic image analysis tools, the level-set tree and the MSER representation, are thereby extended to meshes, yielding an intrinsic mesh segmentation method. The underlying mathematical development focuses on an analysis of a half dozen discrete differential operators acting on raw point clouds which have been proposed in the literature. By considering the asymptotic behavior of these operators on a smooth surface, a classification by their underlying curvature operators is obtained. This analysis leads to the development of a discrete operator consistent with the mean curvature motion (the intrinsic heat equation), defining a remarkably simple and robust numerical scale space. Within this scale space, all of the above-mentioned problems (point set orientation, raw point set triangulation, scan merging, segmentation), usually addressed by separate techniques, are solved in a unified framework.
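A discrete operator in the spirit of the one described, each point projected onto the regression plane of its neighborhood, with iteration yielding a scale space, can be sketched as follows. This is an illustrative stand-in under simplifying assumptions (brute-force neighbor search, a fixed radius), not Digne's exact operator.

```python
import numpy as np

def smoothing_step(points, radius):
    """One discrete smoothing step on an (N, 3) raw point cloud.

    Each point is projected onto the least-squares regression plane of
    its radius neighborhood; iterating this acts like a discrete mean
    curvature motion and builds a numerical scale space.
    """
    out = np.empty_like(points)
    for i, p in enumerate(points):
        nbrs = points[np.linalg.norm(points - p, axis=1) <= radius]
        centroid = nbrs.mean(axis=0)
        # Plane normal = eigenvector of the smallest eigenvalue of the
        # local covariance matrix (np.linalg.eigh sorts ascending).
        cov = (nbrs - centroid).T @ (nbrs - centroid)
        normal = np.linalg.eigh(cov)[1][:, 0]
        out[i] = p - np.dot(p - centroid, normal) * normal
    return out

# Noisy samples of the plane z = 0: one step pulls the points toward it.
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(0.0, 1.0, 200),
                       rng.uniform(0.0, 1.0, 200),
                       rng.normal(0.0, 0.01, 200)])
smoothed = smoothing_step(pts, radius=0.3)
print(np.abs(pts[:, 2]).mean(), np.abs(smoothed[:, 2]).mean())
```

Because the step is reversible in the sense that each point's displacement is recorded, the high-frequency component (the displacements) and the low-frequency component (the smoothed cloud) give exactly the kind of low-high frequency decomposition the abstract mentions.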
