  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Punktwolken von Handscannern und ihr Potenzial / Point clouds from hand-held scanners and their potential

Martienßen, Thomas 16 July 2019 (has links)
This contribution addresses practical aspects, abilities, and limitations of using the ZEB-REVO hand-held scanner from GeoSLAM for underground mine mapping. Besides the mapping activities, the post-processing of the generated point clouds and the requirements for georeferencing are also discussed. An accuracy assessment is presented by means of a comparison with point clouds generated by a terrestrial laser scanner from Riegl, using the RiScanPro software; the investigation shows the user the limits of the system. The results demonstrate the technical abilities as well as the limitations of the ZEB-REVO system. In conclusion, a critical evaluation of the system is presented.
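The accuracy assessment above rests on a cloud-to-cloud comparison against a terrestrial reference scan. The RiScanPro workflow itself is proprietary, but the core of such a comparison can be sketched as a nearest-neighbor distance query against the reference cloud. The synthetic plane, noise level, and summary statistics below are invented for the example:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(test_cloud, reference_cloud):
    """Nearest-neighbor distance from each test point to the reference cloud."""
    tree = cKDTree(reference_cloud)
    distances, _ = tree.query(test_cloud, k=1)
    return distances

# Synthetic example: a densely sampled reference plane and a noisy
# hand-held scan of the same surface (3 cm noise, roughly ZEB-REVO class).
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(5000, 2))
reference = np.column_stack([xy, np.zeros(len(xy))])
scan = reference[:1000] + rng.normal(0, 0.03, size=(1000, 3))

d = cloud_to_cloud_distances(scan, reference)
print(f"mean deviation: {d.mean():.3f} m, 95th percentile: {np.quantile(d, 0.95):.3f} m")
```

Reporting the mean and an upper quantile of these distances is one common way to summarize how far a hand-held scan deviates from a higher-accuracy reference.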
32

Domain adaptation from 3D synthetic images to real images

Manamasa, Krishna Himaja January 2020 (has links)
Background. Domain adaptation describes a model that learns from a source data distribution and performs well on a different target distribution. Here the concept is applied to assembly-line production tasks to perform automatic quality inspection. Objectives. The aim of this master thesis is to apply 3D domain adaptation from synthetic images to real images. It attempts to bridge the gap between the two domains (synthetic and real point cloud images) by implementing deep learning models that learn from synthetic 3D point clouds (CAD model images) and perform well on actual 3D point clouds (3D camera images). Methods. Over the course of the thesis project, various methods for understanding and analyzing the data, in order to bridge the gap between CAD and CAM data and make them similar, are examined. Literature review and controlled experiment are the research methodologies followed during implementation. We experiment with four different deep learning models on the generated data and compare their performance to determine which model performs best. Results. The results are reported through two metrics, accuracy and training time, recorded for each deep learning model after the experiment. These metrics are illustrated as graphs for a comparative analysis of the models on which the data is trained and tested. PointDAN achieved higher accuracy than the other three models. Conclusions. The results show that domain adaptation from synthetic images to real images is possible with the generated data. PointDAN, a deep learning model that focuses on local and global feature alignment with single-view point data, shows the best results on our data.
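PointDAN itself aligns local and global features adversarially; as a much simpler illustration of the feature-alignment idea behind domain adaptation, the sketch below measures the discrepancy between a synthetic-domain and a real-domain feature batch with a maximum mean discrepancy (MMD) statistic, and shows that a crude normalization step reduces it. The feature distributions and the kernel bandwidth are invented for the example:

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=4.0):
    """Squared maximum mean discrepancy between two feature batches
    under an RBF kernel (bandwidth on the order of sqrt(feature dim))."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(1)
synthetic = rng.normal(0.0, 1.0, size=(200, 16))   # stand-in for CAD-derived features
real      = rng.normal(0.5, 1.2, size=(200, 16))   # stand-in for camera-derived features
aligned   = (real - real.mean(0)) / real.std(0)    # crude per-feature alignment step

print(rbf_mmd2(synthetic, real), rbf_mmd2(synthetic, aligned))
```

A domain-adaptation model minimizes this kind of domain gap (by adversarial training in PointDAN's case, or by an MMD penalty in simpler approaches) while also minimizing the classification loss on the labeled source domain.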
33

Apprentissage de nouvelles représentations pour la sémantisation de nuages de points 3D / Learning new representations for 3D point cloud semantic segmentation

Thomas, Hugues 19 November 2019 (has links)
In recent years, new technologies have allowed the acquisition of large and precise 3D scenes as point clouds. They have opened up new applications, such as self-driving vehicles or infrastructure monitoring, that rely on efficient large-scale point cloud processing. Convolutional deep learning methods cannot be used directly with point clouds. In the case of images, convolutional filters brought the ability to learn new representations, which were previously hand-crafted in older computer vision methods. Following the same line of thought, this thesis presents a study of the hand-crafted representations previously used for point cloud processing. We propose several contributions that serve as the basis for the design of a new convolutional representation for point cloud processing. They include a new definition of multiscale radius neighborhoods, a comparison with multiscale k-nearest neighbors, a new active learning strategy, the semantic segmentation of large-scale point clouds, and a study of the influence of density in multiscale representations. Building on these contributions, we introduce the Kernel Point Convolution (KPConv), which uses radius neighborhoods and a set of kernel points that play the same role as the kernel pixels of an image convolution. Our convolutional networks outperform state-of-the-art semantic segmentation approaches in almost any situation. In addition to these strong results, we designed KPConv with great flexibility and a deformable version. To conclude, we propose several insights into the representations that our method is able to learn.
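The core idea of the rigid KPConv can be sketched compactly: each neighbor's feature vector is weighted by a linear "influence" of every kernel point (falling to zero beyond a radius sigma) and multiplied by that kernel point's weight matrix. The minimal NumPy version below follows the published formulation for a single output point; the random data and dimensions are invented for the example:

```python
import numpy as np

def kpconv_point(center, neighbors, feats, kernel_pts, weights, sigma):
    """Rigid KPConv at one query point.
    neighbors: (N, 3) points in the radius neighborhood of `center`
    feats:     (N, Cin) input features of those neighbors
    kernel_pts:(K, 3) fixed kernel point positions (offsets from center)
    weights:   (K, Cin, Cout) one weight matrix per kernel point
    """
    offsets = neighbors - center                                           # (N, 3)
    dist = np.linalg.norm(offsets[:, None, :] - kernel_pts[None, :, :], axis=-1)  # (N, K)
    h = np.maximum(0.0, 1.0 - dist / sigma)    # linear correlation (influence)
    out = np.zeros(weights.shape[2])
    for k in range(len(kernel_pts)):
        # sum_i h(y_i, x_k) * f_i, then apply W_k
        out += (h[:, k:k + 1] * feats).sum(0) @ weights[k]
    return out

rng = np.random.default_rng(0)
neighbors = rng.normal(0, 0.5, (32, 3))
feats = rng.normal(0, 1, (32, 4))
kernel_pts = rng.normal(0, 0.5, (8, 3))      # fixed positions in the rigid version
weights = rng.normal(0, 0.1, (8, 4, 6))
out = kpconv_point(np.zeros(3), neighbors, feats, kernel_pts, weights, sigma=0.5)
print(out.shape)
```

The deformable version mentioned in the abstract additionally predicts per-point offsets for the kernel points, letting the kernel adapt its shape to the local geometry.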
34

Reverse Engineering of 3-D Point Cloud into NURBS Geometry

Joshi, Shriyanka 04 November 2020 (has links)
No description available.
35

View-Agnostic Point Cloud Generation

Singer, Nina 13 July 2022 (has links)
No description available.
36

Comparing the Technical and Business Effects of Working with Immersive Virtual Reality Instead of, or in Addition to LayCAD in the Factory Design Process

Parthasarathy, Sukesh Rohith January 2018 (has links)
Scania needs to reconfigure their factories quickly to meet the future demands of the market. The process of reconfiguring factories starts with the factory layouts. Factory design is a complicated detail-oriented process, and if major physical changes on the factory floor and installations fail, it affects the entire production flow. It is an expensive and time consuming process to rectify these errors. Hence, it is extremely important that the installations are both quick and accurate. So, Scania wants to investigate how Immersive Virtual Reality could be used in the Factory layout design process. This thesis addresses how to utilize VR with existing technologies at Scania, by mapping the capabilities of VR to the needs of Scania. A function of interest to Scania is the possibility to import data, such as Point Clouds, CAD Objects and Factory Layouts, to create a coordinated VR platform. After understanding the VR technology, it was proved possible to import and visualize all of these data, after exporting it to a format that was readable by the VR system. Once the VR platform was setup, based on the imported models, the next step was to evaluate the aspects of working with VR, as compared to working with Scania’s Factory CAD system –“LayCAD”. It was assessed that the Immersive VR system offers better visualization, evaluation and realization of layout changes, compared to the LayCAD system. But the use of VR requires additional skills, time and cost to setup the VR platform. Based on the degree of maturity of the VR technology and the current state of Scania, it was concluded that VR cannot yet serve as a standalone solution for layouts within Scania. For an efficient factory development process, it is more appropriate to work with both systems in combination, where Immersive VR is used as an additional visualization or verification tool for presenting and evaluating conceptual layouts. 
/ Scania behöver omkonfigurera sina fabriker snabbt för att möta marknadens framtida krav. Processen med omkonfigurering av fabriker börjar med fabrikslayouterna. Fabriksdesign är en komplicerad detaljorienterad process, och om stora fysiska förändringar på fabrikens golv och installationer misslyckas påverkar det hela produktionsflödet. Det är en dyr och tidskrävande process att rätta till dessa fel. Därför är det extremt viktigt att installationerna är både snabba och korrekta. Så, Scania vill undersöka hur Immersive Virtual Reality (VR) kan användas i fabrikslayoutdesignprocessen. Denna avhandling beskriver hur man kan använda VR med befintlig teknik i Scania genom att kartlägga VR: s förmåga att möta Scanias behov. En funktion av intresse för Scania är möjligheten att importera data, såsom Point Clouds, CAD Objects och Factory Layouts, för att skapa en samordnad VR-plattform. Efter att ha förstått VR-tekniken visade det sig vara möjligt att importera och visualisera alla dessa data efter att ha exporterat dem till ett format som var läsbart av VR-systemet. När VR-plattformen var inställd, baserad på de importerade modellerna, var nästa steg att utvärdera aspekterna av att arbeta med VR, jämfört med att arbeta med Scanias Factory CAD-system - "LayCAD". Det bedömdes att systemet för Immersive VR ger bättre visualisering, utvärdering och realisering av layoutändringar jämfört med LayCAD-systemet. Men användningen av VR kräver ytterligare färdigheter, tid och kostnad för att installera VR-plattformen. Baserat på mognadsgraden av VR-tekniken och Scanias nuvarande IT-användning, drogs slutsatsen att VR ännu inte kan fungera som en fristående lösning för layouter inom Scania. För en effektiv fabriksutvecklingsprocess är det mer lämpligt att arbeta med båda systemen i kombination, där Immersive VR används som ett extra visualiserings- eller verifieringsverktyg för att presentera och utvärdera konceptuella layouter.
37

Object registration in semi-cluttered and partial-occluded scenes for augmented reality

Gao, Q.H., Wan, Tao Ruan, Tang, W., Chen, L. 26 November 2018 (has links)
This paper proposes a stable and accurate object registration pipeline for markerless augmented reality applications. We present two novel algorithms for object recognition and matching that improve the registration accuracy of the model-to-scene transformation via point cloud fusion. While the first algorithm effectively deals with simple scenes with few object occlusions, the second handles cluttered scenes with partial occlusions for robust real-time object recognition and matching. The computational framework includes a locally supported Gaussian weight function to enable repeatable detection of 3D descriptors. We apply bilateral filtering and outlier removal to preserve the edges of the point cloud and remove interference points, in order to increase matching accuracy. Extensive experiments have been carried out to compare the proposed algorithms with the four most widely used methods. The results show improved performance of the algorithms in terms of computational speed, camera tracking, and object matching errors in semi-cluttered and partially occluded scenes. / (Shanxi Natural Science and Technology Foundation of China, grant numbers 2016JZ026 and 2016KW-043.)
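The paper's exact filtering parameters are not given here; as a hedged sketch of the outlier-removal step it mentions, the snippet below implements standard statistical outlier removal (thresholding each point's mean k-nearest-neighbor distance) on a synthetic cloud with planted interference points. The scene, k, and the standard-deviation ratio are invented for the example:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, std_ratio=2.0):
    """Statistical outlier removal: drop points whose mean k-NN distance
    exceeds the global mean by std_ratio standard deviations."""
    tree = cKDTree(points)
    d, _ = tree.query(points, k=k + 1)     # column 0 is the point itself (distance 0)
    mean_d = d[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points[keep], keep

rng = np.random.default_rng(0)
surface = rng.uniform(0, 1, (1000, 3)) * [1, 1, 0.01]   # dense, thin surface patch
noise = rng.uniform(0, 1, (20, 3)) + [0, 0, 1]          # sparse floating interference
cloud = np.vstack([surface, noise])
filtered, keep = remove_outliers(cloud)
print(len(cloud), len(filtered))
```

Sparse interference points sit in low-density regions, so their mean neighbor distance is far above the surface average and they fall outside the threshold, while the dense surface survives intact.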
38

Traitement des objets 3D et images par les méthodes numériques sur graphes / 3D object and image processing by numerical methods on graphs

El Sayed, Abdul Rahman 24 October 2018 (has links)
Skin detection involves detecting the pixels corresponding to human skin in a color image. Faces constitute an important category of stimulus because of the wealth of information they convey: before recognizing any person, it is essential to locate and recognize their face. Most security and biometrics applications rely on the detection of skin regions, such as face detection, 3D adult-object filtering, and gesture recognition. In addition, saliency detection on 3D meshes is an important preprocessing phase for many computer vision applications. 3D object segmentation based on salient regions has been widely used in many computer vision applications, such as 3D shape matching, object alignment, 3D point cloud smoothing, web image search, content-based image indexing, video segmentation, and face detection and recognition. Skin detection is a very difficult task, for various reasons generally related to the variability of the shape and color to be detected (different hues from one person to another, arbitrary orientations and sizes, lighting conditions), especially for web images captured under varying lighting conditions. There are several known approaches to skin detection: approaches based on geometry and feature extraction, motion-based approaches (background subtraction, differencing of consecutive frames, optical flow computation), and color-based approaches. In this thesis, we propose numerical optimization methods for the detection of skin-color regions and salient regions on 3D meshes and 3D point clouds using a weighted graph. Based on these methods, we propose 3D face detection approaches using linear programming and data mining. In addition, we adapt our proposed methods to solve the problems of simplifying 3D point clouds and matching 3D objects. We show the robustness and efficiency of our proposed methods through various experimental results. Finally, we show the stability and robustness of our methods with respect to noise.
39

Automatic Retrieval of Skeletal Structures of Trees from Terrestrial Laser Scanner Data

Schilling, Anita 26 November 2014 (has links) (PDF)
Research on forest ecosystems receives high attention, especially nowadays with regard to the sustainable management of renewable resources and climate change. In particular, accurate information on the 3D structure of a tree is important for forest science and bioclimatology, but also for commercial applications. Conventional methods to measure geometric plant features are labor- and time-intensive; for detailed analysis, trees have to be cut down, which is often undesirable. Here, Terrestrial Laser Scanning (TLS) provides a particularly attractive tool because of its contactless measurement technique. The object geometry is reproduced as a 3D point cloud. The objective of this thesis is the automatic retrieval of the spatial structure of trees from TLS data. We focus on forest scenes with comparably high stand density and the many occlusions that result from it. The varying level of detail of TLS data poses a big challenge. We present two fully automatic methods, with complementary properties, for obtaining skeletal structures from scanned trees. The first method retrieves the entire tree skeleton from the 3D data of co-registered scans. The branching structure is obtained from a voxel-space representation by searching paths from branch tips to the trunk; the trunk is determined beforehand from the 3D points. The skeleton of a tree is generated as a 3D line graph. Besides 3D coordinates and range, a scan provides 2D indices from the intensity image for each measurement. This is exploited by the second method, which processes individual scans. Furthermore, we introduce a novel concept for managing TLS data that facilitated the research work. Initially, the range image is segmented into connected components. We describe a procedure for retrieving the boundary of a component that is capable of tracing inner depth discontinuities. A 2D skeleton is generated from the boundary information and used to decompose the component into subcomponents. A principal curve is computed from the 3D point set associated with each subcomponent. The skeletal structure of a connected component is summarized as a set of polylines. Objective evaluation of the results remains an open problem because the task itself is ill-defined: there exists no clear definition of what the true skeleton should be with respect to a given point set. Consequently, we are not able to assess the correctness of the methods quantitatively, but have to rely on visual assessment of the results, and we provide a thorough discussion of the particularities of both methods. We present experimental results for both methods. The first method efficiently retrieves full tree skeletons, which approximate the branching structure; the level of detail is mainly governed by the voxel space, and therefore smaller branches are reproduced inadequately. The second method retrieves partial skeletons of a tree with high reproduction accuracy. It is sensitive to noise in the boundary, but the results are very promising, and there are plenty of possibilities to enhance its robustness. Combining the strengths of both presented methods needs to be investigated further and may lead to a robust way of obtaining complete tree skeletons from TLS data automatically.
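The first method's tip-to-trunk path search can be sketched as a graph traversal over occupied voxels. The simplified version below voxelizes a toy "tree," runs a breadth-first search outward from a trunk seed voxel, and recovers a skeleton path by following parent links back from a branch tip; the toy geometry, voxel size, and tip-selection rule are invented for the example:

```python
import numpy as np
from collections import deque

def voxel_bfs_from_trunk(points, voxel_size, trunk_seed):
    """Voxelize the cloud, then BFS over 26-connected occupied voxels
    starting from the trunk seed; returns a parent map whose links
    trace every reachable voxel back toward the trunk."""
    vox = set(map(tuple, np.floor(points / voxel_size).astype(int)))
    seed = tuple(np.floor(np.asarray(trunk_seed) / voxel_size).astype(int))
    parent = {seed: None}
    queue = deque([seed])
    while queue:
        v = queue.popleft()
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    n = (v[0] + dx, v[1] + dy, v[2] + dz)
                    if n in vox and n not in parent:
                        parent[n] = v
                        queue.append(n)
    return parent

def path_to_trunk(parent, tip):
    """Follow parent links from a tip voxel back to the seed."""
    path = []
    while tip is not None:
        path.append(tip)
        tip = parent[tip]
    return path

# Toy "tree": a vertical trunk with one slanted branch
trunk = np.column_stack([np.zeros(50), np.zeros(50), np.linspace(0, 5, 50)])
branch = np.column_stack([np.linspace(0, 2, 30), np.zeros(30), np.linspace(3, 5, 30)])
parent = voxel_bfs_from_trunk(np.vstack([trunk, branch]), 0.2, (0, 0, 0))
p = path_to_trunk(parent, max(parent, key=lambda v: v[0]))  # farthest-x voxel as tip
print(len(p))
```

Collecting such paths from all detected branch tips and merging their shared segments yields the 3D line graph described in the abstract; as noted there, the voxel size bounds the level of detail, so branches thinner than a voxel are lost.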
40

Analýza bodových množin reprezentujících povrchy technické praxe / Analysis of Point Clouds Representing Surfaces of Engineering Practice

Surynková, Petra January 2014 (has links)
Title: Analysis of Point Clouds Representing Surfaces of Engineering Practice Author: Petra Surynková Department: Department of Mathematics Education Supervisor: Mgr. Šárka Voráčová, Ph.D., Faculty of Transportation Sciences, Czech Technical University in Prague Abstract: The doctoral dissertation Analysis of Point Clouds Representing Surfaces of Engineering Practice addresses the development and application of methods for the digital reconstruction of surfaces of engineering and construction practice from point clouds. The main outcome of the dissertation is a presentation of new procedures and methods that contribute to each stage of the reconstruction process from the input point clouds. The work focuses mainly on the analysis of input point clouds that describe special types of surfaces. Several completely new algorithms, and improvements to existing algorithms, that contribute to individual steps of surface reconstruction are presented. The new procedures are based on the geometric characteristics of the reconstructed object. An important result of the dissertation is an analysis not only of synthetically generated point clouds but above all of real point clouds obtained from measurements of real objects. The significant contribution of the dissertation is also an...
