211

Deep-Learning-Based Shape Matching Framework on 3D CAD Models

LUCAS CARACAS DE FIGUEIREDO 11 November 2022 (has links)
Data-rich 3D CAD models are essential during different life-cycle stages of engineering projects. Due to the recent popularization of the Building Information Modeling methodology and the use of Digital Twins for intelligent manufacturing, the amount of detail, size, and complexity of these models have increased significantly. Although these models are composed of several repeated geometries, plant-design software usually does not provide any instancing information. Previous works have shown that removing redundancy in the representation of 3D CAD models significantly reduces their storage and memory requirements, while also facilitating rendering optimizations. This work proposes a deep-learning-based shape-matching framework that minimizes a 3D CAD model's redundant information in this regard. We rely on recent advances in the deep processing of point clouds, overcoming drawbacks of previous work, such as a heavy dependency on vertex ordering and the topology of triangle meshes. The developed framework uses uniformly sampled point clouds to identify similarities among meshes in 3D CAD models and computes an optimal affine transformation matrix to instantiate them. Results on actual 3D CAD models demonstrate the value of the proposed framework. The developed point-cloud-registration procedure achieves a lower surface error while also performing faster than previous approaches. The developed supervised-classification approach achieves results equivalent to earlier, more limited methods and significantly outperforms them in a vertex-shuffling scenario. We also propose a self-supervised approach that clusters similar meshes and overcomes the need for explicitly labeling geometries in the 3D CAD model. This self-supervised method obtains competitive results when compared to previous approaches, even outperforming them in certain scenarios.
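The geometric step at the core of the abstract above, fitting an optimal affine transformation that maps one sampled mesh onto a matched one, can be illustrated with a simple least-squares fit. The sketch below is a minimal illustration under the assumption that the two point clouds are already in correspondence (same sampling order); it is not the thesis's actual pipeline, which would first have to establish that correspondence.

```python
import numpy as np

def fit_affine(source, target):
    """Least-squares affine transform mapping source points onto target points.

    source, target: (N, 3) arrays of corresponding points.
    Returns a 4x4 homogeneous matrix A such that target ~= (A @ [source, 1].T).T
    """
    n = source.shape[0]
    src_h = np.hstack([source, np.ones((n, 1))])   # (N, 4) homogeneous coordinates
    # Solve src_h @ M = target in the least-squares sense; M is (4, 3)
    M, *_ = np.linalg.lstsq(src_h, target, rcond=None)
    A = np.eye(4)
    A[:3, :] = M.T                                 # embed as a 4x4 transform
    return A

# Toy usage: recover a known affine transform from noisy correspondences
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(500, 3))
true_A = np.array([[1.2, 0.1, 0.0, 0.5],
                   [0.0, 0.9, 0.2, -1.0],
                   [0.1, 0.0, 1.1, 2.0],
                   [0.0, 0.0, 0.0, 1.0]])
moved = (np.hstack([pts, np.ones((500, 1))]) @ true_A.T)[:, :3]
moved += rng.normal(scale=1e-3, size=moved.shape)  # simulated sampling noise
est_A = fit_affine(pts, moved)
print(np.allclose(est_A, true_A, atol=1e-2))       # True
```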
212

Using a Deep Generative Model to Generate and Manipulate 3D Object Representation

Hu, Yu January 2023 (has links)
The increasing importance of 3D data in various domains, such as computer vision, robotics, medical analysis, augmented reality, and virtual reality, has attracted substantial research interest in generating 3D data with deep generative models. The challenging problem is how to build generative models that synthesize diverse and realistic 3D object representations while offering controllability for manipulating the shape attributes of 3D objects. This thesis explores the use of 3D Generative Adversarial Networks (GANs) for the generation of 3D indoor object shapes represented by point clouds, with a focus on shape-editing tasks. Leveraging insights from 2D semantic face editing, the thesis proposes extending the InterFaceGAN framework to a 3D GAN model in order to discover the relationship between latent codes and semantic attributes of the generated shapes. In the end, we successfully perform controllable shape editing by manipulating the latent code of the GAN.
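The InterFaceGAN-style editing mentioned above boils down to finding a hyperplane in latent space that separates an attribute and moving latent codes along its normal. The sketch below illustrates that idea only; the generator `G`, the latent dimension, and the attribute labels are placeholders, not the thesis's actual model or data.

```python
import numpy as np
from sklearn.svm import LinearSVC

def find_attribute_direction(latents, labels):
    """Fit a linear boundary separating an attribute in latent space.

    latents: (N, D) latent codes, labels: (N,) binary attribute labels.
    Returns the unit normal of the separating hyperplane (the edit direction).
    """
    svm = LinearSVC(C=1.0, max_iter=10000).fit(latents, labels)
    normal = svm.coef_.ravel()
    return normal / np.linalg.norm(normal)

def edit_latent(z, direction, alpha):
    """Move a latent code along the attribute direction by strength alpha."""
    return z + alpha * direction

# Hypothetical usage with synthetic latent codes and labels
rng = np.random.default_rng(0)
D = 128
latents = rng.normal(size=(2000, D))
labels = (latents[:, 0] > 0).astype(int)   # stand-in for e.g. "chair has armrests"
direction = find_attribute_direction(latents, labels)
z = rng.normal(size=D)
z_edited = edit_latent(z, direction, alpha=3.0)
# shape = G(z_edited)                      # decode with the (hypothetical) 3D GAN
```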
213

Orthogonal Moment-Based Human Shape Query and Action Recognition from 3D Point Cloud Patches

Cheng, Huaining January 2015 (has links)
No description available.
214

Data Augmentation for Safe 3D Object Detection for Autonomous Volvo Construction Vehicles

Zhao, Xun January 2021 (has links)
Point cloud data can express the 3D features of objects and is an important data type in the field of 3D object detection. Since point cloud data is more difficult to collect than image data and the scale of existing datasets is smaller, point cloud data augmentation is introduced so that more features can be discovered in existing data. In this thesis, we propose a novel method to enhance point cloud scenes, based on a generative adversarial network (GAN), that augments objects and then integrates them into existing scenes. Good fidelity and coverage are achieved between the generated and real samples, with a JSD of 0.027, an MMD of 0.00064, and a coverage of 0.376. In addition, we investigated functional data-annotation tools and completed the data-labeling task. The 3D object detection task is carried out on the point cloud data, and we achieve relatively good detection results with a short processing time of around 22 ms. Quantitative and qualitative analysis is carried out on different models.
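The fidelity numbers quoted above (MMD and coverage) are standard metrics for point cloud generation: MMD averages, over the real set, the distance from each real cloud to its closest generated cloud, while coverage is the fraction of real clouds that are the nearest neighbor of at least one generated cloud. A minimal sketch under the usual Chamfer-distance definitions is shown below; the cloud sizes and data are placeholders, not the thesis's dataset.

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def mmd_and_coverage(generated, real):
    """Minimum matching distance and coverage between two sets of point clouds.

    generated, real: lists of (N, 3) arrays.
    """
    d = np.array([[chamfer(g, r) for r in real] for g in generated])  # (G, R)
    mmd = d.min(axis=0).mean()             # average distance from each real cloud to its closest generated cloud
    covered = np.unique(d.argmin(axis=1))  # real clouds that are someone's nearest neighbor
    coverage = covered.size / len(real)
    return mmd, coverage

# Hypothetical usage with random stand-in clouds
rng = np.random.default_rng(0)
gen = [rng.normal(size=(256, 3)) for _ in range(20)]
ref = [rng.normal(size=(256, 3)) for _ in range(20)]
print(mmd_and_coverage(gen, ref))
```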
215

Documentation of a Traffic Accident with Handheld Laser Scanning and UAS Photogrammetry: An Evaluation of the Point Clouds' Positional Uncertainty and Visual Quality

Andersson, Elias January 2021 (has links)
In the event of a traffic accident, it is often important to restore the site to its normal condition as fast as possible. Occasionally, the accident scene must be documented so that the cause of the accident can be investigated at a later stage. Traditionally, this work has been performed by taking pictures of the site and measuring different distances. Lately, terrestrial laser scanning has also become a reliable alternative. With that said, it is possible that photogrammetry and other types of laser scanning could also be used to achieve similar results. The aim of this study is to investigate how handheld laser scanning and UAS photogrammetry can be used to document a traffic accident. This is achieved by examining the positional uncertainty and visual quality of the point clouds. Moreover, the advantages and disadvantages of each method are explored, for instance in terms of time consumption and costs, in order to determine which method is best suited for documenting a traffic accident. A traffic accident with two involved cars was staged and initially laser scanned with the handheld laser scanner Leica BLK2GO. Thereafter, pictures were collected with the unmanned aerial vehicle Leica Aibot, followed by the creation of a reference point cloud with the terrestrial laser scanner Leica C10. By comparing the coordinates of control points in the reference point cloud with the coordinates of the corresponding control points in the two other point clouds, their positional uncertainty could be determined. The results of the study show that both the point cloud produced by handheld laser scanning and the one produced by UAS photogrammetry have a 3D positional uncertainty (standard uncertainty) of 0.019 m. Both methods are applicable for documenting a traffic accident, but compared to terrestrial laser scanning, the point clouds are deficient in different ways. The BLK2GO produces a relatively dark point cloud, and dark objects are reproduced worse than lighter ones. In the point cloud produced with the Leica Aibot, there were noticeable cavities in the bodies of the cars. Handheld laser scanning is a time-efficient method, while UAS photogrammetry can be performed at a lower cost. In conclusion, it is not possible to reach an unambiguous conclusion as to which method is best suited for documenting a traffic accident; the choice depends on the prevailing circumstances at the accident scene.
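The positional-uncertainty evaluation described above reduces to comparing control-point coordinates between an evaluated point cloud and the terrestrial-laser-scanning reference. A minimal sketch of that computation could look as follows; the coordinates are made up, and the estimator (RMS of the 3D point-to-point deviations) is an assumption, since surveying guidelines differ in how the standard uncertainty is aggregated.

```python
import numpy as np

def positional_uncertainty_3d(reference, evaluated):
    """3D standard uncertainty from corresponding control points.

    reference, evaluated: (N, 3) arrays of control-point coordinates (metres).
    Computed here as the RMS of the 3D point-to-point deviations.
    """
    deviations = np.linalg.norm(evaluated - reference, axis=1)  # 3D distance per control point
    return np.sqrt(np.mean(deviations ** 2))

# Hypothetical control points (reference from TLS, evaluated from e.g. the handheld scan)
reference = np.array([[10.000, 5.000, 1.000],
                      [12.500, 4.800, 0.950],
                      [11.200, 7.300, 1.100]])
evaluated = reference + np.array([[ 0.012, -0.008,  0.005],
                                  [-0.010,  0.015, -0.007],
                                  [ 0.009,  0.011,  0.013]])
print(f"3D standard uncertainty: {positional_uncertainty_3d(reference, evaluated):.3f} m")
```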
216

Applications and Development of Intelligent UAVs for the Resource Industries

Bishop, Richard Edwin 21 April 2022 (has links)
Drones have become an integral part of the digital transformation currently sweeping the mining industry, particularly in surface operations, where they allow operators to model the terrain quickly and effortlessly with GPS localization and advanced mission-planning software. Recently, the usage of drones has expanded to underground mines, with advancements in drone autonomy in GPS-denied environments. Developments in lidar technology and Simultaneous Localization and Mapping (SLAM) algorithms are enabling UAVs to function safely underground, where they can be used to map workings and digitally reconstruct them into 3D point clouds for a wide variety of applications. Underground mines can be expansive, with inaccessible and dangerous areas preventing safe access for traditional inspections, mapping, and monitoring. In addition, abandoned mines and historic mines being reopened may lack reliable maps of sufficient detail. The underground mine environment presents a multitude of unique challenges that must be addressed for reliable drone flights. This work covers the development of drones for GPS-denied underground mines, in addition to several case studies where drone-based lidar and photogrammetry were used to capture 3D point clouds of underground mines, and the associated applications of mine digitization, such as geotechnical analysis and pillar-strength analysis. This research also features an applied use case of custom drones built to detect methane leaks at natural gas production and distribution sites. / Doctor of Philosophy / Drones have become an integral part of the digital transformation currently sweeping the mining industry, particularly in surface operations, where they allow operators to model the terrain quickly and effortlessly. Recently, the usage of drones has expanded to underground mines, with advancements in drone autonomy. New developments are enabling UAVs to function safely underground, where they can be used to digitally reconstruct workings for a wide variety of applications. Underground mines can be expansive, with inaccessible and dangerous areas preventing safe access for traditional inspections, mapping, and monitoring. In addition, abandoned mines and historic mines being reopened may lack reliable maps of sufficient detail. The underground mine environment presents a multitude of unique challenges that must be addressed for reliable drone flights. This work covers the development of drones for GPS-denied underground mines, in addition to several case studies where drones were used to create 3D models of mines, and the associated applications of mine digitization. This research also features an applied use case of custom drones built to detect methane leaks at natural gas production and distribution sites.
217

Handling Domain Shift in 3D Point Cloud Perception

Saltori, Cristiano 10 April 2024 (has links)
This thesis addresses the problem of domain shift in 3D point cloud perception. In the last decades, there has been tremendous progress in within-domain training and testing. However, the performance of perception models degrades when training on a source domain and testing on a target domain sampled from a different data distribution. As a result, a change in sensor or geo-location can lead to a harmful drop in model performance. While solutions exist for image perception, the problem remains largely unresolved for point clouds. The focus of this thesis is the study and design of solutions for mitigating domain shift in 3D point cloud perception. We identify several settings differing in the level of target supervision and the availability of source data. We conduct a thorough study of each setting and introduce a new method to solve domain shift in each configuration. In particular, we study three novel settings in domain adaptation and domain generalization and propose five new methods for mitigating domain shift in 3D point cloud perception. Our methods are used by the research community, and at the time of writing, some of the proposed approaches represent the state of the art. In conclusion, this thesis provides a valuable contribution to the computer vision community, setting the groundwork for future work in cross-domain conditions.
218

Capture and Realistic 3D Rendering from Consumer-Grade Devices

Chakib, Reda 14 December 2018 (has links)
Digital imaging, from image synthesis to computer vision, is undergoing a strong evolution, due among other factors to the democratization and commercial success of 3D cameras. In the same context, consumer 3D printing, which is experiencing rapid growth, contributes to the strong demand for this type of camera for 3D scanning. The objective of this thesis is to acquire and master know-how in the field of capturing 3D models, in particular with regard to realistic rendering. Building a 3D scanner from an RGB-D camera is part of this goal. During the acquisition phase, especially with a portable device, two main problems arise: the problem of the reference frame of each capture, and the final rendering of the reconstructed object.
219

Optimized Use of a 3D Laser Scanner for Point-Cloud-Based Mapping and Localization in Indoor and Outdoor Areas

Schubert, Stefan 05 March 2015 (has links) (PDF)
Mapping and localization of a mobile robot in its environment are important prerequisites for its autonomy. This thesis investigates the use of a 3D laser scanner for these tasks. Through an optimized arrangement of a rotating 2D laser scanner, high-resolution regions can be specified. In addition, mapping and localization at standstill are performed with the help of ICP. In the discussion of improving motion estimation, an approach for localization during motion using 3D scans is also presented. The presented algorithms are evaluated in experiments with real hardware.
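The ICP-based mapping and localization mentioned above iterates two steps: find nearest-neighbor correspondences between the current scan and the map, then estimate the rigid transform that best aligns them. A minimal point-to-point ICP sketch is shown below; it illustrates the general algorithm only, not the thesis's implementation, and the iteration and convergence settings are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """SVD-based rigid transform (R, t) minimizing ||R @ src + t - dst||."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(scan, map_points, iters=30, tol=1e-6):
    """Point-to-point ICP aligning `scan` to `map_points`; returns (R, t)."""
    tree = cKDTree(map_points)
    R, t = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        moved = scan @ R.T + t
        dists, idx = tree.query(moved)       # nearest-neighbor correspondences
        R, t = best_rigid_transform(scan, map_points[idx])
        err = dists.mean()
        if abs(prev_err - err) < tol:        # stop when the mean residual stabilizes
            break
        prev_err = err
    return R, t
```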
220

Point Cloud Registration in Augmented Reality using the Microsoft HoloLens

Kjellén, Kevin January 2018 (has links)
When a Time-of-Flight (ToF) depth camera is used to monitor a region of interest, it has to be mounted correctly and have information regarding its position. Manual configuration currently requires managing captured 3D ToF data in a 2D environment, which limits the user and might give rise to errors due to misinterpretation of the data. This thesis investigates whether a real-time 3D reconstruction mesh from a Microsoft HoloLens can be used as a target for point cloud registration using the ToF data, thus configuring the camera autonomously. Three registration algorithms, Fast Global Registration (FGR), Joint Registration of Multiple Point Clouds (JR-MPC), and Prerejective RANSAC, were evaluated for this purpose. It was concluded that accurate registration is possible despite the use of different sensors. It was also shown that the registration can be done accurately within a reasonable time compared with the inherent time to perform 3D reconstruction on the HoloLens. All algorithms could solve the problem, but it was concluded that FGR provided the most satisfying results, though it required several constraints on the data.
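As an illustration of the kind of feature-based global registration evaluated above, the sketch below runs Fast Global Registration on two point clouds using Open3D. It is a generic example, not the thesis's code: the file names and parameter values are placeholders, and the Open3D function names are assumed from recent library versions and may differ in others.

```python
import open3d as o3d

def preprocess(pcd, voxel):
    """Downsample a cloud and compute FPFH features for registration."""
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return down, fpfh

# Placeholder inputs: ToF scan as source, HoloLens reconstruction sampled to a cloud as target
source = o3d.io.read_point_cloud("tof_scan.ply")
target = o3d.io.read_point_cloud("hololens_mesh_sampled.ply")

voxel = 0.05                                   # assumed working resolution in metres
src_down, src_fpfh = preprocess(source, voxel)
tgt_down, tgt_fpfh = preprocess(target, voxel)

result = o3d.pipelines.registration.registration_fgr_based_on_feature_matching(
    src_down, tgt_down, src_fpfh, tgt_fpfh,
    o3d.pipelines.registration.FastGlobalRegistrationOption(
        maximum_correspondence_distance=voxel * 1.5))
print(result.transformation)                   # 4x4 pose of the source cloud in the target frame
```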
