
Submap Correspondences for Bathymetric SLAM Using Deep Neural Networks / Underkarta Korrespondenser för Batymetrisk SLAM med Hjälp av Djupa Neurala Nätverk

Tan, Jiarui January 2022
Underwater navigation is a key technology for exploring the oceans and exploiting their resources. For autonomous underwater vehicles (AUVs) to explore the marine environment efficiently and safely, underwater simultaneous localization and mapping (SLAM) systems are often indispensable because the Global Positioning System (GPS) is unavailable underwater. In an underwater SLAM system, an AUV maps its surroundings and estimates its own pose at the same time. The pose of the AUV can be predicted by dead reckoning, but navigation errors accumulate over time, so sensors are needed to correct the state estimate of the AUV. Among the various sensors, the multibeam echosounder (MBES) is one of the most popular for underwater SLAM, since it acquires bathymetric point clouds carrying depth information about the surroundings. However, data association is difficult on seabeds without distinct landmarks. Previous studies have focused on traditional computer vision methods, which have limited performance on bathymetric data. In this thesis, a novel method based on deep learning is proposed to facilitate underwater perception. We conduct two experiments, on place recognition and point cloud registration, using data collected during a survey. The results show that, compared with the traditional methods, the proposed neural network detects loop closures and registers point clouds more efficiently. This work provides a better data association solution for designing underwater SLAM systems.
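As a concrete illustration of the traditional registration baselines that such learned methods are compared against, the classic closed-form rigid alignment over known correspondences (Kabsch algorithm via SVD) can be sketched in a few lines. The synthetic "submap" below is illustrative only, not data from the thesis:

```python
import numpy as np

def kabsch_align(src, dst):
    """Rigid transform (R, t) minimizing ||R @ src_i + t - dst_i|| over known correspondences."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# synthetic submap: random seabed-like points, rotated about z and shifted
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ R_true.T + t_true

R, t = kabsch_align(src, dst)
print(np.allclose(R, R_true, atol=1e-6), np.allclose(t, t_true, atol=1e-6))  # True True
```

In practice the hard part on featureless seabeds is obtaining the correspondences in the first place, which is exactly what the thesis delegates to a neural network.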

[pt] DESENVOLVIMENTO E VALIDAÇÃO DE SENSOR LIDAR VIRTUAL / [en] DEVELOPMENT AND VALIDATION OF A LIDAR VIRTUAL SENSOR

GUILHERME FERREIRA GUSMAO 25 June 2020
[en] Three-dimensional (3D) imaging technologies are increasingly used in academia and industry, especially in the form of point clouds, a mathematical representation of the geometry and surface of an object or area. However, obtaining this data can still be expensive and time-consuming, reducing the efficiency of many procedures that depend on large sets of point clouds, such as generating datasets for machine learning training, computing forest canopy, and subsea surveying. A trending solution is the development of computer simulators for imaging systems, which perform a virtual scan of a scene built from 3D object files and output synthetic point clouds. This work presents the development of a LiDAR (light detection and ranging) system simulator based on parallel ray-tracing algorithms (GPU ray tracing), with its virtual sensor modeled by metrological parameters. A calibration procedure is presented that compares the virtual sensor against measurements from a real LiDAR sensor, together with error models surveyed to increase the realism of the virtual scan. A flexible scenario creator was also implemented to facilitate interaction with the user. Combining these tools in the simulator yielded robust generation of synthetic point clouds over diverse scenarios, enabling the creation of datasets for concept tests, for combining real and virtual data, and for other applications.
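The core operation of such a ray-tracing LiDAR simulator is a ray-primitive intersection test evaluated once per beam. A minimal CPU sketch using the standard Möller–Trumbore ray-triangle test (the author's GPU implementation is not detailed in the abstract; this is an illustrative stand-in with a made-up scene):

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore: distance t along the ray to the triangle, or None on a miss."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = (s @ p) * inv                  # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = (direction @ q) * inv          # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = (e2 @ q) * inv
    return t if t > eps else None

# one beam of a virtual scanner hitting a floor triangle 5 m below the sensor
origin = np.array([0.0, 0.0, 0.0])
direction = np.array([0.0, 0.0, -1.0])
tri = [np.array([-10.0, -10.0, -5.0]),
       np.array([ 10.0, -10.0, -5.0]),
       np.array([  0.0,  10.0, -5.0])]
t = ray_triangle(origin, direction, *tri)
point = origin + t * direction         # one synthetic LiDAR return
print(t, point)                        # 5.0 [ 0.  0. -5.]
```

A full simulator sweeps `direction` over the sensor's angular pattern, keeps the nearest hit per beam, and perturbs `t` with the calibrated error model to obtain a realistic synthetic point cloud.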

3D YOLO: End-to-End 3D Object Detection Using Point Clouds / 3D YOLO: Objektdetektering i 3D med LiDAR-data

Al Hakim, Ezeddin January 2018
For safe and reliable driving, it is essential that an autonomous vehicle can accurately perceive the surrounding environment. Modern sensor technologies used for perception, such as LiDAR and RADAR, deliver a large set of 3D measurement points known as a point cloud. There is a huge need to interpret point cloud data to detect other road users, such as vehicles and pedestrians. Many research studies have proposed image-based models for 2D object detection. This thesis takes it a step further and aims to develop a LiDAR-based 3D object detection model that operates in real time, with emphasis on autonomous driving scenarios. We propose 3D YOLO, an extension of YOLO (You Only Look Once), one of the fastest state-of-the-art 2D object detectors for images. The proposed model takes point cloud data as input and outputs 3D bounding boxes with class scores in real time. Most existing 3D object detectors use hand-crafted features, while our model follows the end-to-end learning fashion, which removes manual feature engineering. The 3D YOLO pipeline consists of two networks: (a) the Feature Learning Network, an artificial neural network that transforms the input point cloud into a new feature space; (b) 3DNet, a novel convolutional neural network architecture based on YOLO that learns the shape description of the objects. Our experiments on the KITTI dataset show that 3D YOLO achieves high accuracy and outperforms state-of-the-art LiDAR-based models in efficiency. This makes it a suitable candidate for deployment in autonomous vehicles.
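Grid-based detectors of this kind typically start by grouping the raw point cloud into occupied voxels before any learned feature extraction. The exact input encoding of 3D YOLO is not given in the abstract, so the following is a generic voxelization sketch with assumed parameters (voxel size, per-voxel point cap):

```python
import numpy as np

def voxelize(points, voxel_size, max_points=32):
    """Group a point cloud into occupied voxels, the usual input stage of
    grid-based 3D detectors. Returns {voxel index tuple: (<=max_points, 3) array}."""
    idx = np.floor(points / voxel_size).astype(int)
    voxels = {}
    for key, p in zip(map(tuple, idx), points):
        bucket = voxels.setdefault(key, [])
        if len(bucket) < max_points:    # cap points per voxel (VoxelNet-style sampling)
            bucket.append(p)
    return {k: np.asarray(v) for k, v in voxels.items()}

pts = np.array([[0.1, 0.2, 0.0],
                [0.3, 0.1, 0.2],
                [2.2, 0.0, 0.1]])
vox = voxelize(pts, voxel_size=1.0)
print(len(vox), sorted(vox))            # 2 [(0, 0, 0), (2, 0, 0)]
```

Each occupied voxel's point set is then fed through the learned feature network, and the resulting dense feature grid is what the YOLO-style convolutional head operates on.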

Crime scenes in Virtual Reality : A user centered study / Brottsplatser i Virtuell Verklighet : En användarcentrerad studie

Dath, Catrin January 2017
A crime scene is a vital part of an investigation. There are, however, depending on the situation and crime, issues connected to physically being at the scene; risk of contamination, destruction of evidence, or other issues can prevent criminal investigators from staying at, visiting, or revisiting the scene. It is therefore important to visually capture the crime scene and any possible evidence in order to aid the investigation. This thesis aims to, with a first research question, map out the main visual documentation needs, wishes, and challenges that criminal investigators face during an investigation. With a second research question, it aims to address these in a Virtual Reality (VR) design, and with a third research question, it explores whether other professions in the investigation process could benefit from it. The work was conducted through a literature review, interviews, workshops, and iterations following the Double Diamond Model of Design. The interview results were thematically analyzed and summarized into five key themes. These, together with various design criteria and principles, acted as design guidelines when creating a high-fidelity VR design. The first two research questions were answered through the key themes and the VR design. The results of the third research question indicated that, besides criminal investigators, both prosecutors and crime scene investigators may benefit from a VR design, although in different ways. In conclusion, a VR design can address the needs, wishes, and challenges of criminal investigators when developed as a combined visualization and collaboration tool.

Point Cloud Data Augmentation for 4D Panoptic Segmentation / Punktmolndataförstärkning för 4D-panoptisk Segmentering

Jin, Wangkang January 2022
4D panoptic segmentation is an emerging topic in the field of autonomous driving that jointly tackles 3D semantic segmentation, 3D instance segmentation, and 3D multi-object tracking based on point cloud data. However, the difficulty of data collection limits the size of existing point cloud datasets. Therefore, data augmentation is employed to expand the amount of existing data for better generalization and prediction ability. In this thesis, we built a new point cloud dataset, named the VCE dataset, from scratch. In addition, we adopted a neural network model for the 4D panoptic segmentation task and proposed a simple geometric augmentation method based on a translation operation. Compared to the baseline model, better results were obtained after augmentation, with an increase of 2.15% in LSTQ.
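A translation-based augmentation of the kind described can be sketched as shifting the points of one labeled instance while leaving the rest of the scan untouched. The thesis does not spell out its exact scheme, so the per-instance shift and the label layout below are illustrative assumptions:

```python
import numpy as np

def translate_instance(points, labels, instance_id, offset):
    """Augment a labeled scan by translating one instance's points.
    A minimal sketch of translation-based point cloud augmentation;
    the real method's sampling of offsets and collision checks are omitted."""
    out = points.copy()                 # keep the original scan intact
    mask = labels == instance_id
    out[mask] += offset
    return out

points = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [5.0, 5.0, 0.0]])
labels = np.array([7, 7, 3])            # per-point instance ids (hypothetical)
aug = translate_instance(points, labels, instance_id=7,
                         offset=np.array([0.5, 0.0, 0.0]))
print(aug[0], aug[2])                   # [0.5 0.  0. ] [5. 5. 0.]
```

Because semantic and instance labels travel with the points unchanged, every augmented scan remains a valid training sample for the panoptic task.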

Methods for 3D Structured Light Sensor Calibration and GPU Accelerated Colormap

Kurella, Venu January 2018
In manufacturing, metrological inspection is a time-consuming process: the higher the required precision, the longer the inspection time. This is due both to slow devices that collect measurement data and to slow computational methods that process the data. The goal of this work is to propose methods to speed up some of these processes. Conventional measurement devices such as Coordinate Measuring Machines (CMMs) have high precision but low measurement speed, while new digitizer technologies have high speed but low precision. Using these devices in synergy gives a significant improvement in measurement speed without loss of precision. The method of synergistic integration of an advanced digitizer with a CMM is discussed. Computational aspects of the inspection process are addressed next. Once a part is measured, the measurement data are compared against the part's model to check tolerances; this comparison is a time-consuming process on conventional CPUs, so we developed and benchmarked several GPU accelerations. Finally, naive data fitting methods can produce misleading results on non-uniform data. Weighted total least-squares methods can compensate for the non-uniformity, and we show how they can be accelerated with GPUs, using plane fitting as an example. / Thesis / Doctor of Philosophy (PhD)
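The weighted total-least-squares plane fit mentioned above has a simple closed form: the plane normal is the eigenvector of the weighted covariance matrix with the smallest eigenvalue (orthogonal-distance regression). A CPU sketch of that core computation (the GPU acceleration itself is not shown, and the test data below are synthetic):

```python
import numpy as np

def weighted_tls_plane(points, weights):
    """Weighted total-least-squares plane fit: minimizes the weighted sum of
    squared orthogonal distances. Returns (centroid on plane, unit normal)."""
    w = weights / weights.sum()
    centroid = w @ points                      # weighted centroid lies on the plane
    d = points - centroid
    cov = (d * w[:, None]).T @ d               # weighted 3x3 covariance
    eigval, eigvec = np.linalg.eigh(cov)       # eigenvalues in ascending order
    normal = eigvec[:, 0]                      # smallest-eigenvalue direction
    return centroid, normal

# noiseless points on the plane z = 0, sampled with non-uniform weights
rng = np.random.default_rng(1)
xy = rng.uniform(-1.0, 1.0, size=(50, 2))
pts = np.column_stack([xy, np.zeros(50)])
wts = rng.uniform(0.1, 2.0, size=50)
c, n = weighted_tls_plane(pts, wts)
print(abs(n[2]))   # ~1.0: the recovered normal is the z-axis
```

The eigendecomposition of a 3x3 matrix is cheap; the GPU payoff in the thesis comes from assembling the weighted covariance over very large point sets in parallel.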

Use of Photogrammetry Aided Damage Detection for Residual Strength Estimation of Corrosion Damaged Prestressed Concrete Bridge Girders

Neeli, Yeshwanth Sai 27 July 2020
Corrosion damage reduces the load-carrying capacity of bridges, which poses a threat to passenger safety. The objective of this research was to reduce the resources involved in conventional bridge inspections, which are an important tool in the condition assessment of bridges, and to help determine whether live load testing is necessary. This research proposes a framework that links semi-automated damage detection on prestressed concrete bridge girders with the estimation of their residual flexural capacity. The framework was implemented on four full-scale corrosion-damaged girders from decommissioned bridges in Virginia. 3D point clouds of the girders, reconstructed from images using the Structure from Motion (SfM) approach, were textured with images containing cracks detected at pixel level using a U-Net (fully convolutional network). Spalls were detected by identifying locations where the normals associated with points in the 3D point cloud deviated from being perpendicular to the chosen reference directions by more than a threshold angle. 3D textured mesh models, overlaid with the detected cracks and spalls, were used as 3D damage maps to determine reduced cross-sectional areas of the prestressing strands, accounting for the corrosion damage per the recommendations of Naito, Jones, and Hodgson (2011). Scaling the models to real-world dimensions enabled the measurement of any required dimension, eliminating the need for physical contact. The flexural capacities of a box beam and an I-beam estimated using strain compatibility analysis were validated against the actual capacities at the failure sections determined from four destructive tests conducted by Al Rufaydah (2020). Along with reducing the cross-sectional areas of the strands, limiting the ultimate strain that heavily corroded strands can develop was explored as a possible way to improve the results of the analysis.
Strain compatibility analysis was used to estimate the ultimate rupture strain, in the heavily corroded bottommost layer prestressing strands exposed before the box beam was tested. More research is required to associate each level of strand corrosion with an average ultimate strain at which the corroded strands rupture. This framework was found to give satisfactory estimates of the residual strength. Reduction in resources involved in current visual inspection practices and eliminating the need for physical access, make this approach worthwhile to be explored further to improve the output of each step in the proposed framework. / Master of Science / Corrosion damage is a major concern for bridges as it reduces their load carrying capacity. Bridge failures in the past have been attributed to corrosion damage. The risk associated with corrosion damage caused failures increases as the infrastructure ages. Many bridges across the world built forty to fifty years ago are now in a deteriorated condition and need to be repaired and retrofitted. Visual inspections to identify damage or deterioration on a bridge are very important to assess the condition of the bridge and determine the need for repairing or for posting weight restrictions for the vehicles that use the bridge. These inspections require close physical access to the hard-to-reach areas of the bridge for physically measuring the damage which involves many resources in the form of experienced engineers, skilled labor, equipment, time, and money. The safety of the personnel involved in the inspections is also a major concern. Nowadays, a lot of research is being done in using Unmanned Aerial Vehicles (UAVs) like drones for bridge inspections and in using artificial intelligence for the detection of cracks on the images of concrete and steel members. Girders or beams in a bridge are the primary longitudinal load carrying members. Concrete inherently is weak in tension. 
To address this problem, High Strength steel reinforcement (called prestressing steel or prestressing strands) in prestressed concrete beams is pre-loaded with a tensile force before the application of any loads so that the regions which will experience tension under the service loads would be subjected to a pre-compression to improve the performance of the beam and delay cracking. Spalls are a type of corrosion damage on concrete members where portions of concrete fall off (section loss) due to corrosion in the steel reinforcement, exposing the reinforcement to the environment which leads to accelerated corrosion causing a loss of cross-sectional area and ultimately, a rupture in the steel. If the process of detecting the damage (cracks, spalls, exposed or severed reinforcement, etc.) is automated, the next logical step that would add great value would be, to quantify the effect of the damage detected on the load carrying capacity of the bridges. Using a quantified estimate of the remaining capacity of a bridge, determined after accounting for the corrosion damage, informed decisions can be made about the measures to be taken. This research proposes a stepwise framework to forge a link between a semi-automated visual inspection and residual capacity evaluation of actual prestressed concrete bridge girders obtained from two bridges that have been removed from service in Virginia due to extensive deterioration. 3D point clouds represent an object as a set of points on its surface in three dimensional space. These point clouds can be constructed either using laser scanning or using Photogrammetry from images of the girders captured with a digital camera. In this research, 3D point clouds are reconstructed from sequences of overlapping images of the girders using an approach called Structure from Motion (SfM) which locates matched pixels present between consecutive images in the 3D space. 
Crack-like features were automatically detected and highlighted on the images of the girders that were used to build the 3D point clouds using artificial intelligence (Neural Network). The images with cracks highlighted were applied as texture to the surface mesh on the point cloud to transfer the detail, color, and realism present in the images to the 3D model. Spalls were detected on 3D point clouds based on the orientation of the normals associated with the points with respect to the reference directions. Point clouds and textured meshes of the girders were scaled to real-world dimensions facilitating the measurement of any required dimension on the point clouds, eliminating the need for physical contact in condition assessment. Any cracks or spalls that went unidentified in the damage detection were visible on the textured meshes of the girders improving the performance of the approach. 3D textured mesh models of the girders overlaid with the detected cracks and spalls were used as 3D damage maps in residual strength estimation. Cross-sectional slices were extracted from the dense point clouds at various sections along the length of each girder. The slices were overlaid on the cross-section drawings of the girders, and the prestressing strands affected due to the corrosion damage were identified. They were reduced in cross-sectional area to account for the corrosion damage as per the recommendations of Naito, Jones, and Hodgson (2011) and were used in the calculation of the ultimate moment capacity of the girders using an approach called strain compatibility analysis. Estimated residual capacities were compared to the actual capacities of the girders found from destructive tests conducted by Al Rufaydah (2020). Comparisons are presented for the failure sections in these tests and the results were analyzed to evaluate the effectiveness of this framework. 
More research is needed to determine the factors causing rupture in prestressing strands with different degrees of corrosion. This framework was found to give satisfactory estimates of the residual strength. The reduction in resources compared with current visual inspection practices and the elimination of the need for physical access make this approach worth exploring further to improve the output of each step in the proposed framework.
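The normal-based spall detection described above can be sketched as thresholding the angle between each point's estimated normal and the expected surface normal of the girder face. The 30-degree threshold and the single reference direction below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def flag_spalls(normals, reference, threshold_deg=30.0):
    """Flag points whose unit normal deviates from the expected face normal
    by more than a threshold angle; a simplified sketch of the normal-based
    spall detection (threshold and reference choice are assumed)."""
    ref = reference / np.linalg.norm(reference)
    cosang = np.clip(normals @ ref, -1.0, 1.0)
    angles = np.degrees(np.arccos(np.abs(cosang)))  # ignore normal sign flips
    return angles > threshold_deg

normals = np.array([[0.0, 0.0, 1.0],        # intact, flat face
                    [0.0, 0.7071, 0.7071],  # 45 deg off: spall candidate
                    [0.0, 0.0, -1.0]])      # flipped but still aligned
flags = flag_spalls(normals, reference=np.array([0.0, 0.0, 1.0]))
print(flags)   # [False  True False]
```

Clustering the flagged points then yields connected spall regions that can be overlaid on the textured mesh as part of the 3D damage map.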

Entwicklung eines Verfahrens für die Koregistrierung von Bildverbänden und Punktwolken mit digitalen Bauwerksmodellen / Development of a Method for the Co-Registration of Image Blocks and Point Clouds with Digital Building Models

Kaiser, Tim 08 November 2021
Aufgrund der weiter fortschreitenden Digitalisierung verändern sich die seit langer Zeit etablierten Prozesse im Bauwesen. Dies zeigt sich zum Beispiel in der stetig steigenden Bedeutung des Building Information Modelings (BIM). Eine der wesentlichen Grundideen von BIM besteht darin, das zentrale Modell über den gesamten Lebenszyklus des Bauwerks zu verwenden. Das digitale Bauwerksmodell stellt somit eine der zentralen Komponenten der BIM-Methode dar. Neben den rein geometrischen Ausprägungen des Bauwerks werden im Modell auch eine Vielzahl an semantischen Informationen vorgehalten. Da insbesondere bei größeren Bauwerken ein fortlaufender Veränderungsprozess stattfindet, muss das Modell entsprechend aktualisiert werden, um dem tatsächlichen Istzustand zu entsprechen. Diese Aktualisierung betrifft nicht nur Veränderungen in der Geometrie, sondern auch in den verknüpften Sachdaten. Bezüglich der Aktualisierung des Modells kann die Photogrammetrie mit ihren modernen Messverfahren wie zum Beispiel Structure-from-Motion (SfM) und daraus abgeleiteten Punktwolken einen wesentlichen Beitrag zur Datenerfassung des aktuellen Zustands leisten. Für die erfolgreiche Verknüpfung des photogrammetrisch erfassten Istzustands mit dem durch das Modell definierten Sollzustand müssen beide Datentöpfe in einem gemeinsamen Koordinatensystem vorliegen. In der Regel werden zur Registrierung photogrammetrischer Produkte im Bauwerkskoordinatensystem definierte Passpunkte verwendet. Der Registrierprozess über Passpunkte ist jedoch mit einem erheblichen manuellen Aufwand verbunden. Um den Aufwand der Registrierung möglichst gering zu halten, wurde daher in dieser Arbeit ein Konzept entwickelt, das es ermöglicht, kleinräumige Bildverbände und Punktwolken automatisiert mit einem digitalen Bauwerksmodell zu koregistrieren. 
Das Verfahren nutzt dabei geometrische Beziehungen zwischen aus den Bildern extrahierten 3D-Liniensegmenten und Begrenzungsflächen, die aus dem digitalen Bauwerksmodell gewonnen werden. Die aufgenommenen Bilder des Objektes dienen zu Beginn als Grundlage für die Extraktion von zweidimensionalen Linienstrukturen. Auf Basis eines über SfM durchgeführten Orientierungsprozesses können diese zweidimensionalen Kanten zu einer Rekonstruktion in Form von 3D-Liniensegmenten weiterverarbeitet werden. Die weiterhin benötigten Begrenzungsflächen werden aus einem mit Hilfe der Industry Foundation Classes (IFC) definierten BIM-Modell gewonnen. Das entwickelte Verfahren nutzt dabei auch die von IFC bereitgestellten Möglichkeiten der räumlichen Aggregationshierarchien. Im Zentrum des neuen Koregistrieransatzes stehen zwei große Komponenten. Dies ist einerseits der mittels eines Gauß-Helmert-Modells umgesetze Ausgleichungsvorgang zur Transformationsparameterbestimmung und andererseits der im Vorfeld der Ausgleichung angewandten Matching-Algorithmus zur automatischen Erstellung von Korrespondenzen zwischen den 3D-Liniensegmenten und den Begrenzungsflächen. Die so gebildeten Linien-Ebenen-Paare dienen dann als Beobachtung im Ausgleichungsprozess. Da während der Parameterschätzung eine durchgängige Betrachtung der stochastischen Informationen der Beobachtungen erfolgt, ist am Ende des Registrierprozesses eine Qualitätsaussage zu den berechneten Transformationsparametern möglich. Die Validierung des entwickelten Verfahrens erfolgt an zwei Datensätzen. Der Datensatz M24 diente dabei zum Nachweis der Funktionsfähigkeit unter Laborbedingungen. Über den Datensatz Eibenstock konnte zudem nachgewiesen werden, dass das Verfahren auch in praxisnahen Umgebungen auf einer realen Baustelle zum Einsatz kommen kann. Für beide Fälle konnte eine gute Registriergenauigkeit im Bereich weniger Zentimeter nachgewiesen werden.:Kurzfassung 3 Abstract 4 1. Einleitung 7 1.1. Photogrammetrie und BIM 7 1.2. 
Anwendungsbezug und Problemstellung 7 1.3. Zielsetzung und Forschungsfragen 9 1.4. Aufbau der Arbeit 10 2. Grundlagen 12 2.1. Photogrammetrie 12 2.1.1. Structure-from-Motion (SfM) 12 2.1.2. Räumliche Ähnlichkeitstransformation 14 2.2. Building Information Modeling (BIM) 16 2.2.1. Besonderheiten der geometrisch / topologischen Modellierung 18 2.2.2. Industry Foundation Classes (IFC) 19 2.3. Parameterschätzung und Statistik 21 2.3.1. Nicht lineares Gauß-Helmert-Modell mit Restriktionen 21 2.3.2. Random Sample Consensus (RANSAC) 23 2.3.3. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) 24 3. Stand der Forschung 26 4. Automatische Koregistrierung von Bildverbänden 30 4.1. Überblick 30 4.2. Relative Orientierung des Bildverbandes und Extraktion der 3D-Liniensegmente 33 4.2.1. Line3D++ 33 4.2.2. Stochastische Informationen der 3D-Liniensegmente 36 4.3. Ebenenextraktion aus dem digitalen Gebäudemodell 37 4.4. Linien-Ebenen-Matching 42 4.4.1. Aufstellen von Ebenenhypothesen 42 4.4.2. Analyse und Clustern der Normalenvektorhypothesen 43 4.4.3. Erstellung von Minimalkonfigurationen 44 4.5. Berechnung von Näherungswerten für die Transformationsparameter 46 4.6. Implementiertes Ausgleichungsmodell 49 4.6.1. Funktionales Modell 49 4.6.2. Stochastisches Modell 50 4.7. Entscheidungskriterien der kombinatorischen Auswertung 51 5. Validierung der Methoden 56 5.1. Messung Seminarraum M24 HTW Dresden 56 5.1.1. Untersuchung des Einfluss der SfM2BIM -Programmparameter 59 5.1.2. Ergebnisse der Validierung 64 5.2. Messung LTV Eibenstock 71 6. Diskussion der Ergebnisse 81 6.1. Bewertung der erzielten Genauigkeit 81 6.2. Bewertung der Automatisierbarkeit 82 6.3. Bewertung der praktischen Anwendbarkeit 83 6.4. Beantwortung der Forschungsfragen 85 7. Zusammenfassung und Ausblick 88 Literaturverzeichnis 90 Abbildungsverzeichnis 94 Tabellenverzeichnis 96 A. Anhang 97 A.1. Systemarchitektur SfM2BIM 97 A.2. 
Untersuchung SfM2BIM Parameter 97 / Due to the ongoing digitalization, traditional and well-established processes in the construction industry face lasting transformations. The rising significance of Building Information Modeling (BIM) can be seen as an example for this development. One of the core principles of BIM is the usage of the model throughout the entire life cycle of the building. Therefore, the digital twin can be regarded as one of the central components of the BIM method. Besides of the pure geometry of the building the corresponding model also contains a huge amount of semantic data. Especially in large building complexes constant changes are taking place. Consequently, the model also has to be updated regularly in order to reflect the actual state. These actualizations include both changes in geometry and in the linked technical data. Photogrammetry with its modern measuring and reconstruction techniques like structure from motion can help to facilitate this update process. In order to establish a link between the photogrammetric recorded present state and the nominal state specified by the building model both datasets have to be available in a common reference frame. Usually ground control points are used for registering the photogrammetric results with the building coordinate system. However, using ground control points results in a very labor-intensive registration process. In order to keep the required effort as low as possible this work proposes a novel concept to automatically co-register local image blocks with a digital building model. The procedure makes use of geometric relationships between 3D-linesegments that get extracted from the input images and bounding surfaces that are derived from the building model. At first the captured images are used to extract two-dimensional line patterns. These edges get further processed to 3D line segments based on an orientation estimation using structure from motion. 
The additionally required bounding surfaces are derived from a building model defined via the Industry Foundation Classes (IFC); the spatial aggregation structures defined in the IFC are used to simplify the procedure. Two components form the core of the novel approach: on the one hand, the adjustment calculation that estimates the transformation parameters with a full Gauß-Helmert model, and on the other hand, the matching algorithm developed for establishing line-plane correspondences. The correspondences thus formed serve as observations for the adjustment process. During the parameter estimation, the stochastic information of the observations is fully taken into account, so quality predictions can be made upon completion of the registration process. The validation of the developed methods was conducted on two datasets. The M24 dataset served as the primary validation source, since the results of the algorithm could be checked under laboratory conditions and compared with results obtained from ground control points. The Eibenstock dataset demonstrated that the procedure also works under practical conditions on a real construction site. In both cases the registration accuracy averages a few centimeters.
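The co-registration described in the abstract above ultimately estimates a spatial similarity transformation (the "Räumliche Ähnlichkeitstransformation" of section 2.1.2) between the image block and the building model. The thesis does this from line-plane correspondences in a Gauß-Helmert adjustment with full stochastic information; as a much simpler, purely illustrative stand-in, the sketch below recovers the same 7 parameters (scale, rotation, translation) in closed form from 3D point correspondences (the Horn/Umeyama solution). Function and variable names are ours, not from the thesis.

```python
import numpy as np

def estimate_similarity_transform(src, dst):
    """Closed-form least-squares estimate of s, R, t with dst ~ s * R @ src + t.

    Umeyama's solution from 3D point correspondences; a stand-in for the
    thesis's Gauss-Helmert adjustment, which instead uses line-plane
    correspondences and the observations' full covariances.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    H = S.T @ D / n                                   # 3x3 cross-covariance
    U, sig, Vt = np.linalg.svd(H)
    E = np.eye(3)
    E[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ E @ U.T                                # optimal rotation
    var_src = (S ** 2).sum() / n                      # mean squared deviation of src
    s = np.trace(np.diag(sig) @ E) / var_src          # optimal uniform scale
    t = mu_d - s * R @ mu_s                           # translation closes the loop
    return s, R, t
```

With exact correspondences the closed form is exact; in a pipeline like the one above it would be wrapped in RANSAC (section 2.3.2) to reject outlier matches before a rigorous adjustment refines the parameters.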
259

Contributions en traitements basés points pour le rendu et la simulation en mécanique des fluides / Contributions in point based processing for rendering and fluid simulation

Bouchiba, Hassan 05 July 2018 (has links)
Le nuage de points 3D est la donnée obtenue par la majorité des méthodes de numérisation surfacique actuelles. Nous nous intéressons ainsi dans cette thèse à l'utilisation de nuages de points comme unique représentation explicite de surface. Cette thèse présente deux contributions en traitements basés points. La première contribution proposée est une nouvelle méthode de rendu de nuages de points bruts et massifs par opérateurs pyramidaux en espace image. Cette nouvelle méthode s'applique aussi bien à des nuages de points d'objets scannés, que de scènes complexes. La succession d'opérateurs en espace image permet alors de reconstruire en temps réel une surface et d'en estimer des normales, ce qui permet par la suite d'en obtenir un rendu par ombrage. De plus, l'utilisation d'opérateurs pyramidaux en espace image permet d'atteindre des fréquences d'affichage plus élevées d'un ordre de grandeur que l'état de l'art. La deuxième contribution présentée est une nouvelle méthode de simulation numérique en mécanique des fluides en volumes immergés par reconstruction implicite étendue. La méthode proposée se base sur une nouvelle définition de surface implicite par moindres carrés glissants étendue à partir d'un nuage de points. Cette surface est alors utilisée pour définir les conditions aux limites d'un solveur Navier-Stokes par éléments finis en volumes immergés, qui est utilisé pour simuler un écoulement fluide autour de l'objet représenté par le nuage de points. Le solveur est interfacé à un mailleur adaptatif anisotrope qui permet de capturer simultanément la géométrie du nuage de points et l'écoulement à chaque pas de temps de la simulation. / Most surface 3D scanning techniques produce 3D point clouds. This thesis tackles the problem of using points as the only explicit surface representation. It presents two contributions in point-based processing. The first contribution is a new raw and massive point cloud screen-space rendering algorithm.
This new method can be applied to a wide variety of data, from small objects to complex scenes. A sequence of screen-space pyramidal operators reconstructs a surface in real time and estimates its normals, which are then used for deferred shading. In addition, the pyramidal operators yield framerates one order of magnitude higher than state-of-the-art methods. The second contribution is a new immersed-boundary computational fluid dynamics method based on extended implicit surface reconstruction. The proposed method rests on a new implicit surface definition from a point cloud by extended moving least squares. This surface is then used to define the boundary conditions of a finite-element immersed-boundary transient Navier-Stokes solver, which computes flows around the object sampled by the point cloud. The solver is interfaced with an anisotropic, adaptive meshing algorithm that refines the computational grid around both the geometry defined by the point cloud and the flow at each timestep of the simulation.
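The second contribution above hinges on evaluating an implicit surface defined directly from a point cloud by moving least squares. The sketch below implements the plain implicit MLS signed field (a weighted average of signed distances to the samples' tangent planes), not the thesis's *extended* definition; the Gaussian weighting and all names are our assumptions for illustration.

```python
import numpy as np

def imls_value(x, points, normals, h=0.1):
    """Signed implicit MLS value f(x); the set f(x) = 0 approximates the surface.

    Plain implicit moving least squares over oriented samples: each sample
    (p_i, n_i) contributes the signed distance of x to its tangent plane,
    weighted by a Gaussian of support scale h. Negative inside, positive
    outside (for outward-pointing normals).
    """
    d = x - points                          # (n, 3) offsets from each sample to x
    r2 = (d ** 2).sum(axis=1)               # squared distances
    w = np.exp(-r2 / h ** 2)                # Gaussian weights, support scale h
    plane_dist = (d * normals).sum(axis=1)  # signed distance to each tangent plane
    return (w * plane_dist).sum() / w.sum()
```

In an immersed-boundary solver, such a field lets every grid node of the background mesh query inside/outside status and distance to the scanned object without ever building an explicit triangle mesh; in practice the sum would be restricted to neighbors found with a spatial index rather than the full cloud.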
260

Reconstruction multi-vues et texturation / Multi-view reconstruction and texturing

Aganj, Ehsan 11 December 2009 (has links) (PDF)
Dans cette thèse, nous étudions les problèmes de reconstruction statique et dynamique à partir de vues multiples et texturation, en s'appuyant sur des applications réelles et pratiques. Nous proposons trois méthodes de reconstruction destinées à l'estimation d'une représentation d'une scène statique/dynamique à partir d'un ensemble d'images/vidéos. Nous considérons ensuite le problème de texturation multi-vues en se concentrant sur la qualité visuelle de rendu. / In this thesis we study the problems of static and dynamic reconstruction from multiple views, and of texturing, guided by real, practical applications. We propose three reconstruction methods for estimating a representation of a static or dynamic scene from a set of images or videos. We then consider the problem of multi-view texturing, focusing on the visual quality of the rendering.
