  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Knowledge-based 3D point clouds processing

Truong, Quoc Hung 15 November 2013 (has links) (PDF)
The modeling of real-world scenes through capturing 3D digital data has proven to be both useful and applicable in a variety of industrial and surveying applications. Entire scenes are generally captured by laser scanners and represented by large unorganized point clouds, possibly along with additional photogrammetric data. A typical challenge in processing such point clouds and data lies in detecting and classifying objects that are present in the scene. In addition to the presence of noise, occlusions and missing data, such tasks are often hindered by the irregularity of the capturing conditions, both within the same dataset and from one dataset to another. Given the complexity of the underlying problems, recent processing approaches attempt to exploit semantic knowledge for identifying and classifying objects. In the present thesis, we propose a novel approach that makes use of intelligent knowledge management strategies for processing 3D point clouds as well as identifying and classifying objects in digitized scenes. Our approach extends the use of semantic knowledge to all stages of the processing, including the guidance of the individual data-driven processing algorithms. The complete solution consists of a multi-stage iterative concept based on three factors: the modeled knowledge, the package of algorithms, and a classification engine. The goal of the present work is to select and guide algorithms following an adaptive and intelligent strategy for detecting objects in point clouds. Experiments with two case studies demonstrate the applicability of our approach. The studies were carried out on scans of the waiting area of an airport and along the tracks of a railway. In both cases the goal was to detect and identify objects within a defined area. Results show that our approach succeeded in identifying the objects of interest while using various data types.
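The three-factor concept described above (modeled knowledge, a package of algorithms, a classification engine) can be caricatured in a few lines. The sketch below is a loose illustration of knowledge-driven algorithm selection only; every name in it (the rules, the detectors, the scene dictionary) is a hypothetical stand-in, not the thesis' actual knowledge model or engine.

```python
# A minimal sketch of a knowledge-guided processing loop: modeled knowledge
# maps object classes to suitable data-driven detectors, and a tiny engine
# applies them to label the scene. All names are illustrative assumptions.

def detect_planar(scene):
    # stand-in for a data-driven detector of flat structures
    return [o for o in scene["objects"] if o["flat"]]

def detect_cylindrical(scene):
    # stand-in for a detector of upright, non-flat structures
    return [o for o in scene["objects"] if not o["flat"]]

# modeled knowledge: which algorithm suits which object class
KNOWLEDGE = {"seat": detect_planar, "pillar": detect_cylindrical}

def classify(scene, targets):
    """Pick an algorithm per target class via the knowledge model and label matches."""
    labels = {}
    for cls in targets:
        algorithm = KNOWLEDGE[cls]          # knowledge-driven selection
        for obj in algorithm(scene):
            labels.setdefault(obj["id"], cls)
    return labels

scene = {"objects": [{"id": 1, "flat": True}, {"id": 2, "flat": False}]}
result = classify(scene, ["seat", "pillar"])
```

In a real system the knowledge model would also carry parameters and preconditions for each algorithm, so the engine can iterate and refine its choices; here the mapping is reduced to a single lookup.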
72

Novas abordagens para segmentação de nuvens de pontos aplicadas à robótica autônoma e reconstrução 3D / New approaches for segmenting point clouds applied to autonomous robotics and 3D reconstruction

Santos, Gilberto Antônio Marcon dos 12 August 2016 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / Depth sensing methods yield point clouds that represent neighboring surfaces. Interpreting and extracting information from point clouds is an established field, full of yet unsolved challenges. Classic image processing algorithms are not applicable or must be adapted because the organized structure of 2D images is not available. This work presents three contributions to the field of point cloud processing and segmentation. These contributions are the results of investigations carried out at the Laboratory for Education and Innovation in Automation – LEIA, aiming to advance the knowledge related to applying spatial sensing to autonomous robotics. The first contribution is a new algorithm, based on evolutionary methods, for extracting planes from point clouds. Building on the method proposed by Bazargani, Mateus and Loja (2015), this contribution adopts evolution strategies in place of genetic algorithms, making the process less sensitive to user-defined parameters.
The second contribution is a method for segmenting ground and obstacles in point clouds for autonomous navigation that uses the proposed plane extraction algorithm. The use of a quadtree for adaptive area segmentation allows points to be classified with high accuracy, efficiently, and with a time performance compatible with low-cost embedded devices. The third contribution is a variant of the proposed segmentation method made more noise-tolerant and robust by incorporating a neural classifier. Using a neural classifier in place of simple thresholding makes the process less sensitive to noise and faults in the point clouds, which is especially interesting for processing point clouds obtained from real-time stereo reconstruction methods. A thorough sensitivity, accuracy, and efficiency analysis is presented for each algorithm. The dihedral angle metric (the angle between the detected plane and the reference polygons that share at least one point) proposed by Bazargani, Mateus and Loja (2015) is used to quantify the accuracy of the plane detection method. The ratio between correctly classified points and the total number of points is used as the accuracy metric for the ground segmentation methods. Additionally, computing costs and execution times are considered and compared with the main state-of-the-art methods.
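As a rough illustration of the first contribution's idea, the sketch below fits a plane to a synthetic point cloud with a simple (1+1) evolution strategy. The plane model z = a·x + b·y + c, the plus-selection, the step-size adaptation and all parameters are assumptions for illustration, not the thesis' actual algorithm.

```python
import random

# (1+1) evolution strategy: mutate the parent plane parameters with Gaussian
# noise, keep the child only if it does not worsen the squared residual, and
# adapt the mutation step size on success/failure.

def fitness(params, points):
    a, b, c = params
    return sum((a * x + b * y + c - z) ** 2 for x, y, z in points)

def es_fit_plane(points, generations=500, sigma=0.5, seed=1):
    rng = random.Random(seed)
    parent = [0.0, 0.0, 0.0]
    best = fitness(parent, points)
    for _ in range(generations):
        child = [p + rng.gauss(0.0, sigma) for p in parent]
        f = fitness(child, points)
        if f <= best:                 # plus-selection: never accept a worse plane
            parent, best = child, f
            sigma *= 1.1              # 1/5th-rule style step-size adaptation
        else:
            sigma *= 0.95
    return parent, best

# synthetic cloud sampled from the plane z = 2x - y + 3
pts = [(x, y, 2 * x - y + 3) for x in range(-3, 4) for y in range(-3, 4)]
params, err = es_fit_plane(pts)
```

The appeal over a genetic algorithm, as the abstract notes, is the smaller set of user-defined parameters: here essentially one initial step size, which the strategy adapts itself.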
74

3D urban cartography incorporating recognition and temporal integration / Cartographie urbaine 3D avec reconnaissance et intégration temporelle

Aijazi, Ahmad Kamal 15 December 2014 (has links)
Over the years, 3D urban cartography has gained widespread interest and importance in the scientific community, driven by an ever-increasing demand for urban landscape analysis in popular applications, coupled with advances in 3D data acquisition technology. As a result, work on the 3D modeling and visualization of cities has intensified in recent years. Applications have been very successful in delivering effective visualizations of large-scale models based on aerial and satellite imagery to a broad audience, which has created a demand for ground-based models as the next logical step towards 3D visualizations of cities. Integrated in several geographical navigators, such as Google Street View, Microsoft Visual Earth or Geoportail, such models are accessible to a large public, offering a realistic representation of the terrain created by mobile terrestrial image acquisition techniques. In urban environments, however, the quality of data acquired by these hybrid terrestrial vehicles is widely hampered by the presence of temporarily stationary and dynamic objects (pedestrians, cars, etc.) in the scene. Other associated problems include the efficient update of the urban cartography, effective change detection in the urban environment, the processing of noisy data in cluttered urban environments, the matching/registration of point clouds acquired in successive passages, and wide variations in environmental conditions. Another aspect that has attracted much attention recently is the semantic analysis of the urban environment to semantically enrich 3D maps of cities, necessary for various perception tasks and modern applications. In this thesis, we present a scalable framework for automatic 3D urban cartography which incorporates recognition and temporal integration. We present in detail the current practices in the domain, along with the different methods, applications, and recent data acquisition and mapping technologies, as well as the problems and challenges associated with them. The work presented addresses many of these challenges, mainly pertaining to the classification of the urban environment, automatic change detection, efficient updating of 3D urban cartography, and semantic analysis of the urban environment. In the proposed method, we first classify the urban environment into permanent and temporary classes. The objects classified as temporary are then removed from the 3D point cloud, leaving behind a perforated 3D point cloud of the urban environment. These perforations, along with other imperfections, are then analyzed and progressively removed by incremental updating, exploiting the concept of multiple passages. We also show that the proposed method of temporal integration helps improve the semantic analysis of the urban environment, especially building façades. The proposed methods ensure that the resulting 3D cartography contains only the exact, accurate and well-updated permanent features of the urban environment. These methods are validated on real data obtained from different sources in different environments. The results demonstrate not only the efficiency, scalability and technical strength of the method, but also that it is ideally suited for applications pertaining to urban landscape modeling and cartography requiring frequent database updating.
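The multiple-passages idea above can be sketched in miniature: structure observed in most passes is kept as permanent, while objects seen in only one pass are discarded, and the perforations they leave are filled by the other passes. The voxel quantisation and the support threshold below are assumptions for illustration, not the thesis' method.

```python
from collections import Counter

# Merge several passes over the same scene: quantise each pass to voxels and
# keep only voxels observed in at least `min_support` passes as permanent.

def voxelize(points, size=1.0):
    return {(round(x / size), round(y / size), round(z / size))
            for x, y, z in points}

def merge_passes(passes, min_support=2, size=1.0):
    counts = Counter()
    for cloud in passes:
        counts.update(voxelize(cloud, size))   # one vote per pass per voxel
    return {v for v, n in counts.items() if n >= min_support}

building = [(0, 0, 0), (1, 0, 0), (1, 1, 0)]   # permanent structure
pedestrian = [(5, 5, 0)]                       # present in pass 1 only
passes = [building + pedestrian, building, building]
permanent = merge_passes(passes)
```

Note that the same vote also fills perforations: a wall occluded by the pedestrian in pass 1 still accumulates support from passes 2 and 3.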
75

Měřická dokumentace památkového objektu v areálu hradu Veveří / Metric survey documentation of historic building in the area of Veveří castle

Velecká, Kateřina January 2021 (has links)
The diploma thesis deals with the production of metric survey documentation of part of a historic building in the area of Veveří castle, namely two buildings located on the so-called "Příhrádek", in the form of a ground plan and a section at a scale of 1:50 and elevations at a scale of 1:100. The thesis contains a theoretical part describing methods of surveying monuments, their outcomes, and the software used, and a practical part that deals with the measurement itself and the subsequent processing leading to the production of the graphical outputs.
76

Visual Analysis of High-Dimensional Point Clouds using Topological Abstraction

Oesterling, Patrick 14 April 2016 (has links)
This thesis is about visualizing a kind of data that is trivial for computers to process but difficult for humans to imagine, because nature does not equip us with intuition for this type of information: high-dimensional data. Such data often result from representing observations of objects under various aspects or with different properties. In many applications, a typical, laborious task is to find related objects or to group those that are similar to each other. One classic solution for this task is to imagine the data as vectors in a Euclidean space with the object variables as dimensions. Using Euclidean distance as a measure of similarity, objects with similar properties and values accumulate into groups, so-called clusters, that are exposed by cluster analysis on the high-dimensional point cloud. Because similar vectors can be thought of as objects that are alike in terms of their attributes, the point cloud's structure and individual cluster properties, like their size or compactness, summarize data categories and their relative importance. The contribution of this thesis is a novel analysis approach for the visual exploration of high-dimensional point clouds that does not suffer from structural occlusion. The work is based on two key concepts: the first idea is to discard those geometric properties that cannot be preserved and thus lead to the typical artifacts. Topological concepts are used instead to shift the focus away from a point-centered view of the data to a more structure-centered perspective. The advantage is that topology-driven clustering information can be extracted in the data's original domain and preserved without loss in low dimensions. The second idea is to split the analysis into a topology-based global overview and a subsequent geometric local refinement.
The occlusion-free overview enables the analyst to identify features and to link them to other visualizations that permit analysis of those properties not captured by the topological abstraction, e.g. cluster shape or value distributions in particular dimensions or subspaces. The advantage of separating structure from data-point analysis is that restricting local analysis to data subsets significantly reduces artifacts and the visual complexity of standard techniques. That is, the additional topological layer enables the analyst to identify structure that was hidden before and to focus on particular features by suppressing irrelevant points during local feature analysis. This thesis addresses the topology-based visual analysis of high-dimensional point clouds for both the time-invariant and the time-varying case. Time-invariant means that the points do not change in number or position; the analyst explores the clustering of a fixed and constant set of points. The extension to the time-varying case implies the analysis of a varying clustering, where clusters appear, merge, split, or vanish. Especially for high-dimensional data, both tracking, i.e. relating features over time, and visualizing the changing structure are difficult problems to solve.
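In the spirit of such a structure-centered overview, the sketch below extracts clusters from a small high-dimensional point set by merging points at a given scale (single-linkage via union-find). This is a strong simplification of the topological abstraction the thesis describes, used only to show how cluster structure can be read off without plotting the points.

```python
# Single-linkage clustering at a fixed scale: connect points closer than
# `scale`, then report the connected components as clusters.

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path halving
        i = parent[i]
    return i

def clusters_at_scale(points, scale):
    n = len(points)
    parent = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
            if d2 <= scale * scale:
                parent[find(parent, i)] = find(parent, j)   # union
    groups = {}
    for i in range(n):
        groups.setdefault(find(parent, i), []).append(i)
    return sorted(groups.values())

# two well-separated blobs in 4D: structure is invisible to the eye,
# but trivial to summarize
data = [(0, 0, 0, 0), (0.5, 0, 0, 0), (10, 10, 10, 10), (10.5, 10, 10, 10)]
parts = clusters_at_scale(data, scale=1.0)
```

Sweeping `scale` and recording when components merge yields the kind of scale-dependent cluster summary that a topological overview generalizes.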
77

A 2D/3D Feature-Level Information Fusion Architecture For Remote Sensing Applications

Schierl, Jonathan 11 August 2022 (has links)
No description available.
78

Automatische Extraktion von 3D-Baumparametern aus terrestrischen Laserscannerdaten

Bienert, Anne 11 January 2013 (has links)
An important application field of airborne laser scanning is forestry and forest science. The captured data serve the area-wide derivation of digital terrain and canopy models, from which tree height can be determined. Due to the nadir recording direction, near-ground tree parameters such as the diameter at breast height (dbh) and the crown base height can only be estimated using forest models. High-resolution terrestrial laser scanner data complement airborne laser scanner data well: forest inventory parameters such as dbh and tree height can be derived directly, and terrestrially captured data can be used to densify digital terrain models. As a result of the dense three-dimensional point clouds captured, a high level of detail exists, and a high degree of automation in the determination of the parameters is possible. To control and manage the existing stock of wood, forest inventories are carried out at periodic intervals on the basis of sample plots. Geometric tree parameters, such as tree height, tree position and dbh, are measured and documented. This conventional data acquisition is characterised by a large amount of work and time. Because of this, algorithms were developed in this work to automatically determine geometric tree parameters from terrestrial laser scanner point clouds. Besides contactless, light-independent data acquisition, the data enable an objective and fast determination of parameters. Finally, the algorithms were combined into a single program that realises tree detection and the determination of the most important parameters in one step. The algorithms are tested on datasets from three different study areas and validated against manually measured tree parameters.
The natural vegetation structure causes occlusions, particularly inside the crown, when scanning from a single position. These scan shadows can be minimised, though not completely avoided, via an appropriate scan configuration. Additionally, the registration process in forest scenes is time-consuming; the largest problem is finding a suitable distribution of tie points when dense ground vegetation exists. Therefore, an approach is introduced that registers the datasets via the computed centre points of the breast-height diameters. The method removes the need for artificial tie points, but assumes that the centre points of identical stem sections are present in both datasets. Nevertheless, the biggest uncertainty is found in the Z component of the translation. A method using the tree axes and one homologous tie point, which fixes the datasets at that point, gives better results. The methods are compared with the traditional tie-point registration process and analysed using a single study area. Georeferencing of terrestrial laser scanner data in forest stands is possible only to a limited extent and with low accuracy, due to signal shadowing of global navigation satellite systems. Thus, an approach was developed to register airborne and terrestrial laser scanner data, and thereby solve the georeferencing problem, using only the tree positions and the available digital terrain model.
With the help of three examples, the benefits of applying terrestrial laser scanning beyond forest inventory are shown. Besides the derivation of static deformation structures of single trees, the data are used to determine vegetation models on the basis of a grid structure (voxel space) for the simulation of turbulent flows in and over forest stands. In addition, a height image of the bark derived from the laser scanner data can be used, with image processing methods (texture analysis), to classify the tree species. Terrestrial laser scanning is a valuable tool for forest applications. Existing inventory concepts can be extended, and new fields in forestry and forest science are opening up, e.g. the use of a scanner on a harvester during the harvesting process. Terrestrial laser scanners are becoming increasingly attractive for forest applications thanks to continuous technological enhancements in weight, range and data rate.
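A core step in dbh determination from a terrestrial scan is fitting a circle to the stem cross-section at breast height. The sketch below uses the algebraic (Kåsa) least-squares circle fit on a synthetic slice; the data, the plain-Python normal-equation solver, and the tolerances are illustrative assumptions, not the thesis' implementation.

```python
import math

# Kåsa fit: minimise sum (x^2 + y^2 + D*x + E*y + F)^2, which is linear in
# (D, E, F); then centre = (-D/2, -E/2) and r = sqrt(D^2/4 + E^2/4 - F).

def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_circle(xy):
    # accumulate normal equations for x^2 + y^2 + D*x + E*y + F = 0
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for x, y in xy:
        row, rhs = (x, y, 1.0), -(x * x + y * y)
        for i in range(3):
            for j in range(3):
                A[i][j] += row[i] * row[j]
            b[i] += row[i] * rhs
    D, E, F = solve3(A, b)
    cx, cy = -D / 2.0, -E / 2.0
    return (cx, cy), math.sqrt(cx * cx + cy * cy - F)

# synthetic stem slice at 1.3 m: radius 0.15 m around (2, 3)
slice_pts = [(2 + 0.15 * math.cos(t / 10.0), 3 + 0.15 * math.sin(t / 10.0))
             for t in range(0, 63)]
center, radius = fit_circle(slice_pts)
dbh = 2.0 * radius
```

The fitted centres are exactly what the stem-based registration described above relies on: identical stem sections in two scans yield homologous centre points without artificial tie points.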
79

A Deep-Learning Approach to Evaluating the Navigability of Off-Road Terrain from 3-D Imaging

Pech, Thomas Joel 30 August 2017 (has links)
No description available.
80

A Comprehensive Framework for Quality Control and Enhancing Interpretation Capability of Point Cloud Data

Yi-chun Lin (13960494) 14 October 2022 (has links)
Emerging mobile mapping systems include a wide range of platforms, for instance, manned aircraft, unmanned aerial vehicles (UAV), and terrestrial systems like trucks, tractors, robots, and backpacks, that can carry multiple sensors including LiDAR scanners, cameras, and georeferencing units. Such systems can maneuver in the field to quickly collect high-resolution data, capturing detailed information over an area of interest. With the increased volume and distinct characteristics of the data collected, practical quality control procedures that assess the agreement within/among datasets acquired by various sensors/systems at different times are crucial for accurate, robust interpretation. Moreover, the ability to derive semantic information from acquired data is the key to leveraging the complementary information captured by mobile mapping systems for diverse applications. This dissertation addresses these challenges for different systems (airborne and terrestrial), environments (urban and rural), and applications (agriculture, archaeology, hydraulics/hydrology, and transportation).
In this dissertation, quality control procedures that utilize features automatically identified and extracted from acquired data are developed to evaluate the relative accuracy between multiple datasets. The proposed procedures do not rely on manually deployed ground control points or targets and can handle challenging environments such as coastal areas or agricultural fields. Moreover, considering the varying characteristics of acquired data, this dissertation improves several data processing/analysis techniques essential for meeting the needs of various applications. An existing ground filtering algorithm is modified to deal with variation in point density; digital surface model (DSM) smoothing and seamline control techniques are proposed for improving orthophoto quality in agricultural fields. Finally, this dissertation derives semantic information for diverse applications, including 1) shoreline retreat quantification, 2) automated row/alley detection for plant phenotyping, 3) enhancement of orthophoto quality for tassel/panicle detection, and 4) point cloud semantic segmentation for mapping transportation corridors. The proposed approaches are tested using multiple datasets from UAV and wheel-based mobile mapping systems. Experimental results verify that the proposed approaches can effectively assess the data quality and provide reliable interpretation. This dissertation highlights the potential of modern mobile mapping systems to map challenging environments for a variety of applications.
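A target-free relative-accuracy check of the kind described can be sketched very simply: score the agreement between two point clouds by the median nearest-neighbour distance from one cloud to the other. The brute-force search and the synthetic 2 cm offset below are assumptions for illustration, not the dissertation's feature-based procedure.

```python
import math

# Median nearest-neighbour distance from cloud_a to cloud_b, as a crude
# target-free agreement score between two acquisitions of the same area.

def nn_distance(p, cloud):
    return min(math.dist(p, q) for q in cloud)

def median_nn_distance(cloud_a, cloud_b):
    d = sorted(nn_distance(p, cloud_b) for p in cloud_a)
    n = len(d)
    return d[n // 2] if n % 2 else 0.5 * (d[n // 2 - 1] + d[n // 2])

ref = [(float(i), 0.0, 0.0) for i in range(10)]
shifted = [(x + 0.02, y, z) for x, y, z in ref]   # 2 cm systematic offset
score = median_nn_distance(shifted, ref)
```

The median (rather than the mean) keeps the score robust to a few outlying points; a production procedure would additionally use a spatial index instead of the O(n²) search.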
