61

DELINEATION AND ANALYSIS OF ACTIVE GEOMORPHOLOGICAL PROCESSES USING HIGH RESOLUTION SPATIAL SURVEYS

Lee, Rebecca January 2022 (has links)
The past few decades have seen rapid improvement in technologies related to remote sensing, specifically in digital photogrammetry and the use of unmanned aerial vehicles (UAVs). This has presented new opportunities to collect imagery at both a high temporal and spatial resolution to create detailed digital elevation models (DEMs) and investigate small-scale geomorphological features and their development over time. The high-resolution capacity of this methodology is well-suited to the study of a variety of terrains in which many critical geomorphological features are low relief and difficult or impossible to delineate using traditional remote sensing datasets. This study applies UAV-based imagery collection and data analysis, in conjunction with sedimentological analysis, at two study sites in Iceland and southern Ontario. The primary objective of this work is to explore the utility of integrating high-resolution spatial surveys with more traditional field techniques to identify geomorphological features, interpret their depositional origin, and quantify temporal changes in their form. The first study was completed on the forefields of Öldufellsjökull and western Sléttjökull, two surge-type outlet glaciers of the Mýrdalsjökull Ice Cap in southeast Iceland. Glacial deposits are important sources of paleoclimatic information, but not all deposits are formed by processes that reflect the overall climatic conditions of a region; surge-type (fast-flowing) glaciers undergo periodic episodes of rapid ice movement, often unrelated to ambient climatic conditions. Remotely sensed data and field investigations were combined to complete a landsystem analysis of the forefields of Öldufellsjökull and western Sléttjökull, and a UAV was used to collect high-resolution imagery of areas of particular interest. The forefields of Öldufellsjökull and western Sléttjökull lack many of the characteristics typical of surge-type landsystems and instead are more similar to the active temperate landsystem common in Iceland. The identification of landforms considered to be diagnostic of surge-type glacier behaviour was only possible through a targeted high-resolution UAV survey, suggesting that small-scale diagnostic landforms may be overlooked in many investigations. The second study focused on the Niagara Escarpment in Hamilton, Ontario, a major landform resulting from extensive glacial and fluvial erosion of Paleozoic sedimentary rocks during the late Quaternary. In Hamilton, the Niagara Escarpment is a steep-faced cuesta composed of Ordovician and Silurian sedimentary rocks. Recent rockfalls onto roads crossing the escarpment have raised serious concerns about its stability. To address these concerns, and to provide more information on erosional processes active along the escarpment in Hamilton, a comprehensive study of the Niagara Escarpment was completed, including multi-temporal photogrammetric surveys of select rock faces and detailed sedimentological and fracture analyses. A comprehensive lithological investigation of all accessible rock outcrops in Hamilton was completed to identify areas most likely to experience erosion based on site characteristics. A second component of this investigation was to evaluate the utility of using high-resolution imagery combined with Structure from Motion (SfM) software to detect temporal changes on the escarpment face.
A staged erosion study was conducted in which lithological blocks of a known size were removed from the escarpment face at a selected site to determine the lower limits of detection of erosion using this methodology. The study found that the location of block removal (erosion) was consistently identified, but the calculated volume of blocks removed was less accurately determined, differing by an average of 175% from the known volume of the block. A further study using the same methodology tested its ability to identify areas of natural loss (erosion) from the escarpment face. Based on multiple surveys taken 14 months apart at a selected study site, approximately one third of the area of interest experienced either loss (erosion) or gain (deposition) of material. There appear to be clear connections between lithology, density of fracturing, and the location of material loss (erosion); areas of the outcrop characterised by interbedded shales, and those exposing densely fractured sandstone or dolostone, were most likely to erode. The lithological characteristics of the Niagara Escarpment, including the strength of individual stratigraphic units, their vertical arrangement, and their density of fracturing, as well as climatic and hydrological factors (e.g., groundwater flow, location of surficial water features, mean annual temperature, mean annual precipitation), all contribute to the amount and types of erosion active on the exposed rock face. The studies reported in this thesis have integrated high-resolution, close-range imagery with traditional field techniques to explore the characteristics and development of geomorphological forms in different terrain types. In each of the studies, the importance of collecting high-resolution imagery (<10 cm) to map geomorphological features of various scales is highlighted. / Dissertation / Doctor of Science (PhD)
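The temporal change detection described above typically reduces to differencing repeat DEMs and ignoring changes below a limit of detection. The sketch below is a minimal, hypothetical illustration of that workflow in Python; the 10 cm limit of detection, the array names, and the cell size are assumptions for illustration, not values taken from the thesis.

import numpy as np

def dem_of_difference(dem_before, dem_after, cell_size, lod=0.10):
    """Difference two co-registered DEMs and summarise surface change.

    dem_before, dem_after : 2D arrays of elevations (m) on the same grid
    cell_size             : grid resolution (m)
    lod                   : limit of detection (m); smaller changes are treated
                            as survey noise (assumed placeholder value)
    """
    dod = dem_after - dem_before                      # DEM of difference (m)
    significant = np.abs(dod) > lod                   # mask out sub-LoD noise
    cell_area = cell_size ** 2
    erosion_volume = -dod[significant & (dod < 0)].sum() * cell_area
    deposition_volume = dod[significant & (dod > 0)].sum() * cell_area
    changed_fraction = significant.mean()             # share of cells with real change
    return erosion_volume, deposition_volume, changed_fraction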
62

Understanding perception of different urban thermal model visualizations

Barua, Gunjan 17 March 2023 (has links)
While satellite-based remote sensing techniques are often used for studying and visualizing the urban heat island effect, they are limited in terms of resolution, view bias, and revisit times. In comparison, modern UAVs equipped with infrared sensors allow very fine-scale (cm) data to be collected over smaller areas and can provide the means for a full 3D thermal reconstruction over limited spatial extents. Irrespective of the data collection method, the thermal properties of cities are typically visually represented using color, although the choice of colormap varies widely. Previous cartographic research has demonstrated that colormap and other cartographic choices affect people's understanding. This research study examines the difference in map reading performance between satellite and drone-sourced thermal pseudo-color images for three map reading tasks, the impact of color map selection on map reading, and the potential benefits of adding shading to thermal maps using high-resolution digital surface models for improved interaction. Participants expressed a preference for the newly designed rainbow-style color map "turbo" and the FLIR "ironbow" colormap. However, user preferences were not strongly related to map reading performance, and differences were partly explained by the extra information afforded by multi-hue and shading-enhanced images. / Master of Science / While satellite-based remote sensing techniques are often used for studying and visualizing the urban heat island effect, they are limited in terms of resolution, view bias, and revisit times. In comparison, modern drones or Unmanned Aerial Vehicles (UAVs) equipped with infrared sensors allow very fine-scale (cm) data to be collected over smaller areas and can provide the means for a full 3D thermal reconstruction over a small area. Irrespective of the data collection method, the thermal properties of cities are typically visually represented using color, although the choice of colormap varies widely. Previous cartographic research has demonstrated that colormap and other cartographic choices affect people's understanding. This research study examines the difference in map reading performance between satellite and drone-sourced thermal pseudo-color images for three map reading tasks, the impact of color map selection on map reading, and the potential benefits of adding hillshade augmentation to thermal maps using high-resolution digital surface models for improved interaction. Participants expressed a preference for the newly designed rainbow-style color map "turbo" and the FLIR "ironbow" colormap. However, user preferences were not strongly related to map reading performance, and differences were partly explained by the extra information afforded by multi-hue and shading-enhanced images.
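The rendering choices examined in this study (pseudo-colouring a thermal raster with a chosen colormap and blending in shading derived from a digital surface model) can be sketched with standard matplotlib calls. This is a hedged illustration only; the colormap, sun angles, and blend mode below are assumptions rather than the settings used in the study.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LightSource, Normalize

def render_thermal(temperature, dsm, cmap="turbo", azdeg=315, altdeg=45):
    """Pseudo-colour a thermal raster and blend in DSM-derived hillshading."""
    norm = Normalize(np.nanmin(temperature), np.nanmax(temperature))
    rgb = plt.get_cmap(cmap)(norm(temperature))[:, :, :3]   # colormap the temperatures
    ls = LightSource(azdeg=azdeg, altdeg=altdeg)             # illumination direction
    # shade_rgb keeps the thermal hues but modulates brightness by terrain relief
    return ls.shade_rgb(rgb, dsm, blend_mode="soft", vert_exag=1.0)

# e.g. plt.imshow(render_thermal(temp_array, dsm_array, cmap="inferno")) swaps the
# rainbow-style "turbo" map for a perceptually uniform single-hue alternative.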
63

Estimating Floodplain Vegetative Roughness using Drone-Based Laser Scanning and Structure from Motion Photogrammetry

Aquilina, Charles A. 20 August 2020 (has links)
We compared high-resolution drone laser scanning (DLS) and structure from motion (SfM) photogrammetry-derived vegetation heights at the Virginia Tech StREAM Lab to determine Manning's roughness coefficient. We utilized two calibrated approaches and a calculated approach to estimate roughness from the two data sets (DLS and SfM), then utilized them in a two-dimensional (2D) hydrodynamic model (HEC-RAS). The calculated approach used plant characteristics to determine vegetative roughness, while the calibrated approaches involved adjusting roughness values until model outputs approached values of field data (e.g., velocity probe and visual observations). We compared the model simulations to seven actual high-flow events during the fall of 2018 and 2019 using measured field data (velocity sensors, groundwater well height, marked flood extents). Using a t-test, we found that water surface elevations from all models were not significantly different from those observed at our 18 wells in the floodplain (p > 0.05). There was a decrease in RMSE (-0.02 m) using the calculated compared to the calibrated models, and a further decrease in RMSE (-0.01 m) for DLS compared to SfM. This improvement might not justify the increased cost of a DLS setup over SfM (~$150,000 versus ~$2,000), though future studies are needed. Our results inform hydrodynamic modeling efforts, which are becoming increasingly important for management and planning as we experience increasing high-flow events in the eastern United States due to climate change. / Master of Science / We compared high-resolution drone laser scanning (DLS) and structure from motion (SfM) photogrammetry-derived vegetation heights at the Virginia Tech StREAM Lab to improve flood modeling. DLS uses laser pulses to measure distances and create a three-dimensional (3D) point cloud of the landscape. SfM combines overlapping aerial images to create a 3D point cloud. Each method has limitations, such as cost (DLS) and accuracy (SfM). These remote sensing methods have been increasingly used to provide inputs to flood models due to their lower cost and increased accuracy compared to airplane or satellite-based surveys. Quantifying roughness, or resistance to flow, can be extremely difficult and leads to flood model accuracy problems. We used two forms of a calibrated approach and a calculated approach to estimate roughness from the two data sets (DLS and SfM), which were then used in a two-dimensional (2D) flood model. We compared the model results to measured field data from seven actual high-flow events in fall 2018 and 2019, using statistical tests to compare the various techniques. We found that model results were not significantly different from water-surface elevations measured in the floodplain during floods. We also used root mean square error (RMSE) to measure the differences between modeled and observed data. There was a slight decrease (-0.02 m) in error when comparing model results using the calculated and calibrated techniques. The error also decreased (-0.01 m) for simulations using the DLS versus SfM data sets. The improved accuracy due to the use of DLS might not be justified by the increased cost of a DLS setup compared to SfM (~$150,000 versus ~$2,000), though future studies are needed. Insights from this analysis will help improve flood modeling, particularly as we plan for increasing high-flow events in the eastern United States due to climate change.
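The roughness and error comparisons above can be made concrete with a small sketch: Manning's equation links the roughness coefficient n to flow velocity, and RMSE summarises the mismatch between modelled and observed water-surface elevations. The numbers below are illustrative placeholders, not values from the study.

import numpy as np

def manning_velocity(n, hydraulic_radius, slope):
    """Manning's equation (SI units): V = (1/n) * R^(2/3) * S^(1/2)."""
    return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * np.sqrt(slope)

def rmse(modeled, observed):
    """Root mean square error between modelled and observed values (m)."""
    modeled, observed = np.asarray(modeled), np.asarray(observed)
    return np.sqrt(np.mean((modeled - observed) ** 2))

# A rougher, more densely vegetated floodplain (larger n) conveys flow more slowly:
print(manning_velocity(n=0.08, hydraulic_radius=0.6, slope=0.002))   # ~0.4 m/s
print(manning_velocity(n=0.04, hydraulic_radius=0.6, slope=0.002))   # ~0.8 m/s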
64

Modélisation 3D automatique d'environnements : une approche éparse à partir d'images prises par une caméra catadioptrique / Automatic 3d modeling of environments : a sparse approach from images taken by a catadioptric camera

Yu, Shuda 03 June 2013 (has links)
La modélisation 3d automatique d'un environnement à partir d'images est un sujet toujours d'actualité en vision par ordinateur. Ce problème se résout en général en trois temps : déplacer une caméra dans la scène pour prendre la séquence d'images, reconstruire la géométrie, et utiliser une méthode de stéréo dense pour obtenir une surface de la scène. La seconde étape met en correspondances des points d'intérêts dans les images puis estime simultanément les poses de la caméra et un nuage épars de points 3d de la scène correspondant aux points d'intérêts. La troisième étape utilise l'information sur l'ensemble des pixels pour reconstruire une surface de la scène, par exemple en estimant un nuage de points dense. Ici nous proposons de traiter le problème en calculant directement une surface à partir du nuage épars de points et de son information de visibilité fournis par l'estimation de la géométrie. Les avantages sont des faibles complexités en temps et en espace, ce qui est utile par exemple pour obtenir des modèles compacts de grands environnements comme une ville. Pour cela, nous présentons une méthode de reconstruction de surface du type sculpture dans une triangulation de Delaunay 3d des points reconstruits. L'information de visibilité est utilisée pour classer les tétraèdres en espace vide ou matière. Puis une surface est extraite de sorte à séparer au mieux ces tétraèdres à l'aide d'une méthode gloutonne et d'une minorité de points de Steiner. On impose sur la surface la contrainte de 2-variété pour permettre des traitements ultérieurs classiques tels que lissage, raffinement par optimisation de photo-consistance... Cette méthode a ensuite été étendue au cas incrémental : à chaque nouvelle image clef sélectionnée dans une vidéo, de nouveaux points 3d et une nouvelle pose sont estimés, puis la surface est mise à jour. La complexité en temps est étudiée dans les deux cas (incrémental ou non). Dans les expériences, nous utilisons une caméra catadioptrique bas coût et obtenons des modèles 3d texturés pour des environnements complets incluant bâtiments, sol, végétation... Un inconvénient de nos méthodes est que la reconstruction des éléments fins de la scène n'est pas correcte, par exemple les branches des arbres et les pylônes électriques. / The automatic 3d modeling of an environment using images is still an active topic in Computer Vision. Standard methods have three steps: moving a camera in the environment to take an image sequence, reconstructing the geometry of the environment, and applying a dense stereo method to obtain a surface model of the environment. In the second step, interest points are detected and matched in images, then camera poses and a sparse cloud of 3d points corresponding to the interest points are simultaneously estimated. In the third step, all pixels of images are used to reconstruct a surface of the environment, e.g. by estimating a dense cloud of 3d points. Here we propose to generate a surface directly from the sparse point cloud and its visibility information provided by the geometry reconstruction step. The advantages are low time and space complexities; this is useful e.g. for obtaining compact models of large and complete environments like a city. To do so, a surface reconstruction method that sculpts a 3d Delaunay triangulation of the reconstructed points is proposed. The visibility information is used to classify the tetrahedra as free space or matter. Then a surface is extracted thanks to a greedy method and a minority of Steiner points. The 2-manifold constraint is enforced on the surface to allow standard surface post-processing such as denoising and refinement by photo-consistency optimization. This method is also extended to the incremental case: each time a new key-frame is selected in the input video, new 3d points and the camera pose are estimated, then the reconstructed surface is updated. We study the time complexity in both cases (incremental or not). In experiments, a low-cost catadioptric camera is used to generate textured 3d models for complete environments including buildings, ground, and vegetation. A drawback of our methods is that thin scene components cannot be correctly reconstructed, e.g. tree branches and electric posts.
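The sculpting idea summarised above (label each tetrahedron of the 3d Delaunay triangulation as free space if a camera-to-point sight line passes through it, and as matter otherwise) can be sketched as follows. This is a simplified, hedged illustration: the dense sampling of each ray stands in for an exact ray/tetrahedron walk, and the function names are assumptions, not the thesis implementation.

import numpy as np
from scipy.spatial import Delaunay

def label_free_space(points, cameras, visibility, samples=50):
    """Mark tetrahedra crossed by a camera-to-point sight line as free space.

    points     : (N, 3) reconstructed 3D points
    cameras    : (M, 3) camera centres
    visibility : iterable of (camera_index, point_index) pairs
    """
    tri = Delaunay(points)                          # 3D Delaunay triangulation
    free = np.zeros(len(tri.simplices), dtype=bool)
    for cam_idx, pt_idx in visibility:
        # sample the sight line densely; every tetrahedron it crosses is empty space
        seg = np.linspace(cameras[cam_idx], points[pt_idx], samples)
        hit = tri.find_simplex(seg)
        free[hit[hit >= 0]] = True                  # -1 means outside the convex hull
    return tri, free   # the extracted surface separates free-space from matter tetrahedra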
65

Entwicklung eines Verfahrens für die Koregistrierung von Bildverbänden und Punktwolken mit digitalen Bauwerksmodellen / Development of a method for the co-registration of image blocks and point clouds with digital building models

Kaiser, Tim 08 November 2021 (has links)
Aufgrund der weiter fortschreitenden Digitalisierung verändern sich die seit langer Zeit etablierten Prozesse im Bauwesen. Dies zeigt sich zum Beispiel in der stetig steigenden Bedeutung des Building Information Modelings (BIM). Eine der wesentlichen Grundideen von BIM besteht darin, das zentrale Modell über den gesamten Lebenszyklus des Bauwerks zu verwenden. Das digitale Bauwerksmodell stellt somit eine der zentralen Komponenten der BIM-Methode dar. Neben den rein geometrischen Ausprägungen des Bauwerks werden im Modell auch eine Vielzahl an semantischen Informationen vorgehalten. Da insbesondere bei größeren Bauwerken ein fortlaufender Veränderungsprozess stattfindet, muss das Modell entsprechend aktualisiert werden, um dem tatsächlichen Istzustand zu entsprechen. Diese Aktualisierung betrifft nicht nur Veränderungen in der Geometrie, sondern auch in den verknüpften Sachdaten. Bezüglich der Aktualisierung des Modells kann die Photogrammetrie mit ihren modernen Messverfahren wie zum Beispiel Structure-from-Motion (SfM) und daraus abgeleiteten Punktwolken einen wesentlichen Beitrag zur Datenerfassung des aktuellen Zustands leisten. Für die erfolgreiche Verknüpfung des photogrammetrisch erfassten Istzustands mit dem durch das Modell definierten Sollzustand müssen beide Datentöpfe in einem gemeinsamen Koordinatensystem vorliegen. In der Regel werden zur Registrierung photogrammetrischer Produkte im Bauwerkskoordinatensystem definierte Passpunkte verwendet. Der Registrierprozess über Passpunkte ist jedoch mit einem erheblichen manuellen Aufwand verbunden. Um den Aufwand der Registrierung möglichst gering zu halten, wurde daher in dieser Arbeit ein Konzept entwickelt, das es ermöglicht, kleinräumige Bildverbände und Punktwolken automatisiert mit einem digitalen Bauwerksmodell zu koregistrieren. Das Verfahren nutzt dabei geometrische Beziehungen zwischen aus den Bildern extrahierten 3D-Liniensegmenten und Begrenzungsflächen, die aus dem digitalen Bauwerksmodell gewonnen werden. Die aufgenommenen Bilder des Objektes dienen zu Beginn als Grundlage für die Extraktion von zweidimensionalen Linienstrukturen. Auf Basis eines über SfM durchgeführten Orientierungsprozesses können diese zweidimensionalen Kanten zu einer Rekonstruktion in Form von 3D-Liniensegmenten weiterverarbeitet werden. Die weiterhin benötigten Begrenzungsflächen werden aus einem mit Hilfe der Industry Foundation Classes (IFC) definierten BIM-Modell gewonnen. Das entwickelte Verfahren nutzt dabei auch die von IFC bereitgestellten Möglichkeiten der räumlichen Aggregationshierarchien. Im Zentrum des neuen Koregistrieransatzes stehen zwei große Komponenten. Dies ist einerseits der mittels eines Gauß-Helmert-Modells umgesetze Ausgleichungsvorgang zur Transformationsparameterbestimmung und andererseits der im Vorfeld der Ausgleichung angewandten Matching-Algorithmus zur automatischen Erstellung von Korrespondenzen zwischen den 3D-Liniensegmenten und den Begrenzungsflächen. Die so gebildeten Linien-Ebenen-Paare dienen dann als Beobachtung im Ausgleichungsprozess. Da während der Parameterschätzung eine durchgängige Betrachtung der stochastischen Informationen der Beobachtungen erfolgt, ist am Ende des Registrierprozesses eine Qualitätsaussage zu den berechneten Transformationsparametern möglich. Die Validierung des entwickelten Verfahrens erfolgt an zwei Datensätzen. Der Datensatz M24 diente dabei zum Nachweis der Funktionsfähigkeit unter Laborbedingungen. 
Über den Datensatz Eibenstock konnte zudem nachgewiesen werden, dass das Verfahren auch in praxisnahen Umgebungen auf einer realen Baustelle zum Einsatz kommen kann. Für beide Fälle konnte eine gute Registriergenauigkeit im Bereich weniger Zentimeter nachgewiesen werden. / Due to the ongoing digitalization, traditional and well-established processes in the construction industry face lasting transformations. The rising significance of Building Information Modeling (BIM) can be seen as an example of this development. One of the core principles of BIM is the usage of the model throughout the entire life cycle of the building. Therefore, the digital twin can be regarded as one of the central components of the BIM method. Besides the pure geometry of the building, the corresponding model also contains a large amount of semantic data. Especially in large building complexes, constant changes take place. Consequently, the model also has to be updated regularly in order to reflect the actual state. These updates include changes both in geometry and in the linked technical data. Photogrammetry, with its modern measuring and reconstruction techniques such as structure from motion, can help facilitate this update process. In order to establish a link between the photogrammetrically recorded present state and the nominal state specified by the building model, both datasets have to be available in a common reference frame.
Usually, ground control points are used for registering the photogrammetric results with the building coordinate system. However, using ground control points results in a very labor-intensive registration process. In order to keep the required effort as low as possible, this work proposes a novel concept to automatically co-register local image blocks with a digital building model. The procedure makes use of geometric relationships between 3D line segments extracted from the input images and bounding surfaces derived from the building model. At first, the captured images are used to extract two-dimensional line patterns. These edges are further processed into 3D line segments based on an orientation estimation using structure from motion. The additionally required bounding surfaces are derived from a building model defined by the Industry Foundation Classes (IFC). The spatial aggregation structures defined in IFC are used to support the procedure. Two major components form the core of the novel approach: on the one hand, the adjustment calculation for estimating the transformation parameters using a full Gauß-Helmert model, and on the other hand, the matching algorithm developed for establishing line-plane correspondences. The correspondences formed in this way serve as observations in the adjustment process. Since the stochastic information of the observations is fully considered during parameter estimation, quality statements about the computed transformation parameters can be made upon completion of the registration process. The validation of the developed method was conducted using two datasets. The M24 dataset served as the primary validation source, since the results of the algorithm could be checked under laboratory conditions and compared with results obtained using ground control points. The Eibenstock dataset demonstrated that the procedure also works under practical conditions on a real construction site. In both cases the registration accuracy averaged a few centimeters.
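The core of the registration (estimating a similarity transformation so that matched 3D line segments come to lie in their corresponding model boundary planes) can be illustrated with a simplified least-squares sketch. This is not the Gauß-Helmert adjustment with full observation stochastics developed in the thesis; the parameterisation, the names, and the use of scipy are assumptions for illustration only.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, segments, planes):
    """Distances of transformed segment endpoints to their matched model planes.

    params   : [scale, rx, ry, rz, tx, ty, tz] of a 3D similarity transform
    segments : (K, 2, 3) endpoints of the matched 3D line segments
    planes   : (K, 4) plane coefficients (nx, ny, nz, d) with unit normals
    """
    s, rvec, t = params[0], params[1:4], params[4:7]
    R = Rotation.from_rotvec(rvec).as_matrix()
    pts = s * (segments @ R.T) + t                  # transform both endpoints
    n, d = planes[:, :3], planes[:, 3]
    # both endpoints of a segment should satisfy n . x + d = 0 for its plane
    return (np.einsum("kij,kj->ki", pts, n) + d[:, None]).ravel()

# Approximate parameters x0 come from the matching stage; least_squares refines them:
# solution = least_squares(residuals, x0, args=(segments, planes))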
66

Modeling flood-induced processes causing Russell lupin mortality in the braided Ahuriri River, New Zealand

Javernick, Luke Anthony January 2013 (has links)
The braided rivers and floodplains in the Upper Waitaki Basin (UWB) of the South Island of New Zealand are critical habitats for endangered and threatened fauna such as the black stilt. However, this habitat has degraded due to introduced predators, hydropower operations, and invasive weeds including Russell lupins. While conservation efforts have been made to restore these habitats, flood events may provide a natural mechanism for removal of invasive vegetation and re-creation of natural floodplain habitats. However, little is understood about the hydraulic effects of floods on vegetation and potential mortality in these dynamic systems. Therefore, this thesis analyzed the flood-induced processes that cause lupin mortality in a reach of the Ahuriri River in the UWB, and simulated flood events of various sizes to assess how and where these processes occurred. To determine the processes that cause lupin mortality, post-flood observations were used to develop the hypothesis that flood-induced drag, erosion, sediment deposition, inundation, and trauma were responsible. Field and laboratory experiments were conducted to evaluate and quantify these individual processes, and results showed that drag, erosion, sediment deposition and inundation could cause lupin mortality. Using these mortality processes, mortality thresholds of velocity, water depth, inundation duration, and morphologic changes were estimated through data analysis and evaluation of various empirical relationships. Delft3D was the numerical model used to simulate two-dimensional (2D) flood hydraulics in the study reach and was calibrated in three stages for hydraulics, vegetation, and morphology. Hydraulic calibration was achieved using the study reach topography captured by Structure-from-Motion (SfM) and various hydraulic data (depth, velocity, and water extent from aerial photographs). Vegetation inclusion in Delft3D was possible using a function called ‘trachytopes’, which represented vegetation roughness and flow resistance and was calibrated using data from a lupin-altered flow conveyance experiment. Morphologic calibration was achieved by simulating an observed near-mean annual flood event (209 m³ s⁻¹) and adjusting the model parameters until the simulated morphologic changes best represented the observed morphologic changes captured by pre- and post-flood SfM digital elevation models. Calibration results showed that hydraulics were well represented and that vegetation inclusion often improved the accuracy of the simulated inundation extent at high flows, but that local erosion and sediment deposition were difficult to replicate. Simulation of morphological change was expected to be limited due to simplistic bank erosion prediction methods. Nevertheless, the model was considered adequate since simulated total bank erosion was comparable to that observed and realistic river characteristics (riffles, pools, and channel width) were produced. Flood events ranging from the 2- to 500-year flood were simulated with the calibrated model, and lupin mortality was estimated by applying the mortality thresholds to the simulation results. Results showed that varying degrees of lupin mortality occurred for the different flood events, and that the dominant mortality process fluctuated between erosion, drag, and inundation. Sediment deposition-induced mortality was minimal, but was likely under-represented in the modeling due to poor replication of sediment deposition in the model and possibly over-restrictive deposition mortality thresholds.
The research presented in this thesis provided greater understanding of how natural flood events restore and preserve the floodplain habitats of the UWB and can be used to aid current and future braided river conservation and restoration efforts.
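The final step described above (combining simulated hydraulics with mortality thresholds) amounts to a simple cell-by-cell classification of the model output. The sketch below illustrates the idea; the threshold values are illustrative placeholders, not the calibrated thresholds derived in the thesis.

import numpy as np

def lupin_mortality(depth, velocity, duration_h, bed_change,
                    depth_thr=1.0, vel_thr=1.5, dur_thr=48.0,
                    scour_thr=0.2, burial_thr=0.3):
    """Flag model cells where simulated hydraulics exceed lupin mortality thresholds.

    All inputs are 2D arrays from the hydrodynamic model; threshold values are
    placeholders (m, m/s, hours, and m of erosion/deposition respectively).
    """
    inundation = (depth > depth_thr) & (duration_h > dur_thr)
    drag = velocity > vel_thr
    erosion = bed_change < -scour_thr            # scour removes root support
    burial = bed_change > burial_thr             # deposition buries foliage
    mortality = inundation | drag | erosion | burial
    return mortality, {"inundation": inundation, "drag": drag,
                       "erosion": erosion, "burial": burial}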
67

Structure from Motion Using Optical Flow Probability Distributions

Merrell, Paul Clark 18 March 2005 (has links)
Several novel structure from motion algorithms are presented that are designed to more effectively manage the problem of noise. In many practical applications, structure from motion algorithms fail to work properly because of the noise in the optical flow values. Most structure from motion algorithms implicitly assume that the noise is identically distributed and that the noise is white. Both assumptions are false. Some points can be tracked more easily than others, and some points can be tracked more easily in a particular direction. The accuracy of each optical flow value can be quantified using an optical flow probability distribution. By using optical flow probability distributions in place of optical flow estimates in a structure from motion algorithm, a better understanding of the noise is developed and a more accurate solution is obtained. Two different methods of calculating the optical flow probability distributions are presented. The first calculates non-Gaussian probability distributions and the second calculates Gaussian probability distributions. Three different methods for calculating structure from motion are presented that use these probability distributions. The first method works on two frames and can handle any kind of noise. The second method works on two frames and is restricted to Gaussian noise. The final method works on multiple frames and assumes Gaussian noise. A simulation was created to directly compare the performance of methods that use optical flow probability distributions and methods that do not. The simulation results show that the methods which use the probability distributions better estimate the camera motion and the structure of the scene.
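One common way to obtain a per-point Gaussian flow distribution of the kind described above is to invert the Lucas-Kanade structure tensor: a corner constrains the flow well in both directions, while an edge constrains it only across the edge. The sketch below illustrates that idea under an assumed image-noise level; it is a hedged stand-in, not the specific estimator developed in the thesis.

import numpy as np

def flow_covariance(Ix, Iy, sigma_noise=1.0):
    """Approximate 2x2 covariance of a flow estimate within one tracking window.

    Ix, Iy : image gradients inside the window (flattened arrays)
    The covariance is sigma^2 * pinv(G), where G is the structure tensor.
    A nearly singular G (an edge or flat patch) gives a large, anisotropic
    covariance, i.e. the flow is well constrained in one direction only.
    """
    G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    return sigma_noise ** 2 * np.linalg.pinv(G)

# A downstream structure-from-motion solver can then weight each flow vector by the
# inverse of this covariance instead of treating all flow measurements equally.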
68

3D Human Face Reconstruction and 2D Appearance Synthesis

Zhao, Yajie 01 January 2018 (has links)
3D human face reconstruction has been an area of extensive research for decades due to its wide applications, such as animation, recognition, and 3D-driven appearance synthesis. Although commodity depth sensors have become widely available in recent years, image-based face reconstruction remains significantly valuable because images are much easier to access and store. In this dissertation, we first propose three image-based face reconstruction approaches according to different assumptions about the inputs. In the first approach, face geometry is extracted from multiple key frames of a video sequence with different head poses; the camera is assumed to be calibrated. As the first approach is limited to videos, our second approach focuses on a single image. This approach also improves the geometry by adding fine-grained detail using shading cues, and we propose a novel albedo estimation and linear optimization algorithm for it. In the third approach, we further relax the constraint on the input to arbitrary in-the-wild images. The proposed approach can robustly reconstruct high-quality models even with extreme expressions and large poses. We then explore the applicability of our face reconstructions in four applications: video face beautification, generating personalized facial blendshapes from image sequences, face video stylization, and video face replacement. We demonstrate the great potential of our reconstruction approaches in these real-world applications. In particular, with the recent surge of interest in VR/AR, it is increasingly common to see people wearing head-mounted displays (HMDs). However, the large occlusion of the face is a major obstacle to communicating in a face-to-face manner. In another application, we therefore explore hardware/software solutions for synthesizing the face image in the presence of HMDs. We design two setups (experimental and mobile) that integrate two near-IR cameras and one color camera to solve this problem. With our algorithm and prototype, we can achieve photo-realistic results. We further propose a deep neural network to solve the HMD removal problem, treating it as a face inpainting problem. This approach needs no special hardware and runs in real time with satisfying results.
69

Opti-acoustic Stereo Imaging

Sac, Hakan 01 September 2012 (has links) (PDF)
In this thesis, opti-acoustic stereo imaging, which is the deployment of a two-dimensional (2D) high-frequency imaging sonar with an electro-optical camera in a calibrated stereo configuration, is studied. Optical cameras give detailed images in clear waters. However, in dark or turbid waters, information coming from the electro-optical sensor is insufficient for accurate scene perception. Imaging sonars, also known as acoustic cameras, can provide enhanced target details under these scenarios. To illustrate these visibility conditions, a 2D high-frequency imaging sonar simulator as well as an underwater optical image simulator is developed. A computationally efficient algorithm is also proposed for the post-processing of the returned sonar signals. Where optical visibility allows, integration of the sonar and optical images effectively provides binocular stereo vision capability and enables the recovery of three-dimensional (3D) structural information. This requires solving the feature correspondence problem for these completely different sensing modalities. A geometrical interpretation of this problem is examined on the simulated optical and sonar images. With manually matched features, the 3D reconstruction performance of the opti-acoustic system is also investigated. In addition, motion estimation from opti-acoustic image sequences is studied. Finally, a method is proposed to improve degraded optical images with the help of sonar images. First, a nonlinear mapping is found to match the local features in opti-acoustic images. Next, features in the sonar image are mapped to the optical image using this transformation. The performance of the mapping is evaluated for different scene geometries.
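The complementary geometry that makes opti-acoustic stereo attractive (the optical camera fixes a feature's bearing while the sonar fixes its range) can be illustrated with a small triangulation sketch. The coordinate conventions below (camera centre at the origin, sonar position expressed in the camera frame) are assumptions made for illustration, not the formulation used in the thesis.

import numpy as np

def triangulate_opti_acoustic(ray_dir, sonar_center, sonar_range):
    """Intersect an optical viewing ray with the sphere defined by a sonar range.

    ray_dir      : direction of the optical ray (camera centre at the origin)
    sonar_center : sonar position expressed in the optical camera frame
    sonar_range  : range measured by the sonar to the same feature (m)
    Returns candidate 3D points X = lam * ray_dir with |X - sonar_center| = range.
    """
    d = np.asarray(ray_dir, dtype=float)
    d /= np.linalg.norm(d)
    c = np.asarray(sonar_center, dtype=float)
    # |lam*d - c|^2 = r^2  ->  lam^2 - 2*lam*(d.c) + |c|^2 - r^2 = 0
    b = d @ c
    disc = b ** 2 - (c @ c - sonar_range ** 2)
    if disc < 0:
        return []                                   # measurements are inconsistent
    lams = (b - np.sqrt(disc), b + np.sqrt(disc))
    return [lam * d for lam in lams if lam > 0]     # keep points in front of the camera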
70

Rigid Partitioning Techniques for Efficiently Generating 3D Reconstructions from Images

Steedly, Drew 01 December 2004 (has links)
This thesis explores efficient techniques for generating 3D reconstructions from imagery. Non-linear optimization is one of the core techniques used when computing a reconstruction and is a computational bottleneck for large sets of images. Since non-linear optimization requires a good initialization to avoid getting stuck in local minima, robust systems for generating reconstructions from images build up the reconstruction incrementally. A hierarchical approach is to split up the images into small subsets, reconstruct each subset independently and then hierarchically merge the subsets. Rigidly locking together portions of the reconstructions reduces the number of parameters needed to represent them when merging, thereby lowering the computational cost of the optimization. We present two techniques that involve optimizing with parts of the reconstruction rigidly locked together. In the first, we start by rigidly grouping the cameras and scene features from each of the reconstructions being merged into separate groups. Cameras and scene features are then incrementally unlocked and optimized until the reconstruction is close to the minimum energy. This technique is most effective when the influence of the new measurements is restricted to a small set of parameters. Measurements that stitch together weakly coupled portions of the reconstruction, though, tend to cause deformations in the low error modes of the reconstruction and cannot be efficiently incorporated with the previous technique. To address this, we present a spectral technique for clustering the tightly coupled portions of a reconstruction into rigid groups. Reconstructions partitioned in this manner can closely mimic the poorly conditioned, low error modes, and therefore efficiently incorporate measurements that stitch together weakly coupled portions of the reconstruction. We explain how this technique can be used to scalably and efficiently generate reconstructions from large sets of images.
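The spectral grouping step described above can be approximated with an off-the-shelf spectral clustering of the camera coupling graph, in which cameras that observe many common points are treated as tightly coupled. This is a simplified, hedged stand-in for the thesis' partitioning criterion; the affinity definition and the use of scikit-learn are assumptions.

import numpy as np
from sklearn.cluster import SpectralClustering

def rigid_camera_groups(visibility, n_groups=4):
    """Partition cameras into rigid groups from a camera-point visibility matrix.

    visibility : (n_cameras, n_points) binary matrix, 1 where camera i observes point j
    Cameras that share many observed points are tightly coupled and are grouped
    together, so each group can later be treated as a single rigid body when merging.
    """
    affinity = (visibility @ visibility.T).astype(float)   # shared-point counts
    np.fill_diagonal(affinity, 0.0)
    labels = SpectralClustering(n_clusters=n_groups,
                                affinity="precomputed",
                                assign_labels="discretize").fit_predict(affinity)
    return labels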
