  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
691

A Comparison of Lidar Generated Channel Features with Ground-Surveyed Channel Features in the Little Creek Watershed

Hilburn, Ryan M 01 June 2010 (has links)
Detecting change in stream channel features over time is important for understanding channel morphology and the effects of both natural and anthropogenic influences. Channel features have historically been, and still are, measured using a variety of ground survey techniques. These surveys require substantial time and funding to complete. Light Detection and Ranging (LiDAR) is an airborne laser mapping technology that holds promise as an alternative to ground-based survey methods. For this study, ground surveys were used to verify the accuracy of data collected using airborne LiDAR. Fifty-nine cross-sectional profiles were surveyed in the Little Creek watershed at Cal Poly’s Swanton Pacific Ranch and compared to LiDAR-generated profiles of the same locations. LiDAR data were collected in two flights during April and May of 2002. The vertical accuracy of LiDAR elevations was determined to be 0.610 m RMSE, based on a point-to-point comparison of the elevation of each ground survey point in each cross-sectional profile to the corresponding LiDAR elevation. The average ground spacing of the LiDAR survey within the study area was one point every 5.2 square meters. Comparison with the ground surveys showed that, at this level of vertical precision and horizontal resolution, it would be difficult to detect change in the bankfull channel characteristics of a relatively small channel such as Little Creek. These difficulties are largely attributed to poor point coverage in forested, steep, and mountainous terrain, along with technological limitations of LiDAR that have since improved.
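The point-to-point accuracy assessment described above can be sketched as a simple RMSE computation over paired elevations. This is a hypothetical illustration with made-up numbers; the actual pairing of survey points to LiDAR returns and the datum handling in the thesis are more involved:

```python
import math

def vertical_rmse(ground_elev, lidar_elev):
    """Point-to-point RMSE between ground-survey and LiDAR elevations (m)."""
    if len(ground_elev) != len(lidar_elev):
        raise ValueError("profiles must be paired point-for-point")
    sq_errors = [(g - l) ** 2 for g, l in zip(ground_elev, lidar_elev)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

# Hypothetical paired elevations (m) along one cross-sectional profile
ground = [100.00, 99.50, 99.10, 99.45, 100.05]
lidar  = [100.40, 99.00, 99.70, 98.95, 100.55]
print(round(vertical_rmse(ground, lidar), 3))  # → 0.504
```

A single RMSE over all profiles, as in the study, simply extends the paired lists across every cross-section.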
692

Remote Sensing of Forests: Analyzing Biomass Stocks, Changes and Variability with Empirical Data and Simulations

Knapp, Nikolai 02 October 2019 (has links)
Forests are an important component in the earth system. They cover nearly one third of the land surface, store about as much carbon as the entire atmosphere and host more than half of the planet’s biodiversity. Forests provide ecosystem services such as climate regulation and water cycling and they supply resources. However, forests are increasingly at risk worldwide, due to anthropogenic deforestation, degradation and climate change. Concepts for counteracting this development require abilities to monitor forests and predict possible future developments. Given the vast size of forest cover along with the variety of forest types, field measurements and experiments alone cannot provide the solution for this task. Remote sensing and forest modeling enable a broader and deeper understanding of the processes that shape our planet’s forests. Remote sensing from airborne and spaceborne platforms can provide detailed measurements of forest attributes ranging from landscape to global scale. The challenge is to interpret the measurements in an appropriate way and derive biophysical properties. This requires a good understanding of the interaction between radiation and the vegetation. Forest models are tools that synthesize our knowledge about processes, such as tree growth, competition, disturbances and mortality. They allow simulation experiments which go beyond the spatial and temporal scales of field experiments. In this thesis, several major challenges in forest ecology and remote sensing were addressed. The main variable of interest was forest biomass, as it is the most important variable for forest carbon mapping and for understanding the role of vegetation in the global carbon cycle. For the purpose of biomass estimation, remote sensing derived canopy height and structure measurements were combined with field data, forest simulations and remote sensing simulations. 
The goals were: 1) to integrate remote sensing measurements into a forest model; 2) to understand the effects of spatial scale and disturbances on biomass estimation using a variety of remote sensing metrics; 3) to develop approaches for quantifying biomass changes over time with remote sensing; and 4) to overcome differences among forest types by considering several structural aspects in the biomass estimation function. In the first study, a light detection and ranging (lidar) simulator was developed and integrated into the forest model FORMIND. The model was parameterized for the tropical rainforest on Barro Colorado Island (BCI, Panama). The output of the lidar simulator was validated against real airborne lidar data from BCI. Undisturbed and disturbed forests were simulated with FORMIND to identify the best-suited lidar metric for biomass estimation. The objective was to achieve a low normalized root mean squared error (nRMSE) over the entire range of forest structures caused by disturbances and succession. The results identified mean top-of-canopy height (TCH) as the best lidar-derived predictor. The accuracy depended strongly on spatial scale: relative errors < 10% could be achieved if the spatial resolution of the produced biomass map was ≥ 100 m and the spatial resolution of the remote sensing input was ≤ 10 m. These results could provide guidance for biomass mapping efforts. In the second study, forest simulations were used to explore approaches for estimating changes in forest biomass over time based on observed changes in canopy height. In an ideal situation, remote sensing provides measurements of canopy height above ground, which allows the estimation of biomass stocks and changes. However, this requires sensors that are able to detect both the canopy surface and the terrain elevation, and some sensors can only detect the surface (e.g., X-band radar). In such cases, biomass change has to be estimated from height change using a direct relationship.
Unfortunately, such a relationship is not constant for forests in different successional stages, which can lead to considerable biases in the estimates of biomass change. A solution to this problem was found in which the missing canopy-height information was compensated for by integrating metrics of canopy texture. Applying this improved approach enables estimation of biomass losses and gains after disturbances at 1-ha resolution. In mature forests with very small changes in height and biomass, all tested approaches have limited capabilities, as was revealed by an application using TanDEM-X-derived canopy height from BCI. In the third study, a general biomass estimation function, which links remote sensing-derived structure metrics to forest biomass, was developed. General in this context means that it can be applied to different forest types and different biomes. For this purpose, a set of predictor metrics was explored, with each predictor representing one of the following structural aspects: mean canopy height, maximal possible canopy height, maximal possible stand density, vertical canopy structure, and wood density. The derived general equation produced biomass estimates across the five considered sites (nRMSE = 12.4%, R² = 0.74) that were almost as accurate as those from site-specific equations (nRMSE = 11.7%, R² = 0.77). The contributions of the predictors provide a better understanding of the variability in the height-to-biomass relationship observed across forest types. The thesis has laid foundations for a close link between remote sensing, forest modeling, and forest inventories. Several ongoing projects carry this further by 1) disentangling and quantifying the uncertainty in biomass remote sensing, 2) trying to predict forest productivity based on structure, and 3) detecting single trees from lidar to be used as forest model input.
These methods can in the future lead to an integrated forest monitoring and information system, which assimilates remote sensing measurements and produces predictions about forest development. Such tools are urgently needed to reduce the risks forests are facing worldwide.
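TCH-to-biomass relationships of the kind this abstract describes are commonly formulated as a power law fitted to field plots. The sketch below uses illustrative placeholder coefficients, not the values fitted in the study, together with an nRMSE score of the sort used to compare lidar metrics:

```python
import math

def biomass_from_tch(tch_m, a=0.35, b=1.7):
    """Aboveground biomass (t/ha) from mean top-of-canopy height (m),
    power-law form AGB = a * TCH**b. Coefficients a and b here are
    illustrative placeholders, not the values fitted in the thesis."""
    return a * tch_m ** b

def nrmse(observed, predicted):
    """Normalized RMSE: RMSE divided by the mean of the observations
    (one common normalization convention; others use the range)."""
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return rmse / (sum(observed) / n)

print(round(biomass_from_tch(25.0), 1))
```

In practice `a` and `b` would be fitted per site (or, as in the third study, generalized with additional structural predictors), and the nRMSE would be evaluated on held-out plots.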
693

Geodetic methods of mapping earthquake-induced ground deformation and building damage

Diederichs, Anna K. 25 August 2020 (has links)
I use temporal lidar and radar to reveal fault rupture kinematics and to test a method of mapping earthquake-induced structural damage. Using pre- and post-event data, these applications of remote technology offer unique perspectives of earthquake effects. Lidar point clouds can produce high resolution, three-dimensional terrain maps, so subtle landscape shifts can be discerned through temporal analysis, providing detailed imagery of co-seismic ground displacement and faulting. All-weather radar systems record back-scattered signal amplitude and phase. Pre- and post-event comparisons of phase can illuminate co-seismic structural damage using an oblique look angle, most sensitive to changes in building heights. Extracted information from these geodetic methods may be used to inform decisions on future earthquake modeling and emergency response. In the first major section of this thesis, I calculate co-seismic 3D ground deformation produced by the Papatea fault using differential lidar. I demonstrate that this fault - a key element within the 2016 Mw 7.8 Kaikoura earthquake - has a distinctly non-planar geometry, far exceeded typical co-seismic slip-to-length ratios, and defied Andersonian mechanics by slipping vertically at steep angles. Its surface deformation is poorly reproduced by elastic dislocation models, suggesting the Papatea fault did not release stored strain energy as typically assumed, perhaps explaining its seismic quiescence in back-projections. Instead, it slipped in response to neighboring fault movements, creating a localized space problem, accounting for its anelastic deformation field. Thus, modeling complex, multiple-fault earthquakes as slip on planar faults embedded in an elastic medium may not always be appropriate. For the second major part of this thesis, I compare mean values of interferometric synthetic aperture radar (InSAR) coherence change across four case studies of earthquake-induced building damage. 
These include the 2016 Amatrice earthquake, the 2017 Puebla-Morelos earthquake, the 2017 Sarpol-e-Zahab earthquake, and the 2018 Anchorage earthquake. I examine the influences of environmental and urban characteristics on co-seismic coherence change using Sentinel-1 imagery and compare the outcomes of various damage levels. I do not find consistent values of mean coherence change to distinguish levels of damage across the case studies, indicating coherence change values vary with location, environment, and damage pattern. However, this method of damage mapping shows potential as a useful tool in earthquake emergency response, capable of quickly identifying localized areas of high damage in areas with low snow and vegetation cover. Given the large spatial coverage and relatively quick, low-cost acquisition of SAR imagery, this method could provide damage estimates for unsafe or remote regions or for areas unable to self-report damage. / Graduate
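The coherence-change method compares interferometric coherence between pre- and co-event image pairs. Coherence is conventionally estimated over a window of complex SLC pixels as |Σ s₁s₂*| / √(Σ|s₁|² Σ|s₂|²). A minimal sketch on synthetic data (this is not Sentinel-1 processing, just the estimator itself):

```python
import numpy as np

def coherence(s1, s2):
    """Interferometric coherence magnitude over an estimation window.
    s1, s2: complex SLC pixel arrays from the two acquisitions."""
    num = np.abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return num / den

# Identical signals give coherence 1; random phase noise (a proxy for
# surface change, e.g. building damage) lowers it.
rng = np.random.default_rng(0)
s = rng.standard_normal(64) + 1j * rng.standard_normal(64)
noisy = s * np.exp(1j * rng.uniform(-np.pi, np.pi, 64))
print(coherence(s, s))            # ~1.0
print(coherence(s, noisy) < 1.0)  # True
```

Coherence *change* is then the drop from the pre-event pair to the co-event pair, averaged over building footprints or damage-survey polygons.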
694

Weather Influence on LiDAR Signals using the Transient Radiative Transfer and LiDAR Equations

Hedlund, Marcus January 2020 (has links)
The ongoing development of self-driving cars requires accurate measuring devices, and the objective of this thesis was to investigate how different weather conditions affect one of these devices, known as a LiDAR. A LiDAR uses pulsed laser light to measure the distance to an object. The main goal of this thesis was to solve the transient radiative transfer equation (TRTE), which describes the propagation of radiation in a scattering, absorbing, and emitting medium. The TRTE was solved in the frequency domain using the discrete ordinate method (DOM) and a matrix formulation. An alternative model for estimating the amplitude of the return pulse is the LiDAR equation, which describes the attenuation of a laser pulse in a manner similar to the Beer-Lambert law. The difference between the models is that the TRTE accounts for multiple scattering, whereas the LiDAR equation accounts only for single scattering. As a consequence, the LiDAR equation models only the change in amplitude of the return pulse, whereas the TRTE also models the broadening and shift of the pulse. Experiments were performed with a LiDAR in foggy, rainy, and clear weather conditions and compared with the theoretical models. The measurements showed how the amplitude of the pulse decreased in denser fog; however, no trend in pulse shift or pulse width could be seen in the measured data. Additionally, the measurements showed the effect of ambient light and temperature on the LiDAR signal, and also that, even after averaging 300 waveforms, noisy data remained a problem. The results from the transient radiative transfer equation showed that in a medium with large optical depth, the shift and width of the pulse are strongly affected. It was also shown that the amplitude of the pulse calculated with the TRTE approximated the experimental data in fog better than the LiDAR equation did.
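The single-scattering LiDAR equation attenuates the pulse over the two-way path in Beer-Lambert fashion, with return power falling off as 1/R² and exp(-2αR). A simplified sketch follows; the receiver area, backscatter coefficient, and the omission of overlap and system-efficiency factors are all simplifying assumptions, not the thesis's full formulation:

```python
import math

def lidar_return_power(p0, r, alpha, beta=1.0, a_rx=1e-3):
    """Simplified single-scattering LiDAR equation:
        P(R) = P0 * beta * (A_rx / R**2) * exp(-2 * alpha * R)
    alpha: extinction coefficient (1/m), doubled for the two-way path,
    analogous to the Beer-Lambert law. Multiple scattering (the TRTE's
    contribution) and pulse-shape effects are ignored here."""
    return p0 * beta * (a_rx / r ** 2) * math.exp(-2.0 * alpha * r)

# Denser fog (larger alpha) attenuates the return more strongly.
clear = lidar_return_power(1.0, 50.0, alpha=1e-4)
fog   = lidar_return_power(1.0, 50.0, alpha=5e-2)
print(fog < clear)  # True
```

Because this model scales only the amplitude, it cannot reproduce the pulse broadening and shift that the TRTE predicts at large optical depth.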
695

Rayleigh-Lidar Observations of Mesospheric Gravity Wave Activity above Logan, Utah

Kafle, Durga N. 01 May 2009 (has links)
A Rayleigh-scatter lidar operated from Utah State University (41.7°N, 111.8°W) for a period spanning 11 years, 1993 through 2004. Of the 900 nights observed, data on 150 extended to 90 km or above; these were used in studies of atmospheric gravity waves (AGWs) between 45 and 90 km. This is the first study of AGWs with an extensive data set that spans the whole mesosphere. Using the temperature and temperature gradient profiles, we produced a climatology of the Brunt-Väisälä (buoyancy) angular frequency squared, N² (rad/s)². The minimum and maximum values of N² vary between 2.2×10⁻⁴ (rad/s)² and 9.0×10⁻⁴ (rad/s)². The corresponding buoyancy periods vary between 7.0 and 3.5 minutes. While over long averages the atmosphere above Logan, Utah, is convectively stable, all-night and hourly profiles showed periods of convective instability (i.e., negative N²). The N² values were often significantly different from values derived from the NRL-MSISe00 model atmosphere because of the effects of inversion layers and semiannual variability in the lidar data. Relative density fluctuation profiles with 3-km altitude resolution and 1-hour temporal resolution showed the presence of monochromatic gravity waves on almost every night throughout the mesosphere. The prevalent values of vertical wavelength and vertical phase velocity were 12-16 km and 0.5-0.6 m/s, respectively; the latter shows a significant seasonal variation. Using these two observed parameters, the buoyancy periods, and the AGW dispersion relation, we derived ranges of horizontal wavelength, phase velocity, and source distance. The prevalent values were 550-950 km, 32-35 m/s, and 2500-3500 km, respectively. The potential energy per unit mass, Ep, showed great night-to-night variability, up to a factor of 20, at all heights. Ep grew at approximately the adiabatic rate below 55-65 km and above 75-80 km. Step-function decreases in Ep imply that the AGWs in between gave up considerable energy to the background atmosphere. In addition, Ep varies seasonally. Below 70 km, it has a semiannual variation with a maximum in winter and minima at the equinoxes. At the highest altitudes it has an annual variation with a maximum in winter and a minimum in summer.
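The buoyancy-frequency climatology rests on the standard relation N² = (g/T)(dT/dz + g/cₚ), where negative N² marks convective instability. A minimal sketch using dry-air constants:

```python
G = 9.81      # gravitational acceleration, m/s^2
CP = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)

def brunt_vaisala_sq(temp_k, dtdz):
    """Brunt-Vaisala (buoyancy) angular frequency squared, (rad/s)^2:
        N^2 = (g / T) * (dT/dz + g / cp)
    temp_k: temperature (K); dtdz: vertical temperature gradient (K/m).
    Negative N^2 indicates convective instability."""
    return (G / temp_k) * (dtdz + G / CP)

# Isothermal mesospheric layer at 220 K (dT/dz = 0)
n2 = brunt_vaisala_sq(220.0, 0.0)
print(n2)  # ~4.36e-4 (rad/s)^2, within the observed 2.2e-4 to 9.0e-4 range
```

The buoyancy period follows as 2π/N, which for this example falls between the 3.5- and 7.0-minute extremes quoted above.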
696

An Analysis of Airborne Data Collection Methods for Updating Highway Feature Inventory

He, Yi 01 May 2016 (has links)
Highway assets, including traffic signs, traffic signals, light poles, and guardrails, are important components of transportation networks. They guide, warn, and protect drivers, and regulate traffic. To manage and maintain the regular operation of the highway system, state departments of transportation (DOTs) need reliable and up-to-date information about the location and condition of highway assets. Different methodologies have been employed to collect road inventory data. Currently, ground-based technologies are widely used to help DOTs continually update their road databases, while air-based methods are not commonly used. One possible reason is that the initial investment for air-based methods is relatively high; another is the lack of a systematic and effective approach for extracting road features from raw airborne light detection and ranging (LiDAR) data and aerial image data. However, for large-area inventories (e.g., a whole state highway inventory), the total cost of aerial mapping is actually much lower than that of other methods, considering the time and personnel needed. Moreover, unmanned aerial vehicles (UAVs) are easily accessible and inexpensive, which makes it possible to reduce the cost of aerial mapping further. The focus of this project is to analyze the capability and strengths of airborne data collection systems for highway inventory data collection. In this research, a field experiment was conducted by the Remote Sensing Service Laboratory (RSSL) at Utah State University (USU) to collect airborne data. Two methodologies were proposed for data processing: an ArcGIS-based algorithm for airborne LiDAR data, and a MATLAB-based procedure for aerial photography. The results demonstrated the feasibility and high efficiency of the airborne data collection method for updating a highway inventory database.
697

Investigating Simultaneous Localization and Mapping for an Automated Guided Vehicle

Manhed, Joar January 2019 (has links)
The aim of the thesis is to apply simultaneous localization and mapping (SLAM) to automated guided vehicles (AGVs) in a Robot Operating System (ROS) environment. Different sensor setups are used and evaluated. The SLAM applications used are the open-source solution Cartographer and the commercial SLAM built into Intel's T265 tracking camera. The sensor setups are evaluated based on how closely the estimated pose of the AGV matches that reported by another positioning system acting as ground truth.
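Scoring a SLAM pose estimate against a ground-truth positioning system is commonly done with an absolute-trajectory-error style metric. A minimal 2D sketch, assuming the two trajectories are already time-aligned and registered in the same frame (the alignment step itself is omitted here):

```python
import math

def absolute_trajectory_error(estimated, ground_truth):
    """RMSE of Euclidean position error between time-aligned 2D poses.
    estimated, ground_truth: lists of (x, y) tuples, paired by timestamp."""
    sq = [(ex - gx) ** 2 + (ey - gy) ** 2
          for (ex, ey), (gx, gy) in zip(estimated, ground_truth)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical AGV poses (m) from SLAM vs. the reference system
est = [(0.0, 0.0), (1.1, 0.0), (2.0, 0.1)]
ref = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(round(absolute_trajectory_error(est, ref), 4))  # → 0.0816
```

The same computation extends to 3D poses, and heading error can be scored separately with an angular analogue.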
698

3-D Scene Reconstruction for Passive Ranging Using Depth from Defocus and Deep Learning

Emerson, David R. 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Depth estimation is becoming increasingly important in computer vision. The ability of autonomous systems to gauge their surroundings is of the utmost importance for avoiding obstacles and preventing damage to themselves, other systems, or people. Depth measuring/estimation systems that use multiple cameras from multiple views can be expensive and extremely complex. And as these autonomous systems decrease in size and available power, the supporting sensors required to estimate depth must also shrink in size and power consumption. This research concentrates on a single passive method known as Depth from Defocus (DfD), which uses an in-focus and an out-of-focus image to infer the depth of objects in a scene. The major contribution of this research is the introduction of a new Deep Learning (DL) architecture that processes the in-focus and out-of-focus images to produce a depth map for the scene, improving both speed and performance over a range of lighting conditions. Compared to the previous state-of-the-art multi-label graph cuts algorithm applied to the synthetically blurred dataset, the DfD-Net produced a 34.30% improvement in the average Normalized Root Mean Square Error (NRMSE). Similarly, the DfD-Net architecture produced a 76.69% improvement in the average Normalized Mean Absolute Error (NMAE). Only the Structural Similarity Index (SSIM) showed a small average decrease of 2.68% compared to the graph cuts algorithm. This slight reduction in the SSIM value is a result of the SSIM metric penalizing images that appear to be noisy: in some instances the DfD-Net output is mottled, which is interpreted as noise by the SSIM metric. This research introduces two methods of deep learning architecture optimization. The first method employs a variant of the Particle Swarm Optimization (PSO) algorithm to improve the performance of the DfD-Net architecture.
The PSO algorithm was able to find a combination of the number of convolutional filters, the size of the filters, the activation layers used, the use of a batch normalization layer between filters, and the size of the input image used during training that produced a network architecture with an average NRMSE approximately 6.25% better than the baseline DfD-Net average NRMSE. This optimized architecture also achieved an average NMAE that was 5.25% better than the baseline DfD-Net average NMAE. Only the SSIM metric did not see a gain in performance, dropping by 0.26% compared to the baseline DfD-Net average SSIM value. The second method uses a Self-Organizing Map clustering method to reduce the number of convolutional filters in the DfD-Net, cutting the overall run time of the architecture while retaining the network performance exhibited prior to the reduction. This method produces a reduced DfD-Net architecture with a run-time decrease of between 14.91% and 44.85%, depending on the hardware running the network. The final reduced DfD-Net showed an overall decrease in the average NRMSE value of approximately 3.4% compared to the baseline, unaltered DfD-Net mean NRMSE value. The NMAE and SSIM results for the reduced architecture were 0.65% and 0.13% below the baseline results, respectively. This illustrates that reducing the complexity of the network architecture does not necessarily entail a corresponding loss in performance. Finally, this research introduced a new real-world dataset captured using a camera with a voltage-controlled microfluidic lens for the visual data and a 2-D scanning LIDAR for the ground truth data. The visual data consist of images captured at seven different exposure times and 17 discrete voltage steps per exposure time.
The objects in this dataset were divided into four repeating scene patterns in which the same surfaces were used. These scenes were located between 1.5 and 2.5 meters from the camera and LIDAR, so that any of the deep learning algorithms tested would see the same texture at multiple depths and multiple blurs. The DfD-Net architecture was employed in two separate tests using the real-world dataset. The first test applied synthetic blurring to the real-world dataset and assessed the performance of the DfD-Net trained on the Middlebury dataset. For scenes between 1.5 and 2.2 meters from the camera, the DfD-Net trained on the Middlebury dataset produced average NRMSE, NMAE, and SSIM values that exceeded its results on the Middlebury test set. The second test involved training and testing solely on the real-world dataset. Analysis of the camera and lens behavior led to an optimal lens voltage step configuration of 141 and 129. Using this configuration, training the DfD-Net resulted in an average NRMSE, NMAE, and SSIM of 0.0660, 0.0517, and 0.8028, with standard deviations of 0.0173, 0.0186, and 0.0641, respectively.
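The NRMSE and NMAE metrics used throughout this abstract can be sketched as below. The normalization convention (here, the ground-truth range) is an assumption on my part, since publications differ on whether they normalize by range, mean, or maximum:

```python
import math

def nrmse(truth, pred):
    """Normalized RMSE: RMSE divided by the range of the ground truth."""
    n = len(truth)
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(truth, pred)) / n)
    return rmse / (max(truth) - min(truth))

def nmae(truth, pred):
    """Normalized MAE: MAE divided by the range of the ground truth."""
    n = len(truth)
    mae = sum(abs(t - p) for t, p in zip(truth, pred)) / n
    return mae / (max(truth) - min(truth))

depth_true = [1.5, 1.8, 2.0, 2.2, 2.5]   # meters, hypothetical depth map values
depth_pred = [1.6, 1.7, 2.1, 2.2, 2.4]
print(round(nrmse(depth_true, depth_pred), 3))  # → 0.089
print(round(nmae(depth_true, depth_pred), 3))   # → 0.08
```

In practice these would be computed per depth map over all pixels, then averaged across the test set, as in the reported results.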
699

Data Acquisition and Processing Pipeline for E-Scooter Tracking Using 3d Lidar and Multi-Camera Setup

Betrabet, Siddhant S. 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Analyzing the behavior of objects on the road is a complex task that requires data from various sensors, and their fusion, to recreate the movement of objects with a high degree of accuracy. A data collection and processing system is thus needed to track objects accurately and produce a clear map of their trajectories relative to the coordinate frame(s) of interest. Detection and tracking of moving objects (DATMO) and simultaneous localization and mapping (SLAM) are the tasks that need to be achieved in conjunction to create a clear map of the road comprising the moving and static objects. These computational problems are commonly solved together and used to aid scenario reconstruction for the objects of interest. Objects can be tracked in various ways, using sensors such as monocular or stereo cameras, Light Detection and Ranging (LIDAR) sensors, and inertial navigation systems (INS). One relatively common approach to DATMO and SLAM utilizes a 3D LIDAR and multiple monocular cameras in conjunction with an inertial measurement unit (IMU); this provides redundancies that maintain object classification and tracking through sensor fusion in cases where a sensor-specific traditional algorithm proves ineffectual because one sensor falls short due to its limitations. Using an IMU with sensor fusion methods largely eliminates the need for an expensive INS rig. Fusing these sensors allows for more effective tracking, utilizing the maximum potential of each sensor while increasing perceptual accuracy. The focus of this thesis is the dockless e-scooter, and the primary goal is to track its movements effectively and accurately with respect to cars on the road and the world.
Since it is relatively more common to observe a car on the road than an e-scooter, we propose a data collection system that can be built on top of an e-scooter, together with an offline processing pipeline, to collect data for understanding the behavior of the e-scooters themselves. In this thesis, we explore a data collection system consisting of a 3D LIDAR sensor, multiple monocular cameras, and an IMU mounted on an e-scooter, as well as an offline method for processing the collected data to aid scenario reconstruction.
700

Market Analysis of Technical Solutions for Inspecting Avalanche Terrain, and an Analysis Tool for Understanding How and Where Avalanches Have Released

Leijonhufvud, Wilhelm January 2020 (has links)
In Kiruna municipality, between Abisko and Björkliden, several avalanches release every season on the eastern slope of Mount Nuolja. Since 2013, Trafikverket (the Swedish Transport Administration) has operated an avalanche control system consisting of Gazex exploders that trigger controlled avalanches with a large blast. Regular blasting prevents large snow masses from building up, removing the risk of major avalanches. Large avalanches could reach the road and railway and cause extended stops for the iron-ore traffic. Before each blast, one must be certain that neither people nor animals are in the terrain; today this is done by visual assessment. New technology could verify that no one is in the danger zone, allowing blasting to proceed with greater safety. The report examines various technologies (radar, drones, and infrared motion detectors) and how they could be implemented on Nuolja. Drones are used increasingly around the world thanks to their versatility; using a drone to scan avalanche terrain is time-efficient but not optimal in all weather conditions. IR motion detectors are another option, detecting the slightest movement of animals or people with high precision. From the survey of industry and ski resorts around the world (including the Alps and North America), the conclusion is that radar is what is used to find people and animals in avalanche terrain. The report also covers an analysis tool, requested by Trafikverket, for studying and analyzing avalanches that have released. With LiDAR one can obtain the same material and information that avalanche technicians gather through field surveys. LiDAR provides up-to-date information from a safe distance, reducing the exposure of personnel in the danger zone. LiDAR can also be used to study the snowpack in preventive avalanche work.
