291

Kvantifikace přirozené vodní retenční schopnosti krajiny ve vybraných povodích / Quantification of natural water retention capacity in selected watersheds

Pavlík, František January 2014 (has links)
The aim of this work is to quantify the natural water retention capacity of the landscape in selected catchments and to determine the significance of selected climatic and geographic basin factors for the components of retention capacity. For the quantification, 18 rainfall-runoff events were selected in 11 catchments. Land cover, geomorphological, pedological and hydrological conditions were analyzed for these basins using GIS tools. Historical rainfall-runoff events, for which the historical land cover was reconstructed, were also evaluated in this work. Two different methods were used for the quantification of retention capacity, one of them in two variants. Rainfall-runoff models with simulated land cover scenarios (positive and negative) were constructed for two selected catchments in HEC-HMS. The work also uses land cover simulation scenarios previously formulated by other authors. The final set of 33 rainfall-runoff events was subjected to statistical correlation and regression analysis. The goal of these analyses was to determine the significance of the individual parameters in relation to the components of the natural water retention of a catchment.
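The correlation and regression analysis described in this abstract can be sketched as follows. This is a minimal illustration only: the event data, the number of factors, and the generating relationship are invented stand-ins, not the thesis's actual dataset of 33 rainfall-runoff events.

```python
import numpy as np
from scipy import stats

# Hypothetical data: one row per rainfall-runoff event, one column per
# basin factor (e.g. forest share, mean slope, antecedent precipitation).
rng = np.random.default_rng(0)
factors = rng.random((33, 3))                               # 33 events, 3 factors
retention = 2.0 * factors[:, 0] + rng.normal(0, 0.1, 33)    # a retention component

# Pearson correlation of each factor with the retention component.
for i in range(factors.shape[1]):
    r, p = stats.pearsonr(factors[:, i], retention)
    print(f"factor {i}: r={r:.2f}, p={p:.3f}")

# Simple linear regression on the strongest factor.
slope, intercept, r_value, p_value, std_err = stats.linregress(
    factors[:, 0], retention)
print(f"slope={slope:.2f}, R^2={r_value**2:.2f}")
```

With real catchment data, each column of `factors` would hold one climatic or geographic parameter per event, and the analysis would be repeated for each retention component.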
292

Drivers of Land Cover Change via Deforestation in Selected Post-Soviet Russian Cities

Dyne, Matthew Aaron 07 March 2019 (has links)
No description available.
293

Modeling Land-Cover/Land-Use Change: A Case Study of a Dynamic Agricultural Landscape in An Giang and Dong Thap, Vietnam

Haynes, Keelin 31 July 2020 (has links)
No description available.
294

Damage Assessment of the 2022 Tongatapu Tsunami : With Remote Sensing / Skadebedömning av 2022 Tongatapu Tsunamin : Med Fjärranalys

Larsson, Milton January 2022 (has links)
The island of Tongatapu, Tonga, was struck by a tsunami on January 15, 2022. Internet access was cut off from the island, which made remote sensing a valuable tool for the assessment of damages. Through land cover classification, change vector analysis and log-ratio image differencing, damages caused by the tsunami were assessed remotely in this thesis. Damage assessment is vital both for assessing the need for humanitarian aid after a tsunami and for laying the foundation for preventive measures and reconstruction. The objective of this thesis was to assess damage in terms of square kilometers and create damage maps. It was also vital to assess the different methods and evaluate their accuracy. Results from this study could theoretically be combined with other damage assessments to evaluate different aspects of damage. It was also important to evaluate which methods would be suitable in a similar event. In this study Sentinel-1, Sentinel-2 and high-resolution Planet imagery were used to conduct a damage assessment. Evaluating both moderate and high-resolution imagery in combination with SAR yielded plausible, but flawed results. Land cover was computed for moderate and high-resolution imagery using three types of classifiers. It was found that the Random Forest classifier outperforms both CART and Support Vector Machine classification for this study area. Land cover composite image differencing for pre- and post-tsunami Sentinel-2 images achieved an accuracy of around 85%. Damage was estimated to be about 10.5 km^2. Land cover classification with high-resolution images gave higher accuracy; the total estimated damaged area was about 18 km^2. The high-resolution image classification was deemed the better method for urban damage assessment, with moderate-resolution imagery working well for regional damage assessment. Change vector analysis provided plausible results when using Sentinel-2 with NDVI, NDMI, SAVI and BSI.
NDVI was found to be the most comprehensive change indicator when compared to the other tested indices. The total estimated damage using all tested indices was roughly 7.6 km^2. Using the same method for Sentinel-1's VV and VH bands, the total damage was estimated to be 0.4 and 2.6 km^2 respectively. Log-ratio differencing for Sentinel-1 did not work well compared to change vector analysis; issues with false positives occurred. The log-ratios of both VV and VH gave a similar total estimated damage of roughly 5.2 km^2. Problems were caused by cloud cover and ash deposits. The analysis could have been improved by a consistent choice of dates for the satellite images, by balancing classification samples, and by applying high-resolution land cover classification only to specific areas of interest indicated by the regional methods. This would circumvent problems with ash, as reducing the study area would make more high-resolution imagery available.
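The change-vector idea used in this abstract can be sketched on a toy array. The band values, the 2x2 image size, and the 0.3 threshold below are illustrative assumptions, not values from the thesis; a real analysis would stack several indices (NDVI, NDMI, SAVI, BSI) and take the Euclidean norm of the change vector.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-9)

# Hypothetical pre-/post-event reflectance bands (a tiny 2x2 example
# standing in for Sentinel-2 tiles).
pre_red  = np.array([[0.10, 0.12], [0.11, 0.10]])
pre_nir  = np.array([[0.50, 0.52], [0.48, 0.50]])
post_red = np.array([[0.30, 0.12], [0.11, 0.10]])   # top-left pixel damaged
post_nir = np.array([[0.20, 0.52], [0.48, 0.50]])

# With a single index the change-vector magnitude reduces to |delta NDVI|.
delta = ndvi(post_nir, post_red) - ndvi(pre_nir, pre_red)
magnitude = np.abs(delta)

# Threshold the magnitude to flag probable damage.
damaged = magnitude > 0.3
print(damaged)
```

Summing the flagged pixels and multiplying by the pixel area would give the damaged area in km^2, which is how per-method damage estimates like those above are typically derived.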
295

Mapping Wetlands Using GIS and Remote Sensing Techniques, A Case Study of Wetlands in Greater Accra, Ghana

Amoah, Michael Kofi Mborah 19 December 2022 (has links)
No description available.
296

Feature Extraction and Feature Selection for Object-based Land Cover Classification : Optimisation of Support Vector Machines in a Cloud Computing Environment

Stromann, Oliver January 2018 (has links)
Mapping the Earth's surface and its rapid changes with remotely sensed data is a crucial tool to understand the impact of an increasingly urban world population on the environment. However, the impressive amount of freely available Copernicus data is only marginally exploited in common classifications. One of the reasons is that measuring the properties of training samples, the so-called 'features', is costly and tedious. Furthermore, handling large feature sets is not easy in most image classification software. This often leads to the manual choice of few, allegedly promising features. In this Master's thesis degree project, I use the computational power of Google Earth Engine and Google Cloud Platform to generate an oversized feature set in which I explore feature importance and analyse the influence of dimensionality reduction methods. I use Support Vector Machines (SVMs) for object-based classification of satellite images - a commonly used method. A large feature set is evaluated to find the most relevant features to discriminate the classes and thereby contribute most to high classification accuracy. In doing so, one can bypass the sensitive knowledge-based but sometimes arbitrary selection of input features.
Two kinds of dimensionality reduction methods are investigated: the feature extraction methods Linear Discriminant Analysis (LDA) and Independent Component Analysis (ICA), which transform the original feature space into a projected space of lower dimensionality, and the filter-based feature selection methods chi-squared test, mutual information and Fisher criterion, which rank and filter the features according to a chosen statistic. I compare these methods against the default SVM in terms of classification accuracy and computational performance. The classification accuracy is measured in overall accuracy, prediction stability, inter-rater agreement and the sensitivity to training set sizes. The computational performance is measured in the decrease in training and prediction times and the compression factor of the input data. I conclude on the best performing classifier with the most effective feature set based on this analysis.
In a case study of mapping urban land cover in Stockholm, Sweden, based on multitemporal stacks of Sentinel-1 and Sentinel-2 imagery, I demonstrate the integration of Google Earth Engine and Google Cloud Platform for an optimised supervised land cover classification. I use dimensionality reduction methods provided in the open-source scikit-learn library and show how they can improve classification accuracy and reduce the data load. At the same time, this project gives an indication of how the exploitation of big Earth observation data can be approached in a cloud computing environment.
The preliminary results highlighted the effectiveness and necessity of dimensionality reduction methods but also strengthened the need for inter-comparable object-based land cover classification benchmarks to fully assess the quality of the derived products. To facilitate this need and encourage further research, I plan to publish the datasets (i.e. imagery, training and test data) and provide access to the developed Google Earth Engine and Python scripts as Free and Open Source Software (FOSS). / Mapping the Earth's surface and its rapid changes with remotely sensed data is an important tool for understanding the impact that an increasingly urban world population has on the environment. However, the impressive amount of Earth observation data that is freely and openly available today is only marginally exploited in classifications. Handling a set of many variables is not easy in standard image classification software, which often leads to the manual choice of a few, presumably promising variables. In this work I used the computational power of Google Earth Engine and Google Cloud Platform to create an oversized set of variables, in which I examine the importance of the variables and analyse the influence of dimensionality reduction. I used Support Vector Machines (SVMs) for object-based classification of segmented satellite images - a common method in remote sensing. A large number of variables are evaluated to find those that are most important and relevant for discriminating the classes and thereby contribute most to classification accuracy. In this way one avoids the sensitive, knowledge-based but sometimes arbitrary selection of variables. Two types of dimensionality reduction methods were applied. On the one hand, the extraction methods Linear Discriminant Analysis (LDA) and Independent Component Analysis (ICA), which transform the original variable space into a projected space with fewer dimensions. On the other hand, the filter-based selection methods chi-squared test, mutual information and Fisher criterion, which rank and filter the variables according to their ability to discriminate the classes. I evaluated these methods against the default SVM in terms of accuracy and computational performance. In a case study of a land cover map of Stockholm, based on Sentinel-1 and Sentinel-2 imagery, I demonstrated the integration of Google Earth Engine and Google Cloud Platform for an optimised supervised land cover classification. I used dimensionality reduction methods provided in the open-source scikit-learn library and showed how they can improve classification accuracy and reduce the data load. At the same time, this project gave an indication of how the exploitation of big Earth observation data can be approached in a cloud computing environment. The results show that dimensionality reduction is effective and necessary. However, the results also strengthen the need for a comparable benchmark for object-based land cover classification in order to fully and independently assess the quality of the derived products. As a first step towards meeting this need, and to encourage further research, I published the datasets and provide access to the source code of the Google Earth Engine and Python scripts developed in this thesis.
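The filter-then-classify pipeline this abstract compares (filter-based feature selection feeding an SVM) can be sketched with scikit-learn, which the thesis itself names. The synthetic feature set, the choice of mutual information as the filter, and k=10 are illustrative assumptions; the thesis evaluates several filters and measures several accuracy metrics.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for an oversized object-based feature set
# (e.g. per-segment statistics from multitemporal Sentinel-1/2 stacks).
X, y = make_classification(n_samples=300, n_features=100,
                           n_informative=10, random_state=0)

# Filter-based feature selection (mutual information) followed by an SVM.
pipe = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=10),   # keep the 10 highest-ranked features
    SVC(kernel="rbf"),
)
scores = cross_val_score(pipe, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Swapping `SelectKBest` for `LinearDiscriminantAnalysis` (or dropping it entirely for the default SVM) and timing `fit`/`predict` reproduces the kind of accuracy-versus-cost comparison described above.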
297

Geospatial Analysis of the Impact of Land-Use and Land Cover Change on Maize Yield in Central Nigeria

Wegbebu, Reynolds 05 June 2023 (has links)
No description available.
298

Urban Landscape Assessment of the Mississippi and Alabama Gulf Coast using Landsat Imagery 1973-2015

Sherif, Abdalla R 10 August 2018 (has links)
This study aims to conduct an assessment of the land cover change of the Mississippi and Alabama coastal region, an integral part of the Gulf Coast ecological makeup. Landsat satellite data were used to perform a supervised classification on imagery captured by Landsat sensors, including Landsat 1-2 Multispectral Scanner (MSS), Landsat 4-5 Thematic Mapper (TM), Landsat 7 Enhanced Thematic Mapper (ETM+), and Landsat 8 Operational Land Imager (OLI), from 1973 to 2015. The objective of this study is to build a long-term assessment of urban development and land cover change over the past four decades for the Alabama and Mississippi Gulf Coast and to characterize these changes using Landscape Metrics (LM). The findings of this study indicate that the urban land cover doubled in size between 1973 and 2015. This expansion was accompanied by a high degree of urban fragmentation during the first half of the study period, followed by a gradual leveling off. Local, state, and federal authorities can use the results of this study to build mitigation plans and guide coastal development planning, and the results can serve as a baseline evaluation of current urban development for city planners, environmental advocates, and community leaders seeking to reduce degradation in this environmentally sensitive coastal region.
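Landscape metrics of the kind this abstract uses to characterize fragmentation can be sketched on toy urban masks. The tiny 4x4 masks below are invented, not the study's classified Landsat scenes, and patch count plus urban area are just two of the simplest possible metrics.

```python
import numpy as np
from scipy import ndimage

# Hypothetical binary urban masks (1 = urban) for two classification dates.
urban_1973 = np.array([[1, 0, 0, 0],
                       [0, 0, 0, 1],
                       [0, 0, 0, 0],
                       [0, 0, 0, 0]])
urban_2015 = np.array([[1, 1, 0, 0],
                       [0, 0, 0, 1],
                       [0, 1, 0, 1],
                       [1, 0, 0, 0]])

def patch_metrics(mask):
    """Number of urban patches (4-connected) and total urban area in
    pixels: two simple landscape metrics for fragmentation and extent."""
    labeled, n_patches = ndimage.label(mask)
    return n_patches, int(mask.sum())

for year, mask in (("1973", urban_1973), ("2015", urban_2015)):
    n, area = patch_metrics(mask)
    print(f"{year}: {n} patches, {area} urban pixels")
```

A rising patch count alongside rising area signals fragmentation during expansion; when new growth starts merging existing patches, the patch count levels off, matching the trajectory reported above.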
299

The Effect of Wetland Size and Surrounding Land Use on Wetland Quality along an Urbanization Gradient in the Rocky River Watershed

Gunsch, Marilyn S. 29 October 2008 (has links)
No description available.
300

GULF OF MAINE LAND COVER AND LAND USE CHANGE ANALYSIS UTILIZING RANDOM FOREST CLASSIFICATION: TO BE USED IN HYDROLOGICAL AND ECOLOGICAL MODELING OF TERRESTRIAL CARBON EXPORT TO THE GULF OF MAINE VIA RIVERINE SYSTEMS

Mordini, Michael B. 14 August 2013 (has links)
No description available.
