  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Determination of an Optimum Sector Size for Plan Position Indicator Measurements using a Long Range Coherent Scanning Atmospheric Doppler LiDAR

Simon, Elliot January 2015 (has links)
As wind energy plants continue to grow in size and complexity, advanced measurement technologies such as scanning Doppler LiDAR are essential for assessing site conditions and prospecting new development areas. The RUNE project was initiated to determine best practices for the use of scanning LiDARs in resource assessments for near-shore wind farms. The purpose of this thesis is to determine the optimum configuration for the plan position indicator (PPI) scan type of a scanning LiDAR. Task-specific Automated Analysis Software (AAS) was created, and the sensitivity of the integrated velocity azimuth process (iVAP) reconstruction algorithm was examined using sector sizes ranging from 4 to 60 degrees. Further, a comparison to simultaneous dual-Doppler measurements is presented in order to determine the necessity of deploying two LiDARs rather than one.

DTU has developed a coordinated long-range coherent scanning multi-LiDAR array (the WindScanner system) based on modified Leosphere WindCube 200S devices and an application-specific software framework and communication protocol. The long-range WindScanner system was deployed at DTU’s test station in Høvsøre, Denmark, and measurement data were collected over a period of 7 days. One WindScanner performed 60-degree sector scans, while two others were placed in staring dual-Doppler mode. All three beams were configured to converge atop a 116.5 m instrumented meteorological mast.

A significant result indicates that the accuracy of the reconstructed measurements does not differ significantly between sector sizes of 30 and 60 degrees. Using the smallest sector size that does not introduce systematic error has numerous benefits, including increased scan speed, measurement distance and angular resolution.

When comparing collocated dual-Doppler, sector-scan and in-situ met-mast instrumentation, we find very good agreement between all techniques.
Dual Doppler is able to measure wind speeds within 0.1%, and the 60-degree sector scan within 0.2%, of the reference values on average. For retrieval of wind direction, the sector-scan approach performs particularly well. This is likely because the assumption of flow-field homogeneity over the scanned area introduces lower errors for wind direction than for wind speed, which tends to be less uniform. For applications such as site resource assessments, where generally accurate 10-minute wind speed and direction values are required, a scanning LiDAR performing PPI scans with a sector size of between 30 and 38 degrees is recommended. The laser’s line-of-sight path should be directed parallel to the predominant wind direction and at the lowest elevation angle possible. / RUNE
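The iVAP reconstruction mentioned above fits a single horizontal wind vector to the line-of-sight velocities collected across the azimuth sector. A minimal sketch of that idea, assuming horizontally homogeneous flow over the sector; the function name, sector geometry and sample values are illustrative, not taken from the thesis:

```python
import numpy as np

def ivap_reconstruct(azimuths_deg, v_radial, elevation_deg=0.0):
    """Least-squares fit of horizontal wind (u, v) from radial velocities
    over an azimuth sector, assuming homogeneous flow:
    v_r = cos(el) * (u * sin(az) + v * cos(az))."""
    az = np.radians(azimuths_deg)
    ce = np.cos(np.radians(elevation_deg))
    A = np.column_stack([ce * np.sin(az), ce * np.cos(az)])
    (u, v), *_ = np.linalg.lstsq(A, np.asarray(v_radial, float), rcond=None)
    speed = np.hypot(u, v)
    # Meteorological "wind from" direction in degrees
    direction = (np.degrees(np.arctan2(u, v)) + 180.0) % 360.0
    return speed, direction

# Synthetic check: an 8 m/s westerly (from 270 deg) sampled over a 60-degree sector
az = np.linspace(240.0, 300.0, 31)
u_true, v_true = 8.0, 0.0  # westerly wind moves air eastward: u positive
vr = u_true * np.sin(np.radians(az)) + v_true * np.cos(np.radians(az))
speed, wdir = ivap_reconstruct(az, vr)
```

Narrower sectors make the two columns of the design matrix more collinear, which is why very small sectors degrade the fit while sectors of 30 degrees and up behave similarly, consistent with the finding above.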
32

Forest Change Mapping in Southwestern Madagascar using Landsat-5 TM Imagery, 1990–2010

Grift, Jeroen January 2016 (has links)
The main goal of this study was to map and measure forest change in the southwestern part of Madagascar, near the city of Toliara, in the period 1990–2010. Recent studies show that forest change in Madagascar on a regional scale involves not only forest loss but also forest growth. However, it is unclear how the study area follows these patterns. In order to select the right classification method, pixel-based classification was compared with object-based classification. The results of this study show that the object-based classification method was the most suitable for this landscape, although the pixel-based approaches also produced accurate results. Furthermore, the study shows that in the period 1990–2010, 42% of the forest cover disappeared and was converted into bare soil and savannahs. In addition to this change, stable forest regions became fragmented, which has negative effects on the amount of suitable habitat for Malagasy fauna. Finally, the scaling structure of landscape patches was investigated. The study shows that the patch size distribution has long-tail properties and that these properties do not change in periods of deforestation.
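The long-tail property of the patch size distribution noted above is typically quantified by fitting a power-law exponent to the patch sizes. A hedged sketch using the standard continuous maximum-likelihood estimator on synthetic data; the threshold and sample values are illustrative assumptions, not the thesis's actual procedure:

```python
import numpy as np

def powerlaw_alpha_mle(sizes, xmin):
    """Continuous power-law exponent by maximum likelihood:
    alpha = 1 + n / sum(ln(x / xmin)) over all sizes x >= xmin."""
    x = np.asarray(sizes, dtype=float)
    x = x[x >= xmin]
    return 1.0 + x.size / np.log(x / xmin).sum()

# Synthetic patch sizes drawn from a power law with alpha = 2.5
# via inverse-CDF sampling: x = xmin * (1 - u)^(-1 / (alpha - 1))
rng = np.random.default_rng(0)
u = rng.random(50_000)
xmin, alpha_true = 1.0, 2.5
sizes = xmin * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))
alpha_hat = powerlaw_alpha_mle(sizes, xmin)
```

A stable exponent across the two dates would be one way to express the study's finding that the tail behaviour does not change during deforestation.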
33

Fjärranalys av skogsskador efter stormen Gudrun : Skogens återhämtning efter den värsta stormen i modern tid / Remote sensing of forest damage after the storm Gudrun : The recovery of the forest after the worst storm in modern times

Nilsson, Jessica January 2017 (has links)
Den 8:e januari 2005 inträffade en av de mest förödande stormarna i Sveriges historia då hundratusentals blev strömlösa och sju personer miste livet. Stormen Gudrun drabbade centrala Götaland värst och uppemot nio årsavverkningar skog beräknas ha fällts i vissa områden. Tidigare studier av stormen har genomförts på uppdrag av Skogsstyrelsen där resultaten visar att andel stormfälld skogsmarksareal var 11 % i värst drabbade Ljungby kommun, och ca 80 % av all den stormfällda skogen var gran, 18 % var tall och 2 % var löv. Syftet med arbetet är att undersöka mängden stormfälld skog efter stormen Gudrun genom analys av satellitburen fjärranalysdata. Även andelen stormfälld barr- och lövskog beräknades och resultaten jämfördes med de rapporter skrivna för Skogsstyrelsen. Även andelen stormfälld skog som är återbeskogad år 2016 beräknades. En förändringsanalys med satellitbilder från Landsat 5, tagna åren 2004 och 2005, genomfördes vilken inkluderade en skogsmask som skapades genom övervakad MLC-klassificering. Skogsmasken användes för att utesluta ointressanta områden i analyserna. Resultatet användes sedan för analys av andelen stormfälld barr- och lövskog samt för analys av återbeskogade områden år 2016. I den sistnämnda skapades en skogsmask med en satellitbild från Landsat 8 och som sedan användes i analysen. Resultaten från analyserna visar att ca 15,8 % av skogen stormfälldes, varav 78 % var barrskog och 13 % var lövskog. År 2016 hade ca 25 % av de stormfällda områdena återbeskogats. Noggrannheten på resultaten är generellt höga men skiljer sig trots detta väsentligt från resultaten i studierna som gjorts för Skogsstyrelsen. Anledningen till att resultaten skiljer sig åt kan bero på vilka satellitbilder och program som använts i analyserna, samt felkällor som uppkommit i samband med analyserna i denna studie. 
/ On January 8th, 2005, one of the most devastating storms in Sweden’s history occurred: hundreds of thousands of people lost electrical power and seven people lost their lives. The storm Gudrun hit central Götaland worst, and up to nine annual harvests’ worth of forest is estimated to have been felled in some areas. Previous studies of the storm were carried out on behalf of the Swedish Forest Agency; their results show that the proportion of windthrown forest area was 11 % in the worst-affected municipality of Ljungby. About 80 % of the damaged forest was spruce, 18 % pine and 2 % deciduous. The aim of this thesis is to investigate the amount of windthrown forest after the storm Gudrun through analysis of satellite remote sensing data. The proportion of windthrown coniferous and deciduous forest was calculated and the results were compared with the reports written on behalf of the Swedish Forest Agency. Furthermore, the proportion of reforested areas in 2016 was calculated. A change analysis based on satellite data from Landsat 5, acquired in 2004 and 2005, was performed, which included a forest mask created by supervised MLC classification. The forest mask was used to exclude uninteresting areas from the analyses. The result was then used for the analysis of the proportion of windthrown coniferous and deciduous forest and for the analysis of reforested areas in 2016. In the latter, a forest mask based on Landsat 8 data was used. The results show that about 15.8 % of the forest was windthrown, of which 78 % was coniferous and 13 % deciduous forest. By 2016, 25 % of the windthrown areas had been reforested. The accuracy of the results is generally high, but they nevertheless differ substantially from the results of the earlier studies. The reason could be differences in the satellite images and software used, and additional error sources in the analyses.
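A change analysis of the kind described, comparing pre- and post-storm scenes within a forest mask, can be sketched as a simple vegetation-index differencing rule. This is an illustrative simplification with toy values, not the thesis's MLC-based workflow; the threshold is an assumption:

```python
import numpy as np

def storm_damage_mask(ndvi_before, ndvi_after, forest_mask, drop_threshold=0.2):
    """Flag forest pixels whose NDVI dropped by more than `drop_threshold`
    between the pre- and post-storm scenes (threshold is illustrative)."""
    change = np.asarray(ndvi_before, float) - np.asarray(ndvi_after, float)
    return (change > drop_threshold) & forest_mask

# Toy 3x3 example: two forest pixels lose most of their vegetation signal
before = np.array([[0.8, 0.7, 0.1], [0.8, 0.8, 0.1], [0.2, 0.7, 0.1]])
after  = np.array([[0.8, 0.2, 0.1], [0.3, 0.8, 0.1], [0.2, 0.7, 0.1]])
forest = before > 0.5  # crude forest mask from the pre-storm scene
damaged = storm_damage_mask(before, after, forest)
damaged_fraction = damaged.sum() / forest.sum()
```

Restricting the differencing to the forest mask is what keeps water, fields and other uninteresting areas out of the windthrow statistics, mirroring the role of the MLC-derived mask above.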
34

Radar and Optical Data Fusion for Object Based Urban Land Cover Mapping / Radar och optisk datafusion för objektbaserad kartering av urbant marktäcke

Jacob, Alexander January 2011 (has links)
The creation and classification of segments for object-based urban land cover mapping is the key goal of this master’s thesis. An algorithm based on region growing and merging was developed, implemented and tested, and the synergy effects of a fused dataset of SAR and optical imagery were evaluated based on the classification results. The testing was mainly performed with data for the city of Beijing, China. The dataset consists of SAR and optical data, and the classified land cover/use maps were evaluated using standard accuracy-assessment methods such as confusion matrices, kappa values and overall accuracy. The classification comprises 9 classes: low-density built-up, high-density built-up, road, park, water, golf course, forest, agricultural crop and airport. The development was performed in Java, and a suitable graphical interface for user-friendly interaction was created in parallel with the algorithm. This proved very useful during the period of extensive parameter testing, as parameters could easily be entered through the dialogs of the interface. The algorithm itself treats the image as a connected graph of pixels, each of which can merge with its direct neighbors, i.e. the pixels with which it shares an edge. Three criteria can be used in the current state of the algorithm: a mean-based spectral homogeneity measure, a variance-based textural homogeneity measure and a fragmentation test as a shape measure. The algorithm has 3 key parameters: the minimum and maximum segment sizes, and a homogeneity threshold based on a weighted combination of the relative change due to merging two segments. The growing and merging is divided into two phases: the first is based on mutual-best-partner merging and the second on the homogeneity threshold. In both phases it is possible to use all three criteria for merging in arbitrary weighting constellations.
A third step checks the fulfillment of the minimum size, which can be performed before or after the other two steps. The segments can then be labeled interactively in a supervised manner, once again using the graphical user interface, to create a training sample set. This training set is used to train a support vector machine based on a radial basis function kernel. The optimal settings for the required parameters of the SVM training process can be found through a cross-validation grid search, which is implemented within the program as well. The SVM algorithm is based on the LibSVM Java implementation. Once training is completed, the SVM can be used to predict the whole dataset and obtain a classified land-cover map, which can be exported as a vector dataset. The results show that incorporating texture features already in the segmentation is superior to spectral information alone, especially when working with unfiltered SAR data. Incorporating the suggested shape feature, however, does not seem advantageous, especially considering the much longer processing time it requires. The classification results also make it evident that the fusion of SAR and optical data is beneficial for urban land cover mapping. In particular, the distinction between urban areas and agricultural crops improved greatly, and the confusion between high- and low-density built-up areas was also reduced by the fusion. / Dragon 2 Project
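The mutual-best-partner phase with a mean-based spectral criterion can be sketched roughly as below. The cost function is an illustrative stand-in for the thesis's weighted relative-change measure, and the segment/adjacency representation is an assumption made for compactness:

```python
import numpy as np

def merge_cost(seg_a, seg_b):
    """Mean-based spectral homogeneity cost: the increase in size-weighted
    spread when the pixel values of two segments are pooled (an illustrative
    stand-in for the thesis's weighted relative-change measure)."""
    a, b = np.asarray(seg_a, float), np.asarray(seg_b, float)
    merged = np.concatenate([a, b])
    return merged.size * merged.std() - (a.size * a.std() + b.size * b.std())

def mutual_best_merges(segments, adjacency):
    """One pass of mutual-best-partner merging: segments i and j merge
    only if each is the other's cheapest merge candidate."""
    best = {i: min(nbrs, key=lambda j: merge_cost(segments[i], segments[j]))
            for i, nbrs in adjacency.items()}
    return [(i, j) for i, j in best.items() if best[j] == i and i < j]

# Toy example: segments 0 and 1 are spectrally similar, segment 2 is not
segments = {0: [10, 11], 1: [12, 11], 2: [50, 52]}
adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
pairs = mutual_best_merges(segments, adjacency)
```

Repeating such passes until no mutual pair remains, then switching to threshold-based merging, mirrors the two-phase structure described above.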
35

Utveckling av metoder för att säkerställa kvaliteten på höjddata insamlad med UAV : Fastställande av tillvägagångssätt vid luftburen datainsamling / Development of methods to ensure the quality of elevation data collected with UAV : Establishment of procedures for airborne data collection

Lindström, Simon January 2021 (has links)
Företaget Team Exact levererar mätningstekniska tjänster, där den främsta verksamheten är riktad mot byggnads- och markindustrin. Företaget använder UAS och levererar tjänster till kunder med ortofoto och DEM som kan användas till kartläggning, volymberäkningar och planering. Team Exact använder konsultföretagets SkyMap’s webbaserade plattform i fotogrammetrisk bearbetning av UAV genererade flygbilder. DEM behöver uppnå HMK-standardnivå 3 för att användas som underlag till bygghandlingar. För att uppnå HMK-standardnivå 3 så krävs det en lägesosäkerhet på 0,02–0,05 m/ 0,03–0,07 m (plan/höjd). Team Exact uppnår god lägesosäkerhet i plan men har varierande resultat i höjdåtergivningen. Studien har således en målsättning att hitta metoder för att säkerställa höjden inom ett studieområde med varierande topografi, terräng och markytor. Faktorer som ska undersökas är markstödspunkter, RTK-data, flygstråk, kamerainställningar och tänkvärda åtgärder i skiftande topografi samt att se tendenser hur höjdåtergivningen varierar på olika markytor.  Ett stomnät etablerades över studieområdet med tre fastställda koordinatsatta stompunkter, punkterna var inmätta med statisk NRTK mätning under 1 minut. Nätet jämnades ut med totalstation och därefter blev kontrollpunkter, profiler, ytor och markstödspunkter inmätta. Studien utredde lägesosäkerheten med 0, 5, 9 och 12 markstödspunkter. Den UAV som användes i studien är försedd med en RTK-modul och förväntades därav tillhandahålla positioneringsdata som var av värde att utreda. Markstödspunkternas utplacering planerades med fyra konstanta i studieområdets yttrehörn och en femte konstant på studieområdets högsta höjd. Resterande punkter placerades ut i en jämnfördelning över områdets toppar och dalar.  Flygmetoderna som utvärderades var förankrade i tidigare studier. Gemensamma inställningar över samtliga metoder var studieområdets avgränsning, en flyghöjd på 40 m samt flyghastigheten på 3 m/s. 
Resterande var flytande parametrar som var av värde att utreda. Studien justerade parametrarna gällande flygstråk, övertäckning, kameravinkel och kamerainställningar. Totalt blev det tre flygmetoder där de fyra olika markstödskombinationerna undersöktes vilket gav 12 processer att utvärdera. Utvärderingen utfördes mot 77 kontrollpunkter där RMSE-värde för höjd och plan undersöktes. Kontrollpunkterna var jämnt fördelade över ytan och marktyperna. En ytterligare analys utfördes med volymberäkningar mellan referens terrängmodeller och de genererade terrängmodellerna.  Flygmetod 3 gav bästa resultat där fotogrammetriinställningen Double Grid användes och överlappningen var 80/60 % samt att kameran tiltades till -70°. Sensorkänsligheten var inställd på ISO100, bländaren ett öppningsvärde f/5 och slutartiden var inställd på 1/500s. Studiens resultat visar att flygmetod 3 som blockutjämnats med 12 markstödspunkter genererade bästa resultat på en lägesosäkerhet i plan på 0,015 m samt 0,035 m i höjd. / The company Team Exact provides measurement engineering services, mainly aimed at the construction and land industry. The company uses UAS and delivers products such as orthophotos and DEMs that can be used for mapping, volume calculations and planning. Team Exact uses the consulting company SkyMap’s web-based platform for photogrammetric processing of UAV-generated aerial images. The DEMs need to achieve low positional uncertainty: HMK standard level 3, required for construction documents, demands a positional uncertainty of 0.02–0.05 m in plan and 0.03–0.07 m in height. Team Exact achieves good positional uncertainty in plan but has varying results in height. The study thus aims to find methods to ensure the height accuracy within a study area with varying topography, terrain and ground surfaces.
Factors investigated are ground control points, RTK data, flight paths, camera settings and conceivable measures in varying topography, as well as trends in how the height representation differs on different ground surfaces. A control network was established over the study area with three fixed coordinate reference points, each measured with static NRTK for 1 minute. The network was adjusted with a total station, and then control points, profiles, surfaces and ground control points were measured. The study investigated the positional uncertainty with 0, 5, 9 and 12 ground control points. The UAV used in the study is equipped with an RTK module and was therefore expected to provide positioning data worth investigating. The placement of the ground control points was planned with four constant points in the outer corners of the study area and a fifth at its highest point; the remaining points were placed in an even distribution over the area’s peaks and valleys. The evaluated flight methods were rooted in previous studies. Settings common to all methods were the study area delimitation, a flight altitude of 40 m and a flight speed of 3 m/s. The remaining parameters, which were of value to investigate, were varied: flight path, overlap, camera angle and camera settings. In total, there were three flight methods in which the four ground control combinations were examined, giving 12 processes to evaluate. The evaluation was performed against 77 control points, for which the RMSE values in height and plan were examined. The control points were evenly distributed over the surface and ground types. A further analysis was performed with volume calculations between reference terrain models and the generated terrain models.
Flight method 3 gave the best results: the photogrammetry setting Double Grid was used, the overlap was 80/60 % and the camera was tilted to -70°. The sensor sensitivity was set to ISO 100, the aperture to f/5 and the shutter speed to 1/500 s. The results of the study indicate that flight method 3, block-adjusted with 12 ground control points, generated the best results with a positional uncertainty of 0.015 m in plan and 0.035 m in height.
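The evaluation against the 77 control points separates the plan (horizontal) and height components of the RMSE. A minimal sketch of that computation; the coordinate layout and toy values are assumptions:

```python
import numpy as np

def rmse_plan_height(measured, reference):
    """RMSE in plan (2D horizontal) and in height from Nx3 arrays of
    (E, N, H) coordinates at check points."""
    d = np.asarray(measured, float) - np.asarray(reference, float)
    rmse_plan = np.sqrt(np.mean(d[:, 0] ** 2 + d[:, 1] ** 2))
    rmse_height = np.sqrt(np.mean(d[:, 2] ** 2))
    return rmse_plan, rmse_height

# Toy check: constant 3/4 m horizontal offsets give plan RMSE 5.0,
# and +/-0.05 m height errors give height RMSE 0.05
meas = np.array([[3.0, 4.0, 0.05],
                 [3.0, 4.0, -0.05]])
ref = np.zeros((2, 3))
rp, rh = rmse_plan_height(meas, ref)
```

Comparing such per-combination RMSE values is what allows the 0/5/9/12 ground-control configurations and the three flight methods to be ranked, as in the result above.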
36

DEVELOPING METHODS FOR WATER QUALITY MEASUREMENT : Using machine learning and remote sensing to predict absorbance with multispectral imaging

Bratt, Ola January 2023 (has links)
Water resources play an important role in society and fulfill various functions such as providing drinking water, supporting industrial production and enhancing the overall landscape. Water bodies, such as rivers and lakes, are particularly important in this context. However, as societies and economies develop, the demand for water increases significantly. This also leads to the release of domestic, agricultural and industrial wastewater, which often exceeds the self-purification capacity of water bodies. Consequently, rivers and lakes are becoming more and more polluted, endangering the safety of drinking water and causing ecological damage that affects human health and biodiversity.

Water quality monitoring plays a crucial role in evaluating the state of water bodies. Traditional monitoring methods involve labor-intensive field sampling and expensive construction and maintenance of automatic stations. Although these methods provide accurate results, they are limited to specific sampling points and struggle to meet the demands of monitoring water quality across entire surfaces of rivers and lakes. This degree project aims at developing a method that can predict absorbance in water with the aid of remote sensing. Using multispectral imaging and machine learning, this work shows that this is possible. The result of the multivariate analysis is an optimal model that predicts absorbance at 420 nm with an R² of 0.996 and an RMSE of 0.00081.
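The multivariate prediction of absorbance from multispectral bands can be sketched with an ordinary least-squares model on synthetic data. The thesis's actual model may well differ (e.g. a PLS regression), so treat this as an illustrative stand-in; band count, coefficients and noise level are assumptions:

```python
import numpy as np

def fit_absorbance_model(bands, absorbance):
    """Fit a linear model absorbance ~ intercept + spectral bands
    by least squares (illustrative stand-in for a multivariate model)."""
    X = np.column_stack([np.ones(len(bands)), bands])
    coef, *_ = np.linalg.lstsq(X, absorbance, rcond=None)
    return coef

def r_squared(y, y_pred):
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic data: 60 samples of 5 band reflectances with a known linear law
rng = np.random.default_rng(1)
X = rng.random((60, 5))
true_w = np.array([0.3, -0.2, 0.5, 0.1, -0.4])
y = 0.05 + X @ true_w + rng.normal(0.0, 1e-4, 60)  # "absorbance at 420 nm"
coef = fit_absorbance_model(X, y)
y_pred = np.column_stack([np.ones(60), X]) @ coef
rsq = r_squared(y, y_pred)
```

Reporting R² and RMSE on held-out samples, as the abstract does, is the standard way to judge whether such a band-to-absorbance mapping generalises.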
37

Urban Space Recreation for Pedestrians through Smart Lighting Control Systems

Karagöz, Hande January 2018 (has links)
Connected public lighting for more sustainable and liveable cities is a highly demanded research topic in the lighting design field, approached here through human-centred design. Following this understanding, this thesis aims to answer the question “How can networked public lighting be created in order to meet the needs of pedestrians in Fredhällspark?”. To investigate this, background research was conducted on the relevant topics of urban lighting, followed by the study of human safety in relation to this topic, and lastly possible new lighting technologies. The main study concerns a pedestrian path at Fredhällspark in Stockholm, Sweden, over a two-month period in spring 2018, during which user surveys were conducted and lighting measurements taken. Based on the results of the study, a lighting design proposal was developed with a site-specific approach, in order to make it up to date and sustainable for future urban environments while complying with the requirements of the users.
38

Feature Extraction and Feature Selection for Object-based Land Cover Classification : Optimisation of Support Vector Machines in a Cloud Computing Environment

Stromann, Oliver January 2018 (has links)
Mapping the Earth’s surface and its rapid changes with remotely sensed data is a crucial tool to understand the impact of an increasingly urban world population on the environment. However, the impressive amount of freely available Copernicus data is only marginally exploited in common classifications. One of the reasons is that measuring the properties of training samples, the so-called ‘features’, is costly and tedious. Furthermore, handling large feature sets is not easy in most image classification software. This often leads to the manual choice of few, allegedly promising features. In this Master’s thesis degree project, I use the computational power of Google Earth Engine and Google Cloud Platform to generate an oversized feature set in which I explore feature importance and analyse the influence of dimensionality reduction methods. I use Support Vector Machines (SVMs) for object-based classification of satellite images - a commonly used method. A large feature set is evaluated to find the most relevant features to discriminate the classes and thereby contribute most to high classification accuracy. In doing so, one can bypass the sensitive knowledge-based but sometimes arbitrary selection of input features.

Two kinds of dimensionality reduction methods are investigated: the feature extraction methods Linear Discriminant Analysis (LDA) and Independent Component Analysis (ICA), which transform the original feature space into a projected space of lower dimensionality, and the filter-based feature selection methods chi-squared test, mutual information and Fisher criterion, which rank and filter the features according to a chosen statistic. I compare these methods against the default SVM in terms of classification accuracy and computational performance. The classification accuracy is measured in overall accuracy, prediction stability, inter-rater agreement and the sensitivity to training set sizes.
The computational performance is measured in the decrease in training and prediction times and the compression factor of the input data. Based on this analysis, I conclude on the best-performing classifier with the most effective feature set.

In a case study of mapping urban land cover in Stockholm, Sweden, based on multitemporal stacks of Sentinel-1 and Sentinel-2 imagery, I demonstrate the integration of Google Earth Engine and Google Cloud Platform for an optimised supervised land cover classification. I use dimensionality reduction methods provided in the open-source scikit-learn library and show how they can improve classification accuracy and reduce the data load. At the same time, this project gives an indication of how the exploitation of big Earth observation data can be approached in a cloud computing environment.

The preliminary results highlighted the effectiveness and necessity of dimensionality reduction methods but also strengthened the need for inter-comparable object-based land cover classification benchmarks to fully assess the quality of the derived products. To facilitate this need and encourage further research, I plan to publish the datasets (i.e. imagery, training and test data) and provide access to the developed Google Earth Engine and Python scripts as Free and Open Source Software (FOSS). / Kartläggning av jordens yta och dess snabba förändringar med fjärranalyserad data är ett viktigt verktyg för att förstå effekterna av en alltmer urban världsbefolkning har på miljön. Den imponerande mängden jordobservationsdata som är fritt och öppet tillgänglig idag utnyttjas dock endast marginellt i klassifikationer. Att hantera ett set av många variabler är inte lätt i standardprogram för bildklassificering. Detta leder ofta till manuellt val av få, antagligen lovande variabler.
I det här arbetet använde jag Google Earth Engines och Google Cloud Platforms beräkningsstyrkan för att skapa ett överdimensionerat set av variabler i vilket jag undersöker variablernas betydelse och analyserar påverkan av dimensionsreducering. Jag använde stödvektormaskiner (SVM) för objektbaserad klassificering av segmenterade satellitbilder – en vanlig metod inom fjärranalys. Ett stort antal variabler utvärderas för att hitta de viktigaste och mest relevanta för att diskriminera klasserna och vilka därigenom mest bidrar till klassifikationens exakthet. Genom detta slipper man det känsliga kunskapsbaserade men ibland godtyckliga urvalet av variabler. Två typer av dimensionsreduceringsmetoder tillämpades. Å ena sidan är det extraktionsmetoder, Linjär diskriminantanalys (LDA) och oberoende komponentanalys (ICA), som omvandlar de ursprungliga variablers rum till ett projicerat rum med färre dimensioner. Å andra sidan är det filterbaserade selektionsmetoder, chi-två-test, ömsesidig information och Fisher-kriterium, som rangordnar och filtrerar variablerna enligt deras förmåga att diskriminera klasserna. Jag utvärderade dessa metoder mot standard SVM när det gäller exakthet och beräkningsmässiga prestanda. I en fallstudie av en marktäckeskarta över Stockholm, baserat på Sentinel-1 och Sentinel-2-bilder, demonstrerade jag integrationen av Google Earth Engine och Google Cloud Platform för en optimerad övervakad marktäckesklassifikation. Jag använde dimensionsreduceringsmetoder som tillhandahålls i open source scikit-learn-biblioteket och visade hur de kan förbättra klassificeringsexaktheten och minska databelastningen. Samtidigt gav detta projekt en indikation på hur utnyttjandet av stora jordobservationsdata kan nås i en molntjänstmiljö. Resultaten visar att dimensionsreducering är effektiv och nödvändig.
Men resultaten stärker också behovet av ett jämförbart riktmärke för objektbaserad klassificering av marktäcket för att fullständigt och självständigt bedöma kvaliteten på de härledda produkterna. Som ett första steg för att möta detta behov och för att uppmuntra till ytterligare forskning publicerade jag dataseten och ger tillgång till källkoderna i Google Earth Engine och Python-skript som jag utvecklade i denna avhandling.
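Of the filter-based selection methods listed above (chi-squared test, mutual information, Fisher criterion), the Fisher criterion is the simplest to sketch for a two-class problem. The data below are synthetic and the implementation is a minimal illustration, not the thesis's scikit-learn pipeline:

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher criterion per feature for a two-class problem:
    (difference of class means)^2 / (sum of class variances).
    Higher scores mean better class separation along that feature."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
    return num / den

# Synthetic feature set: 10 features, only feature 3 separates the classes
rng = np.random.default_rng(2)
n = 500
y = rng.integers(0, 2, n)
X = rng.normal(0.0, 1.0, (n, 10))
X[:, 3] += 3.0 * y
scores = fisher_scores(X, y)
ranking = np.argsort(scores)[::-1]  # features ordered by discriminative power
```

Keeping only the top-ranked features before SVM training is the filter-based counterpart to the LDA/ICA projections above, and it shrinks both training time and input data, as the abstract reports.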
39

Feasibility study of dam deformation monitoring in northern Sweden using Sentinel-1 SAR interferometry

Borghero, Cecilia January 2018 (has links)
Dams are man-made structures that need constant monitoring to keep functioning and to be considered structurally healthy. Assessing the deformation of dams can be time-consuming and economically costly. Recently, the technique of Interferometric Synthetic Aperture Radar (InSAR) has proved its potential to measure ground and structural deformation. This geodetic method represents a cost-effective way to monitor millimetre-level displacements and can be used as a supplemental analysis to detect movements in the structure and its surroundings. The objective of this work is to assess the practicality of the method through the analysis of the surface deformation of the Ajaure dam in northern Sweden in the period 2014-2017, using freely available Sentinel-1A images. The scenes, 51 in ascending and 47 in descending mode, were processed using the Persistent Scatterer (PS) technique, and deformation trends and time series were produced. Built in the 1960s, the Ajaure embankment dam is classified as high-consequence, meaning that a failure would cause socio-economic damage to the communities involved; for this reason, the dam needs constant attention. So far, a program of automatic in-situ measurements has been collecting data, part of which was used for comparison with the InSAR results. Results of the multi-temporal analysis of the limited PS points on and around the dam show that the dam has been subsiding more intensely toward the centre, where maximum values are approximately 5 ± 1.25 mm/year (descending) and 2 ± 1.27 mm/year (ascending) at different locations (separated by approximately 70 m). Outermost points instead show values between -0.7 and 0.9 mm/year, describing stable behaviour. The decomposition of the rate furthermore revealed that the crest moved laterally toward the reservoir during the observation period. It has been observed that the loading and unloading of the reservoir influence the dam’s behaviour.
The movements recorded by the PS points on the dam also correlate with the air temperature (i.e. the seasonal cycle). The research revealed that snow cover and vegetation may have interfered with the signal, resulting in a relatively low correlation. As a consequence, the number of PS points on and around the dam is limited, and the comparison with the geodetic data is only based on a few points. The comparison shows general agreement, demonstrating the capabilities of the InSAR method. The study constitutes a starting point for further improvements, for example observation over a longer period as more Sentinel-1 images of the study area are collected. Installation of corner reflectors at the dam site and/or the use of high-resolution SAR data is also suggested.
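The decomposition of ascending and descending line-of-sight (LOS) rates into vertical and east-west components, as used for the crest motion above, amounts to solving a small linear system per point under the common simplification of neglecting north-south motion. The sign conventions and incidence angles below are illustrative assumptions, not the thesis's exact geometry:

```python
import numpy as np

def decompose_los(v_asc, v_desc, inc_asc_deg, inc_desc_deg):
    """Decompose ascending/descending LOS velocities into vertical and
    east-west components, neglecting the north component.
    Assumed convention (positive LOS = towards the satellite, right-looking):
      v_asc  = v_up * cos(inc_asc)  - v_east * sin(inc_asc)
      v_desc = v_up * cos(inc_desc) + v_east * sin(inc_desc)
    """
    ia, id_ = np.radians(inc_asc_deg), np.radians(inc_desc_deg)
    A = np.array([[np.cos(ia), -np.sin(ia)],
                  [np.cos(id_), np.sin(id_)]])
    v_up, v_east = np.linalg.solve(A, np.array([v_asc, v_desc]))
    return v_up, v_east

# Check: 4 mm/yr subsidence combined with 1 mm/yr eastward motion,
# observed at 39-degree incidence in both geometries
inc = 39.0
v_up_true, v_east_true = -4.0, 1.0
va = v_up_true * np.cos(np.radians(inc)) - v_east_true * np.sin(np.radians(inc))
vd = v_up_true * np.cos(np.radians(inc)) + v_east_true * np.sin(np.radians(inc))
v_up, v_east = decompose_los(va, vd, inc, inc)
```

With only two viewing geometries the system has two unknowns, which is why the north-south component must be dropped; this is the standard limitation of dual-geometry Sentinel-1 analyses.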
40

Remote sensing analysis of wetland dynamics and NDVI : A case study of Kristianstad's Vattenrike

Herstedt, Evelina January 2024 (has links)
Wetlands are vital ecosystems providing essential services to both humans and the environment, yet they face threats from human activities leading to loss and disturbance. This study utilizes remote sensing (RS) methods, including object-based image analysis (OBIA), to map and assess wetland health in Kristianstad’s Vattenrike in the southernmost part of Sweden between 2015 and 2023. Objectives include exploring RS capabilities in detecting wetlands and changes, deriving wetland health indicators, and assessing classification accuracy. The study uses Sentinel-2 imagery, elevation data, and high-resolution aerial images to focus on wetlands along the river Helge å. Detection and classifications were based on Sentinel-2 imagery and elevation data, and the eCognition software was employed. The health assessment was based on the spectral indices Normalized Difference Vegetation Index (NDVI) and Modified Normalized Difference Water Index (mNDWI). Validation was conducted through aerial photo interpretation. The derived classifications demonstrate acceptable accuracy levels and the analysis reveals relatively stable wetland conditions, with an increase in wetland area attributed to the construction of new wetlands. Changes in wetland composition, such as an increase in open meadows and swamp forests, were observed. However, an overall decline in NDVI values across the study area indicates potential degradation, attributed to factors like bare soil exposure and water presence. These findings provide insights into the local changes in wetland extent, composition, and health between the study years.
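The two spectral indices used for the health assessment above are simple normalized band differences. A minimal sketch with the usual Sentinel-2 band roles; the toy reflectance values are illustrative:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, typically computed from
    Sentinel-2 bands B8 (NIR) and B4 (red)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-12)

def mndwi(green, swir):
    """Modified Normalized Difference Water Index, typically computed from
    Sentinel-2 bands B3 (green) and B11 (SWIR)."""
    green, swir = np.asarray(green, float), np.asarray(swir, float)
    return (green - swir) / (green + swir + 1e-12)

# Toy reflectances: a vegetated pixel and an open-water pixel
veg = {"red": 0.05, "nir": 0.45, "green": 0.08, "swir": 0.25}
water = {"red": 0.03, "nir": 0.02, "green": 0.06, "swir": 0.01}
ndvi_veg = ndvi(veg["nir"], veg["red"])            # high for vegetation
mndwi_water = mndwi(water["green"], water["swir"])  # high for water
```

A declining NDVI with a rising mNDWI at the same location is consistent with the bare-soil exposure and water presence the study cites as drivers of the observed NDVI decrease.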
