
Damage Assessment of the 2022 Tongatapu Tsunami: With Remote Sensing

Larsson, Milton, January 2022
The island of Tongatapu, Tonga, was struck by a tsunami on January 15, 2022. The island's internet connection was severed, which made remote sensing a valuable tool for assessing the damage. In this thesis, damage caused by the tsunami was assessed remotely through land cover classification, change vector analysis, and log-ratio image differencing. Damage assessment is vital both for determining the need for humanitarian aid after a tsunami and for laying the foundation for preventive measures and reconstruction. The objective of this thesis was to quantify damage in square kilometres, create damage maps, and evaluate the accuracy of the different methods. Results from this study could in principle be combined with other damage assessments to evaluate different aspects of damage, and the method comparison indicates which approaches would be suitable in a similar event. Sentinel-1, Sentinel-2, and high-resolution Planet imagery were used to conduct the assessment. Evaluating both moderate- and high-resolution imagery in combination with SAR yielded plausible, but flawed, results. Land cover was computed for moderate- and high-resolution imagery using three types of classifiers; the Random Forest classifier outperformed both CART and Support Vector Machine classification for this study area. Land cover composite image differencing for pre- and post-tsunami Sentinel-2 images achieved an accuracy of around 85%, with damage estimated at about 10.5 km². Land cover classification with high-resolution images gave higher accuracy, with a total estimated damaged area of about 18 km². The high-resolution image classification was deemed the better method for urban damage assessment, while moderate-resolution imagery worked well for regional damage assessment. Change vector analysis provided plausible results when using Sentinel-2 with the NDVI, NDMI, SAVI, and BSI indices.
NDVI was found to be the most comprehensive change indicator among the tested indices. The total estimated damage using all tested indices was roughly 7.6 km². Applying the same method to Sentinel-1's VV and VH bands, the total damage was estimated at 0.4 and 2.6 km² respectively. Log-ratio differencing for Sentinel-1 did not work as well as change vector analysis and suffered from false positives; the log-ratios of VV and VH both gave a similar total estimated damage of roughly 5.2 km². Cloud cover and ash deposits caused problems. The analysis could have been improved by choosing satellite image dates consistently, balancing classification samples, and applying high-resolution land cover classification to specific areas of interest indicated by the regional methods. The last of these would circumvent problems with ash, as reducing the study area would make more high-resolution imagery available.
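The index-differencing and SAR log-ratio steps described above can be sketched with plain NumPy. This is a minimal illustration on synthetic arrays, not the thesis's actual pipeline: the band values, the simulated vegetation loss, and both thresholds are invented for the example, and single-index differencing is a simplification of full change vector analysis.

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red + 1e-9)

def log_ratio(pre, post):
    """Log-ratio change image for SAR backscatter intensities (both > 0)."""
    return np.log(post + 1e-9) - np.log(pre + 1e-9)

rng = np.random.default_rng(0)

# Toy pre-event optical bands in [0.2, 0.8]; the "post" scene halves the
# NIR response to mimic vegetation loss in damaged areas.
nir_pre, red_pre = rng.uniform(0.2, 0.8, (2, 64, 64))
nir_post, red_post = nir_pre * 0.5, red_pre

delta_ndvi = ndvi(nir_post, red_post) - ndvi(nir_pre, red_pre)
damaged = delta_ndvi < -0.1          # threshold is illustrative only
print(f"flagged {damaged.mean():.0%} of pixels via NDVI differencing")

# Toy SAR intensities: flag pixels whose backscatter changed strongly.
sar_pre = rng.uniform(0.05, 1.0, (64, 64))
sar_post = sar_pre * rng.uniform(0.3, 1.0, (64, 64))
changed = np.abs(log_ratio(sar_pre, sar_post)) > 0.5
print(f"flagged {changed.mean():.0%} of pixels via log-ratio")
```

A damaged-area estimate in km² would then follow by multiplying the flagged pixel count by the pixel footprint (e.g. 10 m × 10 m for Sentinel-2's visible and NIR bands).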

Combining remote sensing data at different spatial, temporal and spectral resolutions to characterise semi-natural grassland habitats for large herbivores in a heterogeneous landscape

Raab, Christoph Benjamin, 04 July 2019
No description available.

An Automated Approach to Mapping Ocean Front Features Using Sentinel-1 with Examples from the Gulf Stream and Agulhas Current

Newall, Andrew, 19 April 2023
This study examines the efficacy of Sentinel-1 Radial Velocity (RVL) imagery for determining the position of ocean current front features, using the Gulf Stream (GS) and Agulhas Current (AC) as case studies. Fronts derived from RVL imagery are compared to fronts derived from Sea Surface Temperature (SST) imagery, specifically Multi-scale Ultra-high Resolution Sea Surface Temperature Analysis (MURSST) data. In the case of the GS, front locations from the Naval Oceanographic Office (NAVOCEANO) were also used for comparison. Only the northern walls of ocean current features are considered in this study, which is broken into three main steps: preprocessing, front extraction, and front comparison. First, RVL imagery is selected from Sentinel-1 ocean products, preprocessed to remove antenna mispointing artifacts, and all products from the same orbit are combined into a single swath. Second, front features are extracted from both the RVL and MURSST imagery using a ridge detection algorithm, the main ocean current is chosen from all ridge features using a ranking algorithm, and the northern wall of this current is extracted. Third, the RVL and SST fronts (and, for the GS, the NAVOCEANO front locations) are compared using a symmetric Hausdorff distance (HD) and the mean Hausdorff distance (MHD). In some cases the automatic front extraction failed, either by misclassifying an eddy or similar ocean feature as the main current in the RVL or SST image, or by failing to extract the entire length of the front visible within the image. All the SST and RVL fronts were therefore classified manually to determine the success rate of the automatic extraction and to exclude failed extractions from the analysis, as they do not accurately represent how well the SST and RVL data can detect fronts. In special cases the RVL image itself does not capture the entire ocean current, leaving noticeable gaps in the detected current.
Similarly, in special cases the MURSST does not capture the entire ocean current. The automatic front extraction succeeded 65% of the time, including these special cases. The results demonstrated that RVL products were effective at determining the location of ocean fronts where the angle of the front's normal vector is within approximately 40° of the sensor's azimuthal heading. A mean HD of 31.9 km and a mean MHD of 13.2 km were calculated over all front pairs and study areas. The RVL fronts consistently appeared north of the SST fronts, with an average offset of 25.4 km between the centroids of the SST and RVL fronts. Positive correlations were noted between cloud coverage and MURSST error in both study regions. Several RVL images detected ocean currents in regions of high MURSST error where the MURSST did not, suggesting that RVL may be more accurate than SST-based products in clouded regions lacking auxiliary data.
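The front-comparison step can be illustrated with a small NumPy sketch of the symmetric Hausdorff distance and a mean-Hausdorff variant. This is a hedged toy example on synthetic polylines, not the study's code; the MHD here (symmetrised average nearest-neighbour distance) is one standard formulation and the thesis may define it differently.

```python
import numpy as np

def pairwise_dist(a, b):
    """All pairwise Euclidean distances between point sets a (N,2) and b (M,2)."""
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

def symmetric_hausdorff(a, b):
    """Symmetric Hausdorff distance: worst-case nearest-neighbour distance."""
    d = pairwise_dist(a, b)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def mean_hausdorff(a, b):
    """Mean Hausdorff distance: average nearest-neighbour distance, symmetrised."""
    d = pairwise_dist(a, b)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Two toy front polylines sampled as (x, y) points in kilometres;
# the second front is shifted 2 km south of the first.
x = np.linspace(0, 100, 50)
rvl_front = np.column_stack([x, 10 + 5 * np.sin(x / 20)])
sst_front = np.column_stack([x, 8 + 5 * np.sin(x / 20)])

hd = symmetric_hausdorff(rvl_front, sst_front)
mhd = mean_hausdorff(rvl_front, sst_front)
print(hd, mhd)
```

For these shifted sine curves both measures reduce to the 2 km offset, since the sample spacing along x exceeds 2 km and each point's nearest neighbour is therefore its directly shifted counterpart; on real fronts the HD is dominated by the single worst excursion while the MHD summarises the typical separation.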

Using Satellite Images and Deep Learning to Detect Water Hidden Under the Vegetation: A cross-modal knowledge distillation-based method to reduce manual annotation work

Cristofoli, Ezio, January 2024
Detecting water under vegetation is critical to tracking the status of ecosystems like wetlands. Researchers use various methods to estimate water presence while avoiding costly on-site measurements. Optical satellite imagery allows automatic delineation of water using the Normalised Difference Water Index (NDWI), but it depends on visibility conditions and cannot detect water under vegetation, a typical situation in wetlands. Synthetic Aperture Radar (SAR) imagery works under all visibility conditions and can detect water under vegetation, but segmenting water presence in SAR imagery requires deep networks, and training such models normally requires manual annotation work. This project uses DEEPAQUA, a cross-modal knowledge distillation method, to eliminate the manual annotation needed to extract water presence from SAR imagery with deep neural networks. In this method, a deep student model (e.g., UNET) is trained to segment water in SAR imagery, with the NDWI algorithm acting as a non-parametric, cross-modal teacher. The key prerequisite is that NDWI is computed on optical imagery taken at the same location and time as the SAR acquisition. Three deep architectures are tested: UNET, SegNet, and UNET++, with the Otsu method as the baseline. Experiments on imagery of Swedish wetlands from 2020 to 2022 show that cross-modal distillation consistently achieved better segmentation performance across architectures than the baseline. Additionally, the UNET family of algorithms performed better than SegNet at the 95% confidence level. The UNET++ model achieved the highest Intersection over Union (IoU), but no statistical evidence emerged that UNET++ performs better than UNET at the 95% confidence level.
In conclusion, this project shows that cross-modal knowledge distillation works well across architectures and removes tedious and expensive manual annotation work when detecting water from SAR imagery. Further research could evaluate performance on other datasets and other student architectures.
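The teacher side of this distillation setup, NDWI pseudo-labels thresholded with Otsu's method, can be sketched as follows. This is a hedged illustration on synthetic reflectance clusters, not the DEEPAQUA implementation: the band statistics and cluster means are invented, and Otsu is implemented inline to keep the example self-contained.

```python
import numpy as np

def ndwi(green, nir):
    """Normalised Difference Water Index from green and NIR reflectance."""
    return (green - nir) / (green + nir + 1e-9)

def otsu_threshold(x, bins=256):
    """Otsu's method: pick the cut maximising between-class variance."""
    hist, edges = np.histogram(x.ravel(), bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                # class-0 probability per candidate cut
    m = np.cumsum(p * centers)       # cumulative first moment
    mT = m[-1]
    with np.errstate(invalid="ignore", divide="ignore"):
        between = (mT * w0 - m) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(between)]

# Toy optical scene: water pixels have high green and low NIR reflectance,
# land pixels the opposite. Columns are (green, nir).
rng = np.random.default_rng(1)
water = rng.normal([0.30, 0.05], 0.02, (1000, 2))
land = rng.normal([0.10, 0.40], 0.02, (1000, 2))
scene = np.vstack([water, land])

idx = ndwi(scene[:, 0], scene[:, 1])
labels = idx > otsu_threshold(idx)   # pseudo-labels: True = water
print(f"water pixels labelled water: {labels[:1000].mean():.0%}")
```

The resulting boolean mask would serve as the training target for the SAR student network, so no pixel needs to be annotated by hand; this is exactly the cross-modal step that makes co-located, co-temporal optical and SAR imagery a prerequisite.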
