About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Urban change detection on satellites using deep learning: A case of moving AI into space for improved Earth observation

Petri, Oliver January 2021 (has links)
Change detection using satellite imagery has applications in urban development, disaster response, and precision agriculture. Current deep learning models show promising results; however, on-board computers are typically highly constrained, which poses a challenge for deployment. On-board processing is nevertheless desirable because it saves bandwidth by downlinking only novel and valuable data. The goal of this work is to determine which change detection models are most technically feasible for on-board use in satellites. The novel patch-based model MobileGoNogo is evaluated alongside current state-of-the-art models. Technical feasibility was determined by measuring accuracy, inference time, storage build-up, memory usage, and resolution on a satellite computer tasked with detecting changes in buildings from the SpaceNet 7 dataset. Three high-level approaches were taken: direct classification, post-classification, and patch-based change detection. None of the models compared in the study fulfilled all requirements for general technical feasibility. Direct classification models were highly resource-intensive and slow. The post-classification model had critically low accuracy but desirable storage characteristics. The patch-based MobileGoNogo performed better on all metrics except resolution, where it was significantly lower than any other model. We conclude that the novel model offers a feasible solution for low-resolution, non-critical applications.
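The patch-based approach described in this abstract trades spatial resolution for speed: the image pair is tiled into patches and each patch receives a single changed/unchanged decision. The thesis does not detail MobileGoNogo's internals, so the sketch below substitutes a simple mean-absolute-difference statistic for the learned classifier; the function name, patch size, and 0.15 threshold are illustrative assumptions, not values from the study.

```python
import numpy as np

def patch_change_map(img_a, img_b, patch=8, threshold=0.15):
    """Tile two co-registered single-band images and flag changed patches.

    A hypothetical stand-in for a learned go/no-go patch classifier:
    a patch is marked changed when its mean absolute difference exceeds
    `threshold`. The output is one decision per patch, which is why the
    patch-based approach has a lower effective resolution.
    """
    rows, cols = img_a.shape[0] // patch, img_a.shape[1] // patch
    change = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            a = img_a[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch]
            b = img_b[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch]
            change[r, c] = np.abs(a - b).mean() > threshold
    return change

# toy scene: one bright "new building" appears in the second acquisition
before = np.zeros((32, 32))
after = before.copy()
after[8:16, 8:16] = 1.0                 # change confined to one 8x8 patch
cmap = patch_change_map(before, after)
print(cmap.sum())                        # -> 1 (one of 16 patches flagged)
```

A downlink policy could then transmit only the patches whose flag is set, which is the bandwidth-saving motivation stated above.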
242

Mapping the Effects of Blast and Chemical Fishing in the Sabalana Archipelago, South Sulawesi, Indonesia, 1991-2006

Hlavacs, Lauri A. 01 October 2008 (has links)
No description available.
243

Evaluation of a change detection approach to map global flood extents using Sentinel-1 / Utvärdering av översvämningskartering genom att upptäcka skillnader i satellitbilder från Sentinel-1

Risling, Axel January 2022 (has links)
Floods are the most frequent disaster in the world, and flood exposure is increasing globally. Flood mapping of past events can be a useful aid not only in disaster risk management but also in evaluating and validating the global flood models (GFMs) used to assess and predict these floods. There are, however, numerous ways of mapping floods, and it is uncertain how well these perform as validation data. In this paper, a change detection approach based on a combination of synthetic aperture radar (SAR) and cloud computing to map past flood events is evaluated (hereinafter CD-SAR). Eight flood events were chosen over a wide range of hydroclimatic conditions, regions, and flood types. These eight events were mapped with CD-SAR and compared to four GFM outputs and to two flood maps of past events from two commonly used databases. The spatial agreement between CD-SAR and the GFMs showed considerable variation between regions and models. The agreement was, however, shown to fall within a similar range to previous validation studies, albeit in the lower portion. CD-SAR also showed performance similar to the comparison between the GFMs and the outputs from the databases of mapped past flood events. The results were also analysed for how the flood extent and flood-edge distribution of CD-SAR compare to both the GFMs and the database outputs; these showed a similar variation in distribution as the spatial agreement but did not follow the same trend for all regions. The flexibility and high resolution of CD-SAR allow it to cover events of any size over a wide range of regions, making it a viable tool for mapping past flood events that could be used to evaluate GFMs. However, CD-SAR needs further evaluation, as uncertainties still exist due to the inherent characteristics of SAR and the revisit times of the satellites carrying SAR.
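SAR change detection for flood mapping commonly exploits the sharp backscatter drop over newly inundated, smooth water surfaces. The study's actual CD-SAR pipeline (Sentinel-1 on a cloud platform) is not spelled out in the abstract, so the following is only a minimal sketch of the underlying idea; the 3 dB decrease threshold and the toy dB values are assumptions for illustration.

```python
import numpy as np

def flood_mask(pre_db, post_db, drop_db=3.0):
    """Crude change-detection flood mask from two co-registered SAR scenes.

    Open water is smooth and scatters the radar pulse away from the sensor,
    so newly flooded pixels show a sharp backscatter drop. Pixels whose
    backscatter (in dB) falls by more than `drop_db` between the pre- and
    post-event acquisitions are flagged as flooded. The 3 dB default is an
    illustrative assumption, not a value from the study.
    """
    return (pre_db - post_db) > drop_db

pre = np.array([[-8.0, -7.5], [-9.0, -8.5]])     # dry-land backscatter, dB
post = np.array([[-8.2, -16.0], [-17.5, -8.4]])  # two pixels now under water
mask = flood_mask(pre, post)
print(mask.tolist())  # [[False, True], [True, False]]
```

A production pipeline would additionally need speckle filtering, terrain masking, and a data-driven threshold, which is part of why the abstract stresses the remaining SAR-specific uncertainties.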
244

Data-driven Infrastructure Inspection

Bianchi, Eric Loran 18 January 2022 (has links)
Bridge inspection and infrastructure inspection are critical steps in the lifecycle of the built environment. Emerging technologies and data are driving factors that are disrupting the traditional processes for conducting these inspections. Because inspections are mainly conducted visually by human inspectors, this paper focuses on improving the visual inspection process with data-driven approaches. Data-driven approaches, however, require significant data, which was sparse in the existing literature. Therefore, this research first examined the present state of the existing data in the research domain. We reviewed hundreds of image-based visual inspection papers that used machine learning to augment the inspection process; from this, we compiled a comprehensive catalog of over forty available datasets in the literature and identified promising, emerging techniques and trends in the field. Based on the findings of our review, we contributed six significant datasets to target gaps in the field's data. The contributed datasets cover structural material segmentation, corrosion condition state segmentation, crack detection, structural detail detection, and bearing condition state classification. They used novel annotation guidelines and benefitted from a novel semi-automated annotation process for both object detection and pixel-level detection models. Using the data obtained from our collected sources, task-appropriate deep learning models were trained. From these datasets and models, we developed a change detection algorithm to monitor damage evolution between two inspection videos, and we trained a GAN-inversion model that generates hyper-realistic synthetic bridge inspection image data and can forecast a future deterioration state of an existing bridge element. While the application of machine learning techniques in civil engineering is not yet widespread, this research provides an impactful contribution that demonstrates the advantages data-driven sciences can provide to more economically and efficiently inspect structures, catalog deterioration, and forecast potential outcomes. / Doctor of Philosophy
245

ADVANCED METHODS FOR LAND COVER MAPPING AND CHANGE DETECTION IN HIGH RESOLUTION SATELLITE IMAGE TIME SERIES

Meshkini, Khatereh 04 April 2024 (has links)
New satellite missions have provided High Resolution (HR) Satellite Image Time Series (SITS), offering detailed spatial, spectral, and temporal information for effective monitoring of diverse Earth features, including weather, landforms, oceans, vegetation, and agricultural practices. SITS can be used to build an accurate understanding of Land Cover (LC) behavior and to map LCs precisely. Moreover, HR SITS presents an unprecedented opportunity for the creation and updating of HR Land Cover Change (LCC) and Land Cover Transition (LCT) maps. On the long-term scale, spanning multiple years, it becomes feasible to analyze LCC and the LCTs occurring between consecutive years. Existing methods in the literature often analyze bi-temporal images and miss the valuable multi-temporal/multi-annual information of SITS that is crucial for accurate SITS analysis. As a result, HR SITS necessitates a paradigm shift in processing and methodology development, introducing new challenges in data handling. The creation of techniques that can effectively manage the high spatial correlation and complementary temporal resolution of pixels remains paramount. Moreover, the temporal availability of HR data across historical and current archives varies significantly, creating the need for effective preprocessing to account for factors, such as atmospheric and radiometric conditions, that can affect image reflectance and its usability in SITS analysis. Flexible and automatic SITS analysis methods can be developed by paying special attention to handling large amounts of data and to modeling the correlation and characterization of SITS in space and time. Novel methods should handle data preparation and pre-processing at large scale, end to end, by introducing a set of steps that guarantee reliable SITS analysis while remaining computationally efficient.
In this context, recent strides in deep learning-based frameworks have demonstrated their potential across various image processing tasks and are thus highly relevant to SITS analysis. Deep learning methods can be supervised or unsupervised, depending on their learning process. Supervised deep learning methods rely on labeled training data, which can be impractical for large-scale multi-temporal datasets due to the challenges of manual labeling. In contrast, unsupervised deep learning methods are favored because they can automatically discover temporal patterns and changes without the need for labeled samples, thereby reducing the computational load and making them more suitable for handling extensive SITS. In this scenario, the thesis has three main objectives. First, it seeks to establish a robust and reliable framework for the precise mapping of LCs by designing novel techniques for time series analysis. Second, it aims to exploit the capabilities of unsupervised deep learning methods, such as pretrained Convolutional Neural Networks (CNNs), to construct a comprehensive methodology for Change Detection (CD), thereby mitigating complexity and reducing computational requirements in comparison with supervised methods. This involves the efficient extraction of spatial, spectral, and temporal features from complex multi-temporal, multi-spectral SITS. Lastly, the thesis develops novel methods for analyzing LCCs occurring over extended time periods spanning multiple years; this multifaceted approach encompasses the detection of changes, the identification of their timing, and the classification of the specific types of LCTs. The efficacy of the proposed methodologies and associated techniques is showcased through a series of experiments conducted on HR SITS datasets, including Sentinel-2 and Landsat, which reveal significant improvements over existing state-of-the-art methods.
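One way to realize the unsupervised, pretrained-CNN change detection this abstract describes is to extract convolutional features from both acquisition dates and threshold the per-pixel feature distance. The snippet below is a sketch under strong simplifying assumptions: a tiny hand-made filter bank stands in for a pretrained network, and the mean-plus-two-sigma threshold is illustrative, not the thesis's method.

```python
import numpy as np

def conv_features(img, filters):
    """Valid-mode 2D convolution of a single-band image with a filter bank.

    Stands in for feature extraction by a pretrained CNN; here a tiny
    hand-made filter bank suffices to illustrate the unsupervised
    feature-difference idea.
    """
    fh, fw = filters.shape[1:]
    h, w = img.shape
    out = np.empty((filters.shape[0], h - fh + 1, w - fw + 1))
    for k, f in enumerate(filters):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = (img[i:i + fh, j:j + fw] * f).sum()
    return out

# simple edge/average filters as a proxy for learned convolutional features
filters = np.array([
    [[1, -1], [1, -1]],              # vertical edge
    [[1, 1], [-1, -1]],              # horizontal edge
    [[0.25, 0.25], [0.25, 0.25]],    # local mean
], dtype=float)

rng = np.random.default_rng(0)
t1 = rng.random((16, 16))
t2 = t1.copy()
t2[4:8, 4:8] += 1.0                  # localized land-cover change between dates

f1, f2 = conv_features(t1, filters), conv_features(t2, filters)
dist = np.linalg.norm(f1 - f2, axis=0)        # per-pixel feature distance
change = dist > dist.mean() + 2 * dist.std()  # simple unsupervised threshold
print(change.any())  # -> True: the altered block stands out
```

No labels are needed at any point, which is the key argument made above for unsupervised methods on large SITS archives.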
246

Optimum Event Detection In Wireless Sensor Networks

Karumbu, Premkumar 11 1900 (has links) (PDF)
We investigate sequential event detection problems arising in Wireless Sensor Networks (WSNs). A number of battery-powered sensor nodes of the same sensing modality are deployed in a region of interest (ROI). By an event we mean a random time (and, for spatial events, a random location) after which the random process being observed by the sensor field experiences a change in its probability law. The sensors make measurements at periodic time instants, perform some computations, and then communicate the results of their computations to the fusion centre. The decision-making algorithm in the fusion centre employs a procedure that decides whether the event has occurred based on the information received up to the current decision instant. We seek event detection algorithms in various scenarios that are optimal in the sense that the mean detection delay (the delay between the event occurrence time and the alarm time) is minimum under certain detection error constraints. In the first part of the thesis, we study event detection problems in a small-extent network, where the sensing coverage of any sensor includes the ROI. In particular, we are interested in the following problems: 1) quickest event detection with optimal control of the number of sensors that make observations (while the others sleep), 2) quickest event detection on wireless ad hoc networks, and 3) optimal transient change detection. In the second part of the thesis, we study the problem of quickest detection and isolation of an event in a large-extent sensor network, where the sensing coverage of any sensor is only a small portion of the ROI. One of the major applications envisioned for WSNs is detecting any abnormal activity or intrusion in the ROI. An intrusion is typically a rare event, and hence much of the energy of the sensors is drained away in the pre-intrusion period; keeping all the sensors awake is therefore wasteful of resources and reduces the lifetime of the WSN.
This motivates us to consider the problem of sleep-wake scheduling of sensors along with quickest event detection. We formulate the Bayesian quickest event detection problem with the objective of minimising the expected total cost due to i) the detection delay and ii) the usage of sensors, subject to the constraint that the probability of false alarm is bounded above by a prescribed value. We obtain optimal event detection procedures, along with optimal closed-loop and open-loop control for the sleep-wake scheduling of sensors. In the classical change detection problem, at each sampling instant a batch of samples (one from each of the sensors deployed in the ROI) is generated at the sensors and reaches the fusion centre instantaneously. In practice, however, the communication between the sensors and the fusion centre is facilitated by a wireless ad hoc network based on a random access mechanism such as in IEEE 802.11 or IEEE 802.15.4. Because of the medium access control (MAC) protocol of the wireless network employed, different samples of the same batch reach the fusion centre after random delays. The problem is to detect the occurrence of an event as early as possible subject to a false alarm constraint. In this more realistic situation, we consider a design in which the fusion centre comprises a sequencer followed by a decision maker. Earlier work from our research group considered Network Oblivious Decision Making (NODM), in which the decision maker in the fusion centre is presented with complete batches of observations, as if the network were not present, and makes a decision only at the instants at which these batches are presented. In this thesis, we consider a design in which the decision maker makes a decision at all time instants, based on the samples of all the complete batches received thus far and the samples, if any, received from the next (partial) batch. We show that for optimal decision making the network state is required by the decision maker.
Hence, we call this setting Network Aware Decision Making (NADM). We obtain a mean-delay-optimal NADM procedure and show that it is a network-state-dependent threshold rule on the a posteriori probability of change. In the classical change detection problem the change is persistent, i.e., after the change point the state of nature remains in the in-change state for ever. However, in applications like intrusion detection, the event that causes the change disappears after a finite time, and the system goes to an out-of-change state. The distribution of observations in the out-of-change state is the same as that in the pre-change state. We call this short-lived change a transient change. We are interested in detecting whether a change has occurred, even if the change has disappeared by the time of detection. We model the transient change and formulate the problem of quickest transient change detection under the constraint that the probability of false alarm is bounded by a prescribed value. We also formulate a change detection problem that maximises the probability of detection (i.e., the probability of stopping in the in-change state) subject to the same false alarm bound. We obtain optimal detection rules and show that they are threshold rules on the a posteriori probability of pre-change, where the threshold depends on the a posteriori probabilities of the pre-change, in-change, and out-of-change states. Finally, we consider the problem of detecting an event in a large-extent WSN, where the event influences the observations only of sensors in its vicinity. Thus, in addition to the problem of event detection, we face the problem of locating the event, also called the isolation problem. Since the distance of a sensor from the event affects the mean signal level the sensor node senses, we consider a realistic signal propagation model in which the signal strength decays with distance.
Thus, the post-change mean of the distribution of observations differs across sensors and is unknown, since the location of the event is unknown, making the problem highly challenging. Also, for a large-extent WSN a distributed solution is desirable. We are therefore interested in obtaining distributed detection/isolation procedures that are detection-delay optimal subject to false alarm and false isolation constraints. For this problem, we propose the local decision rules MAX, HALL, and ALL, which are based on the CUSUM statistic, at each of the sensor nodes. We identify corroborating sets of sensor nodes for event location and propose a global rule for detection/isolation based on the local decisions of sensors in the corroborating sets. We also show the minimax detection-delay optimality of the procedures HALL and ALL.
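The local decision rules above are built on the CUSUM statistic. As a minimal sketch of Page's CUSUM test for a mean shift in Gaussian observations (with illustrative parameters; the thesis's Bayesian and transient formulations differ):

```python
import numpy as np

def cusum_alarm(samples, mu0, mu1, sigma, threshold):
    """Page's CUSUM test for a shift in the mean of Gaussian observations.

    Accumulates the log-likelihood ratio of the post-change versus the
    pre-change distribution, clipped at zero; an alarm is raised when the
    statistic crosses `threshold`. This mirrors the kind of local decision
    statistic run at each sensor node (parameter values are illustrative).
    """
    s = 0.0
    for n, x in enumerate(samples, start=1):
        llr = (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2)
        s = max(0.0, s + llr)
        if s > threshold:
            return n   # alarm time
    return None        # no alarm raised

# noiseless illustration: 50 pre-change samples (mean 0), then 50
# post-change samples (mean 1); the change point is at sample 51
samples = np.concatenate([np.zeros(50), np.ones(50)])
alarm = cusum_alarm(samples, mu0=0.0, mu1=1.0, sigma=1.0, threshold=8.0)
print(alarm)  # -> 67: each post-change sample adds 0.5, crossing 8.0 after 17
```

With noisy observations, raising the threshold lowers the false alarm rate at the cost of a longer mean detection delay, which is exactly the trade-off the optimality results above address.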
247

Automatic Change Detection in Visual Scenes

Brolin, Morgan January 2021 (has links)
This thesis proposes a Visual Scene Change Detector (VSCD) system, which involves four parts: image retrieval, image registration, image change detection, and panorama creation. Two prestudies are conducted to select an image retrieval method and an image change detection method. The two selected methods are then combined with a proposed image registration method and a proposed panorama creation method to form the proposed VSCD. The image retrieval prestudy compares a Scale-Invariant Feature Transform (SIFT)-based method with a Bag of Words (BoW)-based method and finds the SIFT-based method superior. The image change detection prestudy evaluates 8 different image change detection methods. Its results show that method performance depends on the image category, with an ensemble method being the least category-dependent. The ensemble method is found to perform best, followed by a range-filter method and then a Convolutional Neural Network (CNN) method. Combining the 2 image retrieval methods with the 8 image change detection methods yields 16 different VSCDs, which are tested. The final results show that the VSCD comprising the best methods from the prestudies performs best.
248

Self-organizing map quantization error approach for detecting temporal variations in image sets / Détection automatisée de variations critiques dans des séries temporelles d'images par algorithmes non-supervisées de Kohonen

Wandeto, John Mwangi 14 September 2018 (has links)
Une nouvelle approche du traitement de l'image, appelée SOM-QE, qui exploite quantization error (QE) des self-organizing maps (SOM) est proposée dans cette thèse. Les SOM produisent des représentations discrètes de faible dimension des données d'entrée de haute dimension. QE est déterminée à partir des résultats du processus d'apprentissage non supervisé du SOM et des données d'entrée. SOM-QE d'une série chronologique d'images peut être utilisé comme indicateur de changements dans la série chronologique. Pour configurer SOM, on détermine la taille de la carte, la distance du voisinage, le rythme d'apprentissage et le nombre d'itérations dans le processus d'apprentissage. La combinaison de ces paramètres, qui donne la valeur la plus faible de QE, est considérée comme le jeu de paramètres optimal et est utilisée pour transformer l'ensemble de données. C'est l'utilisation de l'assouplissement quantitatif. La nouveauté de la technique SOM-QE est quadruple : d'abord dans l'usage. SOM-QE utilise un SOM pour déterminer la QE de différentes images - typiquement, dans un ensemble de données de séries temporelles - contrairement à l'utilisation traditionnelle où différents SOMs sont appliqués sur un ensemble de données. Deuxièmement, la valeur SOM-QE est introduite pour mesurer l'uniformité de l'image. Troisièmement, la valeur SOM-QE devient une étiquette spéciale et unique pour l'image dans l'ensemble de données et quatrièmement, cette étiquette est utilisée pour suivre les changements qui se produisent dans les images suivantes de la même scène. Ainsi, SOM-QE fournit une mesure des variations à l'intérieur de l'image à une instance dans le temps, et lorsqu'il est comparé aux valeurs des images subséquentes de la même scène, il révèle une visualisation transitoire des changements dans la scène à l'étude. Dans cette recherche, l'approche a été appliquée à l'imagerie artificielle, médicale et géographique pour démontrer sa performance. 
Les scientifiques et les ingénieurs s'intéressent aux changements qui se produisent dans les scènes géographiques d'intérêt, comme la construction de nouveaux bâtiments dans une ville ou le recul des lésions dans les images médicales. La technique SOM-QE offre un nouveau moyen de détection automatique de la croissance dans les espaces urbains ou de la progression des maladies, fournissant des informations opportunes pour une planification ou un traitement approprié. Dans ce travail, il est démontré que SOM-QE peut capturer de très petits changements dans les images. Les résultats confirment également qu'il est rapide et moins coûteux de faire la distinction entre le contenu modifié et le contenu inchangé dans les grands ensembles de données d'images. La corrélation de Pearson a confirmé qu'il y avait des corrélations statistiquement significatives entre les valeurs SOM-QE et les données réelles de vérité de terrain. Sur le plan de l'évaluation, cette technique a donné de meilleurs résultats que les autres approches existantes. Ce travail est important car il introduit une nouvelle façon d'envisager la détection rapide et automatique des changements, même lorsqu'il s'agit de petits changements locaux dans les images. Il introduit également une nouvelle méthode de détermination de QE, et les données qu'il génère peuvent être utilisées pour prédire les changements dans un ensemble de données de séries chronologiques. / A new approach for image processing, dubbed SOM-QE, that exploits the quantization error (QE) from self-organizing maps (SOM) is proposed in this thesis. SOM produce low-dimensional discrete representations of high-dimensional input data. QE is determined from the results of the unsupervised learning process of SOM and the input data. SOM-QE from a time-series of images can be used as an indicator of changes in the time series. 
To set-up SOM, a map size, the neighbourhood distance, the learning rate and the number of iterations in the learning process are determined. The combination of these parameters that gives the lowest value of QE, is taken to be the optimal parameter set and it is used to transform the dataset. This has been the use of QE. The novelty in SOM-QE technique is fourfold: first, in the usage. SOM-QE employs a SOM to determine QE for different images - typically, in a time series dataset - unlike the traditional usage where different SOMs are applied on one dataset. Secondly, the SOM-QE value is introduced as a measure of uniformity within the image. Thirdly, the SOM-QE value becomes a special, unique label for the image within the dataset and fourthly, this label is used to track changes that occur in subsequent images of the same scene. Thus, SOM-QE provides a measure of variations within the image at an instance in time, and when compared with the values from subsequent images of the same scene, it reveals a transient visualization of changes in the scene of study. In this research the approach was applied to artificial, medical and geographic imagery to demonstrate its performance. Changes that occur in geographic scenes of interest, such as new buildings being put up in a city or lesions receding in medical images are of interest to scientists and engineers. The SOM-QE technique provides a new way for automatic detection of growth in urban spaces or the progressions of diseases, giving timely information for appropriate planning or treatment. In this work, it is demonstrated that SOM-QE can capture very small changes in images. Results also confirm it to be fast and less computationally expensive in discriminating between changed and unchanged contents in large image datasets. Pearson's correlation confirmed that there was statistically significant correlations between SOM-QE values and the actual ground truth data. 
In evaluation, this technique performed better than other existing approaches. This work is important as it introduces a new way of looking at fast, automatic change detection, even when dealing with small local changes within images. It also introduces a new method of determining QE, and the data it generates can be used to predict changes in a time-series dataset.
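The computation at the heart of the abstract above - training a SOM and reading off the quantization error of an image's pixel vectors - can be sketched with a toy NumPy implementation. The map size, decay schedules, and parameter values here are illustrative assumptions, not the author's actual configuration.

```python
import numpy as np

def train_som(data, map_shape=(8, 8), n_iter=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small self-organizing map; data is (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    h, w = map_shape
    weights = rng.random((h * w, data.shape[1]))
    # Grid coordinates of each unit, used by the neighbourhood function.
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        # Best-matching unit (BMU): the unit whose weights are closest to x.
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        # Linearly decaying learning rate and neighbourhood radius.
        frac = t / n_iter
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.5
        d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
        g = np.exp(-d2 / (2 * sigma ** 2))[:, None]
        weights += lr * g * (x - weights)
    return weights

def quantization_error(weights, data):
    """Mean distance from each sample to its best-matching unit."""
    dists = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    return dists.min(axis=1).mean()
```

In the SOM-QE spirit, one SOM would be trained once, each image in the time series flattened to pixel vectors, and the per-image QE values compared: an image whose content departs from the trained codebook yields a higher QE.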
249

Spatio-temporal dynamics in land use and habitat fragmentation in Sandveld, South Africa

James Takawira Magidi January 2010 (has links)
This research assessed land-use changes and trends in vegetation cover in the Sandveld, using remote sensing images. Landsat TM satellite images from 1990, 2004 and 2007 were classified using the maximum likelihood classifier into seven land-use classes, namely water, agriculture, fire patches, natural vegetation, wetlands, disturbed veld, and open sands. Change detection using remote sensing algorithms and landscape metrics was performed on these multi-temporal land-use maps using the Land Change Modeller and Patch Analyst respectively. Markov stochastic modelling techniques were used to predict future scenarios of land-use change based on the classified images and their transition probabilities. MODIS NDVI multi-temporal datasets with a 16-day temporal resolution were used to assess seasonal and annual trends in vegetation cover using time series analysis (PCA and time profiling). Results indicated that natural vegetation decreased from 46% to 31% of the total landscape between 1990 and 2007, and these biodiversity losses were attributed to an increasing agricultural footprint. The predicted future scenario based on transition probabilities revealed a continual loss of natural habitat and an increase in the agricultural footprint. Time series analysis results (principal components and temporal profiles) suggested that the landscape has a high degree of overall dynamic change with pronounced inter- and intra-annual changes, and there was an overall increase in greenness associated with increased agricultural activity. The study concluded that without future conservation interventions natural habitats would continue to disappear, a condition that will impact heavily on biodiversity and on significant water-dependent ecosystems such as wetlands. This has significant implications for the long-term provision of water from groundwater reserves and for the overall sustainability of current agricultural practices.
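The Markov projection step described in the abstract above can be illustrated with a minimal sketch: land-use shares are projected forward by repeatedly applying a transition matrix. The matrix and class shares below are hypothetical values chosen to mimic the reported trend (vegetation loss to agriculture), not the values estimated in the study.

```python
import numpy as np

# Hypothetical one-step transition matrix between three land-use classes
# (natural vegetation, agriculture, open sands); row i gives the
# probabilities of class i converting to each class in one time step.
P = np.array([
    [0.80, 0.15, 0.05],   # vegetation mostly persists, some loss to agriculture
    [0.02, 0.95, 0.03],   # agriculture is nearly absorbing
    [0.05, 0.10, 0.85],
])

# Current landscape composition as area fractions (assumed, sums to 1).
state = np.array([0.31, 0.45, 0.24])

def project(state, P, steps):
    """Project land-use shares forward by repeated application of P."""
    for _ in range(steps):
        state = state @ P
    return state

future = project(state, P, steps=3)
```

Because each row of `P` sums to one, the projected shares remain a valid distribution, and with this matrix the vegetation share shrinks while agriculture grows at every step, mirroring the study's predicted scenario.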
250

Coastal changes detection using multi-source remote sensing images

Liang, Ping Unknown Date (has links)
This study used multi-temporal remote sensing images to detect shoreline changes along the Yilan coast. Various types of remote sensing images were used, including old aerial images taken in 1947, Corona satellite images acquired in 1971, a photo base map produced in 1985, SPOT-5 satellite images obtained in 2003, and high-resolution aerial images taken in 2009 with a Z/I DMC (Digital Mapping Camera) aerial digital camera. Because these images were taken at different times with different sensors, different procedures were applied to process the data and georeference the images to a common coordinate system. GIS (Geographic Information Systems) software was used to digitize the shoreline and the beach (dune) areas, and overlay analysis was applied to find the shoreline changes in different time periods. Various ancillary data, such as tides, precipitation, and sediment load, were then collected to analyze the causes of coastal change in Yilan. For shoreline extraction, manual digitization required a lot of time and manpower, so semi-automatic methods such as image classification and image segmentation were applied to extract the shoreline, and the results were compared against the manual digitization. The results show that, by using multi-temporal remote sensing images and the spatial analysis functionalities of GIS, the historical changes of the shoreline and beach (dune) areas can be detected effectively.
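A semi-automatic shoreline extraction of the kind mentioned above can be sketched as a simple water/land classification followed by boundary tracing. The NDWI index and the zero threshold used here are common choices assumed for illustration; the thesis's actual classification and segmentation methods may differ.

```python
import numpy as np

def water_mask(green, nir, threshold=0.0):
    """Classify water pixels with the Normalized Difference Water Index,
    NDWI = (green - NIR) / (green + NIR); water typically has NDWI > 0."""
    ndwi = (green - nir) / np.clip(green + nir, 1e-9, None)
    return ndwi > threshold

def shoreline_pixels(mask):
    """Shoreline as water cells with at least one 4-connected land neighbour."""
    padded = np.pad(mask, 1, mode='edge')
    neigh_land = (~padded[:-2, 1:-1] | ~padded[2:, 1:-1]
                  | ~padded[1:-1, :-2] | ~padded[1:-1, 2:])
    return mask & neigh_land
```

Run on two georeferenced epochs, the symmetric difference of the two water masks would give the changed area, playing the role of the overlay analysis described above.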
