191 |
Advanced deep learning based multi-temporal remote sensing image analysis. Saha, Sudipan. 29 May 2020.
Multi-temporal image analysis has been widely used in many applications such as urban monitoring, disaster management, and agriculture. With the development of remote sensing technology, new-generation satellite images with high/very high spatial resolution (HR/VHR) are now available. Compared to traditional low/medium spatial resolution images, HR/VHR images allow detailed information about ground objects to be clearly analyzed. Classical methods of multi-temporal image analysis work at the pixel level and have performed well on low/medium resolution images. However, they provide sub-optimal results on new-generation images due to their limited capability to model the complex spatial and spectral information in these products. Although a significant number of object-based methods have been proposed in the last decade, they depend on a suitable segmentation scale for the diverse kinds of objects present in each temporal image; thus their capability to express contextual information is limited. The typical spatial properties of last-generation images emphasize the need for more flexible models of object representation. Another drawback of traditional methods is the difficulty of transferring knowledge learned from one specific problem to another. In the last few years, an interesting development has been observed in the machine learning/computer vision field: deep learning, and especially Convolutional Neural Networks (CNNs), has shown excellent capability to capture object-level information and to support transfer learning. By 2015, deep learning had achieved state-of-the-art performance in most computer vision tasks. In spite of this success, the application of deep learning to multi-temporal image analysis progressed slowly due to the requirement of large labeled datasets to train deep learning models.
However, by the start of this PhD activity, a few works in the computer vision literature had shown that deep learning possesses the capability of transfer learning and of training without labeled data. Thus, inspired by the success of deep learning, this thesis focuses on developing deep learning based methods for unsupervised/semi-supervised multi-temporal image analysis. It aims to develop methods that combine the benefits of deep learning with those of traditional methods of multi-temporal image analysis. In this direction, the thesis first explores the research challenges of incorporating deep learning into the popular unsupervised change detection (CD) method, Change Vector Analysis (CVA), and further investigates the possibility of using deep learning for multi-temporal information extraction. The thesis specifically: i) extends the paradigm of unsupervised CVA to a novel Deep CVA (DCVA) by using a pre-trained network as a deep feature extractor; ii) extends DCVA by exploiting a Generative Adversarial Network (GAN) to remove the need for a pre-trained deep network; iii) revisits the problem of semi-supervised CD by exploiting a Graph Convolutional Network (GCN) for label propagation from labeled pixels to unlabeled ones; and iv) extends the problem statement of semantic segmentation to the multi-temporal domain via unsupervised deep clustering. The effectiveness of the proposed approaches and related techniques is demonstrated in several experiments involving passive VHR (including Pléiades), passive HR (Sentinel-2), and active VHR (COSMO-SkyMed) datasets. A substantial improvement is observed over state-of-the-art shallow methods.
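The core DCVA idea, deep features in place of raw spectral bands followed by the classical CVA magnitude-and-threshold step, can be sketched in a few lines. This is a minimal illustration, not the thesis implementation; the feature arrays below are stand-ins for the output of a pre-trained network.

```python
import numpy as np

def deep_change_vector_analysis(feat_t1, feat_t2, threshold):
    """Per-pixel change magnitude from deep features of shape (H, W, C),
    thresholded into a binary change map, as in classical CVA."""
    diff = feat_t2 - feat_t1                    # deep change vectors
    magnitude = np.linalg.norm(diff, axis=-1)   # per-pixel L2 norm
    return magnitude, magnitude > threshold

# toy bi-temporal "deep features": a single changed pixel
f1 = np.zeros((4, 4, 8))
f2 = np.zeros((4, 4, 8))
f2[2, 3] = 1.0                                  # simulate a change at (2, 3)
mag, change_map = deep_change_vector_analysis(f1, f2, threshold=1.0)
print(change_map.sum())  # -> 1
```

In the thesis the features come from multiple layers of a pre-trained network and the threshold is selected automatically; here both are simplified.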
|
192 |
Zero-shot Segmentation for Change Detection: Change Detection in Synthetic Aperture Sonar Imagery Using Segment Anything Model. Hedlund, William. January 2024.
The advancement of foundation models has opened up new possibilities in deep learning. These models can be adapted to new tasks and unseen domains from minimal or even no training data, making them well suited for applications where labelled data is scarce or costly to collect. This lack of data has meant that deep learning has seen little use for change detection in sonar imagery. Reliable methods for change detection in underwater environments are critical for a range of fields such as marine research and object search, yet previous work on change detection in sonar imagery has focused on non-deep-learning methods. In this thesis, we explore the use of a foundation model, the Segment Anything Model (SAM), for change detection in imagery collected with synthetic aperture sonar (SAS); this is the first application of SAM to change detection in SAS imagery. The proposed method segments bi-temporal images, and change detection is then performed on the segments. In addition to a set of bi-temporal images containing real change, the model is also tested on a set of synthetic images. The proposed method shows promising results on both the real and the synthetic data set.
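The segment-then-compare step can be illustrated with a toy sketch, assuming SAM has already produced binary masks for each image. The IoU matching rule and its threshold are illustrative choices, not necessarily those used in the thesis.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def changed_segments(masks_t1, masks_t2, iou_thresh=0.5):
    """Flag segments of the second image with no well-overlapping
    counterpart in the first image (a simple proxy for 'appeared')."""
    changed = []
    for m2 in masks_t2:
        if not any(iou(m1, m2) >= iou_thresh for m1 in masks_t1):
            changed.append(m2)
    return changed

# toy example: a 'rock' present at both times, an object only at time 2
grid = np.zeros((8, 8), dtype=bool)
rock = grid.copy(); rock[1:3, 1:3] = True
obj = grid.copy(); obj[5:7, 5:7] = True
print(len(changed_segments([rock], [rock, obj])))  # -> 1
```

A symmetric pass (segments of the first image unmatched in the second) would flag disappeared objects the same way.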
|
193 |
Monocular Camera-based Localization and Mapping for Autonomous Mobility. Shyam Sundar Kannan (6630713). 10 October 2024.
<p dir="ltr">Visual localization is a crucial component for autonomous vehicles and robots, enabling them to navigate effectively by interpreting visual cues from their surroundings. In visual localization, the agent estimates its six degrees of freedom camera pose using images captured by onboard cameras. However, the operating environment of the agent can undergo various changes, such as variations in illumination, time of day, seasonal shifts, and structural modifications, all of which can significantly affect the performance of vision-based localization systems. To ensure robust localization in dynamic conditions, it is vital to develop methods that can adapt to these variations.</p><p dir="ltr">This dissertation presents a suite of methods designed to enhance the robustness and accuracy of visual localization for autonomous agents, addressing the challenges posed by environmental changes. First, we introduce a visual place recognition system that aids the autonomous agent in identifying its location within a large-scale map by retrieving a reference image closely matching the query image captured by the camera. This system employs a vision transformer to extract both global and patch-level descriptors from the images. Global descriptors, which are compact vectors devoid of geometric details, facilitate the rapid retrieval of candidate images from the reference dataset. Patch-level descriptors, derived from the patch tokens of the transformer, are subsequently used for geometric verification, re-ranking the candidate images to pinpoint the reference image that most closely matches the query.</p><p dir="ltr">Building on place recognition, we present a method for pose refinement and relocalization that integrates the environment's 3D point cloud with the set of reference images. 
The closest image retrieved in the initial place recognition step provides a coarse pose estimate of the query image, which is then refined to compute a precise six degrees of freedom pose. This refinement process involves extracting features from the query image and the closest reference image and then regressing these features using a transformer-based network that estimates the pose of the query image. The features are appended with 2D and 3D positional embeddings that are calculated based on the camera parameters and the 3D point cloud of the environment. These embeddings add spatial awareness to the regression model, hence enhancing the accuracy of the pose estimation. The resulting refined pose can serve as a robust initialization for various localization frameworks or be used for localization on the go. </p><p dir="ltr">Recognizing that the operating environment may undergo permanent changes, such as structural modifications that can render existing reference maps outdated, we also introduce a zero-shot visual change detection framework. This framework identifies and localizes changes by comparing current images with historical images from the same locality on the map, leveraging foundational vision models to operate without extensive annotated training data. It accurately detects changes and classifies them as temporary or permanent, enabling timely and informed updates to reference maps. This capability is essential for maintaining the accuracy and robustness of visual localization systems over time, particularly in dynamic environments.</p><p dir="ltr">Collectively, the contributions of this dissertation in place recognition, pose refinement, and change detection advance the state of visual localization, providing a comprehensive and adaptable framework that supports the evolving requirements of autonomous mobility. 
By enhancing the accuracy, robustness, and adaptability of visual localization, these methods contribute significantly to the development and deployment of fully autonomous systems capable of navigating complex and changing environments with high reliability.</p>
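The global-descriptor retrieval stage described above reduces to nearest-neighbour search under cosine similarity. The sketch below uses random vectors as stand-ins for vision-transformer descriptors and omits the patch-level geometric re-ranking step.

```python
import numpy as np

def retrieve_top_k(query_desc, ref_descs, k=3):
    """Rank reference images by cosine similarity of global descriptors."""
    q = query_desc / np.linalg.norm(query_desc)
    refs = ref_descs / np.linalg.norm(ref_descs, axis=1, keepdims=True)
    sims = refs @ q                       # cosine similarity to each reference
    order = np.argsort(-sims)[:k]         # indices of the k best candidates
    return order, sims[order]

rng = np.random.default_rng(0)
refs = rng.normal(size=(100, 256))                  # stand-in global descriptors
query = refs[42] + 0.01 * rng.normal(size=256)      # near-duplicate of ref 42
idx, sims = retrieve_top_k(query, refs)
print(idx[0])  # -> 42
```

In the dissertation, the top candidates returned here would then be re-ranked by geometric verification against patch-level descriptors before pose refinement.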
|
194 |
Digital State Models for Infrastructure Condition Assessment and Structural Testing. Lama Salomon, Abraham. 10 February 2017.
This research introduces and applies the concept of digital state models for civil infrastructure condition assessment and structural testing. Digital state models are defined herein as any transient or permanent 3D model of an object (e.g., textured meshes and point clouds) combined with any electromagnetic radiation (e.g., visible light, infrared, X-ray) or other two-dimensional image-like representation. In this study, digital state models are built using visible light and used to document the transient state of a wide variety of structures (ranging from concrete elements to cold-formed steel columns and hot-rolled steel shear walls) and civil infrastructure (bridges). The accuracy of digital state models was validated against traditional sensors (e.g., digital caliper, crack microscope, wire potentiometer). Overall, features measured from the 3D point cloud data presented a maximum error of ±0.10 in. (±2.5 mm), and surface features (i.e., crack widths) measured from the texture information in textured polygon meshes had a maximum error of ±0.010 in. (±0.25 mm). Results showed that digital state models perform similarly across all specimen surface types and between laboratory and field experiments. It is also shown that digital state models have great potential for structural assessment by significantly improving data collection, automation, change detection, visualization, and augmented reality, with significant opportunities for commercial development. Algorithms to analyze and extract information from digital state models, such as cracks, displacement, and buckling deformation, are developed and tested. Finally, the extensive data sets collected in this effort are shared for research development in computer vision-based infrastructure condition assessment, eliminating the major obstacle to advancing this field: the absence of publicly available data sets. / Ph. D.
/ This research introduces and tests new concepts for civil infrastructure condition assessment and structural testing. In this study, 3D models or <i>digital state models</i> were reconstructed purely from images to represent the as-built geometry of a wide variety of structural specimens and civil infrastructure (bridges). These 3D models were validated using traditional methods and proved to have consistent performance independent of the structure type (e.g., concrete, steel). Results show that <i>digital state models</i> have great potential within the structural engineering industry, both for laboratory tests and automated field inspections. Finally, the extensive datasets collected in this effort are shared for research development, eliminating the major obstacle to advancing this field: the absence of publicly available data sets.
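Change detection between a reference model and a later re-scan often reduces to cloud-to-cloud distances: for each point of the new scan, the distance to the nearest point of the reference. A brute-force sketch (the study's actual pipeline and units are not shown here; the 2 mm displacement is a hypothetical example):

```python
import numpy as np

def cloud_to_cloud_distances(ref, scan):
    """For each point in `scan`, the distance to its nearest neighbour
    in `ref` (brute force; a k-d tree would be used at real scale)."""
    d = np.linalg.norm(scan[:, None, :] - ref[None, :, :], axis=-1)
    return d.min(axis=1)

# flat reference surface and a re-scan where one point moved 2 mm outward
ref = np.array([[x, y, 0.0] for x in range(5) for y in range(5)])
scan = ref.copy()
scan[12, 2] += 2.0        # displaced point (hypothetical units: mm)
dists = cloud_to_cloud_distances(ref, scan)
print(round(dists.max(), 1))  # -> 2.0
```

Thresholding these distances yields a change map over the surface, which is how point-cloud-based change detection is commonly visualized.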
|
195 |
A century of landscape-level changes in the Bow watershed, Alberta, Canada, and implications for flood management. Taggart-Hodge, Tanya. 09 December 2016.
This study used a comparison of one hundred and forty-eight historical (1888-1913) and current (2008-2014) oblique photographs from thirty-two stations to identify land cover changes that have occurred in portions of the Bow and Elbow valleys as well as the surrounding Kananaskis Country region, and explored the implications of these changes for flooding and flood management. Forest cover was found to have increased drastically over the past century, particularly in the Bow valley, as did areas of direct human development. In the same period, grasslands increased in the Elbow valley but decreased in the Bow, while regenerating areas decreased uniformly throughout both valleys. An analysis of pre-flood (2008) and post-flood (2014) conditions demonstrated no change in coniferous forest cover in either valley over the 6-year period, but uncovered a decline in the broadleaf/mixedwood category of 20% in the Elbow and 3% in the Bow. The Elbow's channel zone was larger in 2014 than in 2008, whereas the extent of the Bow's channel zone remained constant; however, the bare exposed bars of both rivers increased substantially, most likely as a result of the 2013 flood. The major source of the water flows that contributed to the 2013 flood event originated in high-elevation rock and scree areas, which, unlike floodplains, are elements of the watershed that cannot be manipulated over time. Forest cover is generally expected to act as a buffer to floods; nevertheless, the 2013 flood event occurred despite a large increase in older forest stands across the study area. The final discussion includes recommendations for improving flood management in the area. / Graduate
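The cover-change percentages reported above come down to simple per-class arithmetic between two survey epochs. A sketch with hypothetical classified areas (the numbers below are illustrative, not the study's measurements):

```python
def cover_change(area_t1, area_t2):
    """Percent change per land-cover class between two survey epochs."""
    return {c: 100.0 * (area_t2[c] - area_t1[c]) / area_t1[c] for c in area_t1}

# hypothetical classified areas (hectares) for one valley
t1 = {"conifer": 400.0, "broadleaf": 100.0, "channel": 50.0}
t2 = {"conifer": 400.0, "broadleaf": 80.0, "channel": 50.0}
print(cover_change(t1, t2)["broadleaf"])  # -> -20.0
```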
|
196 |
Detection of deforestation hotspots in Borneo from 2000 to 2009 using MODIS images. Dorais, Alexis. 01 1900.
This work was carried out as part of a research program supported by the Social Sciences and Humanities Research Council of Canada. / Borneo's forests are priceless. Beyond the richness and diversity of their fauna and flora, their natural habitats constitute efficient carbon reservoirs. Unfortunately, the vast forests of the island are being rapidly cut down, both by the forestry industry and by the rapidly expanding oil palm industry. To better understand and monitor the phenomenon, we developed methods to detect deforestation and forest degradation. These methods must account for the peculiarities of the island: Borneo is affected by persistent cloud cover, which considerably complicates its observation from satellites. Despite these constraints, we produced an annual time series of deforestation and forest degradation hotspots for the years 2000 through 2009.
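One common way to flag deforestation hotspots from annual composites is to threshold the drop in a vegetation index between years, ignoring cloud-contaminated pixels. The sketch below uses a hypothetical NDVI-drop rule; the thesis' actual indices, compositing, and thresholds may differ.

```python
import numpy as np

def deforestation_hotspots(ndvi_t1, ndvi_t2, cloud_mask, drop=0.3):
    """Flag pixels whose annual NDVI composite dropped sharply,
    ignoring pixels flagged as cloudy in either year."""
    valid = ~cloud_mask
    return valid & ((ndvi_t1 - ndvi_t2) > drop)

ndvi_2000 = np.full((4, 4), 0.8)        # dense forest everywhere
ndvi_2001 = ndvi_2000.copy()
ndvi_2001[0, 0] = 0.2                   # cleared pixel
ndvi_2001[3, 3] = 0.2                   # cleared but cloudy -> ignored
clouds = np.zeros((4, 4), dtype=bool)
clouds[3, 3] = True
print(deforestation_hotspots(ndvi_2000, ndvi_2001, clouds).sum())  # -> 1
```

The cloud mask is exactly where Borneo's persistent cloudiness bites: pixels masked in too many years cannot be monitored, which motivates annual (rather than single-date) composites.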
|
197 |
Reforestation of Red Spruce (Picea rubens) on the Cheat Mountain Range, West Virginia. Madron, Justin. 29 April 2013.
The Cheat Mountain salamander (Plethodon nettingi) is a rare and endangered species that relies heavily on red spruce (Picea rubens) for habitat. P. rubens communities on the Cheat Mountain range in West Virginia have been disturbed by fires and logging, and regeneration of P. rubens stands is central to the survival of P. nettingi. Supervised and unsupervised landscape classifications of three Landsat images spanning the past 26 years were conducted to analyze change in P. rubens communities on the Cheat Mountain range. Change detection results revealed that, from 1986 to 2012, P. rubens stands showed 52% growth, 18% loss, and 29% no change. Because P. rubens stands are vital habitat for P. nettingi, their regrowth is critical to restoring the salamander's habitat on Cheat Mountain. Identifying critical forest as it relates to salamander habitat is essential for conservation efforts. Since not all P. rubens stands are of equal significance to P. nettingi, it is important to identify and map those that meet its stringent habitat needs, defined by forest fragmentation, aspect, slope, and lithology. I used spatial analysis and remote sensing techniques to define critical forest characteristics by applying a forest fragmentation model based on morphological image analysis, northeast and southwest aspects, moderate slopes, and limestone lithology. Patches were ranked with this quantitative model, and key P. rubens stands were identified using spatial statistics. The results could aid in prioritizing research areas as well as conservation planning for P. rubens and P. nettingi. In this study, the MaxEnt modeling framework was also used to predict habitat suitability for P. rubens under current conditions and under two future climate change scenarios. P. rubens distribution data were acquired from the U.S. Geological Survey. Both the IPCC A1B and A2 emission scenarios of the HadCM3 global circulation model were projected to the years 2040-2069 and 2070-2099. Results showed that a substantial decline in the suitability of future P. rubens habitat on Cheat Mountain is likely under both climate change scenarios, particularly at lower elevations. By the end of the century, P. rubens is likely to be extirpated from the Cheat Mountain range; the A1B and A2 scenarios predict average habitat suitabilities on Cheat Mountain of 0.0002 and 0.00004, respectively. Conservation and species migration efforts for P. rubens should be focused on areas such as Cheat Mountain to preserve this vital habitat.
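The growth/loss/unchanged percentages above are the entries of a post-classification change matrix between two classified maps. A minimal sketch with toy maps (0 = other, 1 = P. rubens; the fractions are illustrative):

```python
import numpy as np

def change_fractions(class_t1, class_t2, target):
    """Fractions of the scene where a target class appeared (growth),
    disappeared (loss), or persisted (stable) between two classified maps."""
    was, now = class_t1 == target, class_t2 == target
    n = class_t1.size
    return {"growth": (~was & now).sum() / n,
            "loss": (was & ~now).sum() / n,
            "stable": (was & now).sum() / n}

t1 = np.array([[1, 1, 0, 0], [0, 0, 0, 0]])
t2 = np.array([[1, 0, 1, 1], [0, 0, 0, 0]])
print(change_fractions(t1, t2, target=1))
# growth 2/8, loss 1/8, stable 1/8 of the scene
```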
|
198 |
Motion detection using a biological data-fusion model inspired by the human retina. Roux, Sylvain. 08 1900.
This master's thesis addresses motion detection in image sequences acquired by a fixed camera. The difficulty of this problem is that recurring or insignificant motions in the scene, such as oscillating branches, the shadow of an object, or ripples on a water surface, must be ignored and classified as belonging to the static regions of the scene. Most motion detection methods used to date rely on the low-level principle of modeling and then subtracting the background. These methods are simple and fast, but they reach their limits when the background is complex or noisy (snow, rain, shadows, etc.). This research proposes a technique for improving these algorithms whose main idea is to exploit and mimic two essential characteristics of the Human Visual System (HVS). To obtain a clear view of an object (whether static or moving) and then analyze and identify it, the eye does not scan the scene continuously but operates through a series of sweeps, or saccades, around characteristic points of the object. During each fixation, while the eye remains relatively still, the image is projected onto the retina and then interpreted in log-polar coordinates centered on the point fixed by the eye. Low-level motion detection should therefore operate on this transformed image, which is centered on a particular point of view of the scene. The next step, mimicking the trans-saccadic integration of the HVS, combines the motion detections obtained for the different centers of this transform, fusing the visual interpretations obtained from the different points of view.
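The two ingredients, classical background subtraction and log-polar resampling around a fixation point, can be sketched as follows. This is a simplified illustration, not the thesis model; the sampling resolution and threshold are arbitrary choices.

```python
import numpy as np

def logpolar_sample(img, center, n_r=16, n_theta=32, r_max=None):
    """Resample an image into log-polar coordinates around a fixation
    point, mimicking the retina's foveated sampling."""
    h, w = img.shape
    cy, cx = center
    if r_max is None:
        r_max = min(h, w) / 2
    rs = np.exp(np.linspace(0, np.log(r_max), n_r))       # log-spaced radii
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    ys = np.clip((cy + rs[:, None] * np.sin(thetas)).astype(int), 0, h - 1)
    xs = np.clip((cx + rs[:, None] * np.cos(thetas)).astype(int), 0, w - 1)
    return img[ys, xs]                                    # (n_r, n_theta) grid

def moving_mask(frame, background, thresh=20):
    """Classical background subtraction: absolute difference + threshold."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

bg = np.zeros((64, 64), dtype=np.uint8)
frame = bg.copy()
frame[30:34, 30:34] = 255                 # a small moving object
mask = moving_mask(frame, bg)
patch = logpolar_sample(mask.astype(float), center=(32, 32))
print(mask.sum(), patch.shape)  # -> 16 (16, 32)
```

In the proposed approach, detections computed in patches like this one, for several fixation centers, would then be fused, mimicking trans-saccadic integration.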
|
199 |
The Relationship between Near Shore Hardbottom Exposure and Benthic Community Composition and Distribution in Palm Beach County, FL. Cumming, Kristen A. 07 March 2017.
Anthropogenic changes to the landscape, storm events, and sea level rise are contributing to beach erosion, increasing the sediment load in near shore marine environments. Palm Beach, Florida hosts unique near shore hardbottom habitats. These areas are distinct from the vast expanses of surrounding sediments and play an important role as habitat and shelter for many different species. In this study, remotely sensed images from 2000-2015 were used to examine the movement of sediment, how it contributes to exposure rates of near shore hardbottom habitats in Palm Beach, Florida, and how these factors affect the benthic community.
GIS was used to determine areas of hardbottom with high exposure (exposed in >60% of aerial images), medium exposure (40-60%), and low exposure (<40%).
I aimed to determine whether a successional relationship of benthic communities in a dynamic environment can be detected with annual mapping. I also examined whether areas with higher exposure rates have more complex successional communities than those with lower exposure rates, and what implications this has for near shore benthic communities. In situ surveys conducted at 117 sites determined the community structure (corals, octocorals, macroalgae, and hydroids).
This study confirmed that periodic mapping, along with manual delineation, successfully identified hardbottom burials and exposures, which fluctuate both spatially and temporally between years and relate to benthic community differences. The near shore hardbottom coral reef communities aligned with the observed exposure categories, with greater coral species richness and more octocoral morphologies found at sites classified as highly exposed. Statistical analyses showed differences between communities shallower and deeper than three meters. Increasing the frequency of imagery capture and in situ observation would further improve our understanding of hardbottom exposure dynamics in relation to community structure.
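The exposure categories can be computed directly from a stack of binary exposure maps, one per image date, using the study's thresholds. A sketch for three pixels over ten dates:

```python
import numpy as np

def exposure_class(exposed_stack):
    """Classify hardbottom pixels by the fraction of images in which they
    appear exposed: >60% high, 40-60% medium, <40% low (study thresholds)."""
    rate = exposed_stack.mean(axis=0)          # fraction of dates exposed
    return np.select([rate > 0.6, rate >= 0.4], ["high", "medium"], "low")

# 10 binary exposure maps for 3 pixels (rows = dates, columns = pixels)
stack = np.array([[1, 1, 0]] * 5 + [[1, 0, 0]] * 5)   # rates 1.0, 0.5, 0.0
print(exposure_class(stack))  # -> ['high' 'medium' 'low']
```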
|
200 |
New statistical modeling of multi-sensor images with application to change detection. Prendes, Jorge. 22 October 2015.
/ Remote sensing images are images of the Earth's surface acquired from satellites or airborne equipment. These images are becoming widely available, and their sensor technology is evolving fast: classical sensors are improving in resolution and noise level, while new kinds of sensors are proving useful. Multispectral image sensors are standard nowadays, and synthetic aperture radar (SAR) images are very popular. The availability of different kinds of sensors is very advantageous, since it allows us to capture a wide variety of properties of the objects contained in a scene. These properties can be exploited to extract richer information about the objects. One of the main applications of remote sensing images is the detection of changes in multitemporal datasets (images of the same area acquired at different times). Change detection for images acquired by homogeneous sensors has been of interest for a long time; however, the wide range of sensors found in remote sensing makes the detection of changes in images acquired by heterogeneous sensors an interesting challenge. Accurate change detectors adapted to heterogeneous sensors are needed for the management of natural disasters. Databases of optical images are readily available for an extensive catalog of locations, but good weather conditions and daylight are required to capture them. On the other hand, SAR images can be captured quickly, regardless of weather conditions or the time of day. For these reasons, optical and SAR images are of specific interest for tracking natural disasters by detecting the changes before and after the event. The main interest of this thesis is to study statistical approaches to detect changes in images acquired by heterogeneous sensors. Chapter 1 presents an introduction to remote sensing images, briefly reviews the change detection methods proposed in the literature, and presents the motivation for detecting changes between heterogeneous sensors and its difficulties. Chapter 2 studies the statistical properties of co-registered images in the absence of change, in particular for optical and SAR images; a finite mixture model is proposed to describe the statistics of these images. The performance of classical statistical change detection methods is also studied under the proposed model, and in several situations these classical methods are found to fail. Chapter 3 studies the properties of the parameters of the proposed mixture model. We assume that the model parameters belong to a manifold in the absence of change, which is then used to construct a new similarity measure overcoming the limitations of classical statistical approaches. An approach to estimate the proposed similarity measure is described, and the resulting change detection strategy is validated on synthetic images and compared with previous strategies. Chapter 4 studies a Bayesian nonparametric algorithm to improve the estimation of the proposed similarity measure. This algorithm is based on a Chinese restaurant process and a Markov random field that takes advantage of the spatial correlation between adjacent pixels of the image. A new Jeffreys prior for the concentration parameter of this Chinese restaurant process is defined, and the model parameters are estimated using a collapsed Gibbs sampler. The proposed strategy is validated on synthetic images and compared with the previously proposed strategy. Finally, Chapter 5 is dedicated to the validation of the proposed change detection framework on real datasets, where encouraging results are obtained in all cases. Including the Bayesian nonparametric model in the change detection strategy improves change detection performance at the expense of an increased computational cost.
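A much-simplified version of the no-change statistical model can be sketched with a single Gaussian standing in for the mixture model: joint pixel pairs that fit the model estimated over unchanged areas poorly are flagged as changes. The data below are synthetic stand-ins for co-registered optical/SAR values.

```python
import numpy as np

def fit_no_change_model(pairs):
    """Fit a Gaussian to joint pixel values of co-registered images in the
    absence of change (a one-component stand-in for the finite mixture)."""
    mu = pairs.mean(axis=0)
    cov = np.cov(pairs.T)
    return mu, np.linalg.inv(cov)

def change_score(pairs, mu, cov_inv):
    """Mahalanobis distance of each joint pixel pair from the model:
    pairs that fit the no-change model poorly are likely changes."""
    d = pairs - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

rng = np.random.default_rng(1)
# correlated 'optical'/'SAR' responses over unchanged ground
train = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=2000)
mu, cov_inv = fit_no_change_model(train)
pairs = np.array([[0.5, 0.5], [3.0, -3.0]])   # second pair breaks the correlation
scores = change_score(pairs, mu, cov_inv)
print(scores[1] > scores[0])  # -> True
```

The thesis goes well beyond this sketch: a multi-component mixture, a manifold assumption on its parameters, and a Bayesian nonparametric estimator; the Gaussian here only illustrates the underlying "model no-change, flag outliers" logic.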
|