11 |
Dimensionality Reduction of Hyperspectral Imagery Using Random Projections
Menon, Vineetha, 09 December 2016
Hyperspectral imagery is often associated with high storage and transmission costs. Dimensionality reduction aims to reduce the time and space complexity of hyperspectral imagery by projecting data into a low-dimensional space such that all the important information in the data is preserved. Dimensionality-reduction methods based on transforms are widely used and give a data-dependent representation that is unfortunately costly to compute. Recently, there has been a growing interest in data-independent representations for dimensionality reduction; of particular prominence are random projections which are attractive due to their computational efficiency and simplicity of implementation. This dissertation concentrates on exploring the realm of computationally fast and efficient random projections by considering projections based on a random Hadamard matrix. These Hadamard-based projections are offered as an alternative to more widely used random projections based on dense Gaussian matrices. Such Hadamard matrices are then coupled with a fast singular value decomposition in order to implement a two-stage dimensionality reduction that marries the computational benefits of the data-independent random projection to the structure-capturing capability of the data-dependent singular value transform. Finally, random projections are applied in conjunction with nonnegative least squares to provide a computationally lightweight methodology for the well-known spectral-unmixing problem. Overall, it is seen that random projections offer a computationally efficient framework for dimensionality reduction that permits hyperspectral-analysis tasks such as unmixing and classification to be conducted in a lower-dimensional space without sacrificing analysis performance while reducing computational costs significantly.
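The computational appeal of data-independent projections is easy to illustrate. The sketch below (NumPy; all sizes and data are synthetic, and a dense Gaussian matrix stands in for the Hadamard-based construction described above) shows a random projection approximately preserving pairwise distances:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a hyperspectral cube: 500 pixels x 200 spectral bands.
n_pixels, n_bands, k = 500, 200, 32
X = rng.normal(size=(n_pixels, n_bands))

# Data-independent dimensionality reduction: project onto k random
# directions. A dense Gaussian matrix scaled by 1/sqrt(k) approximately
# preserves pairwise distances (Johnson-Lindenstrauss); the Hadamard-based
# variant replaces it with a faster structured matrix.
R = rng.normal(size=(n_bands, k)) / np.sqrt(k)
Y = X @ R                      # reduced representation, 500 x 32

# Check that the distance between two pixels roughly survives the projection.
d_orig = np.linalg.norm(X[0] - X[1])
d_proj = np.linalg.norm(Y[0] - Y[1])
ratio = d_proj / d_orig        # close to 1 for moderate k
```

Because distances are approximately preserved, distance-based analysis tasks such as classification and unmixing can be run in the 32-dimensional space instead of the 200-band original.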
|
12 |
Land Cover Quantification Using Autoencoder-Based Unsupervised Deep Learning
Manjunatha Bharadwaj, Sandhya, 27 August 2020
This work aims to develop a deep learning model for land cover quantification through hyperspectral unmixing with an unsupervised autoencoder. Land cover identification and classification is instrumental in urban planning, environmental monitoring and land management. With the technological advancements in remote sensing, hyperspectral imagery, which captures high-resolution images of the Earth's surface across hundreds of wavelength bands, is becoming increasingly popular. The rich spectral information in these images can be analyzed to identify the various target materials present in the image scene based on their unique reflectance patterns. An autoencoder is a deep learning model that can perform spectral unmixing by decomposing the complex image spectra into their constituent materials and estimating their abundance compositions. The advantage of using this technique for land cover quantification is that it is completely unsupervised and eliminates the need for labelled data, which generally requires years of field survey and the formulation of detailed maps. We evaluate the performance of the autoencoder on various synthetic and real hyperspectral images consisting of different land covers using similarity metrics and abundance maps. The scalability of the technique with respect to landscapes is assessed by evaluating its performance on hyperspectral images spanning 100 m x 100 m, 200 m x 200 m, 1000 m x 1000 m, 4000 m x 4000 m and 5000 m x 5000 m regions. Finally, we analyze the performance of this technique by comparing it to several supervised learning methods, such as Support Vector Machine (SVM), Random Forest (RF) and the multilayer perceptron, using F1-score, precision and recall, and to other unsupervised techniques, such as K-Means, N-Findr and VCA, using cosine similarity, mean square error and estimated abundances.
The land cover classification obtained using this technique is compared to the existing United States National Land Cover Database (NLCD) classification standard.

General audience abstract (Master of Science): This work aims to develop an automated deep learning model for identifying and estimating the composition of the different land covers in a region using hyperspectral remote sensing imagery. With the technological advancements in remote sensing, hyperspectral imagery, which captures high-resolution images of the Earth's surface across hundreds of wavelength bands, is becoming increasingly popular. As every surface has a unique reflectance pattern, the rich spectral information contained in these images can be analyzed to identify the various target materials present in the image scene. An autoencoder is a deep learning model that can perform spectral unmixing by decomposing the complex image spectra into their constituent materials and estimating their percent compositions. The advantage of this method for land cover quantification is that it is an unsupervised technique that does not require labelled data, which generally requires years of field survey and the formulation of detailed maps. The performance of this technique is evaluated on various synthetic and real hyperspectral datasets consisting of different land covers. We assess the scalability of the model by evaluating its performance on images spanning from a few hundred square meters to thousands of square meters. Finally, we compare the performance of the autoencoder-based approach with other supervised and unsupervised deep learning techniques and with the current land cover classification standard.
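A full autoencoder is beyond a short sketch, but the linear mixing model its decoder learns, and the constrained abundance estimation at its core, can be illustrated. The following NumPy sketch uses invented spectra and a simple projected-gradient solver (all values are illustrative, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic "endmember" spectra (e.g. soil and vegetation), 50 bands.
bands = np.linspace(0, 1, 50)
E = np.stack([np.exp(-bands), 0.5 + 0.4 * np.sin(3 * bands)], axis=1)  # 50 x 2

# Linear mixing model: a pixel is a nonnegative, sum-to-one combination
# of endmembers plus noise. True abundances: 70 % / 30 %.
a_true = np.array([0.7, 0.3])
pixel = E @ a_true + rng.normal(scale=0.001, size=50)

# Constrained least squares via projected gradient descent:
# minimize ||E a - pixel||^2 subject to a >= 0 and sum(a) = 1.
a = np.full(2, 0.5)
step = 1.0 / np.linalg.norm(E.T @ E, 2)
for _ in range(2000):
    a -= step * (E.T @ (E @ a - pixel))   # gradient step
    a = np.clip(a, 0, None)               # nonnegativity
    a /= a.sum()                          # sum-to-one

# a now approximates (0.7, 0.3)
```

In the autoencoder formulation, the decoder weights play the role of `E` and the bottleneck activations play the role of `a`, learned jointly rather than with `E` fixed as here.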
|
13 |
Deconvolution and Separation of Hyperspectral Images: Applications to Microscopy
Henrot, Simon, 27 November 2013
Hyperspectral imaging refers to the acquisition of spatial images at many spectral bands, e.g. in microscopy. Processing such data is often challenging because of the blur introduced by the observation system, mathematically expressed as a convolution; a deconvolution step is thus necessary to restore the original image.
Image restoration falls into the class of inverse problems, as opposed to the direct problem, which consists in modeling the image degradation process and is treated in part 1 of the thesis. Another inverse problem with many applications in hyperspectral imaging consists in extracting the pure materials making up the image, called endmembers, together with their fractional contributions to the data, called abundances. This problem is termed spectral unmixing, and its resolution accounts for the nonnegativity of the endmembers and abundances. Part 2 presents algorithms designed to efficiently solve the hyperspectral image restoration problem, formulated as the minimization of a penalized criterion. The methods are based on a common framework that accounts for several a priori assumptions on the solution, including a nonnegativity constraint and the preservation of edges in the image. The performance of the proposed algorithms is demonstrated on fluorescence confocal images of bacterial biosensors. Part 3 deals with the spectral unmixing problem from a geometrical viewpoint. A sufficient condition on the abundance coefficients for the identifiability of the endmembers is proposed. We derive and study a joint observation and mixing model, and demonstrate the interest of performing deconvolution as a prior step to spectral unmixing on confocal Raman microscopy data.
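The direct/inverse problem pair described above can be sketched in a few lines. In this illustration a 1-D signal stands in for one image line, the blur is a known Gaussian point-spread function, and a Wiener-style filter (a simple quadratically regularized inverse, not the thesis' edge-preserving method) restores the signal:

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D stand-in for one spatial line of a microscopy image.
n = 256
x = np.zeros(n)
x[[60, 130, 200]] = [1.0, 3.0, 1.5]          # "true" scene: isolated peaks

# Direct problem: circular convolution with a Gaussian point-spread
# function, plus a little noise.
t = np.arange(n)
psf = np.exp(-0.5 * ((t - n // 2) / 2.0) ** 2)
psf /= psf.sum()
psf = np.roll(psf, -n // 2)                  # center the PSF at index 0
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf)))
y += rng.normal(scale=1e-3, size=n)

# Inverse problem: Wiener-style deconvolution; eps plays the role of a
# simple regularization weight.
H = np.fft.fft(psf)
eps = 1e-3
x_hat = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(H) / (np.abs(H) ** 2 + eps)))
```

The restored signal sharpens the blurred peaks back toward their true locations; replacing the quadratic penalty with an edge-preserving one is what the penalized-criterion approach above refines.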
|
14 |
Accelerated Hyperspectral Unmixing with Endmember Variability via the Sum-Product Algorithm
Puladas, Charan, 26 May 2016
No description available.
|
15 |
Chemical Identification Under a Poisson Model for Raman Spectroscopy
Palkki, Ryan D., 14 November 2011
Raman spectroscopy provides a powerful means of chemical identification in a variety of fields, partly because of its non-contact nature and the speed at which measurements can be taken. The development of powerful, inexpensive lasers and sensitive charge-coupled device (CCD) detectors has led to widespread use of commercial and scientific Raman systems. However, relatively little work has been done developing physics-based probabilistic models for Raman measurement systems and crafting inference algorithms within the framework of statistical estimation and detection theory.
The objective of this thesis is to develop algorithms and performance bounds for the identification of chemicals from their Raman spectra. First, a Poisson measurement model based on the physics of a dispersive Raman device is presented. The problem is then expressed as one of deterministic parameter estimation, and several methods are analyzed for computing the maximum-likelihood (ML) estimates of the mixing coefficients under our data model. The performance of these algorithms is compared against the Cramer-Rao lower bound (CRLB).
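Under a Poisson measurement model like the one described above, ML mixing coefficients can be computed with classic multiplicative (EM, Richardson-Lucy-type) updates. The sketch below uses synthetic Gaussian-shaped signatures and illustrates one of several possible ML schemes, not necessarily the thesis' preferred algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

# Reference library: two known Raman-like signatures over 100 bins.
bins = np.arange(100)
S = np.stack([np.exp(-0.5 * ((bins - 30) / 5.0) ** 2),
              np.exp(-0.5 * ((bins - 70) / 5.0) ** 2)], axis=1)

# Poisson measurement model: photon counts with mean S @ a_true.
a_true = np.array([400.0, 150.0])
y = rng.poisson(S @ a_true)

# Multiplicative EM updates for the ML mixing coefficients under the
# Poisson likelihood: a_j <- a_j * [S^T (y / (S a))]_j / sum_i S_ij.
a = np.array([1.0, 1.0])
col = S.sum(axis=0)
for _ in range(500):
    a *= S.T @ (y / (S @ a)) / col

# a approaches a_true up to Poisson noise.
```

Each update is guaranteed to keep the coefficients nonnegative and to increase the Poisson likelihood, which is why variants of this iteration appear throughout photon-counting inference.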
Next, the Raman detection problem is formulated as one of multiple hypothesis detection (MHD), and an approximation to the optimal decision rule is presented. The resulting approximations are related to the minimum description length (MDL) approach to inference.
In our simulations, this method is seen to outperform two common general detection approaches, the spectral unmixing approach and the generalized likelihood ratio test (GLRT). The MHD framework is applied naturally to both the detection of individual target chemicals and to the detection of chemicals from a given class.
The common, yet vexing, scenario is then considered in which chemicals are present that are not in the known reference library. A novel variation of nonnegative matrix factorization (NMF) is developed to address this problem. Our simulations indicate that this algorithm gives better estimation performance than the standard two-stage NMF approach and the fully supervised approach when there are chemicals present that are not in the library. Finally, estimation algorithms are developed that take into account errors that may be present in the reference library. In particular, an algorithm is presented for ML estimation under a Poisson errors-in-variables (EIV) model. It is shown that this same basic approach can also be applied to the nonnegative total least squares (NNTLS) problem.
Most of the techniques developed in this thesis are applicable to other problems in which an object is to be identified by comparing some measurement of it to a library of known constituent signatures.
|
16 |
Methodological Developments for Mapping Soil Constituents Using Imaging Spectroscopy
Bayer, Anita, January 2012
Climatic variations and human activity cause land cover changes, now and increasingly in the future, and thereby perturb the terrestrial carbon reservoirs in vegetation, soil and detritus. Optical remote sensing, and in particular Imaging Spectroscopy, has shown the potential to quantify land surface parameters over large areas by taking advantage of the characteristic interactions of incident radiation with the physico-chemical properties of a material.
The objective of this thesis is to quantify key soil parameters, including soil organic carbon, using field and Imaging Spectroscopy. Organic carbon, iron oxides and clay content are selected to be analyzed to provide indicators for ecosystem function in relation to land degradation, and additionally to facilitate a quantification of carbon inventories in semiarid soils. The semiarid Albany Thicket Biome in the Eastern Cape Province of South Africa is chosen as study site. It provides a regional example for a semiarid ecosystem that currently undergoes land changes due to unadapted management practices and furthermore has to face climate change induced land changes in the future.
The thesis is divided into three methodological steps. Based on reflectance spectra measured in the field and chemically determined constituents of the upper topsoil, physically based models are developed to quantify soil organic carbon, iron oxides and clay content. Taking account of the benefits and limitations of existing methods, the approach is based on the direct application of known diagnostic spectral features and their combination with multivariate statistical approaches. It benefits from the collinearity of several diagnostic features, and from a number of their properties, to reduce signal disturbances caused by the influence of other spectral features.
In a following step, the acquired hyperspectral image data are prepared for an analysis of soil constituents. The data show a large spatial heterogeneity caused by the patchiness of the natural vegetation in the study area, a patchiness inherent to most semiarid landscapes. Spectral mixture analysis is performed to deconvolve non-homogeneous pixels into their constituent components. For soil-dominated pixels, the subpixel information is used to remove the spectral influence of vegetation and to approximate the pure spectral signature of the soil. This step is essential when working in natural, non-agricultural areas where pure bare-soil pixels are rare. It is identified as the largest benefit of the multi-stage methodology, providing the basis for a successful and unbiased prediction of soil constituents from hyperspectral imagery. With the proposed approach it is possible (1) to significantly increase the spatial extent of derived soil-constituent information, covering areas with about 40 % vegetation coverage, and (2) to reduce to a minimum the influence of materials such as vegetation on the quantification of soil constituents.
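The vegetation-removal step can be illustrated with a two-endmember spectral mixture analysis on a synthetic pixel with about 40 % vegetation cover (all spectra below are invented for illustration, not measured data):

```python
import numpy as np

# Two illustrative endmember spectra over 60 bands.
bands = np.linspace(0.4, 2.5, 60)                        # wavelength, micrometres
veg = 0.1 + 0.4 / (1 + np.exp(-20 * (bands - 0.72)))     # red-edge-like shape
soil = 0.15 + 0.12 * bands                               # brightening soil continuum

# Mixed pixel: 40 % vegetation, 60 % soil.
f_veg = 0.4
pixel = f_veg * veg + (1 - f_veg) * soil

# Spectral mixture analysis: least-squares estimate of the fractions...
A = np.stack([veg, soil], axis=1)
f_hat, *_ = np.linalg.lstsq(A, pixel, rcond=None)

# ...then remove the vegetation contribution to approximate the pure
# soil signature, on which the soil prediction models can be applied.
soil_hat = (pixel - f_hat[0] * veg) / f_hat[1]
```

In the noise-free case the recovered soil spectrum is exact; in real imagery the quality of this approximation is what limits the 40 % coverage threshold mentioned above.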
Subsequently, soil parameter quantities are predicted by applying the feature-based soil prediction models to the maps of locally approximated soil signatures. Thematic maps showing the spatial distribution of the three considered soil parameters in October 2009 are produced for the Albany Thicket Biome of South Africa. The maps are evaluated for their potential to detect erosion-affected areas as effects of land changes and to identify degradation hot spots in order to support local restoration efforts. A regional validation, carried out using available ground truth sites, suggests that remaining factors disturb the correlation of spectral characteristics and chemical soil constituents.
The approach is developed for semiarid areas in general and not adapted to specific conditions in the study area. All processing steps of the developed methodology are implemented in software modules, where crucial steps of the workflow are fully automated. The transferability of the methodology is shown for simulated data of the future EnMAP hyperspectral satellite. Soil parameters are successfully predicted from these data despite intense spectral mixing within the lower spatial resolution EnMAP pixels.
This study shows an innovative approach to use Imaging Spectroscopy for mapping key soil constituents, including soil organic carbon, over large areas in a non-agricultural ecosystem and under partial vegetation coverage. It can contribute to a better assessment of soil constituents that describe ecosystem processes relevant to detecting and monitoring land changes. The maps further provide an assessment of the current carbon inventory in soils, valuable for carbon balances and carbon mitigation projects.
|
17 |
Satellite Estimates of Tree and Grass Cover Using MODIS Vegetation-Indices and ASTER Surface-Reflectance
Gill, Tony (date unknown)
No description available.
|
18 |
Endmember Variability in Hyperspectral Image Unmixing
Drumetz, Lucas, 25 October 2016
The fine spectral resolution of hyperspectral remote sensing images allows an accurate analysis of the imaged scene, but due to their limited spatial resolution, a pixel acquired by the sensor is often a mixture of the contributions of several materials. Spectral unmixing aims at estimating the spectra of the pure materials (called endmembers) in the scene, and their abundances in each pixel.
The endmembers are usually assumed to be perfectly represented by a single spectrum, which is wrong in practice since each material exhibits a significant intra-class variability. This thesis aims at designing unmixing algorithms that better handle this phenomenon. First, we perform the unmixing locally, in well-chosen regions of the image where variability effects are less important, and automatically discard wrongly estimated local endmembers using collaborative sparsity. In another approach, we refine the abundance estimation of the materials by taking into account the group structure of an image-derived endmember dictionary. Second, we introduce an extended linear mixing model, based on physical considerations, that models spectral variability in the form of scaling factors, and we develop optimization algorithms to estimate its parameters. This model provides easily interpretable results and outperforms other state-of-the-art approaches. We finally investigate two applications of this model to confirm its relevance.
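A deliberately simplified variant of the extended linear mixing model, with a single brightness scaling factor per pixel (the thesis' model is richer, allowing one scaling factor per endmember per pixel), shows why scaling-type variability remains identifiable when abundances sum to one:

```python
import numpy as np

# Two reference endmembers over 40 bands (synthetic).
bands = np.linspace(0, 1, 40)
E = np.stack([0.2 + 0.6 * bands, 0.8 - 0.5 * bands], axis=1)

# Extended-linear-mixing-style pixel: a global scaling factor psi
# (e.g. illumination variability) multiplies the mixture.
psi_true, a_true = 0.8, np.array([0.35, 0.65])
x = psi_true * (E @ a_true)

# Because abundances sum to one, the product c = psi * a is recoverable
# by plain least squares, and psi is then simply its sum.
c, *_ = np.linalg.lstsq(E, x, rcond=None)
psi_hat = c.sum()
a_hat = c / psi_hat
```

Without the sum-to-one constraint, psi and the abundances could trade off freely; the constraint is what breaks the scaling ambiguity in this simplified setting.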
|
19 |
Methods for the Analysis of Extragalactic MUSE Deep Fields: Hyperspectral Unmixing and Data Fusion; Detection of Extended Sources with Large-Scale Inference
Bacher, Raphael, 08 November 2017
This work takes place in the context of the study of hyperspectral deep fields produced by the European 3D spectrograph MUSE. These fields allow us to explore the young, remote Universe and to study the physical and chemical properties of the first galactic and extragalactic structures.
The first part of the thesis deals with the estimation of a spectral signature for each galaxy. As MUSE is a ground-based instrument, atmospheric turbulence strongly degrades its spatial resolution power, generating spectral mixing of multiple sources. To overcome this issue, data fusion approaches based on a linear mixing model and complementary data from the Hubble Space Telescope are proposed, allowing the spectral separation of the sources.
The second goal of this thesis is to detect the Circum-Galactic Medium (CGM). The CGM, formed of clouds of gas surrounding some galaxies, is characterized by a spatially extended, faint spectral signature. To detect this kind of signal, a hypothesis testing approach is proposed, based on a max-test strategy over a dictionary, with the test statistic learned from the data. This method is then extended to better take into account the spatial structure of the targets, improving the detection power while still ensuring global error control.
All these developments are integrated into the software library of the MUSE consortium so that they can be used by the astrophysical community. Moreover, although these works are particularly suited to MUSE data, they can be extended to other application fields that need faint extended-source detection and source separation methods.
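The max-test strategy with a data-learned threshold can be sketched as follows (synthetic data; the actual method additionally exploits the spatial structure of the sources and controls errors globally):

```python
import numpy as np

rng = np.random.default_rng(4)

n, n_atoms = 50, 20
# Dictionary of unit-norm candidate spectral signatures.
D = rng.normal(size=(n, n_atoms))
D /= np.linalg.norm(D, axis=0)

def max_test(y, D):
    """Max-test statistic: best match between y and any dictionary atom."""
    return np.max(D.T @ y)

# Learn the test threshold from noise-only draws (empirical null
# distribution); here the 99th percentile under H0.
null_stats = [max_test(rng.normal(size=n), D) for _ in range(2000)]
threshold = np.quantile(null_stats, 0.99)

# A weak signature matching one atom is detected when its statistic
# exceeds the learned threshold.
y = 6.0 * D[:, 3] + rng.normal(size=n)
detected = max_test(y, D) > threshold
```

Learning the null distribution empirically, rather than assuming a closed form, is what makes the threshold robust to the correlated dictionary atoms.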
|
20 |
Particle Swarm Optimization Methods for Pattern Recognition and Image Processing
Omran, Mahamed G.H., 17 February 2005
The objective of pattern recognition is to classify objects into different categories and classes; it is a fundamental component of artificial intelligence and computer vision. This thesis investigates the application of an efficient optimization method, known as Particle Swarm Optimization (PSO), to the field of pattern recognition and image processing. First, a clustering method based on PSO is proposed, and its application to the problem of unsupervised classification and segmentation of images is investigated. A new automatic image generation tool, tailored specifically for the verification and comparison of various unsupervised image classification algorithms, is then developed. Next, a dynamic clustering algorithm is developed which automatically determines the "optimum" number of clusters and simultaneously clusters the data set with minimal user interference. Finally, PSO-based approaches are proposed to tackle the color image quantization and spectral unmixing problems. In all the proposed approaches, the influence of the PSO parameters on the performance of the proposed algorithms is evaluated. / Thesis (PhD), Computer Science, University of Pretoria, 2006.
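A minimal global-best PSO, of the kind underlying the clustering and unmixing approaches above, can be sketched as follows (the sphere function stands in for a clustering criterion; the inertia and acceleration weights are standard textbook choices, not necessarily the thesis'):

```python
import numpy as np

rng = np.random.default_rng(5)

def sphere(x):
    """Objective to minimize; stands in for a clustering quantization error."""
    return np.sum(x ** 2, axis=-1)

# Minimal global-best PSO with common inertia/cognitive/social weights.
n_particles, dim, iters = 30, 5, 200
w, c1, c2 = 0.72, 1.49, 1.49

pos = rng.uniform(-5, 5, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), sphere(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    # Velocity update: inertia + pull toward personal and global bests.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    val = sphere(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)].copy()
```

For the clustering application, each particle would instead encode a set of candidate cluster centroids and the objective would score the resulting partition of the image pixels.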
|