11 |
Estimação dos parâmetros do kernel em um classificador SVM na classificação de imagens hiperespectrais em uma abordagem multiclasse / Estimation of the kernel parameters of an SVM classifier for hyperspectral image classification in a multiclass approach. Bonesso, Diego, January 2013
In this dissertation we investigate and test a methodology to optimize the kernel parameters in a Support Vector Machines (SVM) classifier. Experiments were carried out using remote sensing high-dimensional image data. High-dimensional image data open new possibilities in the classification of remote sensing images covering natural scenes. It is well known that classes that are spectrally very similar, i.e., classes that show very similar mean vectors, can nonetheless be separated with a high degree of accuracy in high-dimensional spaces, provided that their covariance matrices differ significantly. The use of high-dimensional image data may, however, present some drawbacks when applied to parametric classifiers such as the Gaussian Maximum Likelihood classifier. As the data dimensionality increases, so does the number of parameters to be estimated from a generally limited number of training samples. This results in unreliable parameter estimates, which in turn result in low accuracy in the classified image. Several approaches have been proposed in the literature to minimize this problem, and non-parametric classifiers may provide a sensible way to overcome it. Support Vector Machines have more recently been investigated for the classification of high-dimensional image data with a limited number of training samples. To this end, a proper kernel function has to be implemented in the SVM classifier and its parameters selected properly. The RBF kernel has frequently been reported in the literature as providing good results in the classification of remotely sensed data. In this case, two parameters must be chosen for the SVM classification: (1) the margin parameter (C), which determines the trade-off between maximizing the margin and minimizing the classification error, and (2) the parameter that controls the radius of the RBF kernel. These two parameters can be seen as defining a search space, and the problem consists in finding the optimal point that maximizes the accuracy of the SVM classifier. The grid search approach is based on an exhaustive exploration of this search space; it is prohibitively time-consuming and is used only for comparative purposes. In practice, heuristic methods are the most commonly used approaches, providing acceptable levels of accuracy and computing time. In the literature, several heuristic methods are applied to the classification problem in a global fashion, i.e., the selected parameter values are applied to the entire classification process. This procedure, however, does not take into consideration the diversity of the classes present in the data. In this dissertation we investigate the application of Simulated Annealing to a multiclass problem using the SVM classifier structured as a binary tree. Following this approach, the parameters are estimated at every node of the binary tree, resulting in better accuracy and a reasonable computing time. Experiments are carried out using a hyperspectral image data set covering a test area with very reliable ground control available.
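The abstract describes the method but includes no code. The following is a minimal sketch, using scikit-learn's SVC on a generic built-in dataset as a stand-in for hyperspectral training samples, of how a simulated-annealing search over the margin parameter C and the RBF width gamma might look. The search bounds, cooling schedule, and step size are illustrative assumptions, not the values used in the dissertation.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)  # stand-in for hyperspectral training samples

def accuracy(log_c, log_gamma):
    """Cross-validated accuracy of an RBF SVM for one (C, gamma) pair."""
    clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma, kernel="rbf")
    return cross_val_score(clf, X, y, cv=3).mean()

def simulated_annealing(n_iter=40, t0=1.0, cooling=0.95, step=0.5):
    """Search log10(C) in [-2, 4] and log10(gamma) in [-5, 1]."""
    state = np.array([1.0, -2.0])          # initial (log C, log gamma)
    best = state.copy()
    f_state = f_best = accuracy(*state)
    temp = t0
    for _ in range(n_iter):
        cand = state + rng.normal(scale=step, size=2)
        cand = np.clip(cand, [-2.0, -5.0], [4.0, 1.0])
        f_cand = accuracy(*cand)
        # accept better states always, worse states with Boltzmann probability
        if f_cand > f_state or rng.random() < np.exp((f_cand - f_state) / temp):
            state, f_state = cand, f_cand
            if f_state > f_best:
                best, f_best = state.copy(), f_state
        temp *= cooling
    return 10.0 ** best, f_best

(best_C, best_gamma), best_acc = simulated_annealing()
print(f"C={best_C:.3g}, gamma={best_gamma:.3g}, cv accuracy={best_acc:.3f}")
```

In the multiclass binary-tree approach described above, a search of this kind would be repeated at each node of the tree, so that every binary split gets its own (C, gamma) pair.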
|
13 |
Determinação de aditivos detergentes dispersantes em gasolina utilizando a técnica do ring-oven e imagens hiperespectrais na região do infravermelho próximo / Determination of detergent dispersant additives in gasoline using the ring-oven technique and hyperspectral images in the near-infrared region. BRITO, Lívia Rodrigues e, 25 August 2014
The addition of detergent dispersant additives to Brazilian gasoline will be mandatory from July 2015. It is therefore necessary to develop a methodology that allows these additives to be quantified in order to verify compliance with the law. In this work, a method that combines the ring-oven technique with near-infrared hyperspectral imaging (NIR-HI) is proposed. Because the additives are added at low concentrations, the ring-oven technique was employed to concentrate them prior to the NIR-HI analysis. Rings were produced from samples of regular gasoline spiked with the additives (called G, T, W and Y) provided by the National Agency of Petroleum, Natural Gas and Biofuels (ANP), and the images were acquired using a hyperspectral camera (SisuCHEMA). Three strategies for extracting the ring spectra were tested in order to select the fastest and most objective one. The chosen strategy is based on the histograms of the first-principal-component scores of the images analyzed individually. Individual calibration models for each additive were built using partial least squares (PLS) regression, so a prior classification stage was necessary. The best classification result was obtained using linear discriminant analysis (LDA) combined with a genetic algorithm (GA) for variable selection, which gave a correct classification rate of 92.31%. Most of the misclassification errors involved samples of the G and W additives. A single regression model was therefore built for these two additives, and its error was equivalent to that of the individual models. The regression models showed average prediction errors between 2 and 15%. These results show that the proposed methodology can be used to determine the additive concentrations reliably and to ensure that they are being added according to the law.
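No code accompanies the abstract; the sketch below illustrates, on synthetic data, the two steps it describes: selecting ring pixels from the histogram of first-principal-component scores and then calibrating a PLS model from mean ring spectra to additive concentration. The ring geometry, band profile, score percentile, and concentration values are all assumptions made for the example.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)

# Synthetic stand-in for a NIR hyperspectral image of one ring: 64x64 pixels, 200 bands.
n_rows, n_cols, n_bands = 64, 64, 200
cube = rng.normal(scale=0.01, size=(n_rows, n_cols, n_bands))
rr, cc = np.meshgrid(np.arange(n_rows), np.arange(n_cols), indexing="ij")
ring = np.abs(np.hypot(rr - 32, cc - 32) - 20) < 2                 # ring-shaped deposit
band_profile = np.exp(-((np.arange(n_bands) - 120) / 15.0) ** 2)   # additive absorption band
cube[ring] += 0.5 * band_profile

# Score of the first principal component for every pixel (sign is arbitrary, so use |score|).
pixels = cube.reshape(-1, n_bands)
scores = np.abs(PCA(n_components=1).fit_transform(pixels)).ravel()

# Select ring pixels from the tail of the score histogram (the percentile is an assumption).
ring_mask = scores > np.percentile(scores, 90)
ring_spectrum = pixels[ring_mask].mean(axis=0)      # one mean spectrum per ring/sample

# With rings of known additive content, a PLS model maps mean spectra to concentration.
concentrations = rng.uniform(100, 500, size=20)     # synthetic reference values
X_train = np.vstack([(c / 500.0) * band_profile + rng.normal(scale=0.005, size=n_bands)
                     for c in concentrations])
pls = PLSRegression(n_components=5).fit(X_train, concentrations)
print("Predicted concentration:", float(pls.predict(ring_spectrum.reshape(1, -1)).ravel()[0]))
```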
|
14 |
Investigação do uso de imagens de sensor de sensoriamento remoto hiperespectral e com alta resolução espacial no monitoramento da condição de uso de pavimentos rodoviários. / Investigation of the use of hyperspectral and high spatial resolution remote sensing images in pavement surface condition monitoring. Marcos Ribeiro Resende, 24 September 2010
According to the Statistical Survey of Land Transportation AETT (2008) of the National Agency of Land Transportation (ANTT), Brazil has 211,678 kilometers of paved roads in its territory. The pavement Present Serviceability Ratio (PSR) decreases over time due to two main factors: traffic and weather (BERNUCCI et al., 2008). Monitoring the condition of all Brazilian roads is an expensive and time-consuming task. The investigation of new techniques that allow a quick and automatic survey of pavement condition is part of this research. In recent years, an increasing number of images with high spatial resolution have appeared on the world market with the advent of new remote sensing satellites and airborne sensors. Similarly, multispectral and even hyperspectral images have become available commercially and for scientific research. Hyperspectral images from a digital airborne sensor are used in this work. A methodology for the automatic identification of asphalt pavement and for the classification of the main asphalt defects has been developed. The first step of the methodology is the identification of asphalt in the image, using a hybrid classification that is initially pixel-based and then refined by objects; using this approach it was possible to extract the asphalt information from the available images. The second step is the identification and classification of the main defects of flexible pavement surfaces that are observable in high spatial resolution imagery. This step makes intensive use of new object-based image classification techniques. The final result is the generation of pavement surface condition indices from the images that can be compared with the pavement surface quality indicators already standardized by the regulatory agencies in the country.
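The thesis itself does not publish code; the following is a rough sketch of the hybrid pixel-then-object idea on synthetic data, assuming scikit-learn and scikit-image. The per-pixel random forest, the connected-component grouping, and the object size threshold are stand-ins for the actual pixel and object-based classifiers used in the work.

```python
import numpy as np
from skimage.measure import label
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Synthetic stand-in: a 100x100 scene with 30 bands and a road running through it.
rows, cols, bands = 100, 100, 30
scene = rng.normal(size=(rows, cols, bands))
road = np.zeros((rows, cols), dtype=bool)
road[45:55, :] = True                          # horizontal "asphalt" strip
scene[road] += np.linspace(1.0, 2.0, bands)    # asphalt-like spectral offset

# Stage 1: pixel-based classification from a handful of labeled training pixels.
train_idx = rng.choice(rows * cols, size=500, replace=False)
X_train = scene.reshape(-1, bands)[train_idx]
y_train = road.ravel()[train_idx]
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
pixel_map = clf.predict(scene.reshape(-1, bands)).reshape(rows, cols)

# Stage 2: object-based refinement -- group pixels into connected objects and keep
# only objects large enough to be road segments (the size threshold is an assumption).
objects = label(pixel_map)
refined = np.zeros((rows, cols), dtype=bool)
for obj_id in range(1, objects.max() + 1):
    obj = objects == obj_id
    if obj.sum() >= 50:
        refined |= obj

print("pixel-stage asphalt pixels:", int(pixel_map.sum()))
print("object-stage asphalt pixels:", int(refined.sum()))
```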
|
15 |
Monitorování chemických parametrů povrchových důlních vod z hyperspektrálních obrazových dat / Monitoring of chemical parameters of mining waters from hyperspectral image data. Hladíková, Lenka, January 2012
The thesis deals with the utilization of hyperspectral image data for mining water quality monitoring. The Sokolov lignite basin, which faces many environmental problems caused by brown coal mining activities, is the area of interest. Airborne hyperspectral image data acquired by the HyMap sensor in 2009 and 2010, together with ground truth data (chemical and physical parameters of water samples), are the main data sources for the thesis. The practical part aims at estimating the amount of dissolved iron and suspended sediments in selected water bodies. Two approaches were used to achieve this goal: an empirically derived relationship between the ground measurements and the reflectance of the water bodies, and a spectral unmixing method. A comparison of the two approaches and an evaluation of whether the proposed methods remain valid for data acquired by the same sensor one year later are also part of this thesis.
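The two approaches named in the abstract lend themselves to short illustrations. The sketch below, using entirely synthetic reflectance and concentration values, shows (a) an empirical regression between ground-measured dissolved iron and water-body reflectance in one band, and (b) a non-negative least-squares unmixing against assumed endmember spectra; the band choice, endmembers, and sum-to-one normalization are assumptions, not the thesis's actual processing chain.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import linregress

rng = np.random.default_rng(3)

# (a) Empirical approach: regress ground-measured dissolved iron against the
# reflectance of one band over the sampled water bodies (values are synthetic).
iron_mg_l = np.array([5.0, 12.0, 20.0, 35.0, 50.0, 80.0])
reflectance = 0.02 + 0.0008 * iron_mg_l + rng.normal(scale=0.002, size=iron_mg_l.size)
fit = linregress(reflectance, iron_mg_l)
print(f"iron ~ {fit.slope:.1f} * reflectance + {fit.intercept:.1f}  (r^2={fit.rvalue**2:.2f})")

# (b) Spectral unmixing: express a water pixel as a non-negative mixture of
# endmember spectra (e.g., clear water, iron-rich water, suspended sediment).
bands = 20
endmembers = np.abs(rng.normal(size=(bands, 3)))        # columns = endmember spectra
true_fractions = np.array([0.6, 0.3, 0.1])
pixel = endmembers @ true_fractions + rng.normal(scale=0.01, size=bands)
fractions, _ = nnls(endmembers, pixel)
fractions /= fractions.sum()                            # sum-to-one constraint (assumption)
print("estimated fractions:", np.round(fractions, 2))
```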
|
16 |
A method for unbiased analysis of fluorescence microscope images of Alzheimer's disease related amyloids. Haglund, Samuel, January 2020
Alzheimer's disease is a widespread disease that has devastating effects on the human brain and mind. Ultimately it leads to death, and there are currently no treatments available that can stop the disease progression. The mechanisms behind the disease are not fully understood, although it is known that amyloid fibrils play an important role in its development. These fibrils are able to form plaques that can trigger neuronal death, by interacting with receptors on the cell surface and in the synaptic cleft, or by entering the cell and disturbing important functions such as metabolic pathways. To study the plaque formation of amyloid proteins, both in vitro and in vivo methods are used to investigate the characteristics of the protein. Luminescent conjugated oligothiophene probes are able to bind to amyloid beta fibrils and emit light when excited by an external light source; in this way the fibrillation properties of the protein can be studied. Developing probes that can serve as biomarkers for the detection of amyloid fibrils could change the way Alzheimer's is treated: being able to detect the disease early in its course, and to start treatment early, is suggested to stop the progression of neural breakdown. In this project, software is developed to analyze fluorescence microscopy images taken of tissue stained with these probes. The software is able to filter out background noise and capture the parts of the image that are of interest when studying the amyloid plaques. The software generates results similar to those obtained when the images are analyzed with software in which the regions to analyze are selected manually, suggesting that the developed software produces reliable results that are unbiased by background noise.
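The thesis software itself is not reproduced here; the sketch below only illustrates the general idea of operator-independent region selection in a fluorescence image, assuming scikit-image, a synthetic image with plaque-like spots, and an Otsu threshold plus small-object removal as the noise-filtering step.

```python
import numpy as np
from skimage import filters, measure, morphology

rng = np.random.default_rng(4)

# Synthetic stand-in for a fluorescence image: dim background plus bright plaque-like spots.
rr, cc = np.meshgrid(np.arange(256), np.arange(256), indexing="ij")
image = rng.normal(loc=0.1, scale=0.02, size=(256, 256))
for r, c in [(60, 80), (150, 170), (200, 60)]:
    image += 0.8 * np.exp(-((rr - r) ** 2 + (cc - c) ** 2) / (2 * 6.0 ** 2))

# Automatic (operator-independent) threshold separates signal from background noise.
threshold = filters.threshold_otsu(image)
mask = image > threshold

# Clean small noise specks, then measure each remaining region of interest.
mask = morphology.remove_small_objects(mask, min_size=20)
regions = measure.regionprops(measure.label(mask), intensity_image=image)
for i, region in enumerate(regions, start=1):
    print(f"region {i}: area={region.area} px, mean intensity={region.mean_intensity:.3f}")
```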
|
17 |
Dimension Reduction for Hyperspectral Imagery. Ly, Nam H. (Nam Hoai), 14 December 2013
In this dissertation, the general problem of the dimensionality reduction of hyperspectral imagery is considered. Data dimension can be reduced through compression, in which an original image is encoded into a bitstream of greatly reduced size; through the application of a transformation, in which a high-dimensional space is mapped into a low-dimensional space; and through a simple process of subsampling, wherein the number of pixels is reduced spatially during image acquisition. All three techniques are investigated in the course of the dissertation. For data compression, an approach to calculate an operational bitrate for JPEG2000 in conjunction with principal component analysis is proposed. It is shown that an optimal bitrate for such a lossy compression method can be estimated while maintaining both class separability and anomalous pixels in the original data. The transformation paradigm is then studied for spectral dimensionality reduction; specifically, data-independent random spectral projections are considered, while the compressive projection principal component analysis algorithm is adopted for data reconstruction. It is shown that, by incorporating both spectral and spatial partitioning of the original data, reconstruction accuracy can be improved. Additionally, a new supervised spectral dimensionality reduction approach using a sparsity-preserving graph is developed. The resulting sparse graph-based discriminant analysis is seen to yield superior classification performance at low dimensionality. Finally, for spatial dimensionality reduction, a simple spatial subsampling scheme is considered for a multitemporal hyperspectral image sequence, such that the original image is reconstructed using a sparse dictionary learned from a prior image in the sequence.
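To make the random-projection idea concrete, the sketch below compresses synthetic spectra with a data-independent random matrix and reconstructs them by least squares against a PCA basis learned from a training subset. This is only a generic illustration of projection-and-reconstruction, not the compressive projection principal component analysis algorithm used in the dissertation; the subspace dimension and number of measurements are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic hyperspectral pixels: 2000 pixels, 200 bands, lying near a 10-D subspace.
n_pixels, n_bands, latent = 2000, 200, 10
basis = rng.normal(size=(n_bands, latent))
pixels = rng.normal(size=(n_pixels, latent)) @ basis.T \
    + 0.01 * rng.normal(size=(n_pixels, n_bands))

# Data-independent random spectral projection down to m measurements per pixel.
m = 40
P = rng.normal(size=(m, n_bands)) / np.sqrt(m)    # random projection matrix
measurements = pixels @ P.T                        # what would be stored/transmitted

# Reconstruction sketch: assume a spectral basis (here, PCA of a training subset) is
# available at the receiver, and solve least squares for the low-dimensional coefficients.
train = pixels[:200]
train_mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - train_mean, full_matrices=False)
B = vt[:latent].T                                  # n_bands x latent PCA basis

A = P @ B                                          # m x latent
coeff, *_ = np.linalg.lstsq(A, (measurements - train_mean @ P.T).T, rcond=None)
reconstructed = (B @ coeff).T + train_mean

err = np.linalg.norm(reconstructed - pixels) / np.linalg.norm(pixels)
print(f"relative reconstruction error with {m} of {n_bands} measurements: {err:.3f}")
```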
|
18 |
High-throughput single-cell imaging and sorting by stimulated Raman scattering microscopy and laser-induced ejection. Zhang, Jing, 18 January 2024
Single-cell bio-analytical techniques play a pivotal role in contemporary biological and biomedical research. Among current high-throughput single-cell imaging methods, coherent Raman imaging offers both high bio-compatibility and information-rich, high-throughput measurements, providing insights into cellular composition, dynamics, and function. Coherent Raman imaging finds value in diverse applications, ranging from live-cell dynamic imaging and high-throughput drug screening to fast antimicrobial susceptibility testing. In this thesis, I first present a deep learning algorithm that solves the inverse problem of recovering a chemically labeled image from a single-shot femtosecond stimulated Raman scattering (SRS) image. This method allows high-speed, high-throughput tracking of lipid droplet dynamics and drug response in live cells. Second, I provide image-based single-cell analysis of an engineered Escherichia coli (E. coli) population, confirming the chemical composition and subcellular structure organization of individual engineered E. coli cells. Additionally, I unveil metabolon formation in engineered E. coli by high-speed spectroscopic SRS and two-photon fluorescence imaging.
Lastly, I present stimulated Raman-activated cell ejection (S-RACE), which integrates high-throughput SRS imaging, in situ image decomposition, and high-precision laser-induced cell ejection. I demonstrate the automatic imaging-identification-sorting workflow of S-RACE and its compatibility with samples ranging from polymer particles and single live bacteria and fungi to tissue sections.
Collectively, these efforts demonstrate the valuable capability of SRS in high-throughput single-cell imaging and sorting, opening opportunities for a wide range of biological and biomedical applications.
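The thesis's actual network architecture and training data are not given in the abstract; the PyTorch sketch below only illustrates the shape of such an image-to-image inverse model, mapping a single-channel frame to a small number of "chemical" maps on random tensors. The layer sizes, channel counts, and loss are assumptions made for the example.

```python
import torch
from torch import nn

class SmallEncoderDecoder(nn.Module):
    """Minimal encoder-decoder mapping a 1-channel frame to k chemical maps."""
    def __init__(self, out_channels=2):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, out_channels, 3, padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

# Training-loop sketch on synthetic pairs (single-shot frame -> two target maps).
model = SmallEncoderDecoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

frames = torch.randn(8, 1, 64, 64)     # stand-in single-shot SRS frames
targets = torch.randn(8, 2, 64, 64)    # stand-in chemically labeled maps
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(frames), targets)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```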
|
19 |
Hyperspectral Image Visualization Using Double and Multiple Layers. Cai, Shangshu, 02 May 2009
This dissertation develops new approaches for hyperspectral image visualization. Double and multiple layers are proposed to effectively convey the abundant information contained in the original high-dimensional data for practical decision-making support. The contributions of this dissertation are as follows. 1. Development of new visualization algorithms for hyperspectral imagery. The double-layer technique can display mixed-pixel composition and global material distribution simultaneously. The pie-chart layer, taking advantage of the non-negativity and sum-to-one properties of the abundances from linear mixture analysis of hyperspectral pixels, can be fully integrated with the background layer. Such a synergy enhances the presentation at both macro and micro scales. 2. Design of an effective visual exploration tool. The developed visualization techniques are implemented in a visualization system, which can automatically preprocess and visualize hyperspectral imagery. The interactive tool with a user-friendly interface enables viewers to display an image with any desired level of detail. 3. Design of effective user studies to validate and improve the visualization methods. The double-layer technique is evaluated by well-designed user studies. The traditional approaches, including gray-scale side-by-side classification maps, color hard classification maps, and color soft classification maps, are compared with the proposed double-layer technique. The results of the user studies indicate that the double-layer algorithm provides the best performance in displaying mixed-pixel composition in several respects and that it is competitive in displaying the global material distribution. Based on these results, a multi-layer algorithm is proposed to improve the display of global information.
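The dissertation's visualization system is not available from the abstract; the matplotlib sketch below only conveys the double-layer idea on a tiny synthetic grid, with Dirichlet-sampled abundances standing in for linear-mixture fractions, a background colored by the dominant class, and a wedge (pie-chart) glyph per pixel. Grid size, colors, and glyph radius are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from matplotlib.patches import Wedge

rng = np.random.default_rng(6)

# Synthetic abundance maps for 3 endmembers on a small grid (non-negative, sum-to-one).
rows, cols, k = 8, 8, 3
abund = rng.dirichlet(alpha=np.ones(k), size=(rows, cols))
colors = ["tab:green", "tab:brown", "tab:blue"]

fig, ax = plt.subplots(figsize=(6, 6))

# Background layer: dominant endmember per pixel (global material distribution).
dominant = abund.argmax(axis=2)
ax.imshow(dominant, cmap=ListedColormap(colors), alpha=0.3,
          extent=(-0.5, cols - 0.5, rows - 0.5, -0.5))

# Pie-chart layer: per-pixel mixture composition drawn as small wedges.
for r in range(rows):
    for c in range(cols):
        start = 0.0
        for j in range(k):
            frac = abund[r, c, j]
            ax.add_patch(Wedge((c, r), 0.4, 360 * start, 360 * (start + frac),
                               color=colors[j]))
            start += frac

ax.set_xlim(-0.5, cols - 0.5)
ax.set_ylim(rows - 0.5, -0.5)
ax.set_title("Background: dominant class; wedges: per-pixel abundances")
plt.savefig("double_layer_sketch.png", dpi=150)
```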
|
20 |
Spatial-spectral analysis in dimensionality reduction for hyperspectral image classification. Shah, Chiranjibi, 13 May 2022
This dissertation develops new algorithms that utilize spatial and spectral information for hyperspectral image classification. It is necessary to perform spatial and spectral analysis and conduct dimensionality reduction (DR) for effective feature extraction, because hyperspectral imagery consists of a large number of spatial pixels along with hundreds of spectral dimensions.
In the first proposed method, spatial-aware collaboration-competition preserving graph embedding is employed for DR of hyperspectral imagery, imposing a spatial regularization term along with Tikhonov regularization in the objective function. Moreover, collaborative representation (CR) is an efficient classifier, but it does not use spatial information. Thus, a structure-aware collaborative representation (SaCRT) classifier is introduced to utilize spatial information for more appropriate data representations. This work demonstrates that the SaCRT offers better classification performance.
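For reference, the sketch below implements the plain collaborative representation classifier on a generic built-in dataset standing in for hyperspectral pixels; it is not the SaCRT, and the spatial regularization and structure terms described above are omitted. The regularization value is an assumption.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import normalize

def crc_predict(X_train, y_train, X_test, lam=0.01):
    """Collaborative representation classification: code each test sample over the
    whole training dictionary with l2-regularized least squares, then assign the
    class whose atoms give the smallest reconstruction residual."""
    D = normalize(X_train).T                                       # features x samples
    P = np.linalg.inv(D.T @ D + lam * np.eye(D.shape[1])) @ D.T    # closed-form projection
    classes = np.unique(y_train)
    preds = []
    for x in normalize(X_test):
        alpha = P @ x
        residuals = [np.linalg.norm(x - D[:, y_train == c] @ alpha[y_train == c])
                     for c in classes]
        preds.append(classes[int(np.argmin(residuals))])
    return np.array(preds)

X, y = load_wine(return_X_y=True)   # stand-in for labeled hyperspectral pixels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0, stratify=y)
y_pred = crc_predict(X_tr, y_tr, X_te)
print("CRC accuracy:", (y_pred == y_te).mean())
```

In the SaCRT described above, an additional spatial term would enter the coding step, so that neighboring pixels influence the representation coefficients.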
For DR, a collaborative and low-rank representation-based graph for discriminant analysis of hyperspectral imagery is proposed. It generates a more informative graph by combining collaborative and low-rank representation terms: the collaborative term incorporates within-class atoms, while the low-rank term preserves the global data structure. Since the collaborative term admits a closed-form solution for the representation coefficients, the method has lower computational complexity than sparse representation. The proposed collaborative and low-rank representation-based graph outperforms the existing sparse and low-rank representation-based graph for DR of hyperspectral imagery.
The concepts of tree-based techniques and deep neural networks can be combined through an interpretable canonical deep tabular data learning architecture (TabNet), which uses sequential attention to choose appropriate features at different decision steps. An efficient TabNet for hyperspectral image classification is developed in this dissertation, in which the performance of TabNet is enhanced by incorporating a 2-D convolution layer inside an attentive transformer. Additionally, better classification performance can be obtained by utilizing structure profiles with TabNet.
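The dissertation's modified TabNet is not reproduced here; the sketch below only shows how a stock TabNet classifier might be applied to pixel spectra, assuming the third-party pytorch-tabnet package and synthetic data. The 2-D convolution inside the attentive transformer and the structure-profile features described above are not included.

```python
import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier  # third-party TabNet implementation

rng = np.random.default_rng(7)

# Synthetic stand-in for labeled hyperspectral pixels: 1000 samples, 100 bands, 4 classes.
X = rng.normal(size=(1000, 100)).astype(np.float32)
y = rng.integers(0, 4, size=1000)
X[np.arange(1000), y * 10] += 3.0        # make each class separable on a few bands

split = 800
clf = TabNetClassifier(n_d=8, n_a=8, n_steps=3)   # sequential attention over decision steps
clf.fit(
    X[:split], y[:split],
    eval_set=[(X[split:], y[split:])],
    max_epochs=30, patience=10, batch_size=128,
)
print("held-out accuracy:", (clf.predict(X[split:]) == y[split:]).mean())
```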
|