  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Uso de Métodos Bayesianos para Confiabilidade de Redes / Using Bayesian methods for network reliability

Oliveira, Sandra Cristina de 21 May 1999 (has links)
We present a Bayesian analysis of network system reliability using Markov chain Monte Carlo simulation methods. Different prior densities are assumed for the reliabilities of the individual components in order to obtain the posterior summaries of interest. The methodology is illustrated with a network system of seven components and with a special case of a complex system composed of nine components. We also consider the reliability of k-out-of-m networks, with some numerical examples.
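As a hedged illustration of the approach summarized in this abstract (a sketch, not code from the thesis), the posterior reliability of a k-out-of-m system with independent components can be approximated by Monte Carlo: each component reliability receives a Beta posterior, and the system state is simulated per posterior draw. The Beta(1, 1) prior and the per-component test counts below are illustrative assumptions.

```python
# Sketch: posterior reliability of a k-out-of-m system, assuming independent
# components with conjugate Beta posteriors from binomial test data.
# All counts and the Beta(1, 1) prior are illustrative, not from the thesis.
import random

def system_reliability_posterior(k, tests, draws=5000, seed=0):
    """Monte Carlo posterior mean of P(at least k of m components work).

    `tests` is a list of (successes, trials) per component; each component
    reliability gets an independent Beta(1 + s, 1 + n - s) posterior.
    """
    rng = random.Random(seed)
    works = 0
    for _ in range(draws):
        # Draw one reliability per component from its Beta posterior.
        ps = [rng.betavariate(1 + s, 1 + n - s) for s, n in tests]
        # The system works if at least k of the m components work.
        up = sum(rng.random() < p for p in ps)
        if up >= k:
            works += 1
    return works / draws

if __name__ == "__main__":
    tests = [(9, 10), (8, 10), (10, 10), (7, 10), (9, 10)]  # 5 components
    print(round(system_reliability_posterior(k=3, tests=tests), 3))
```

The same routine covers series (k = m) and parallel (k = 1) networks as special cases of the k-out-of-m structure.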
32

Estimação e diagnóstico na distribuição exponencial por partes em análise de sobrevivência com fração de cura / Estimation and diagnostics for the piecewise exponential distribution in survival analysis with fraction cure

Sibim, Alessandra Cristiane 31 March 2011 (has links)
The main objective of this work is to develop inferential procedures, from a Bayesian perspective, for survival models with (or without) a cure fraction based on the piecewise exponential distribution. The Bayesian methodology relies on Markov chain Monte Carlo (MCMC) methods. To detect influential observations in the models considered, we use the Bayesian case-deletion influence diagnostics of Cho et al. (2009), based on the Kullback-Leibler divergence. In addition, we propose a destructive negative binomial model with a cure fraction. The proposed model is more general than standard cure-fraction survival models, since it allows estimation of the probability distribution of the number of causes that were not eliminated by an initial treatment.
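A minimal sketch of the two ingredients named in the abstract, under illustrative parameter values (the cut points, hazards and cure probability below are not estimates from the thesis): the piecewise exponential survival function S0(t) built from a piecewise constant hazard, and the improper survival function S(t) = pi + (1 - pi) S0(t) of a cure-fraction model.

```python
# Sketch: piecewise exponential survival with a cure fraction.
# Grid, hazard values and cure probability are illustrative assumptions.
import math

def piecewise_exp_survival(t, cuts, hazards):
    """S0(t) for a piecewise constant hazard: hazards[j] on [cuts[j], cuts[j+1])."""
    H = 0.0  # cumulative hazard up to time t
    edges = cuts + [float("inf")]
    for j, lam in enumerate(hazards):
        a, b = edges[j], edges[j + 1]
        if t <= a:
            break
        H += lam * (min(t, b) - a)
    return math.exp(-H)

def cure_survival(t, pi, cuts, hazards):
    """Improper survival function: a fraction pi of subjects is cured,
    so S(t) plateaus at pi instead of tending to zero."""
    return pi + (1.0 - pi) * piecewise_exp_survival(t, cuts, hazards)

# Example: three hazard pieces on [0,1), [1,2), [2,inf), cure fraction 0.3.
s = cure_survival(2.0, 0.3, [0.0, 1.0, 2.0], [0.5, 1.0, 2.0])
```

The plateau at pi is what distinguishes cure-fraction models: as t grows, S(t) tends to the cured proportion rather than to zero.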
33

Modélisation des données d'attractivité hospitalière par les modèles d'utilité / Modeling hospital attractivity data by using utility models

Saley, Issa 29 November 2017 (has links)
Understanding how patients choose hospitals is of major importance both for hospital managers and for healthcare decision makers: for the former, managing patient flow and the supply of care; for the latter, implementing health system reforms. In this thesis we propose several models of patient admission data as a function of distance to a hospital, in order to forecast patient flow and to compare the attractiveness of hospitals. We first use hierarchical Bayesian models for count data with possible spatial dependence, with applications to patient admission data from the Languedoc-Roussillon region. We also use discrete choice models such as random utility models (RUMs). Given some limitations of these models for our purpose, we relax the utility-maximization assumption in favor of a more flexible one, under which an agent (patient) may choose an alternative (hospital) as soon as the utility it provides reaches a certain satisfaction threshold, taking certain aspects into account. This approach is illustrated on three hospitals in Hérault, using 2009 asthma admission data, to compute the catchment area of a given hospital.
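The utility-threshold idea described above can be sketched in a few lines: instead of selecting the single utility-maximizing hospital, a patient's acceptable set contains every hospital whose utility clears a satisfaction threshold. The linear utility in distance and quality, and all coefficients below, are illustrative assumptions, not the thesis's specification.

```python
# Sketch of a utility-threshold choice rule (contrast with RUM maximization).
# Utility form and coefficients are illustrative assumptions.

def utility(distance_km, quality, beta_dist=-0.1, beta_qual=1.0):
    # Utility decreases with distance and increases with perceived quality.
    return beta_dist * distance_km + beta_qual * quality

def acceptable_hospitals(hospitals, threshold):
    """All hospitals whose utility reaches the satisfaction threshold."""
    return [name for name, d, q in hospitals if utility(d, q) >= threshold]

# (name, distance in km, quality score) -- toy data
hospitals = [("A", 5, 0.9), ("B", 20, 1.8), ("C", 40, 2.0)]
print(acceptable_hospitals(hospitals, threshold=0.0))
```

Lowering the threshold enlarges the choice set, which is how such a rule can trace out the territorial reach (catchment area) of a hospital as a function of distance.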
34

Méthodes bayésiennes pour l'analyse génétique / Bayesian methods for gene expression factor analysis

Bazot, Cécile 27 September 2013 (has links)
In the past few years, genomics has received growing scientific interest, particularly since the complete map of the human genome was published in the early 2000s. Medical teams now face a new challenge: processing the signals delivered by DNA microarrays. These signals, often of large size, reveal the expression level of genes in a given tissue at a given time, under specific conditions (phenotype, treatment, ...), for an individual. The aim of this research is to identify temporal gene expression profiles characteristic of a pathology, in order to detect, or even prevent, a disease in a group of observed patients. The solutions developed in this thesis decompose these signals into elementary factors (genetic signatures) under a Bayesian linear mixing model, allowing joint estimation of these factors and of their relative proportions in each sample. Markov chain Monte Carlo methods are particularly well suited to the proposed hierarchical Bayesian models, since they overcome the difficulties related to their computational complexity.
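To make the linear mixing model concrete: each sample y is modeled as a weighted combination of elementary factors plus noise. The thesis estimates factors and weights jointly by MCMC; the sketch below only shows the simpler sub-problem where the signatures are assumed known, recovering the weights of one sample by ordinary least squares on a two-factor toy example (all numbers illustrative).

```python
# Sketch: recovering mixing weights under a linear mixing model,
# y = a1*f1 + a2*f2 + noise, with the two signatures f1, f2 assumed known.
# Toy data; the thesis estimates f's and a's jointly in a Bayesian model.

def lstsq_2factors(f1, f2, y):
    """Solve min_a ||y - a1*f1 - a2*f2||^2 via the 2x2 normal equations."""
    dot = lambda u, v: sum(x * z for x, z in zip(u, v))
    g11, g12, g22 = dot(f1, f1), dot(f1, f2), dot(f2, f2)
    b1, b2 = dot(f1, y), dot(f2, y)
    det = g11 * g22 - g12 * g12
    return ((b1 * g22 - b2 * g12) / det, (b2 * g11 - b1 * g12) / det)

f1 = [1.0, 0.0, 2.0, 1.0]   # signature of factor 1 (4 genes)
f2 = [0.0, 1.0, 1.0, 3.0]   # signature of factor 2
y = [0.7 * a + 0.3 * b for a, b in zip(f1, f2)]  # noiseless mixed sample
a1, a2 = lstsq_2factors(f1, f2, y)
```

In the noiseless case the weights are recovered exactly; the Bayesian treatment adds priors (e.g., positivity or sum-to-one constraints on proportions) and quantifies uncertainty on both factors and weights.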
36

Inferência estatística para regressão múltipla h-splines / Statistical inference for h-splines multiple regression

Morellato, Saulo Almeida, 1983- 25 August 2018 (has links)
Advisor: Ronaldo Dias / Doctoral thesis, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: In this work we discuss two inference problems related to multiple nonparametric regression: estimation in additive models using a nonparametric method, and hypothesis testing for equality of curves, also within additive models. In the estimation step, we construct a generalization of the h-splines methods, both in the sequential adaptive context proposed by Dias (1999) and in the Bayesian context proposed by Dias and Gamerman (2002). The h-splines methods provide an automatic choice of the number of basis functions used to estimate the model. Simulation studies show that the results obtained by the proposed estimation methods are superior to those achieved by the R packages gamlss, mgcv and DPpackage. Two hypothesis tests are constructed for H0 : f = f0: one whose decision rule is based on the integrated squared distance between two curves, for the sequential adaptive approach, and another based on the Bayesian evidence measure proposed by Pereira and Stern (1999). For the Bayesian test, the behavior of the evidence measure is examined in several simulation scenarios, and the proposed measure behaves consistently with an evidence measure favorable to H0. For the distance-based test, the power is estimated by simulation in several scenarios, with satisfactory results. Finally, the proposed estimation and testing procedures are applied to a dataset from Tanaka and Nishii (2009) on deforestation in East Asia, where the objective is to choose one among eight candidate models. The tests agree, pointing to a pair of models as the most suitable. / Doctorate in Statistics
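The distance-based decision rule mentioned above compares an integrated squared distance between a fitted curve and the hypothesized f0. A hedged sketch of the statistic, using a trapezoidal approximation and toy curves (the thesis would plug in an h-splines fit with an automatically chosen number of bases):

```python
# Sketch: integrated squared distance between two curves, the quantity
# behind the distance-based test of H0: f = f0. Curves are illustrative.
import math

def integrated_squared_distance(f, g, a, b, n=1000):
    """Trapezoidal approximation of the integral of (f(x) - g(x))^2 over [a, b]."""
    h = (b - a) / n
    total = 0.0
    for i in range(n + 1):
        x = a + i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoid endpoint weights
        total += w * (f(x) - g(x)) ** 2
    return total * h

f0 = math.sin
f_hat = lambda x: math.sin(x) + 0.1   # "fitted" curve, shifted by 0.1
d = integrated_squared_distance(f_hat, f0, 0.0, math.pi)
# Constant difference 0.1 gives d = 0.01 * pi; large d is evidence against H0.
```

In the testing procedure, the null distribution of d would be approximated by simulation (e.g., refitting on data generated under H0), and the observed d compared against it.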
38

Lois a priori non-informatives et la modélisation par mélange / Non-informative priors and modelization by mixtures

Kamary, Kaniav 15 March 2016 (has links)
One of the major applications of statistics is the validation and comparison of probabilistic models in light of the data. This branch of statistics has been developed since its formalization at the end of the 19th century by pioneers such as Gosset, Pearson and Fisher. In the Bayesian approach, the standard solution to model comparison is the Bayes factor, a ratio of marginal likelihoods, whatever the models under evaluation; it is derived by a mathematical argument based on a loss function. Despite the frequent use of the Bayes factor, and of its equivalent, the posterior probability of a model, by the Bayesian community, it is problematic in two respects. First, it depends strongly on the prior modeling, even with large datasets; and since the selection of a prior density plays a vital role in Bayesian statistics, one difficulty with the traditional handling of Bayesian tests is the discontinuity in the use of improper priors, which are not justified in most testing situations. The first part of this thesis gives a general review of non-informative priors and their properties, and demonstrates the overall stability of posterior distributions by reassessing the examples of [Seaman III 2012]. Second, and independently, the Bayes factor is difficult to compute except in the simplest cases (conjugate distributions). A branch of computational statistics has therefore emerged to address this problem, with solutions borrowed from statistical physics, such as the path sampling method of [Gelman 1998], and from signal processing. The existing solutions are not universal, however, and a reassessment of these methods, followed by the development of alternative methods, forms part of the thesis.
We then consider a novel paradigm for Bayesian hypothesis testing and Bayesian model comparison, defining an alternative to the traditional construction of posterior probabilities that a hypothesis is true or that the data originate from a specific model. The idea is to regard the competing models as components of a mixture model. Replacing the original testing problem with an estimation problem that focuses on the probability weight of a given model within the mixture, we analyze the sensitivity of the resulting posterior distribution of the weights to various prior modelings of the weights, and stress that a major appeal of this perspective is that generic improper priors are acceptable, while not putting convergence in jeopardy. MCMC methods such as the Metropolis-Hastings algorithm and the Gibbs sampler, together with empirical approximations of the probability, are used. From a computational viewpoint, another feature of this easily implemented alternative to the classical Bayesian solution is that the convergence rates of the posterior mean of the weight and of the corresponding posterior probability are quite similar.
In the last part of the thesis we construct a reference Bayesian analysis of mixtures of Gaussian distributions, by introducing a new parameterization centered on the mean and variance of the mixture model itself. This enables us to develop a genuine non-informative prior for Gaussian mixtures with an arbitrary number of components. We demonstrate that the posterior distribution associated with this prior is almost surely proper, and we provide MCMC implementations that exhibit the expected component exchangeability. The analyses are based on MCMC methods such as the Metropolis-within-Gibbs algorithm, adaptive MCMC, and the parallel tempering algorithm. This part of the thesis is complemented by the R package Ultimixt, which implements a generic reference Bayesian analysis of unidimensional Gaussian mixtures via a location-scale parameterization of the model, with no need to specify a prior distribution.
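The mixture-based testing idea above can be sketched with a small Gibbs sampler: the two competing models are embedded as components of a mixture with weight alpha, and testing is replaced by estimating alpha's posterior. The sampler alternates latent allocations with a conjugate Beta draw for alpha. The two candidate densities (N(0,1) vs N(3,1)) and the Beta(1,1) prior on alpha are illustrative choices, not the thesis's examples.

```python
# Sketch: posterior of the mixture weight alpha when two candidate models
# are embedded as mixture components. Densities and priors are illustrative.
import math
import random

def normal_pdf(x, mu):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def mixture_weight_posterior(data, mu1, mu2, iters=2000, burn=500, seed=1):
    """Gibbs sampler for alpha in alpha*N(mu1,1) + (1-alpha)*N(mu2,1)."""
    rng = random.Random(seed)
    alpha, draws = 0.5, []
    for it in range(iters):
        # Step 1: allocate each observation to model 1 or model 2.
        n1 = 0
        for x in data:
            p1 = alpha * normal_pdf(x, mu1)
            p2 = (1 - alpha) * normal_pdf(x, mu2)
            if rng.random() < p1 / (p1 + p2):
                n1 += 1
        # Step 2: conjugate update, alpha | z ~ Beta(1 + n1, 1 + n - n1).
        alpha = rng.betavariate(1 + n1, 1 + len(data) - n1)
        if it >= burn:
            draws.append(alpha)
    return sum(draws) / len(draws)

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(100)]  # truth: model 1
post_mean = mixture_weight_posterior(data, mu1=0.0, mu2=3.0)
# post_mean near 1 favors model 1, near 0 favors model 2
```

The posterior mass of alpha concentrating near 0 or 1 then plays the role that a posterior model probability plays in the classical Bayesian solution, while tolerating improper priors on within-model parameters.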
39

Modelagem espaço-temporal para dados de incidência de doenças em plantas. / Spatiotemporal modelling of plant disease incidence.

Lima, Renato Ribeiro de 18 March 2005 (has links)
Information on the spatial-temporal dynamics of plant diseases is of fundamental importance in epidemiological studies: it can be used to describe and understand the development of diseases, to develop efficient sampling plans, to plan controlled experiments, and to characterize crop losses caused by disease. The study of the spatial patterns of plant diseases, which reflect the dispersal process of pathogens, is important in epidemiological studies, such as those of citrus diseases, in order to define more adequate disease-control strategies and reduce losses. Citriculture is one of the main agricultural activities in Brazil and the main economic activity of more than 400 municipalities in the Triângulo Mineiro and the State of São Paulo, home to the largest citrus area in the country and the largest orange-producing region in the world. Many different methods have been used to characterize spatial patterns, including the fitting of distributions such as the beta-binomial, the study of variance-mean relationships, the calculation of intraclass correlation, spatial autocorrelation techniques, distance class methods, and the fitting of spatiotemporal stochastic models. Given the importance of studying spatial patterns of plant disease incidence, and the need to better understand the epidemiology of citrus sudden death and citrus canker, a likelihood-based technique for fitting spatiotemporal stochastic models, using MCMC methods, was used to characterize spatial patterns. Modifications to the original methodology, aimed at reducing analysis time, are proposed in this study. The results show that the proposed modifications yield a significant reduction in computational time relative to Gibson's methodology, without loss of accuracy in estimating the parameters of the models considered.
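One of the aggregation diagnostics listed in the abstract, the variance-mean relationship, can be sketched as a dispersion index: compare the observed variance of diseased-plant counts per quadrat with the binomial variance implied by a common incidence p. A ratio well above 1 suggests spatial aggregation (overdispersion), pointing toward a beta-binomial rather than binomial fit. The quadrat counts below are illustrative, not data from the thesis.

```python
# Sketch: variance-to-binomial-variance ratio for disease incidence counts.
# Counts are illustrative; ratio ~ 1 under a spatially random pattern.

def dispersion_index(counts, n):
    """Observed variance / binomial variance for counts of diseased plants
    out of n plants per quadrat; values >> 1 indicate aggregation."""
    m = len(counts)
    p = sum(counts) / (m * n)                    # overall incidence
    mean = sum(counts) / m
    var = sum((c - mean) ** 2 for c in counts) / (m - 1)
    return var / (n * p * (1 - p))

regular = [4, 5, 6, 5, 4, 6, 5, 5]               # evenly spread counts
aggregated = [0, 0, 10, 9, 0, 10, 1, 10]         # patchy counts, same total
print(round(dispersion_index(regular, 10), 2),
      round(dispersion_index(aggregated, 10), 2))
```

Both samples share the same overall incidence (p = 0.5), so the contrast in the index isolates the spatial arrangement rather than the disease level.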
40

Modeling spatial and temporal variabilities in hyperspectral image unmixing / Modélisation de la variabilité spectrale pour le démélange d’images hyperspectral

Thouvenin, Pierre-Antoine 17 October 2017 (has links)
Acquired in hundreds of contiguous spectral bands, hyperspectral (HS) images have received increasing interest due to the significant spectral information they convey about the materials present in a given scene. However, the limited spatial resolution of hyperspectral sensors implies that the observations are mixtures of multiple signatures corresponding to distinct materials. Hyperspectral unmixing aims at identifying the reference spectral signatures composing the data, referred to as endmembers, and their relative proportion in each pixel, according to a predefined mixture model. In this context, a given material is commonly assumed to be represented by a single spectral signature. This assumption shows a first limitation, since endmembers may vary locally within a single image, or from one image to another, due to varying acquisition conditions, such as local illumination, the declivity of the observed scene, and possibly complex interactions between the incident light and the observed materials.
Unless properly accounted for, this spectral variability can have a significant impact on the shape and amplitude of the acquired signatures, inducing possibly significant estimation errors during the unmixing process, all the more so for unsupervised unmixing procedures. A second limitation results from the significant size of HS data, which may preclude the use of the batch estimation procedures commonly used in the literature, i.e., techniques exploiting all the available data at once. Such computational considerations become especially prominent when characterizing endmember variability in multi-temporal HS (MTHS) images, i.e., sequences of HS images acquired over the same area at different time instants. The main objective of this thesis is to introduce new models and unmixing procedures that account for spatial and temporal endmember variability. Variability is first addressed with an explicit model reminiscent of the total least squares problem, whose parameters are estimated by an unsupervised deterministic optimization procedure based on the Alternating Direction Method of Multipliers (ADMM); the model is later extended to account for time-varying signatures. Given the sensitivity of this approach to abrupt spectral variations, such as those caused by outliers or by the appearance of a new material in multi-temporal analyses, a robust model formulated within a Bayesian framework is introduced, suited to the analysis of multi-temporal images of moderate size: it describes smooth spectral variations in terms of spectral variability and abrupt changes in terms of outliers. Finally, the computational restrictions induced by the size of the data are tackled by an online estimation algorithm, and the work further investigates an asynchronous distributed estimation procedure, suited to unmixing a large number of hyperspectral images acquired over the same scene at different time instants.
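The linear mixture model at the core of the unmixing problem described above can be sketched concretely: a pixel y is modeled as y = a1*e1 + a2*e2 with abundances summing to one and nonnegative. With two endmembers, the sum-to-one constraint reduces the fit to a clipped scalar least squares. The endmember spectra below are illustrative, and the thesis's models additionally let the endmembers themselves vary per pixel and per acquisition.

```python
# Sketch: abundance estimation under the linear mixture model with two
# endmembers, sum-to-one and nonnegativity constraints. Spectra are toy data.

def unmix_two_endmembers(y, e1, e2):
    """Abundances (a1, a2) minimizing ||y - a1*e1 - (1-a1)*e2||^2, a1 in [0, 1]."""
    d = [u - v for u, v in zip(e1, e2)]          # direction e1 - e2
    r = [u - v for u, v in zip(y, e2)]           # residual y - e2
    num = sum(a * b for a, b in zip(d, r))
    den = sum(a * a for a in d)
    a1 = min(1.0, max(0.0, num / den))           # project onto [0, 1]
    return a1, 1.0 - a1

e1 = [0.1, 0.4, 0.8, 0.9]    # endmember 1 spectrum (4 bands)
e2 = [0.7, 0.6, 0.2, 0.1]    # endmember 2 spectrum
pixel = [0.6 * a + 0.4 * b for a, b in zip(e1, e2)]  # noiseless mixed pixel
a1, a2 = unmix_two_endmembers(pixel, e1, e2)
```

Endmember variability enters when e1 and e2 are no longer fixed vectors but vary across pixels or acquisition dates, which is precisely what breaks this simple estimator and motivates the models developed in the thesis.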
