1

Leave-Group-Out Cross-Validation for Latent Gaussian Models

Liu, Zhedong 04 1900
Cross-validation is a widely used technique in statistics and machine learning for predictive performance assessment and model selection. It involves dividing the available data into multiple sets, training the model on some of the data and testing it on the rest, and repeating this process several times; the goal is to assess the model's predictive performance on unseen data. Two standard methods are leave-one-out cross-validation and K-fold cross-validation. However, these methods may not be suitable for structured models with many potential prediction tasks, as they do not take the structure of the data into account. Leave-group-out cross-validation is an extension in which the groups left out of the training set, and the corresponding testing points, adapt to the prediction task at hand. In this dissertation, we propose an automatic group construction procedure for leave-group-out cross-validation that estimates the predictive performance of a model when the prediction task is not specified in advance. We also propose an efficient approximation of leave-group-out cross-validation for latent Gaussian models. Both procedures are implemented in the R-INLA software. We demonstrate the usefulness of the proposed method through an application to the joint modeling of survival and longitudinal data, which shows its effectiveness in real-world scenarios.
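As a rough illustration of the idea (not the R-INLA implementation), the following sketch scores a model by refitting it with each test point's entire group held out of the training set; the helper names `fit` and `predict` are placeholders, not part of any package API:

```python
import numpy as np

def leave_group_out_cv(y, X, groups, fit, predict):
    """Leave-group-out CV: for each test point, drop its whole group
    from the training data before refitting (illustrative helpers)."""
    errors = []
    for i in range(len(y)):
        train = groups != groups[i]          # exclude the test point's group
        model = fit(X[train], y[train])
        errors.append((predict(model, X[i:i + 1])[0] - y[i]) ** 2)
    return float(np.mean(errors))

# Toy linear model with two groups of correlated observations
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = X @ np.array([1.0, -0.5]) + rng.normal(scale=0.1, size=40)
groups = np.repeat([0, 1], 20)

fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
predict = lambda beta, X: X @ beta
score = leave_group_out_cv(y, X, groups, fit, predict)
print(score)
```

With structured data, this grouped scheme penalizes a model that merely interpolates within a correlated group, which ordinary leave-one-out cannot detect.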
2

Criticism and robustification of latent Gaussian models

Cabral, Rafael 28 May 2023
Latent Gaussian models (LGMs) are perhaps the most commonly used class of statistical models with broad applications in various fields, including biostatistics, econometrics, and spatial modeling. LGMs assume that a set of unobserved or latent variables follow a Gaussian distribution, commonly used to model spatial and temporal dependence in the data. The availability of computational tools, such as R-INLA, that permit fast and accurate estimation of LGMs has made their use widespread. Nevertheless, it is easy to find datasets that contain inherently non-Gaussian features, such as sudden jumps or spikes, that adversely affect the inferences and predictions made from an LGM. These datasets require more general latent non-Gaussian models (LnGMs) that can automatically handle these non-Gaussian features by assuming more flexible and robust non-Gaussian distributions on the latent variables. However, fast implementation and easy-to-use software are lacking, which prevents LnGMs from becoming widely applicable. This dissertation aims to tackle these challenges and provide ready-to-use implementations for the R-INLA package. We view scientific learning as an iterative process involving model criticism followed by model improvement and robustification. Thus, the first step is to provide a framework that allows researchers to criticize and check the adequacy of an LGM without fitting the more expensive LnGM. We employ concepts from Bayesian sensitivity analysis to check the influence of the latent Gaussian assumption on the statistical answers and Bayesian predictive checking to check if the fitted LGM can predict important features in the data. In many applications, this procedure will suffice to justify using an LGM. For cases where this check fails, we provide fast and scalable implementations of LnGMs based on variational Bayes and Laplace approximations. 
The approximation leads to an LGM that downweights extreme events in the latent variables, reducing their impact and leading to more robust inferences. Each step, the first of LGM criticism and the second of LGM robustification, can be executed in R-INLA, requiring only the addition of a few lines of code. This results in a robust workflow that applied researchers can readily use.
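The kind of predictive check described above can be sketched in a few lines. This is a generic posterior predictive check with plug-in parameter estimates, not the R-INLA machinery: simulate replicated data from a fitted Gaussian model and compare a statistic sensitive to spikes (the maximum) against its observed value.

```python
import numpy as np

rng = np.random.default_rng(1)
observed = np.concatenate([rng.normal(0, 1, 99), [8.0]])  # one sudden spike

# Fit a Gaussian by plug-in estimates, then simulate replicated datasets
mu, sigma = observed.mean(), observed.std()
t_obs = observed.max()
t_rep = np.array([rng.normal(mu, sigma, observed.size).max()
                  for _ in range(2000)])

# A small predictive p-value flags a feature the Gaussian model cannot predict
p_value = (t_rep >= t_obs).mean()
print(p_value < 0.05)
```

When such a check fails, the abstract's workflow suggests moving to the more flexible latent non-Gaussian specification.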
3

Joint Posterior Inference for Latent Gaussian Models and extended strategies using INLA

Chiuchiolo, Cristian 06 June 2022
Bayesian inference is particularly challenging for hierarchical statistical models, where computational complexity becomes a significant issue. Sampling-based methods such as the popular Markov chain Monte Carlo (MCMC) can provide accurate solutions, but they typically incur a high computational burden. An attractive alternative is the Integrated Nested Laplace Approximation (INLA) approach, which is faster when applied to the broad class of Latent Gaussian Models (LGMs). The method computes fast and empirically accurate deterministic approximations of the posterior marginals of the model's unknown parameters. In the first part of this thesis, we discuss how to extend the software's applicability to joint posterior inference by constructing a new class of joint posterior approximations, which also add marginal corrections for location and skewness. As these approximations result from combining a Gaussian copula with internally pre-computed accurate Gaussian approximations, we name this class the Skew Gaussian Copula (SGC). By computing the moments and correlation structure of a mixture representation of these distributions, we obtain new fast and accurate deterministic approximations for linear combinations over a subset of the model's latent field. The same mixture approximates the full joint posterior density through Monte Carlo sampling on the hyperparameter set. We construct highly skewed examples based on Poisson and Binomial hierarchical models and verify these new approximations against INLA and MCMC. The new skewness correction from the Skew Gaussian Copula is more consistent with the outcomes provided by the default INLA strategies. In the last part, we propose an extension of the parametric fit employed by the Simplified Laplace Approximation strategy in INLA when approximating posterior marginals. By default, the strategy matches log derivatives from a third-order Taylor expansion of each Laplace Approximation marginal with those derived from Skew Normal distributions. We consider a fourth-order term and adapt an Extended Skew Normal distribution to produce a more accurate fit when skewness is large. We construct similarly skewed data simulations with Poisson and Binomial likelihoods and show that the posterior marginal results from the new extended strategy are more accurate and more coherent with the MCMC results than those of the original version.
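The flavor of fitting a skewed parametric family to local summaries can be conveyed with a simpler stand-in: moment-matching a Skew Normal to a target mean, variance, and skewness. The thesis instead matches log-density derivatives (and an Extended Skew Normal for the fourth-order fit); this sketch only illustrates inverting the Skew Normal's standard moment formulas.

```python
import numpy as np

def fit_skew_normal(mean, var, skew):
    """Moment-match a Skew Normal (xi, omega, alpha) to the given
    mean, variance, and (feasible, positive) skewness."""
    b = np.sqrt(2 / np.pi)

    def gamma1(delta):
        # Standard skewness formula for the Skew Normal shape
        num = (4 - np.pi) / 2 * (b * delta) ** 3
        return num / (1 - b**2 * delta**2) ** 1.5

    # gamma1 is increasing in delta on (0, 1); invert by bisection
    lo, hi = 0.0, 0.999
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if gamma1(mid) < skew else (lo, mid)
    delta = (lo + hi) / 2
    omega = np.sqrt(var / (1 - b**2 * delta**2))
    xi = mean - omega * b * delta
    alpha = delta / np.sqrt(1 - delta**2)
    return xi, omega, alpha

xi, omega, alpha = fit_skew_normal(mean=0.0, var=1.0, skew=0.5)
print(xi, omega, alpha)
```

A moderate target skewness of 0.5 already requires a fairly large shape parameter, which hints at why higher-order corrections help when skewness is large.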
4

Multiple scales in air quality modeling, and estimation of associated uncertainties

Bourdin-Korsakissok, Irène 15 December 2009
The evolution of atmospheric pollutants depends on various processes which occur at multiple characteristic scales, such as emissions, meteorology, turbulence, chemical transformation and deposition. Representing all the temporal and spatial scales in an air quality model is therefore very difficult. The Eulerian chemistry-transport models in general use have a typical resolution much coarser than the finest scales. Thus, many processes are not well described by these models, which results in subgrid-scale variability. This thesis proposes a review of subgrid-scale processes and the associated uncertainty, as well as two multiscale methods aimed at reducing this uncertainty: (1) coupling an Eulerian model with a local-scale Gaussian model, and (2) using statistical downscaling methods. (1) Model coupling: one of the main subgrid-scale processes is emissions, especially point emissions (industry) and traffic. In particular, the characteristic spatial scale of a plume emitted by a chimney is much smaller than the typical Eulerian grid resolution. The coupling method, called plume-in-grid modeling, uses a Gaussian puff model to better represent point emissions at local scale, coupled to an Eulerian model. The impact of this subgrid-scale treatment of emissions is evaluated at continental scale for passive tracers (ETEX-I and Chernobyl), as well as for photochemistry at regional scale (the Paris region). Several issues are addressed, notably the uncertainty due to local-scale parameterizations and the influence of the Eulerian grid resolution. (2) Statistical downscaling: this method aims at compensating for the representativity error made by the model when forecasting concentrations at particular measurement stations. The representativity scale of these stations is indeed typically smaller than the Eulerian cell size, and concentrations at stations depend on many subgrid-scale phenomena (micrometeorology, topography, and so on). Using statistical relationships between the large-scale variable (model output) and the local-scale variable (concentrations observed at stations) therefore significantly reduces the forecast error. In addition, ensemble simulations help account for the model error due to physical parameterizations. With this ensemble, several downscaling methods are implemented: simple and multiple linear regression, with or without preprocessing. The preprocessing methods include a classical principal component analysis, as well as another method called "principal fitted components". Results are presented at European scale for ozone peaks, and analyzed for several types of stations (rural, urban or periurban).
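A minimal version of the downscaling step can be sketched with synthetic data standing in for real model output and station observations (illustrative only): the coarse-grid concentration is regressed on the observed station concentration over a training period, and the fitted relation corrects later forecasts.

```python
import numpy as np

rng = np.random.default_rng(2)
model_cell = rng.uniform(20, 80, size=200)               # coarse model output
station = 0.7 * model_cell + 5 + rng.normal(0, 2, 200)   # local-scale observations

# Fit station = a * model + b by least squares on a training period
A = np.column_stack([model_cell[:150], np.ones(150)])
coef, *_ = np.linalg.lstsq(A, station[:150], rcond=None)

# Apply the statistical correction on a held-out forecast period
pred = coef[0] * model_cell[150:] + coef[1]
raw_rmse = np.sqrt(np.mean((model_cell[150:] - station[150:]) ** 2))
cor_rmse = np.sqrt(np.mean((pred - station[150:]) ** 2))
print(cor_rmse < raw_rmse)
```

The thesis goes well beyond this simple regression (ensembles, principal component preprocessing), but the representativity-error correction has this basic shape.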
5

Some advances in patch-based image denoising

Houdard, Antoine 12 October 2018
This thesis studies non-local methods for image processing, with denoising as the main application, although the methods studied are generic enough to apply to other inverse problems in imaging. Natural images contain redundant structures, and this self-similarity can be exploited for restoration purposes. A common way to exploit it is to decompose the image into patches, which can then be grouped, compared and filtered together. In the first chapter, "global denoising" is reframed in the classical formalism of diagonal estimation and its asymptotic behaviour is studied in the oracle case. Precise conditions on both the image and the global filter are introduced to ensure and quantify convergence. The second chapter is dedicated to the study of Gaussian and Gaussian mixture priors for patch-based image denoising. Such priors are widely used for image restoration. We propose some ideas to answer the following questions: Why are these priors so widely used? What information do they encode about the image? The third chapter proposes a probabilistic high-dimensional mixture model for the noisy patches. This model adopts a sparse modeling which assumes that the data lie on group-specific subspaces of low dimensionality. It yields a denoising algorithm that achieves state-of-the-art performance. The last chapter explores different ways of aggregating the patches, and proposes a framework that expresses the aggregation step as a least squares problem.
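A toy non-local denoiser conveys the patch-grouping idea described above. It is a crude weighted-average scheme, not any of the algorithms proposed in the thesis: each pixel is re-estimated as a weighted average of pixels whose surrounding patches look similar.

```python
import numpy as np

def nlm_denoise(img, patch=3, h=0.3):
    """Crude non-local denoising for a square image: average pixel
    values, weighting by patch similarity."""
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    n = img.shape[0]
    # Collect every patch as a flat vector
    patches = np.array([padded[i:i + patch, j:j + patch].ravel()
                        for i in range(n) for j in range(n)])
    centers = img.ravel()
    out = np.empty(n * n)
    for k in range(n * n):
        d2 = ((patches - patches[k]) ** 2).mean(axis=1)
        w = np.exp(-d2 / h**2)          # similar patches get larger weights
        out[k] = (w * centers).sum() / w.sum()
    return out.reshape(img.shape)

rng = np.random.default_rng(3)
cols = np.arange(16)
clean = np.tile(np.where(cols < 8, 0.0, 1.0), (16, 1))  # two flat regions
noisy = clean + rng.normal(0, 0.2, clean.shape)
denoised = nlm_denoise(noisy)
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

On a piecewise-constant image the redundancy is extreme, so even this naive scheme reduces the error substantially; the chapters above study when and why such patch priors work.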
6

ScoreDrivenModels.jl: A Julia Package for Generalized Autoregressive Score Models

Guilherme Meirelles Bodin de Moraes 03 February 2022
Score-driven models, also known as generalized autoregressive score (GAS) models, represent a class of observation-driven time series models. They possess desirable properties for time series modeling, such as the ability to model different conditional distributions and to consider time-varying parameters within a flexible framework. In this dissertation, we present ScoreDrivenModels.jl, an open-source Julia package for modeling, forecasting, and simulating time series using the framework of score-driven models. The package is flexible with respect to model definition, allowing the user to specify the lag structure and which parameters are time-varying or constant. It is also possible to consider several distributions, including Beta, Exponential, Gamma, Lognormal, Normal, Poisson, Student's t, and Weibull. The provided interface is flexible, allowing interested users to implement any desired distribution and parametrization.
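A minimal Gaussian GAS(1,1) filter for a time-varying mean conveys the score-driven mechanics; ScoreDrivenModels.jl has its own API, and the names here are illustrative. The score of the Gaussian log-likelihood with respect to the mean, (y_t - f_t) / sigma^2, drives the parameter update.

```python
import numpy as np

def gas_filter(y, omega, A, B, sigma=1.0):
    """Gaussian GAS(1,1) recursion for a time-varying mean:
    f[t+1] = omega + A * score[t] + B * f[t]."""
    f = np.empty(len(y))
    f[0] = omega / (1 - B)               # start at the unconditional mean
    for t in range(len(y) - 1):
        score = (y[t] - f[t]) / sigma**2
        f[t + 1] = omega + A * score + B * f[t]
    return f

rng = np.random.default_rng(4)
# Simulate a slowly varying mean plus Gaussian noise
true_mean = np.sin(np.linspace(0, 4 * np.pi, 500))
y = true_mean + rng.normal(0, 0.3, 500)

f = gas_filter(y, omega=0.0, A=0.3, B=0.95)
# The filtered path should track the moving mean better than a constant fit
print(np.mean((f - true_mean) ** 2) < np.var(y))
```

Swapping the Gaussian score for, say, a Student's t score is what makes the framework robust to outliers, which is the flexibility the package exposes across its supported distributions.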
