About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Análise estatística de curvas de crescimento sob o enfoque clássico e Bayesiano: aplicação à dados médicos e biológicos / Statistical analysis of growth curves under the classical and Bayesian approach: application to medical and biological data

Oliveira, Breno Raphael Gomes de 16 February 2016 (has links)
Introduction: The growth curve is an empirical model of the evolution of a quantity over time. Growth curves are used in many disciplines, particularly in statistics, where there is a large literature on the subject related to nonlinear models. Method: In this master's dissertation, a study based on growth data from the biological and medical areas was conducted to compare the two types of inference (classical and Bayesian), in search of better estimates and results for nonlinear regression models, especially considering some growth models introduced in the literature. In the Bayesian approach to nonlinear modeling we assume normal errors, the usual assumption, and also stable distributions for the response variable. We also study some robustness aspects of nonlinear regression models in the presence of outliers or discordant observations, considering the use of stable distributions for the response in place of the usual normality assumption. Results and Conclusions: In the analysis of the two examples, better fits were observed when the Bayesian approach was used to fit the nonlinear growth-curve models. It is well known that, in general, there is no closed form for the probability density function of stable distributions. However, under a Bayesian approach, the use of a latent or auxiliary random variable provides a simplification for obtaining the posterior distributions when stable distributions are assumed. These results could be of great interest to researchers and practitioners dealing with non-Gaussian data. To demonstrate the usefulness of the computational aspects, the methodology is also applied to an example related to intrauterine growth curves for premature infants. Posterior summaries of interest are obtained using MCMC (Markov Chain Monte Carlo) methods and the OpenBUGS software.
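
As an illustration of the kind of fit described above, the sketch below draws posterior samples for a logistic growth curve with normal errors using a random-walk Metropolis sampler. It is a minimal stand-in for the OpenBUGS analyses mentioned in the abstract: the data, the logistic parameterization, the priors and the proposal scale are all hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical growth data: logistic curve plus normal noise
t = np.linspace(0.0, 10.0, 30)
y = 10.0 / (1.0 + np.exp(-(t - 5.0))) + rng.normal(0.0, 0.3, t.size)

def log_post(theta):
    """Log-posterior for a logistic growth curve with normal errors."""
    a, b, log_c, log_s = theta
    mu = a / (1.0 + np.exp(-(t - b) / np.exp(log_c)))
    s = np.exp(log_s)
    loglik = -0.5 * np.sum(((y - mu) / s) ** 2) - t.size * log_s
    logprior = -0.5 * np.sum(theta ** 2) / 100.0      # vague normal priors
    return loglik + logprior

theta = np.array([8.0, 4.0, 0.0, 0.0])
lp = log_post(theta)
chain = []
for _ in range(20_000):
    prop = theta + rng.normal(0.0, 0.05, 4)           # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:          # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

posterior = np.array(chain)[5_000:]                   # discard burn-in
print("posterior means (a, b, log c, log sigma):", posterior.mean(axis=0))
```
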
83

Modelling Long-Term Persistence in Hydrological Time Series

Thyer, Mark Andrew January 2001 (has links)
The hidden state Markov (HSM) model is introduced as a new conceptual framework for modelling long-term persistence in hydrological time series. Unlike the stochastic models currently used, the conceptual basis of the HSM model can be related to the physical processes that influence long-term hydrological time series in the Australian climatic regime. A Bayesian approach was used for model calibration. This enabled rigorous evaluation of parameter uncertainty, which proved crucial for the interpretation of the results. Applying the single-site HSM model to rainfall data from selected Australian capital cities provided some revealing insights. In eastern Australia, where there is a significant influence from the tropical Pacific weather systems, the results showed that a weak wet and medium dry state persistence was likely to exist. In southern Australia the results were inconclusive. However, they suggested that a weak wet and strong dry persistence structure may exist, possibly due to the infrequent incursion of tropical weather systems into southern Australia. This led to the postulate that the tropical weather systems are the primary cause of two-state long-term persistence. The single- and multi-site HSM model results for the Warragamba catchment rainfall data supported this hypothesis: a strong two-state persistence structure was likely to exist in the rainfall regime of this important water supply catchment. In contrast, the single- and multi-site results for the Williams River catchment rainfall data were inconsistent, illustrating that further work is required to understand the application of the HSM model. Comparisons with the lag-one autoregressive [AR(1)] model showed that it was not able to reproduce the same long-term persistence as the HSM model. However, with record lengths typical of real data, the difference between the two approaches was not statistically significant. Nevertheless, it was concluded that the HSM model provides a conceptually richer framework than the AR(1) model. / PhD Doctorate
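
To make the two-state persistence idea concrete, here is a small synthetic sketch of a hidden-state Markov rainfall generator: a two-state (wet/dry) Markov chain whose hidden state sets the mean of the annual rainfall. The transition probabilities, means and standard deviation are invented for illustration and are not calibrated values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical wet/dry persistence parameters (not calibrated values)
P = np.array([[0.9, 0.1],    # wet -> wet, wet -> dry
              [0.2, 0.8]])   # dry -> wet, dry -> dry
mean_rain = np.array([900.0, 650.0])   # mm/year in the wet and dry states
sd_rain = 120.0

state = 0
rainfall = []
for _ in range(200):                    # simulate 200 years
    rainfall.append(rng.normal(mean_rain[state], sd_rain))
    state = rng.choice(2, p=P[state])   # hidden-state transition

rainfall = np.array(rainfall)
print("mean annual rainfall:", rainfall.mean())
print("lag-1 autocorrelation:", np.corrcoef(rainfall[:-1], rainfall[1:])[0, 1])
```
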
84

Predominant magnetic states in the Hubbard model on anisotropic triangular lattices

Watanabe, T., Yokoyama, H., Tanaka, Y., Inoue, J. 06 1900 (has links)
No description available.
85

Coupled flow systems, adjoint techniques and uncertainty quantification

Garg, Vikram Vinod, 1985- 25 October 2012 (has links)
Coupled systems are ubiquitous in modern engineering and science. Such systems can encompass fluid dynamics, structural mechanics, chemical species transport and electrostatic effects among other components, all of which can be coupled in many different ways. In addition, such models are usually multiscale, making their numerical simulation challenging, and necessitating the use of adaptive modeling techniques. The multiscale, multiphysics models of electroosmotic flow (EOF) constitute a particularly challenging coupled flow system. A special feature of such models is that the coupling between the electric physics and hydrodynamics is via the boundary. Numerical simulations of coupled systems are typically targeted towards specific Quantities of Interest (QoIs). Adjoint-based approaches offer the possibility of QoI-targeted adaptive mesh refinement and efficient parameter sensitivity analysis. The formulation of appropriate adjoint problems for EOF models is particularly challenging, due to the coupling of physics via the boundary as opposed to the interior of the domain. The well-posedness of the adjoint problem for such models is also non-trivial. One contribution of this dissertation is the derivation of an appropriate adjoint problem for slip EOF models, and the development of penalty-based, adjoint-consistent variational formulations of these models. We demonstrate the use of these formulations in the simulation of EOF flows in straight and T-shaped microchannels, in conjunction with goal-oriented mesh refinement and adjoint sensitivity analysis. Complex computational models may exhibit uncertain behavior for various reasons, ranging from uncertainty in experimentally measured model parameters to imperfections in device geometry. The last decade has seen a growing interest in the field of Uncertainty Quantification (UQ), which seeks to determine the effect of input uncertainties on the system QoIs. Monte Carlo methods remain a popular computational approach for UQ due to their ease of use and "embarrassingly parallel" nature. However, a major drawback of such methods is their slow convergence rate. The second contribution of this work is the introduction of a new Monte Carlo method which utilizes local sensitivity information to build accurate surrogate models. This new method, called the Local Sensitivity Derivative Enhanced Monte Carlo (LSDEMC) method, can converge at a faster rate than plain Monte Carlo, especially for problems with a low to moderate number of uncertain parameters. Adjoint-based sensitivity analysis methods enable the computation of sensitivity derivatives at virtually no extra cost after the forward solve. Thus, the LSDEMC method, in conjunction with adjoint sensitivity derivative techniques, can offer a robust and efficient alternative for UQ of complex systems. The efficiency of Monte Carlo methods can be further enhanced by using stratified sampling schemes such as Latin Hypercube Sampling (LHS). However, the non-incremental nature of LHS has been identified as one of the main obstacles to its application to certain classes of complex physical systems. Current incremental LHS strategies restrict the user to at least doubling the size of an existing LHS set to retain the convergence properties of LHS. The third contribution of this research is the development of a new Hierarchical LHS algorithm that creates designs which can be used to perform LHS studies in a more flexible, incremental setting, taking a step towards adaptive LHS methods. / text
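
The following sketch illustrates the general idea behind sensitivity-enhanced Monte Carlo: a handful of "expensive" model evaluations with gradients (as an adjoint solve would provide) are turned into local linear surrogates, which are then sampled cheaply. The model, its gradient and the nearest-anchor surrogate rule are toy assumptions made for this example, not the LSDEMC algorithm of the dissertation.

```python
import numpy as np

rng = np.random.default_rng(2)

def qoi(x):
    """Hypothetical 'expensive' model output (stand-in for a PDE solve)."""
    return np.sin(x[0]) + 0.5 * x[1] ** 2

def qoi_grad(x):
    """Sensitivity derivatives, as an adjoint solve would provide them."""
    return np.array([np.cos(x[0]), x[1]])

# A few expensive anchor solves, each with its gradient
anchors = rng.uniform(-1.0, 1.0, size=(20, 2))
vals = np.array([qoi(a) for a in anchors])
grads = np.array([qoi_grad(a) for a in anchors])

# Many cheap samples, each evaluated through the nearest local linear surrogate
samples = rng.uniform(-1.0, 1.0, size=(20_000, 2))
nearest = np.argmin(((samples[:, None, :] - anchors[None, :, :]) ** 2).sum(-1), axis=1)
surrogate = vals[nearest] + np.einsum("ij,ij->i", grads[nearest], samples - anchors[nearest])

print("surrogate estimate of E[QoI]:", surrogate.mean())
print("direct Monte Carlo estimate: ", np.array([qoi(s) for s in samples]).mean())
```
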
86

Mathematical and algorithmic analysis of modified Langevin dynamics / L'analyse mathématique et algorithmique de la dynamique de Langevin modifié

Trstanova, Zofia 25 November 2016 (has links)
In statistical physics, the macroscopic information of interest for the systems under consideration can be inferred from averages over microscopic configurations distributed according to probability measures µ characterizing the thermodynamic state of the system. Due to the high dimensionality of the system (which is proportional to the number of particles), these configurations are most often sampled using trajectories of stochastic differential equations or Markov chains ergodic for the probability measure µ, which describes a system at constant temperature. One popular stochastic process allowing to sample this measure is the Langevin dynamics. In practice, the Langevin dynamics cannot be integrated analytically, so its solution is approximated with a numerical scheme. The numerical analysis of such discretization schemes is by now well understood when the kinetic energy is the standard quadratic one. One important limitation of estimators of ergodic averages is their possibly large statistical error. Under certain assumptions on the potential and kinetic energies, a central limit theorem can be shown to hold. The asymptotic variance may be large due to the metastability of the Langevin process, which occurs as soon as the probability measure µ is multimodal. In this thesis, we consider the discretization of modified Langevin dynamics which improve the sampling of the Boltzmann-Gibbs distribution by introducing a more general kinetic energy function U in place of the standard quadratic one. We have two situations in mind: (a) Adaptively Restrained (AR) Langevin dynamics, where the kinetic energy vanishes for small momenta and agrees with the standard kinetic energy for large momenta. The interest of this dynamics is that particles with low energy are restrained; the computational gain follows from the fact that interactions between restrained particles need not be updated. Due to the separability of the position and momentum marginals of the distribution, averages of observables which depend on the position variable are equal to those computed with the standard Langevin dynamics. The efficiency of the method lies in the trade-off between the computational gain and the asymptotic variance of ergodic averages, which may increase compared to the standard dynamics since there are a priori more correlations in time due to restrained particles. Moreover, since the kinetic energy vanishes on an open set, the associated Langevin dynamics fails to be hypoelliptic. A first task of this thesis is therefore to prove that the Langevin dynamics with such a modified kinetic energy is ergodic. The next step is to present a mathematical analysis of the asymptotic variance of the AR-Langevin dynamics. To complement the analysis of this method, we estimate the algorithmic speed-up of the cost of a single iteration as a function of the parameters of the dynamics. (b) We also consider Langevin dynamics with kinetic energies growing more than quadratically at infinity, in an attempt to reduce metastability. The extra freedom provided by the choice of the kinetic energy should be used to reduce the metastability of the dynamics; in this thesis we explore this choice and demonstrate an improved convergence of ergodic averages on a simple low-dimensional example. An issue with the situations we consider is the stability of the discretized schemes. In order to obtain a weakly consistent method of order 2 (which is no longer trivial for a general kinetic energy), we rely on recently developed Metropolis-based schemes.
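
A minimal sketch of Langevin dynamics with a non-quadratic kinetic energy is given below, assuming a one-dimensional double-well potential, a simple first-order discretization, and a smoothstep blend as one possible way to make the kinetic-energy gradient vanish for small momenta. None of these choices are taken from the thesis, which in particular relies on Metropolized, second-order schemes.

```python
import numpy as np

rng = np.random.default_rng(3)

gamma, beta, dt = 1.0, 1.0, 0.01
p_min, p_max = 0.5, 1.0            # hypothetical restraining thresholds

def dU_dp(p):
    """Gradient of an 'adaptively restrained' kinetic energy: zero for small
    momenta, equal to p (standard quadratic) for large momenta, with a smooth
    polynomial blend in between (one possible choice, not the thesis's)."""
    a = np.clip((abs(p) - p_min) / (p_max - p_min), 0.0, 1.0)
    return a * a * (3 - 2 * a) * p

def dV_dq(q):                      # double-well potential V(q) = (q^2 - 1)^2
    return 4 * q * (q * q - 1)

q, p = 0.0, 0.0
traj = []
for _ in range(100_000):
    # First-order discretization of the modified Langevin dynamics:
    # dq = dU/dp dt,  dp = -dV/dq dt - gamma dU/dp dt + sqrt(2 gamma / beta) dW
    q += dU_dp(p) * dt
    p += (-dV_dq(q) - gamma * dU_dp(p)) * dt \
         + np.sqrt(2 * gamma * dt / beta) * rng.normal()
    traj.append(q)

traj = np.array(traj)
print("fraction of time spent in the right well:", (traj > 0).mean())
```
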
87

Méthodes d'inférence statistique pour champs de Gibbs / Statistical inference methods for Gibbs random fields

Stoehr, Julien 29 October 2015 (has links)
Due to the Markovian dependence structure, the normalizing constant of Markov random fields cannot be computed with standard analytical or numerical methods. This is a central issue for parameter inference and model selection, as computing the likelihood is an integral part of the procedure. When the Markov random field is directly observed, we propose to estimate the posterior distribution of the model parameters by replacing the likelihood with a composite likelihood, that is, a product of marginal or conditional distributions of the model that are easy to compute. Our first contribution is to correct the posterior distribution resulting from this misspecified likelihood by modifying the curvature at the mode, in order to avoid overly precise posterior parameters. In a second part, we suggest performing model selection between hidden Markov random fields with approximate Bayesian computation (ABC) algorithms, which compare the observed data and many Monte Carlo simulations through summary statistics. To make up for the absence of sufficient statistics for this model choice, we introduce summary statistics based on the connected components of the dependency graph of each model in competition. We assess their efficiency using a novel conditional misclassification rate that evaluates their local power to discriminate between models. We then set up an efficient procedure that reduces the computational cost while improving the quality of the decision, and use this local error rate to build an ABC procedure that adapts the summary statistics to the observed data. In a last part, to circumvent the computation of the intractable likelihood in the Bayesian Information Criterion (BIC), we extend mean-field approaches by replacing the likelihood with a product of distributions of random vectors, namely blocks of the lattice. On that basis, we derive BLIC (Block Likelihood Information Criterion), which answers model choice questions of a wider scope than ABC, such as the joint selection of the dependency structure and the number of latent states. We study the performance of BLIC for image segmentation.
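
To illustrate the ABC model-choice mechanism described above, the sketch below uses two toy competing models (a Gaussian and a Laplace observation model) and generic summary statistics; the thesis instead compares hidden Markov random fields with summary statistics built from connected components of the dependency graphs. The priors, tolerance and simulators are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(model, theta):
    """Toy stand-ins for simulating from two competing models."""
    if model == 0:
        return rng.normal(theta, 1.0, size=200)
    return rng.laplace(theta, 1.0, size=200)

def summary(x):
    # Toy summary statistics; the thesis builds them from connected
    # components of each model's dependency graph instead.
    return np.array([np.mean(x), np.std(x), np.mean(np.abs(x - np.median(x)))])

s_obs = summary(simulate(1, 1.5))          # "observed" data, drawn from model 1

kept = []
for _ in range(50_000):
    m = rng.integers(2)                    # model index from a uniform prior
    theta = rng.uniform(-5.0, 5.0)         # parameter from its prior
    if np.linalg.norm(summary(simulate(m, theta)) - s_obs) < 0.3:
        kept.append(m)                     # ABC rejection step

kept = np.array(kept)
print("acceptances:", kept.size)
print("approx. posterior P(model = 1):", kept.mean() if kept.size else "none accepted")
```
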
88

Méthodes de Monte Carlo stratifiées pour la simulation des chaines de Markov / Stratified Monte Carlo Methods for the simulation of Markov chains

El maalouf, Joseph 16 December 2016 (has links)
Monte Carlo methods are probabilistic schemes that use computers to solve various scientific problems with random numbers. Their main disadvantage is slow convergence, and developing techniques that accelerate Monte Carlo simulations is a very active research area. This is the aim of the deterministic quasi-Monte Carlo methods, where random points are replaced with special point sets having an enhanced uniform distribution; these methods, however, do not provide confidence intervals that permit estimating the error. In the present work, we are interested in random methods that reduce the variance of a Monte Carlo estimator: stratification techniques do so by splitting the sampling domain into strata in which random samples are drawn. We focus on applications of stratified methods for approximating Markov chains, simulating diffusion in materials, and solving fragmentation equations. In the first chapter, we present Monte Carlo methods in the framework of numerical quadrature and introduce the stratification strategies. We focus on two techniques: simple stratification (MCS) and Sudoku stratification (SS), where the points are distributed in patterns analogous to Sudoku grids. We also present quasi-Monte Carlo methods, whose quasi-random points share common features with stratified points. The second chapter describes the use of stratified algorithms for the simulation of Markov chains. We consider time-homogeneous Markov chains with a one-dimensional discrete or continuous state space. In the discrete case, we establish theoretical bounds for the variance of some estimators that indicate a variance reduction with respect to usual Monte Carlo: the variance of the MCS and SS schemes is of order 3/2, instead of 1 for usual MC. The results of numerical experiments, for one-dimensional or multi-dimensional, discrete or continuous state spaces, show improved variances; the order is estimated using linear regression. In the third chapter, we investigate the interest of stratified Monte Carlo methods for simulating diffusion in various non-stationary physical processes. This is done by discretizing time and performing a random walk at every time step. We propose algorithms for pure diffusion, convection-diffusion, and reaction-diffusion (Kolmogorov and Nagumo equations); finally we solve the Burgers equation. In each case, numerical tests show an improvement of the variance due to the use of stratified Sudoku sampling. The fourth chapter describes a stratified Monte Carlo scheme for simulating fragmentation phenomena. Several numerical comparisons show that stratified Sudoku sampling reduces the variance of Monte Carlo estimates. We finally test a method for solving an inverse problem: knowing the evolution of the mass distribution, it aims to recover the fragmentation kernel; in this case, quasi-random points are used to solve the direct problem.
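
A minimal illustration of the variance reduction brought by stratification is sketched below for a one-dimensional integral, drawing one point uniformly in each of n equal-width strata. The integrand and sample sizes are arbitrary, and this simple 1D stratification stands in for (but is not) the MCS and Sudoku schemes studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(5)

f = lambda x: np.exp(-x) * np.sin(10 * x)   # integrand on [0, 1]
n = 1024

def plain_mc():
    return f(rng.uniform(0.0, 1.0, n)).mean()

def stratified_mc():
    # One point drawn uniformly inside each of n equal strata of [0, 1]
    u = (np.arange(n) + rng.uniform(0.0, 1.0, n)) / n
    return f(u).mean()

est_plain = [plain_mc() for _ in range(200)]
est_strat = [stratified_mc() for _ in range(200)]
print("plain MC std of estimate:     ", np.std(est_plain))
print("stratified MC std of estimate:", np.std(est_strat))
```
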
89

Reconstitution par filtrage non-linéaire de milieux turbulents et rétrodiffusants à l'aide de LIDARs Doppler et aérosols / Retrieval of the properties of turbulent and backscattering media using nonlinear filtering techniques applied to observation data from a combination of a Doppler and an aerosol lidar

Campi, Antoine 09 December 2015 (has links)
The aim of this thesis was to design an algorithm for processing LIDAR (LIght Detection And Ranging) data; Doppler and aerosol LIDARs were mainly used. The measurements from these instruments are obtained in such a way that only an observation grid is in fact available, whereas we want information about the atmosphere, which is a continuous medium. We used multiresolution analysis methods to place ourselves in a mathematical framework corresponding to the physical problem, obtaining a decomposition of the state space of the process into two orthogonal subspaces, imposed by the structure of the observations. We were then able to extend the theory of nonlinear filtering to this framework, using filtering kernels of Feynman-Kac type. We had to rework the calculations and formulate hypotheses consistent with the physical problem in order to obtain convergence results similar to those of the classical theory. We then returned to a setting suitable for particle filters and developed several algorithms based on the theoretical results obtained. Different applications of our method allowed us to highlight the fact that we could, to some extent, recover parameters at scales smaller than the resolution given by the grid. Finally, we set up a theoretical framework as well as an algorithm for jointly processing Doppler and aerosol LIDAR data, and we verified that our estimates were refined by the addition of passive tracers.
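
Since the reconstruction ultimately relies on particle filtering, a bootstrap particle filter on a toy scalar state-space model is sketched below. The linear-Gaussian model, noise levels and particle count are placeholders chosen for illustration, not the multiresolution Feynman-Kac construction developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(6)

T, N = 50, 1000
# Hypothetical scalar model: x_t = 0.9 x_{t-1} + process noise, y_t = x_t + obs noise
x_true, ys = 0.0, []
for _ in range(T):
    x_true = 0.9 * x_true + rng.normal(0.0, 0.5)
    ys.append(x_true + rng.normal(0.0, 0.3))

particles = rng.normal(0.0, 1.0, N)
estimates = []
for y in ys:
    particles = 0.9 * particles + rng.normal(0.0, 0.5, N)     # propagate
    w = np.exp(-0.5 * ((y - particles) / 0.3) ** 2)           # likelihood weights
    w /= w.sum()
    estimates.append(np.sum(w * particles))                   # posterior mean
    particles = particles[rng.choice(N, N, p=w)]              # resample

print("last observation:", ys[-1], " filtered estimate:", estimates[-1])
```
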
90

Target Tracking in Environments of Rapidly Changing Clutter

January 2015 (has links)
Tracking targets in the presence of clutter is inevitable, and presents many challenges. Additionally, rapid, drastic changes in clutter density between different environments or scenarios can make it even more difficult for tracking algorithms to adapt. A novel approach to target tracking in such dynamic clutter environments is proposed using a particle filter (PF) integrated with Interacting Multiple Models (IMMs) to compensate and adapt to the transition between different clutter densities. This model was implemented for the case of a monostatic sensor tracking a single target moving with constant velocity along a two-dimensional trajectory which crossed between regions of drastically different clutter densities. Multiple combinations of clutter-density transitions were considered, using up to three different clutter densities. It was shown that the integrated IMM-PF algorithm outperforms traditional approaches such as the plain PF in terms of tracking results and performance. The minimal additional computational expense of including the IMM is more than warranted by the benefits of having it supplement and amplify the advantages of the PF. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2015
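
The core IMM ingredient, recursively updating the probabilities of the candidate clutter-density "modes", can be sketched in a few lines. The transition matrix and per-mode measurement likelihoods below are made-up numbers standing in for the outputs of the mode-matched (particle) filters used in the thesis.

```python
import numpy as np

# Markov transitions between clutter-density "modes" (hypothetical values)
P = np.array([[0.95, 0.05],
              [0.05, 0.95]])

mu = np.array([0.5, 0.5])                  # initial mode probabilities

# Per-mode measurement likelihoods over a few scans, e.g. produced by
# mode-matched particle filters; the numbers here are invented.
scan_likelihoods = [
    np.array([0.8, 0.1]),
    np.array([0.7, 0.2]),
    np.array([0.1, 0.9]),                  # clutter density changes here
]

for lik in scan_likelihoods:
    mu = P.T @ mu                          # Markov prediction of mode probabilities
    mu = mu * lik                          # weight by per-mode likelihoods
    mu = mu / mu.sum()                     # normalized posterior mode probabilities
    print(mu)
```
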
