241

Bayesian Parameterization in the spread of Diseases

Eriksson, Robin January 2017 (has links)
Mathematical and computational epidemiological models are important tools in efforts to combat the spread of infectious diseases. The models can be used to predict the further progression of an epidemic and to assess potential countermeasures for controlling disease spread. Fitting such models to available data requires parameter estimation methods; this thesis is concerned with likelihood-free Bayesian inference methods. The data and the model originate from the spread of verotoxigenic Escherichia coli in the Swedish cattle population. Using the SISE3 model, an extension of the susceptible-infected-susceptible model with added environmental pressure and three age categories, two different methods were employed to estimate the posterior: Approximate Bayesian Computation and Synthetic Likelihood Markov chain Monte Carlo. The mean values of the resulting posteriors were close to previously obtained point estimates, supporting the conclusion that Bayesian inference on a nation-scale SIS-like network is feasible.
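The thesis names Approximate Bayesian Computation (ABC) as one of its two likelihood-free methods. As a hedged illustration only, here is a minimal ABC rejection sketch in Python for a simplified stochastic SIS simulator; the simulator, the uniform priors, and the summary statistics are assumptions made for the example and are not the thesis's SISE3 model or data.

```python
import numpy as np

def simulate_sis(beta, gamma, n_steps=200, n_hosts=1000, i0=10, rng=None):
    """Forward-simulate a basic stochastic SIS epidemic (a hypothetical
    stand-in for the SISE3 model, which adds environmental pressure
    and age categories). Returns the infected count over time."""
    rng = rng or np.random.default_rng()
    infected = np.empty(n_steps)
    i = i0
    for t in range(n_steps):
        new_inf = rng.binomial(n_hosts - i, 1 - np.exp(-beta * i / n_hosts))
        recov = rng.binomial(i, 1 - np.exp(-gamma))
        i = i + new_inf - recov
        infected[t] = i
    return infected

def abc_rejection(observed, n_draws=10000, tol=0.2, rng=None):
    """ABC rejection: keep prior draws whose simulated summary
    statistics fall within `tol` of the observed ones."""
    rng = rng or np.random.default_rng(1)
    s_obs = np.array([observed.mean(), observed.std()])
    accepted = []
    for _ in range(n_draws):
        beta, gamma = rng.uniform(0, 1, size=2)  # uniform priors (assumed)
        sim = simulate_sis(beta, gamma, rng=rng)
        s_sim = np.array([sim.mean(), sim.std()])
        if np.linalg.norm((s_sim - s_obs) / (np.abs(s_obs) + 1e-9)) < tol:
            accepted.append((beta, gamma))
    return np.array(accepted)

obs = simulate_sis(0.3, 0.1, rng=np.random.default_rng(0))  # synthetic "data"
posterior = abc_rejection(obs)
print(posterior.mean(axis=0))  # posterior means, cf. point estimates
```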
242

Ancillarity-Sufficiency Interweaving Strategy (ASIS) for Boosting MCMC Estimation of Stochastic Volatility Models

Kastner, Gregor, Frühwirth-Schnatter, Sylvia 01 1900 (has links) (PDF)
The sampling efficiency of MCMC methods for Bayesian inference in stochastic volatility models depends strongly on the actual parameter values. Draws from the posterior under the standard centered parameterization break down when the volatility-of-volatility parameter in the latent state equation is small, while non-centered versions of the model show deficiencies for highly persistent latent variable series. The novel approach of ancillarity-sufficiency interweaving has recently been shown to help overcome these issues for a broad class of multilevel models. In this paper, we demonstrate how such an interweaving strategy can be applied to stochastic volatility models to greatly improve sampling efficiency for all parameters throughout the entire parameter range. Moreover, this method of "combining the best of different worlds" allows inference for parameter constellations that were previously infeasible to estimate, without the need to select a particular parameterization beforehand. / Series: Research Report Series / Department of Statistics and Mathematics
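For intuition on the interweaving idea, the following is a minimal sketch on a toy normal hierarchical model with known variances (an assumption for brevity), not the stochastic volatility sampler of the paper: one sweep updates the parameter first under the centered (sufficient) augmentation and then again under the non-centered (ancillary) one.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, tau = 50, 1.0, 0.1          # small tau: a purely centered sampler mixes poorly
mu_true = 2.0
y = mu_true + rng.normal(0, tau, n) + rng.normal(0, sigma, n)

def asis_sweep(mu, y):
    # 1) centered draw of the latents theta_i | mu, y_i
    prec = 1 / sigma**2 + 1 / tau**2
    mean = (y / sigma**2 + mu / tau**2) / prec
    theta = mean + rng.normal(0, np.sqrt(1 / prec), n)
    # 2) update mu under the centered (sufficient) augmentation
    mu = rng.normal(theta.mean(), tau / np.sqrt(n))
    # 3) switch to the ancillary (non-centered) augmentation
    theta_tilde = theta - mu
    # 4) update mu again, now conditional on the ancillary latents
    mu = rng.normal((y - theta_tilde).mean(), sigma / np.sqrt(n))
    return mu

mu, draws = 0.0, np.empty(5000)
for k in range(5000):
    mu = asis_sweep(mu, y)
    draws[k] = mu
print(draws.mean(), draws.std())
```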
243

A Monte-Carlo approach to dominant scatterer tracking of a single extended target in high range-resolution radar

De Freitas, Allan January 2013 (has links)
In high range-resolution (HRR) radar systems, the returns from a single target may fall in multiple adjacent range bins which individually vary in amplitude. A target following this representation is commonly referred to as an extended target, and its returns carry more information about the target. However, extracting this information from the radar returns is challenging due to several complexities. These include the single-dimensional nature of the radar measurements, complexities associated with the scattering of electromagnetic waves, and the complex environments in which radar systems are required to operate. There are several applications of HRR radar systems which extract target information with varying levels of success. A commonly used application is imaging, referred to as synthetic aperture radar (SAR) and inverse SAR (ISAR) imaging. These techniques combine multiple single-dimension measurements to obtain a single two-dimensional image, relying on rotational motion between the target and the radar during the collection of the measurements. In the case of ISAR, the radar is stationary while motion is induced by the target. There are several difficulties associated with the unknown motion of the target when standard Doppler processing techniques are used to synthesise ISAR images. In this dissertation, a non-standard Doppler approach, based on Bayesian inference techniques, was considered to address these difficulties. The target and observations were modelled with a non-linear state space model. Several different Bayesian techniques were implemented to infer the hidden states of the model, which coincide with the unknown characteristics of the target. A simulation platform was designed to analyse the performance of the implemented techniques, which were capable of successfully tracking a randomly generated target in a controlled environment. The influence of varying several parameters, related to the characteristics of the target and the implemented techniques, was explored. Finally, a comparison was made between standard Doppler processing and the proposed Bayesian methods. / Dissertation (MEng)--University of Pretoria, 2013. / gm2014 / Electrical, Electronic and Computer Engineering / unrestricted
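A bootstrap particle filter is the standard workhorse for Bayesian state inference in nonlinear state-space models of this kind. Below is a minimal sketch with an assumed toy dynamics and observation model, not the dissertation's radar model: propagate particles through the dynamics, weight them by the measurement likelihood, and resample.

```python
import numpy as np

def bootstrap_pf(y, n_part=500, q=0.1, r=0.5, rng=None):
    """Bootstrap particle filter for an assumed toy nonlinear model:
        x_t = 0.9 x_{t-1} + 5 sin(t) + q v_t,   y_t = x_t**2 / 20 + r w_t.
    Returns the filtered state means."""
    rng = rng or np.random.default_rng(0)
    x = rng.normal(0, 1, n_part)
    means = np.empty(len(y))
    for t, yt in enumerate(y):
        x = 0.9 * x + 5 * np.sin(t) + q * rng.normal(size=n_part)  # propagate
        logw = -0.5 * ((yt - x**2 / 20) / r) ** 2                  # weight
        w = np.exp(logw - logw.max()); w /= w.sum()
        means[t] = np.dot(w, x)
        x = x[rng.choice(n_part, n_part, p=w)]                     # resample
    return means

# Generate synthetic observations from the same model and filter them.
rng = np.random.default_rng(1)
xs, ys = 0.0, []
for t in range(100):
    xs = 0.9 * xs + 5 * np.sin(t) + 0.1 * rng.normal()
    ys.append(xs**2 / 20 + 0.5 * rng.normal())
est = bootstrap_pf(np.array(ys))
```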
244

Méthodes de simulation stochastique pour le traitement de l’information / Stochastic simulation methods for information processing

Minvielle-Larrousse, Pierre 05 March 2019 (has links)
When a quantity of interest cannot be measured directly, it is common to observe other quantities that are linked to it by physical laws. These quantities can provide information about the quantity of interest if one can solve the inverse problem, which is often ill-posed, and infer its value. Bayesian inference is a powerful statistical tool for inversion, but it requires the computation of high-dimensional integrals. Sequential Monte Carlo (SMC) methods, also known as particle methods, are a class of Monte Carlo methods for sampling from a sequence of probability densities of growing dimension. They have many applications, for instance in filtering, global optimization, and rare-event simulation. This work focused in particular on extending SMC methods to a dynamic context in which the system, governed by a hidden Markov process, is also determined by static parameters that we seek to estimate. In sequential Bayesian estimation, the determination of fixed parameters causes particular difficulties: such a process is non-ergodic, since the system does not forget its initial conditions. It is shown how these difficulties can be overcome in an application of tracking and identifying geometric shapes with a CCD digital camera. Markov chain Monte Carlo (MCMC) sampling steps are introduced to diversify the samples without altering the posterior distribution.
For another application, in materials inspection, which this time mixes static and dynamic parameters offline, an original approach was proposed: a Particle Marginal Metropolis-Hastings (PMMH) algorithm incorporating Rao-Blackwellized SMC based on a bank of interacting ensemble Kalman filters. Other information-processing work was also conducted: particle filtering for tracking a vehicle during atmospheric reentry, 3D radar imaging by sparse regularization, and image registration by mutual information.
245

Evaluation of Probabilistic Programming Frameworks

Munkby, Carl January 2022 (has links)
In recent years, significant progress has been made in the area of probabilistic programming, enabling a considerably easier workflow for quantitative research in many fields. However, as new probabilistic programming frameworks (PPFs) are continuously being created and developed, there is a need for ways of evaluating and benchmarking them. To this end, this thesis explored a range of evaluation measures to assess and better understand the performance of three PPFs: Stan, NumPyro, and TensorFlow Probability (TFP). Their respective Hamiltonian Monte Carlo (HMC) samplers were benchmarked on three different hierarchical models using both centered and non-centered parametrizations. The results showed that even when the same inference algorithms were used, the PPFs' samplers still exhibited different behaviours, which led to non-negligible differences in their statistical efficiency. Furthermore, the sampling behaviour of the PPFs indicated that the observed differences can possibly be attributed to how the warm-up phase used in HMC sampling is constructed. Finally, the study concludes that the computational speed of the underlying numerical library was the primary deciding factor of performance in this benchmark, demonstrated by NumPyro's superior computational speed: it yielded up to 10x higher minimum effective sample size per second (ESSmin/s) than Stan and 4x higher ESSmin/s than TFP.
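As a sketch of the kind of benchmark described, the centered and non-centered parametrizations of a hierarchical model can be compared in NumPyro as follows; the eight-schools data and sampler settings here are illustrative assumptions, not the thesis's three models or configuration.

```python
import numpy as np
import jax.random as jr
import numpyro
import numpyro.distributions as dist
from numpyro.handlers import reparam
from numpyro.infer import MCMC, NUTS
from numpyro.infer.reparam import LocScaleReparam

# Eight-schools-style hierarchical model, centered parametrization.
def model(J, sigma, y=None):
    mu = numpyro.sample("mu", dist.Normal(0.0, 5.0))
    tau = numpyro.sample("tau", dist.HalfCauchy(5.0))
    with numpyro.plate("J", J):
        theta = numpyro.sample("theta", dist.Normal(mu, tau))
        numpyro.sample("obs", dist.Normal(theta, sigma), obs=y)

# Non-centered version obtained mechanically via a reparam handler.
noncentered = reparam(model, config={"theta": LocScaleReparam(0)})

y = np.array([28., 8., -3., 7., -1., 1., 18., 12.])
sigma = np.array([15., 10., 16., 11., 9., 11., 10., 18.])
for name, m in [("centered", model), ("non-centered", noncentered)]:
    mcmc = MCMC(NUTS(m), num_warmup=1000, num_samples=1000)
    mcmc.run(jr.PRNGKey(0), J=8, sigma=sigma, y=y)
    print(name)
    mcmc.print_summary()  # reports n_eff per parameter
```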
246

Data-driven test case design of automatic test cases using Markov chains and a Markov chain Monte Carlo method / Datadriven testfallsdesign av automatiska testfall med Markovkedjor och en Markov chain Monte Carlo-metod

Lindahl, John, Persson, Douglas January 2021 (has links)
Large and complex software that is frequently changed leads to testing challenges. It is well established that the later a fault is detected in software development, the more it costs to fix. This thesis aims to research and develop a method of generating relevant and non-redundant test cases for a regression test suite, in order to catch bugs as early in the development process as possible. The research was carried out at Axis Communications AB with their products and systems in mind. The approach uses user data to dynamically generate a Markov chain model and strengthens that model with a Markov chain Monte Carlo method. The model generates test case proposals, detects test gaps, and identifies redundant test cases based on the user data and data from a test suite. The sampling in the Markov chain Monte Carlo method can be modified to bias the model towards test coverage or relevancy. The model is generated generically and can therefore be applied to other API-driven systems. It was designed with scalability in mind, and further implementations can increase its complexity and specialize it for individual needs.
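A minimal sketch of the core mechanism, assuming a toy event-log format (the log contents and API names are hypothetical): estimate a transition matrix from observed user event sequences, then random-walk over the resulting Markov chain to propose test cases.

```python
import numpy as np

# Toy event log: sequences of API calls observed from users (assumed format).
logs = [["login", "list", "view", "logout"],
        ["login", "view", "view", "logout"],
        ["login", "list", "logout"]]

states = sorted({e for seq in logs for e in seq})
idx = {s: i for i, s in enumerate(states)}

# Maximum-likelihood transition matrix from observed transition counts.
counts = np.zeros((len(states), len(states)))
for seq in logs:
    for a, b in zip(seq, seq[1:]):
        counts[idx[a], idx[b]] += 1
P = counts / counts.sum(axis=1, keepdims=True).clip(min=1)

def generate_test_case(start="login", max_len=10, rng=None):
    """Random walk over the user-behaviour Markov chain; the visited
    event sequence becomes a candidate test case."""
    rng = rng or np.random.default_rng(0)
    seq, s = [start], idx[start]
    for _ in range(max_len - 1):
        if P[s].sum() == 0:          # absorbing state with no outgoing edges
            break
        s = rng.choice(len(states), p=P[s])
        seq.append(states[s])
        if states[s] == "logout":
            break
    return seq

print(generate_test_case())
```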
247

Probabilistic Computing: From Devices to Systems

Jan Kaiser (8346969) 22 April 2022 (has links)
Conventional computing is based on the concept of bits, which are classical entities that are either 0 or 1 and can be represented by stable magnets. The field of quantum computing relies on qubits, which are a complex linear combination of 0 and 1. Recently, the concept of probabilistic computing with probabilistic (p-)bits was introduced, where p-bits are robust classical entities that fluctuate between 0 and 1. P-bits can be naturally represented by low-barrier nanomagnets. Probabilistic computers (p-computers) based on p-bits are domain-based hardware accelerators for Monte Carlo algorithms that can efficiently address probabilistic tasks like sampling, optimization, and machine learning.
In this dissertation, starting from the intrinsic physics of nanomagnets, we show that a compact hardware implementation of a p-bit based on stochastic magnetic tunnel junctions (s-MTJs) can operate at high speeds, on the order of nanoseconds, a prediction that has recently received experimental support.
We then move to the system level and illustrate by simulation and by experiment how multiple interconnected p-bits can be utilized to train a Boltzmann machine built with hardware p-bits. We observe that even non-ideal s-MTJs can be utilized for probabilistic computing when combined with hardware-aware learning.
Finally, we show how to build a p-computer to accelerate a wide variety of problems ranging from optimization and sampling to quantum computing and machine learning. The common theme for all these applications is the underlying Monte Carlo and Markov chain Monte Carlo algorithms and their parallelism, enabled by a unique p-computer architecture.
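The p-bit behaviour described here has a standard software emulation: each p-bit outputs ±1 with a probability biased by the tanh of its input, which amounts to Gibbs sampling of a Boltzmann distribution. The sketch below samples a small Boltzmann machine with sequentially updated p-bits; the coupling weights and biases are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric coupling matrix J and biases h of a small Boltzmann machine.
J = np.array([[0., 1., -2.],
              [1., 0., 1.],
              [-2., 1., 0.]])
h = np.array([0.5, -0.5, 0.0])
m = rng.choice([-1, 1], size=3)   # bipolar p-bit states

samples = np.empty((10000, 3))
for t in range(10000):
    for i in range(3):            # sequential (Gibbs-like) p-bit updates
        I = J[i] @ m + h[i]       # "synapse": input to p-bit i
        # p-bit equation: a fluctuating output biased by tanh of its input
        m[i] = np.sign(np.tanh(I) - rng.uniform(-1, 1))
    samples[t] = m
# The empirical distribution of `samples` approaches the Boltzmann
# distribution proportional to exp(m'Jm/2 + h'm).
```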
248

Around the Langevin Monte Carlo algorithm : extensions and applications / Autour de l'algorithme du Langevin : extensions et applications

Brosse, Nicolas 12 June 2019 (has links)
This thesis focuses on the problem of sampling in high dimension and is based on the Unadjusted Langevin Algorithm (ULA). In a first part, we propose two extensions of ULA and provide precise convergence guarantees for these algorithms. ULA is not applicable when the target distribution is compactly supported; thanks to a Moreau-Yosida regularization, it is nevertheless possible to sample from a probability distribution close enough to the distribution of interest. ULA diverges when the tails of the target distribution are too thin; by appropriately taming the gradient, this difficulty can be overcome. In a second part, we give two applications of ULA. We provide an algorithm to estimate the normalizing constants of log-concave densities based on a sequence of distributions with increasing variance. By comparing ULA with the Langevin diffusion, we develop a new control-variates methodology based on the asymptotic variance of the Langevin diffusion. In a third part, we analyze Stochastic Gradient Langevin Dynamics (SGLD), which differs from ULA only in its stochastic estimation of the gradient. We show that SGLD, applied with usual parameters, may end up very far from the target distribution. However, with an appropriate variance reduction technique, its computational cost can be much lower than that of ULA for the same accuracy.
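ULA itself is a one-line recursion, the Euler-Maruyama discretization of the Langevin diffusion; a minimal sketch on an assumed standard-Gaussian target:

```python
import numpy as np

def ula(grad_U, x0, gamma=1e-2, n_iter=50000, rng=None):
    """Unadjusted Langevin Algorithm: Euler-Maruyama discretization of
    the Langevin diffusion dX_t = -grad U(X_t) dt + sqrt(2) dB_t,
    targeting the density proportional to exp(-U(x))."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    out = np.empty((n_iter, x.size))
    for k in range(n_iter):
        x = x - gamma * grad_U(x) + np.sqrt(2 * gamma) * rng.normal(size=x.size)
        out[k] = x
    return out

# Example target: standard 2-D Gaussian, U(x) = ||x||^2 / 2, grad U(x) = x.
samples = ula(lambda x: x, x0=np.zeros(2))
print(samples[1000:].mean(axis=0), samples[1000:].std(axis=0))
```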
249

Parameter Recovery for the Four-Parameter Unidimensional Binary IRT Model: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Approaches

Do, Hoan 26 May 2021 (has links)
No description available.
250

Regression Analysis (Bayesian and Simple Linear) of Pulmonary ¹²⁹Xe ADC on Voxel MRI Data: A Comparison of CF Patients and Healthy Controls AND Optimizing Undersampled Voxel MRI Data for Retaining T2* Information: Finding the Point of Cessation.

Chatterjee, Neelakshi 02 June 2023 (has links)
No description available.
