1

Continuous Model Updating and Forecasting for a Naturally Fractured Reservoir

Almohammadi, Hisham 16 December 2013 (has links)
Recent developments in instrumentation, communication, and software have enabled the integration of real-time data into the decision-making process of hydrocarbon production. Applications of real-time data integration in drilling operations and horizontal-well lateral placement are becoming common industry practice. In reservoir management, the use of real-time data has been shown to be advantageous in tasks such as improving smart-well performance and in pressure-maintenance programs. Such capabilities allow for a paradigm change in which reservoir management becomes a semi-continuous process of model updates and decision optimizations instead of a periodic or reactive one. This is referred to as closed-loop reservoir management (CLRM). Because of the complexity of the dynamic physical processes and the large size and substantial uncertainty of reservoir descriptions, continuous model updating is a large-scale problem with a high-dimensional parameter space and high computational costs. An algorithm that is both feasible for practical applications and capable of generating reliable estimates of reservoir uncertainty is therefore a key element of CLRM. This thesis investigates the validity of Markov chain Monte Carlo (MCMC) sampling in a Bayesian framework as an uncertainty-quantification and model-updating tool suitable for real-time applications. A three-phase, dual-porosity, dual-permeability reservoir model is used in a synthetic experiment. Continuous probability density functions of cumulative oil production for two cases with different model-updating frequencies and reservoir maturity levels are generated and compared to a case with known geology, i.e., a truth case. Results show continuously narrowing ranges of cumulative oil production, with mean values approaching the truth case as model updating advances and the reservoir matures. To deal with the sensitivity of MCMC sampling to a growing number of observed measurements, as occurs in real-time applications, a new formulation of the likelihood function is proposed. Changing the likelihood function significantly improved chain convergence, chain mixing, and forecast uncertainty quantification. Finally, methods to validate the sampling quality and to judge the prior model for the MCMC process in real applications are recommended.
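To make the sampling step concrete, the sketch below shows a minimal random-walk Metropolis-Hastings loop of the kind such a framework builds on, with a Gaussian data-misfit likelihood. The `simulate` callable is a hypothetical stand-in for a reservoir-simulator run; all names and the step size are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

def log_posterior(theta, observed, simulate, sigma, log_prior):
    """Gaussian misfit between simulated and observed production data,
    plus a prior over the reservoir description (simulate is a placeholder
    forward model)."""
    residual = observed - simulate(theta)
    return log_prior(theta) - 0.5 * np.sum((residual / sigma) ** 2)

def metropolis_hastings(log_post, theta0, n_steps, step, rng):
    """Random-walk MCMC over reservoir parameters; the retained chain is
    an approximate sample from the posterior, used for forecasting."""
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = [theta.copy()]
    for _ in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.shape)
        lp_new = log_post(proposal)
        if np.log(rng.uniform()) < lp_new - lp:  # Metropolis accept/reject
            theta, lp = proposal, lp_new
        chain.append(theta.copy())
    return np.array(chain)
```

Running each retained parameter set through the forward model then yields an ensemble of cumulative-oil-production forecasts, whose spread is the uncertainty range the abstract describes.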
2

Modélisation probabiliste d’impression à l’échelle micrométrique / Probabilistic modeling of prints at the microscopic scale

Nguyen, Quoc Thong 18 May 2015 (has links)
We develop probabilistic models of prints at the microscopic scale. By accounting for the randomness of the shapes of the dots that compose a print, the proposed models can later be exploited in applications such as the authentication of printed documents. An analysis of printing on various paper substrates and printers shows a wide variety of dot shapes that depends on both the printing technology and the paper. The digital scan of a microscopic print is modeled in two parts: the gray-level distribution, and a spatial binary process describing the printed/blank layout of ink on the paper. For the gray levels, models of the inked and blank areas are obtained by selecting, with the Kolmogorov-Smirnov criterion, parametric distributions whose shapes are close to the observed histograms. The spatial binary model handles the wide diversity of dot shapes and the range of variation in the spatial density of inked particles. The first model is a field of independent, non-stationary Bernoulli variables whose parameters form a generalized Gaussian kernel. The second spatial model additionally accounts for the dependence between pixels through a non-stationary Markov model. Two iterative estimation methods are developed: a quasi-Newton algorithm that approaches the maximum-likelihood estimator, and a Metropolis-within-Gibbs algorithm that approximates the minimum mean-square-error estimator. The performance of the estimators is evaluated and compared on simulated images, and the accuracy of the models is analyzed on sets of microscopic-scale print images obtained from different printers. Results show the good behavior of the estimators and the consistency of the models.
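As a rough illustration of the first spatial model, the sketch below samples a single micro-dot as a field of independent Bernoulli pixels whose success probability decays from the dot center following a generalized Gaussian profile. The kernel form exp(-(r/alpha)**beta) and the parameter values are assumptions made for illustration, not the fitted values from the thesis.

```python
import numpy as np

def ink_probability(shape, center, alpha, beta):
    """Generalized-Gaussian probability that a pixel is inked:
    p(r) = exp(-(r / alpha) ** beta), where r is the distance to the dot center."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    r = np.hypot(xs - center[1], ys - center[0])
    return np.exp(-(r / alpha) ** beta)

def sample_dot(shape, center, alpha, beta, rng):
    """Draw one binary dot from the non-stationary Bernoulli field."""
    p = ink_probability(shape, center, alpha, beta)
    return (rng.uniform(size=shape) < p).astype(np.uint8)

rng = np.random.default_rng(seed=0)
dot = sample_dot((64, 64), center=(32, 32), alpha=10.0, beta=1.5, rng=rng)
```

Fitting alpha and beta to a scanned dot is what the quasi-Newton (maximum-likelihood) and Metropolis-within-Gibbs (MMSE) estimators in the abstract do; the second, Markov model additionally couples neighboring pixels.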
3

A Bayesian Hierarchical Model for Studying Inter-Occasion and Inter-Subject Variability in Pharmacokinetics

Li, Xia 19 April 2011 (has links)
No description available.
4

Analysis Of Stochastic And Non-stochastic Volatility Models

Ozkan, Pelin 01 September 2004 (has links) (PDF)
Changes in variance, or volatility, over time can be modeled as deterministic using autoregressive conditional heteroscedastic (ARCH) type models, or as stochastic using stochastic volatility (SV) models. This study compares the two classes of models, estimated on Turkish/US exchange-rate data. First, a GARCH(1,1) model is fitted to the data using the E-Views package; then a Bayesian estimation procedure, implemented in Ox, is used to estimate an appropriate SV model. To compare the models, a likelihood-ratio (LR) test statistic calculated for non-nested hypotheses is obtained.
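For reference, the deterministic branch of this comparison has a compact closed form. A minimal GARCH(1,1) Gaussian log-likelihood, of the kind E-Views maximizes internally, might look as follows; initializing the variance recursion at the sample variance is a common convention, not necessarily the thesis's choice.

```python
import numpy as np

def garch11_loglik(params, r):
    """Gaussian log-likelihood of GARCH(1,1):
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]"""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return -np.inf                       # positivity and stationarity constraints
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                      # common initialization choice
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return -0.5 * np.sum(np.log(2 * np.pi * sigma2) + r ** 2 / sigma2)
```

Maximizing this function (e.g., running a numerical optimizer on its negative) gives the GARCH fit; the SV alternative replaces the deterministic variance recursion with a latent stochastic process, which is why it calls for Bayesian machinery.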
5

Quantum Emulation with Probabilistic Computers

Shuvro Chowdhury (14030571) 31 October 2022 (has links)
The recent groundbreaking demonstrations of quantum supremacy in the noisy intermediate-scale quantum (NISQ) computing era have triggered intense activity in establishing finer boundaries between classical and quantum computing. In this dissertation, we use established techniques based on quantum Monte Carlo (QMC) to map quantum problems onto probabilistic networks in which the fundamental unit of computation, the p-bit, is inherently probabilistic and can be tuned to fluctuate between ‘0’ and ‘1’ with a desired probability. The mapped network can be viewed as a Boltzmann machine whose states each represent a Feynman path leading from an initial configuration of q-bits to a final configuration. Each such path, in general, has a complex amplitude ψ that can be associated with a complex energy. The real part of this energy is used to generate samples of Feynman paths in the usual way, while the imaginary part is accounted for by treating the samples as complex entities, unlike ordinary Boltzmann machines where samples are positive. This mapping of a quantum circuit onto a Boltzmann machine with complex energies should be particularly useful in view of the advent of special-purpose hardware accelerators known as Ising machines, which can obtain a very large number of samples per second through massively parallel operation. We demonstrate this acceleration on a recently studied quantum problem, speeding up its QMC simulation by a factor of ∼1000× compared to a highly optimized CPU program. Although this speed-up was demonstrated using a graph-colored architecture on an FPGA, we project a further ∼100× improvement with an architecture that utilizes clockless analog circuits. We believe this will contribute significantly to the growing effort to push the boundaries of the simulability of quantum circuits with classical/probabilistic resources and to compare them with NISQ-era quantum computers.
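The standard behavioral model of a p-bit from the literature gives a feel for the sampling primitive: each unit computes a synaptic input and flips to ±1 with a tanh-shaped probability, so that sequential sweeps over a network with symmetric weights perform Gibbs sampling of a Boltzmann distribution. The sketch below covers only the real-energy case; the complex-energy treatment the abstract describes would additionally carry a phase along with each sample.

```python
import numpy as np

def pbit_sweep_sampler(J, h, beta, n_sweeps, rng):
    """Sequential p-bit updates over a network with symmetric coupling J
    (zero diagonal) and bias h; samples follow exp(-beta * E(m)) with
    E(m) = -0.5 * m @ J @ m - h @ m."""
    n = len(h)
    m = rng.choice(np.array([-1, 1]), size=n)
    samples = []
    for _ in range(n_sweeps):
        for i in range(n):                          # one full sweep
            synaptic_input = J[i] @ m + h[i]
            # P(m_i = +1) = (1 + tanh(beta * I_i)) / 2
            m[i] = 1 if rng.uniform(-1.0, 1.0) < np.tanh(beta * synaptic_input) else -1
        samples.append(m.copy())
    return np.array(samples)
```

Because every p-bit in one color class of a graph-colored layout can be updated simultaneously, hardware such as the FPGA implementation mentioned above replaces the inner loop with massively parallel updates, which is where the large speed-ups come from.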
6

Non-convex Bayesian Learning via Stochastic Gradient Markov Chain Monte Carlo

Wei Deng (11804435) 18 December 2021 (has links)
The rise of artificial intelligence (AI) hinges on the efficient training of modern deep neural networks (DNNs) for non-convex optimization and uncertainty quantification, which boils down to a non-convex Bayesian learning problem. A standard tool for this problem is Langevin Monte Carlo, which approximates the posterior distribution with theoretical guarantees. However, non-convex Bayesian learning in real big-data applications can be arbitrarily slow and often fails to capture the uncertainty or the informative modes within a limited time. As a result, advanced techniques are still required.

In this thesis, we start with replica exchange Langevin Monte Carlo (also known as parallel tempering), a Markov jump process that proposes appropriate swaps between exploration and exploitation to achieve acceleration. However, the naïve extension of swaps to big-data problems leads to a large bias, and bias-corrected swaps are required; this mechanism in turn yields few effective swaps and insignificant acceleration. To alleviate this issue, we first propose a control-variates method that reduces the variance of the noisy energy estimators and shows the potential to accelerate the exponential convergence. We also present population-chain replica exchange and propose a generalized deterministic even-odd scheme to track the non-reversibility and obtain an optimal round-trip rate. Further approximations based on stochastic gradient descent yield a user-friendly method for large-scale uncertainty-approximation tasks without much tuning cost.

In the second part of the thesis, we study scalable dynamic importance sampling algorithms based on stochastic approximation. Traditional dynamic importance sampling algorithms have achieved success in bioinformatics and statistical physics; however, their lack of scalability has greatly limited their extension to big-data applications. To handle this scalability issue, we resolve the vanishing-gradient problem and propose two dynamic importance sampling algorithms based on stochastic gradient Langevin dynamics. Theoretically, we establish the stability condition for the underlying ordinary differential equation (ODE) system and guarantee the asymptotic convergence of the latent variable to the desired fixed point. Interestingly, this result still holds for non-convex energy landscapes. In addition, we propose a pleasingly parallel version of these algorithms with interacting latent variables, and show that the interacting algorithm can be theoretically more efficient than the single-chain alternative under an equivalent computational budget.
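The basic building block underlying both parts is the stochastic gradient Langevin dynamics (SGLD) update, which perturbs a stochastic gradient step with Gaussian noise scaled to the step size. A minimal sketch, using a generic minibatch gradient of the log-posterior (the names are illustrative):

```python
import numpy as np

def sgld_step(theta, grad_log_post, step_size, rng):
    """One SGLD update: theta <- theta + (eps / 2) * grad + N(0, eps * I),
    where grad_log_post gives a stochastic (minibatch) estimate of the
    gradient of the log-posterior at theta."""
    noise = np.sqrt(step_size) * rng.standard_normal(theta.shape)
    return theta + 0.5 * step_size * grad_log_post(theta) + noise
```

Replica exchange runs several such chains at different temperatures and proposes swaps between them; with minibatch energies the naive swap test is biased, which is exactly the issue the bias-corrected swaps in the first part address.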
