1 |
Bayesian inference for non-Gaussian state space model using simulation
Pitt, Michael K. January 1997 (has links)
No description available.
|
2 |
On Grouped Observation Level Interaction and a Big Data Monte Carlo Sampling Algorithm
Hu, Xinran 26 January 2015 (has links)
Big Data is transforming the way we live. From medical care to social networks, data is playing a central role in various applications. As the volume and dimensionality of datasets keep growing, designing effective data analytics algorithms emerges as an important research topic in statistics. In this dissertation, I will summarize our research on two data analytics algorithms: a visual analytics algorithm named Grouped Observation Level Interaction with Multidimensional Scaling and a big data Monte Carlo sampling algorithm named Batched Permutation Sampler. These two algorithms are designed to enhance the capability of generating meaningful insights and utilizing massive datasets, respectively. / Ph. D.
|
3 |
Model-based experimental design in electrochemistry
Nguyen, H. Viet January 2018 (has links)
The following thesis applies an experimental design framework to investigate properties of electron transfer kinetics and homogeneous catalytic reactions. The approach is model-based and the classical Butler-Volmer description is chosen to describe the fundamental electrochemical reaction at a conductive interface. The methodology focuses on two significant design variables: the applied potential at the electrode and the mass transport mode induced by the physical arrangement. An important problem in electrochemistry is the recovery of model parameters from output current measurements. In this work, the identifiability function is proposed as a measure of correspondence between the parameters and the output variable. Under diffusion-limit conditions, plain Monte Carlo optimization shows that the function is globally non-identifiable, or equivalently that the correspondence is generally non-unique. However, by selecting linear voltammetry as the applied potential, the primary parameters in the Butler-Volmer description are theoretically recovered from a single set of data. The result is accomplished via applications of Sobol ranking to reduce the parameter set and a sensitivity equation to invert for these parameters. The use of hydrodynamic tools for investigating electron transfer reactions is next considered. The work initially focuses on the rotating disk and its generalization - the rocking disk mechanism. A numerical framework is developed to analyze the latter, most notably the derivation of a Levich-like expression for the limiting current. The results are then used to compute corresponding identifiability functions for each of the above configurations. The potential effectiveness of each device in recovering kinetic parameters is straightforwardly evaluated by comparing the functional values. Furthermore, another hydrodynamic device - the rotating drum, which is highly suitable for viscous and resistive solvents - is theoretically analyzed. Combined with previous results, this rotating drum configuration shows promising potential as an alternative tool to traditional electrode arrangements. The final chapter illustrates the combination of a modulated input signal and appropriate mass transport regimes to express electro-catalytic effects. An AC voltammetry technique plays an important role in this approach and is discussed step-by-step from a simple redox reaction to the complete EC′ catalytic mechanism. A general algorithm based on forward and inverse Fourier transform functions for extracting harmonic currents from the total current is presented. The catalytic effect is evaluated and compared for three cases: macro and micro electrodes under diffusion-controlled conditions, and microfluidic environments. Experimental data are also included to support the simulated design results.
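For reference, the Butler-Volmer description invoked above relates current density to overpotential through two exponential branches. A minimal sketch follows; the exchange current density, transfer coefficient, and temperature are placeholder values for illustration, not fitted parameters from the thesis:

```python
import numpy as np

F = 96485.0  # Faraday constant, C/mol
R = 8.314    # gas constant, J/(mol K)

def butler_volmer(eta, i0=1e-3, alpha=0.5, n=1, T=298.15):
    """Current density (A/m^2) for overpotential eta (V) under the Butler-Volmer model."""
    f = n * F / (R * T)
    return i0 * (np.exp(alpha * f * eta) - np.exp(-(1.0 - alpha) * f * eta))

# Illustrative sweep of overpotentials
eta = np.linspace(-0.2, 0.2, 9)
print(butler_volmer(eta))
```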
|
4 |
Spatial Function Estimation with Uncertain Sensor Locations
Ptáček, Martin January 2021 (has links)
This thesis deals with the task of spatial function estimation using Gaussian process regression (GPR) under uncertainty in the training positions (sensor positions). First, the theory behind the GPR method with known training positions is described. This theory is then applied to derive expressions for the GPR predictive distribution at a test position when the uncertainty of the training positions is taken into account. Because these expressions have no analytical solution, they were approximated using the Monte Carlo method. The derived method was shown to improve the quality of the spatial function estimate compared with the standard use of GPR and also with a simplified solution reported in the literature. The thesis then considers the possibility of using GPR with uncertain training positions in combination with expressions that do have an analytical solution. It turns out that substantial assumptions must be introduced to obtain these expressions, which makes the predictive distribution inaccurate from the outset. It is also shown that the resulting method uses the standard GPR expressions in combination with a modified covariance function. Simulations show that this method produces estimates very similar to the basic GPR method assuming known training positions. On the other hand, the predictive variance (estimation uncertainty) of this method is increased, which is the desired effect of accounting for the uncertain training positions.
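A minimal sketch of the kind of Monte Carlo approximation described above, assuming a squared-exponential covariance and Gaussian noise on the sensor positions; the function names, kernel choice, and values here are illustrative, not taken from the thesis:

```python
import numpy as np

def sq_exp_kernel(a, b, ell=1.0, sf2=1.0):
    """Squared-exponential covariance between two sets of 1-D positions."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return sf2 * np.exp(-0.5 * d2 / ell**2)

def gpr_predict(x_train, y_train, x_test, noise=1e-2):
    """Standard GPR predictive mean/variance with known training positions."""
    K = sq_exp_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = sq_exp_kernel(x_test, x_train)
    alpha = np.linalg.solve(K, y_train)
    mean = k_star @ alpha
    var = sq_exp_kernel(x_test, x_test).diagonal() - np.einsum(
        "ij,ji->i", k_star, np.linalg.solve(K, k_star.T))
    return mean, var

def gpr_predict_uncertain(x_train_nominal, y_train, x_test, pos_std=0.1, n_mc=200):
    """Monte Carlo average of the predictive distribution over uncertain sensor positions."""
    rng = np.random.default_rng(0)
    means, variances = [], []
    for _ in range(n_mc):
        x_sample = x_train_nominal + pos_std * rng.standard_normal(x_train_nominal.shape)
        m, v = gpr_predict(x_sample, y_train, x_test)
        means.append(m)
        variances.append(v)
    means, variances = np.array(means), np.array(variances)
    # Mixture mean and variance (law of total variance over sampled positions)
    return means.mean(axis=0), variances.mean(axis=0) + means.var(axis=0)

# Toy usage: five sensors with uncertain 1-D positions
x_nom = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_obs = np.sin(x_nom)
print(gpr_predict_uncertain(x_nom, y_obs, np.array([1.5, 2.5])))
```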
|
5 |
Peptide Refinement by Using a Stochastic Search
Lewis, Nicole H., Hitchcock, David B., Dryden, Ian L., Rose, John R. 01 November 2018 (has links)
Identifying a peptide on the basis of a scan from a mass spectrometer is an important yet highly challenging problem. To identify peptides, we present a Bayesian approach which uses prior information about the average relative abundances of bond cleavages and the prior probability of any particular amino acid sequence. The proposed scoring function is composed of two overall distance measures, which quantify how close an observed spectrum is to a theoretical scan for a peptide. Our use of this scoring function, which approximates a likelihood, has connections to the generalization of the Bayesian framework presented by Bissiri and co-workers. A Markov chain Monte Carlo algorithm is employed to simulate candidate choices from the posterior distribution of the peptide sequence. The true peptide is estimated as the peptide with the largest posterior density.
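The Markov chain Monte Carlo step can be pictured as a Metropolis-style search over amino-acid sequences. In the sketch below the scoring function is a self-contained stand-in that rewards matches to a toy target, not the spectrum-distance likelihood of the paper, and all names are illustrative:

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
TARGET = "PEPTIDESEQ"  # toy target so the sketch runs end to end

def log_posterior(peptide):
    """Stand-in for log prior + approximate log likelihood built from spectrum distances."""
    return 2.0 * sum(a == b for a, b in zip(peptide, TARGET))

def metropolis_peptide_search(length=10, n_iter=20000, seed=1):
    rng = random.Random(seed)
    current = [rng.choice(AMINO_ACIDS) for _ in range(length)]
    current_lp = log_posterior(current)
    best, best_lp = list(current), current_lp
    for _ in range(n_iter):
        proposal = list(current)
        proposal[rng.randrange(length)] = rng.choice(AMINO_ACIDS)  # single-residue proposal
        lp = log_posterior(proposal)
        # Metropolis acceptance: always accept uphill, accept downhill with prob exp(lp - current_lp)
        if lp >= current_lp or rng.random() < math.exp(lp - current_lp):
            current, current_lp = proposal, lp
        if current_lp > best_lp:  # track the highest-scoring sequence (posterior mode estimate)
            best, best_lp = list(current), current_lp
    return "".join(best), best_lp

print(metropolis_peptide_search())
```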
|
6 |
Bayesian estimation by sequential Monte Carlo sampling for nonlinear dynamic systems
Chen, Wen-shiang 17 June 2004 (has links)
No description available.
|
7 |
Touchstat V. 3.00: A New and Improved Monte Carlo Adjunct for the Sequential Touching Task
Dixon, Wallace E., Jr., Price, Robert M., Watkins, Michael, Brink, Christine 01 August 2007 (has links)
The sequential-touching procedure is employed by researchers studying nonlinguistic categorization in toddlers. TouchStat 3.00 is introduced in this article as an adjunct to the sequential-touching procedure, allowing researchers to compare children’s actual touching behavior to what might be expected by chance. Advantages over the Thomas and Dahlin (2000) framework include ease of use and fewer assumptive limitations. Improvements over TouchStat 1.00 include calculation of chance probabilities for multiple “special cases” and for immediate intercategory alternations. A new feature for calculating mean run length is also included.
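To illustrate the kind of chance baseline such a Monte Carlo adjunct provides (this is not TouchStat itself, and the session parameters are arbitrary), one can simulate random touch sequences over equal-sized categories and estimate the expected mean run length:

```python
import random

def simulate_mean_run_length(n_touches=20, n_categories=2, n_objects_per_cat=4,
                             n_sims=10000, seed=42):
    """Chance level of mean run length if a child touches objects completely at random.
    A run is a maximal streak of consecutive touches within the same category."""
    rng = random.Random(seed)
    objects = [c for c in range(n_categories) for _ in range(n_objects_per_cat)]
    total = 0.0
    for _ in range(n_sims):
        seq = [rng.choice(objects) for _ in range(n_touches)]
        runs = 1
        for prev, cur in zip(seq, seq[1:]):
            if cur != prev:
                runs += 1
        total += n_touches / runs  # mean run length for this simulated sequence
    return total / n_sims

print(simulate_mean_run_length())
```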
|
8 |
Numerical optimization for mixed logit models and an application
Dogan, Deniz 08 January 2008 (has links)
In this thesis an algorithm (MLOPT) for mixed logit models is proposed. Mixed logit models are flexible discrete choice models, but their estimation with large datasets involves the solution of a nonlinear optimization problem with a high dimensional integral in the objective function, which is the log-likelihood function. This complex structure is a general problem that occurs in statistics and optimization.
MLOPT uses sampling from the dataset of individuals to generate a data sample. In addition to this, Monte Carlo samples are used to generate an integration sample to estimate the choice probabilities. MLOPT estimates the log-likelihood function values for each individual in the dataset by controlling and adaptively changing the data sample and the size of the integration sample at each iteration. Furthermore, MLOPT incorporates statistical testing for the quality of the solution obtained within the optimization problem.
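For context, the Monte Carlo integration that MLOPT relies on approximates each individual's choice probability by averaging standard logit probabilities over draws of the random coefficients. Below is a minimal sketch with illustrative dimensions and a normal mixing distribution assumed; it is not the MLOPT implementation itself:

```python
import numpy as np

def simulated_choice_probs(X, beta_mean, beta_std, n_draws=500, seed=0):
    """Simulated mixed logit choice probabilities for one individual.

    X         : (n_alternatives, n_features) attribute matrix
    beta_mean : (n_features,) mean of normally distributed coefficients
    beta_std  : (n_features,) standard deviation of the coefficients
    Returns the (n_alternatives,) vector of simulated probabilities.
    """
    rng = np.random.default_rng(seed)
    probs = np.zeros(X.shape[0])
    for _ in range(n_draws):
        beta = beta_mean + beta_std * rng.standard_normal(len(beta_mean))
        utilities = X @ beta
        exp_u = np.exp(utilities - utilities.max())  # stabilized softmax
        probs += exp_u / exp_u.sum()
    return probs / n_draws

# Toy example: 3 alternatives described by 2 attributes (price, quality)
X = np.array([[1.0, 0.5], [0.8, 0.9], [1.2, 0.3]])
p = simulated_choice_probs(X, beta_mean=np.array([-1.0, 2.0]),
                           beta_std=np.array([0.5, 0.5]))
print(p, p.sum())
```

The simulated log-likelihood then sums the log of each sampled individual's chosen-alternative probability over the data sample.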
MLOPT is tested with a benchmark study from the literature (AMLET) and further applied to real-life applications in the automotive industry by predicting market shares in the Low Segment of the new car market. The automotive industry is particularly interesting in that understanding the behavior of buyers and how rebates affect their preferences is very important for revenue management.
Real transaction data is used to generate and test the mixed logit models developed in this study. Another new aspect of this study is that the sales transactions are differentiated with respect to the transaction type of the purchases made. These mixed logit models are used to estimate demand and analyze market share changes under different what-if scenarios. An analysis and discussion of the results obtained are also presented.
|
9 |
Vectorisation compacte d’images par approches stochastiques / Compact image vectorization by stochastic approaches
Favreau, Jean-Dominique 15 March 2018 (has links)
Artists appreciate vector graphics for their compactness and editability. However, many artists express their creativity by sketching, painting, or taking photographs. Digitizing these images produces raster graphics. The goal of this thesis is to convert raster graphics into vector graphics that are easy to edit. We cast image vectorization as an energy minimization problem. Our energy is a combination of two terms. The first term measures the fidelity of the vector graphics to the input raster graphics; it is a standard term for image reconstruction problems. The main novelty is the second term, which measures the simplicity of the vector graphics. The simplicity term is global and involves discrete unknowns, which makes its minimization challenging. We propose two stochastic optimizations for this formulation: one for the line drawing vectorization problem and another for the color image vectorization problem. These optimizations start by extracting geometric primitives (a skeleton for sketches and a segmentation for color images) and then assembling these primitives together to form the vector graphics. In the last chapter we propose a generic optimization method for the problem of geometric shape extraction. This new algorithm does not require any preprocessing step. We show its efficiency in a variety of vision problems including line network extraction, object contouring, and image compression.
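Abstractly, the energy being minimized can be sketched as a data-fidelity term plus a weighted global simplicity penalty; the specific form, weight, and primitive count below are illustrative placeholders, not the thesis's actual formulation:

```python
import numpy as np

def vectorization_energy(raster, rendered, n_primitives, lambda_simplicity=0.1):
    """Toy energy for image vectorization: data fidelity plus a global simplicity term.

    raster       : (H, W) input raster image, values in [0, 1]
    rendered     : (H, W) rasterization of the candidate vector graphics
    n_primitives : number of geometric primitives in the candidate (discrete unknown)
    """
    fidelity = np.mean((raster - rendered) ** 2)    # reconstruction error
    simplicity = lambda_simplicity * n_primitives   # fewer primitives means a simpler drawing
    return fidelity + simplicity

# Toy usage with blank images and three primitives
print(vectorization_energy(np.zeros((8, 8)), np.zeros((8, 8)), n_primitives=3))
```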
|
10 |
Determination of Phase Equilibria and the Critical Point Using Two-Phase Molecular Dynamics Simulations with Monte Carlo Sampling
Patel, Sonal 15 June 2012 (has links) (PDF)
The two-phase MD technique employed in this work determines the liquid and vapor phase densities from a histogram of molecular densities within phase clusters in the simulation cell using a new Monte Carlo (MC) sampling method. These equilibrium densities are then fitted in conjunction with known critical-point scaling laws to obtain the critical temperature and the critical density. This MC post-processing method was found to be easier to implement in code, and it is efficient and readily applied to complex, structured molecules. The method has been successfully applied and benchmarked for a simple Lennard-Jones (LJ) fluid and a structured molecule, propane. Various degrees of internal flexibility in the propane models showed little effect on the coexisting densities far from the critical point, but internal flexibility (angle bending and bond vibrations) seemed to affect the saturated liquid densities in the near-critical region, changing the critical temperature by approximately 20 K. Shorter cutoffs were also found to affect the phase dome and the location of the critical point. The developed MD+MC method was then used to test, for the first time, the efficacy of two all-atom, site-site pair potential models (with and without point charges) developed solely from the energy landscape obtained from high-level ab initio pair interactions. Both models produced equivalent phase domes and critical loci. The model's critical temperature for methanol is 77 K too high while that for 1-propanol is 80 K too low, but the critical densities are in good agreement. These differences are likely attributable to the lack of multi-body interactions in the true pair potential models used here. Lastly, the transferability of the ab initio potential model was evaluated by applying it to 1-pentanol. An attempt has been made to separate the errors due to transferability of the potential model from errors due to the use of a true pair potential. The results suggested a good level of transferability for the site-site model. The lack of multi-body effects appears to be the dominant weakness in using the generalized ab initio potential model for determination of the phase dome and critical properties of larger alcohols.
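The fitting step described above is commonly built from the rectilinear-diameter rule, (rho_l + rho_v)/2 = rho_c + A(Tc - T), together with the order-parameter scaling law, rho_l - rho_v = B(Tc - T)^beta. A minimal sketch follows, using synthetic coexisting densities and an assumed Ising-like exponent; neither the data nor the exponent is taken from the thesis:

```python
import numpy as np
from scipy.optimize import curve_fit

BETA = 0.325  # assumed Ising-like critical exponent, not the thesis's fitted value

def coexistence_model(T, Tc, rho_c, A, B):
    """Stacked [rho_liquid, rho_vapor] from the scaling law and rectilinear diameter."""
    diam = rho_c + A * (Tc - T)                    # (rho_l + rho_v) / 2
    half_width = 0.5 * B * np.abs(Tc - T) ** BETA  # (rho_l - rho_v) / 2
    return np.concatenate([diam + half_width, diam - half_width])

# Synthetic coexisting densities (illustrative, not simulation output)
T = np.array([85.0, 95.0, 105.0, 115.0, 125.0])
rho_l = np.array([0.78, 0.74, 0.70, 0.65, 0.58])
rho_v = np.array([0.02, 0.04, 0.06, 0.10, 0.16])

popt, _ = curve_fit(coexistence_model, T, np.concatenate([rho_l, rho_v]),
                    p0=[140.0, 0.35, 0.001, 0.1])
Tc, rho_c, A, B = popt
print(f"Estimated Tc = {Tc:.1f}, critical density = {rho_c:.3f}")
```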
|