691
Linear and nonlinear stochastic differential equations with applications
Stasiak, Wojciech Boguslaw, 09 July 2010
Novel nonperturbative analytical techniques are developed for linear and nonlinear stochastic differential equations, and applications to a variety of physical problems are considered.
First, a method is introduced for deriving first- and second-order moment equations for a general class of stochastic nonlinear equations by performing a renormalization at the level of the second moment. These general results, when specialized to the weak-coupling limit, lead to a complete set of closed equations for the first two moments within the framework of an approximation corresponding to the direct-interaction approximation. Additional restrictions result in a self-consistent set of equations for the first two moments in the stochastic quasi-linear approximation. The technique is illustrated by considering two specific nonlinear physical random problems: model hydrodynamic and Vlasov-plasma turbulence.
The equations for the phenomenon of hydrodynamic turbulence are examined in more detail at the level of the quasi-linear approximation, which is valid for small turbulence Reynolds numbers. Closed-form solutions are found for the equations governing the random fluctuations of the velocity field under the assumption of special time-dependent, uniform or sheared, mean flow profiles. Constant, transient, and oscillatory flows are considered.
The smoothing approximation for solving linear stochastic differential equations is applied to several specific physical problems. The problem of a randomly perturbed quantum mechanical harmonic oscillator is investigated first using the wave kinetic technique. The equations for the ensemble average of the Wigner distribution function are defined within the framework of the smoothing approximation. Special attention is paid to the so-called long-time Markovian approximation, where the discrete nature of the quantum mechanical oscillator is explicitly visible. For special statistics of the random perturbative potential, the dependence of physical observables on time is examined in detail.
As a last example of the application of the stochastic techniques, the diffusion of a scalar quantity in the presence of a turbulent fluid is investigated. An equation corresponding to the smoothing approximation is obtained, and its asymptotic long-time version is examined for the cases of zero-mean flow and linearly sheared mean flow. / Ph. D.
692
Random Vector Generation on Large Discrete Spaces
Shin, Kaeyoung, 17 December 2010
This dissertation addresses three important open questions in the context of generating random vectors having discrete support. The first question relates to the "NORmal To Anything" (NORTA) procedure, which is easily the most widely used amongst methods for general random vector generation. While NORTA enjoys such popularity, there remain issues surrounding its efficient and correct implementation particularly when generating random vectors having denumerable support. These complications stem primarily from having to safely compute (on a digital computer) certain infinite summations that are inherent to the NORTA procedure. This dissertation addresses the summation issue within NORTA through the construction of easily computable truncation rules that can be applied for a range of discrete random vector generation contexts.
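The basic NORTA recipe described above can be sketched in a few lines: generate correlated standard normals, map them to uniforms through the normal CDF, then push each coordinate through the desired marginal's inverse CDF. This is an illustration of the general procedure, not the dissertation's code; the marginals and the base correlation below are arbitrary choices.

```python
import numpy as np
from scipy import stats

def norta_discrete(n, corr, inv_cdfs, rng=None):
    """Generate n random vectors with the given marginals via NORTA.

    corr     : correlation matrix for the underlying standard normals
    inv_cdfs : one inverse-CDF (quantile) function per dimension
    """
    rng = np.random.default_rng(rng)
    d = len(inv_cdfs)
    L = np.linalg.cholesky(corr)           # factor the base correlation
    z = rng.standard_normal((n, d)) @ L.T  # correlated standard normals
    u = stats.norm.cdf(z)                  # map to uniforms on (0, 1)
    return np.column_stack([f(u[:, j]) for j, f in enumerate(inv_cdfs)])

# Example: a bivariate vector with Poisson(3) and Geometric(0.4) marginals.
x = norta_discrete(
    10_000,
    np.array([[1.0, 0.7], [0.7, 1.0]]),
    [lambda u: stats.poisson.ppf(u, mu=3),
     lambda u: stats.geom.ppf(u, p=0.4)],
    rng=42,
)
```

Note that the base correlation must in general be adjusted so that the output vector attains a target correlation, and for marginals with denumerable support that adjustment involves the infinite summations whose safe truncation is the subject of this dissertation.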
The second question tackled in this dissertation relates to developing a customized algorithm for generating multivariate Poisson random vectors. The algorithm developed (TREx) is uniformly fast, about a hundred to a thousand times faster than NORTA, and presents opportunities for straightforward extension to the case of negative binomial marginal distributions.
The third and arguably most important question addressed in the dissertation is that of exact nonparametric random vector generation on finite spaces. Specifically, it is well-known that NORTA does not guarantee exact generation in dimensions higher than two. This represents an important gap in the random vector generation literature, especially in view of contexts that stipulate strict adherence to the dependency structure of the requested random vectors. This dissertation fully addresses this gap through the development of Maximum Entropy methods. The methods are exact, very efficient, and work on any finite discrete space with stipulated nonparametric marginal distributions. All code developed as part of the dissertation was written in MATLAB, and is publicly accessible through the Web site https://filebox.vt.edu/users/pasupath/pasupath.htm. / Ph. D.
693
Estimation problems connected with stochastic processes
Garratt, Alfred Edward, January 1957
A brief introduction to the concepts and terminology of spectral analysis and a review of the standard methods for cross-spectral estimation, based on discrete time history data, are incorporated in Chapter 1.
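As background to the estimators discussed below, the standard starting point is the raw cross-periodogram of two discrete time histories, whose real part gives the co-spectrum and whose imaginary part gives the quadrature spectrum. The sketch below is an illustration only, not the thesis's estimators; sign and normalization conventions vary across texts.

```python
import numpy as np

def cross_spectrum(x, y, fs=1.0):
    """Raw cross-periodogram of two series sampled at rate fs.

    Returns frequencies, the co-spectrum (real part), and the
    quadrature spectrum (here taken as minus the imaginary part).
    """
    n = len(x)
    X = np.fft.rfft(x - x.mean())
    Y = np.fft.rfft(y - y.mean())
    s = X * np.conj(Y) / (n * fs)        # one-sided cross-periodogram
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, s.real, -s.imag

# Two signals with a 90-degree phase lag at 5 Hz: all of the
# cross-spectral mass should land in the quadrature spectrum.
fs, n = 100.0, 4000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 5 * t)
y = np.cos(2 * np.pi * 5 * t)
f, co, quad = cross_spectrum(x, y, fs)
```

In practice the raw cross-periodogram is smoothed with a spectral window; the non-negativity of such windows is exactly the issue taken up in Chapter 2.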
Co-spectral and quadrature-spectral estimators which are characterized by non-negative spectral windows are developed in Chapter 2. While the spectral windows for the co-spectral estimators are non-negative for all relevant values of the assignable constants, certain restrictions on these constants are necessary to assure the non-negativity of the quadrature-spectral window. The properties of these estimators are considered in detail.
In Chapter 3, randomized co-spectral and quadrature-spectral estimators are presented. These estimators depend on the random selection of sets of time differences, as opposed to the systematic evaluation of all possible time differences for the standard estimators. By suitable choices of probability distributions for the time differences and of weight functions, the expectations of the randomized estimators can be made equivalent to the expectations of the standard estimators or the estimators of Chapter 2. Since the randomized estimator is much simpler to use than the standard estimator, these estimators are compared in terms of their variances, given that they have equal expectations. The choice of probability distributions to yield minimum variance, given that the expectation is specified, is considered.
Extremely simple co-spectral and quadrature-spectral estimators, for the case where the coefficients of the Fourier series expansions of realizations of the processes over a finite time interval can be obtained by means of suitable analog equipment, are developed in Chapter 4. The expectations, variances and covariances of these estimators are derived. / Ph. D.
694
A New Class of Stochastic Volatility Models for Pricing Options Based on Observables as Volatility Proxies
Zhou, Jie, 12 1900
One basic assumption of the celebrated Black-Scholes-Merton PDE model for pricing derivatives is that the volatility is constant. However, implied volatility plots based on real data are not constant but curved, exhibiting patterns of volatility skews or smiles. Since volatility is not observable, various stochastic volatility models have been proposed to overcome the problem of non-constant volatility. Although these methods are fairly successful in modeling volatilities, they still rely on the implied volatility approach for model implementation. To avoid such circular reasoning, we propose a new class of stochastic volatility models based on directly observable volatility proxies and derive the corresponding option pricing formulas. In addition, we propose a new GARCH(1,1) model and show that this discrete-time stochastic volatility process converges weakly to Heston's continuous-time stochastic volatility model. Monte Carlo simulations and real data analysis are also conducted to demonstrate the performance of our methods.
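For orientation, the textbook GARCH(1,1) recursion (not the dissertation's modified model) is easy to simulate: the conditional variance is updated from the last squared return and the last variance, and each return is drawn with the current conditional volatility. Parameter values below are illustrative.

```python
import numpy as np

def simulate_garch11(n, omega, alpha, beta, rng=None):
    """Simulate a standard GARCH(1,1) return series:
    r_t = sigma_t * z_t,  sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2.
    Requires alpha + beta < 1 for covariance stationarity."""
    rng = np.random.default_rng(rng)
    r = np.empty(n)
    sig2 = omega / (1.0 - alpha - beta)   # start at the stationary variance
    for t in range(n):
        z = rng.standard_normal()
        r[t] = np.sqrt(sig2) * z                      # return at time t
        sig2 = omega + alpha * r[t] ** 2 + beta * sig2  # variance update
    return r

returns = simulate_garch11(50_000, omega=1e-5, alpha=0.05, beta=0.9, rng=0)
```

Under an appropriate scaling of these parameters as the time step shrinks, such discrete-time recursions are known to converge weakly to continuous-time stochastic volatility diffusions, which is the kind of limit result the abstract refers to for Heston's model.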
695
Stochastic Modeling and Simulation of Reaction-Diffusion Biochemical Systems
Li, Fei, 10 March 2016
The Reaction-Diffusion Master Equation (RDME) framework, characterized by discretization of the spatial domain, is one of the most widely used methods for the stochastic simulation of reaction-diffusion systems. Discretization sizes for the RDME must be chosen appropriately, so that each discrete compartment is "well-stirred" and the computational cost does not become prohibitive.
An efficient discretization size based on the reaction-diffusion dynamics of each species is derived in this dissertation. Usually, a species with a larger diffusion rate admits a larger discretization size. By partitioning the domain with an efficient discretization size for each species, a multiple-grid discretization (MGD) method is proposed. MGD avoids unnecessary molecular jumping and achieves a great improvement in simulation efficiency.
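In the RDME, diffusion is represented as stochastic jumps between neighboring compartments, each molecule hopping with rate D/h² per available direction, where h is the compartment size. The sketch below simulates pure diffusion on a single 1-D grid with reflecting boundaries; it illustrates the jump process itself, not the dissertation's multiple-grid method, and the parameter values are arbitrary.

```python
import numpy as np

def rdme_diffusion_1d(counts, D, h, t_end, rng=None):
    """Stochastic simulation of pure diffusion on a 1-D compartment grid.

    counts : initial molecule count per compartment
    D, h   : diffusion coefficient and compartment size (jump rate D/h**2)
    """
    rng = np.random.default_rng(rng)
    x = np.array(counts, dtype=float)
    k = D / h**2                              # per-molecule jump rate per direction
    t = 0.0
    while True:
        a = k * x * 2.0                       # interior compartments: left + right
        a[0] = k * x[0]; a[-1] = k * x[-1]    # reflecting boundaries: one direction
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)        # time to the next jump event
        if t > t_end:
            break
        i = rng.choice(len(x), p=a / a0)      # which compartment fires
        if i == 0:
            j = 1
        elif i == len(x) - 1:
            j = i - 1
        else:
            j = i + rng.choice([-1, 1])       # interior: hop left or right equally
        x[i] -= 1.0
        x[j] += 1.0
    return x

# 100 molecules starting in the leftmost of five compartments.
final = rdme_diffusion_1d([100, 0, 0, 0, 0], D=1.0, h=0.1, t_end=2.0, rng=1)
```

Mass is conserved and, for long times, spreads toward a uniform profile across the grid; the discretization-size question is precisely how small h can be made before this compartment picture stops being physically meaningful.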
Moreover, reaction-diffusion systems whose reaction dynamics are modeled by highly nonlinear functions show large simulation error in the RDME when discretization sizes are too small. The switch-like Hill function reduces to a simple bimolecular mass-action reaction when the discretization size falls below a critical value in the RDME framework. A convergent Hill function dynamics for the RDME framework, which maintains the switching behavior of Hill functions under fine discretization, is proposed.
Furthermore, an application of stochastic modeling and simulation techniques to the spatiotemporal regulatory network in Caulobacter crescentus is included. A stochastic model based on a Turing pattern mechanism is exploited to demonstrate the bipolarization of a scaffold protein, PopZ, during the Caulobacter cell cycle. In addition, stochastic simulation of the spatiotemporal histidine kinase switch model captures the increased variability of cycle time in cells depleted of the divJ gene. / Ph. D.
696
Modeling and Analysis of Non-Linear Dependencies using Copulas, with Applications to Machine Learning
Karra, Kiran, 21 September 2018
Many machine learning (ML) techniques rely on probability, random variables, and stochastic modeling. Although statistics pervades this field, there is a large disconnect between the copula modeling and machine learning communities. Copulas are stochastic models that capture the full dependence structure between random variables and allow flexible modeling of multivariate joint distributions. Elidan was the first to recognize this disconnect and introduced copula-based models to the ML community that demonstrated orders-of-magnitude better performance than non-copula-based models [Elidan, 2013]. However, these models are limited to continuous random variables, whereas real-world data are often naturally modeled as jointly continuous and discrete. This report details our work in bridging this gap by modeling and analyzing data that are jointly continuous and discrete using copulas.
Our first research contribution details the modeling of jointly continuous and discrete random variables using the copula framework with Bayesian networks, termed Hybrid Copula Bayesian Networks (HCBN) [Karra and Mili, 2016], a continuation of Elidan's work on Copula Bayesian Networks [Elidan, 2010]. In this work, we extend the theorems proved by Nešlehová [2007] from bivariate to multivariate copulas with discrete and continuous marginal distributions. Using the multivariate copula with discrete and continuous marginal distributions as a theoretical basis, we construct an HCBN that can model all possible permutations of discrete and continuous random variables for parent and child nodes, unlike the popular conditional linear Gaussian network model. Finally, we demonstrate on numerous synthetic datasets and a real-life dataset that our HCBN compares favorably, from a modeling and flexibility viewpoint, to other hybrid models, including the conditional linear Gaussian and the mixture of truncated exponentials models.
Our second research contribution then deals with the analysis side and discusses how one may use copulas for exploratory data analysis. To this end, we introduce a nonparametric copula-based index for detecting the strength and monotonicity structure of linear and nonlinear statistical dependence between pairs of random variables or stochastic signals. Our index, termed the Copula Index for Detecting Dependence and Monotonicity (CIM), satisfies several desirable properties of measures of association, including Rényi's properties, the data processing inequality (DPI), and consequently self-equitability. Synthetic data simulations reveal that the statistical power of CIM compares favorably to other state-of-the-art measures of association that are proven to satisfy the DPI. Simulation results with real-world data reveal CIM's unique ability to detect the monotonicity structure among stochastic signals and to find interesting dependencies in large datasets. Additionally, simulations show that CIM compares favorably to estimators of mutual information when discovering Markov network structure.
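The common first step for any copula-based dependence index, including nonparametric ones like the CIM described above, is the rank transform that yields pseudo-observations of the empirical copula. The sketch below shows that step, and why it matters: on a nonlinear, non-monotone relationship, both Pearson correlation and a single monotone rank measure sit near zero even though the dependence is strong. This is background illustration, not the dissertation's CIM estimator.

```python
import numpy as np
from scipy import stats

def empirical_copula(x, y):
    """Pseudo-observations (normalized ranks) of the empirical copula."""
    n = len(x)
    u = stats.rankdata(x) / (n + 1)   # ranks mapped into (0, 1)
    v = stats.rankdata(y) / (n + 1)
    return u, v

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = x**2 + 0.1 * rng.standard_normal(2000)   # nonlinear, non-monotone dependence

u, v = empirical_copula(x, y)

# Pearson correlation on the raw data misses this dependence...
r = np.corrcoef(x, y)[0, 1]
# ...and so does a single monotone rank measure like Kendall's tau.
tau = stats.kendalltau(x, y)[0]
```

An index such as CIM works on (u, v) piecewise, over monotone regions of the copula, which is how it recovers the dependence that these two global measures miss.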
Our third research contribution deals with how to assess an estimator's performance in the scenario where multiple estimates of the strength of association between random variables need to be rank-ordered. More specifically, we introduce a new property of estimators of the strength of statistical association, which helps characterize how well an estimator will perform in scenarios where dependencies between continuous and discrete random variables need to be rank-ordered. The new property, termed the estimator response curve, is easily computable and provides a marginal-distribution-agnostic way to assess an estimator's performance. It overcomes notable drawbacks of current metrics of assessment, including statistical power, bias, and consistency. We utilize the estimator response curve to test various measures of the strength of association that satisfy the data processing inequality (DPI), and show that the CIM estimator's performance compares favorably to the kNN, vME, AP, and HMI estimators of mutual information. The estimators identified as suboptimal according to the estimator response curve perform worse than the others when tested with real-world data from four different areas of science, all with varying dimensionalities and sizes. / Ph. D. / Many machine learning (ML) techniques rely on probability, random variables, and stochastic modeling. Although statistics pervades this field, many traditional machine learning techniques rely on linear statistical techniques and models. For example, the correlation coefficient, a widely used construct in modern data analysis, is only a measure of linear dependence and cannot fully capture non-linear interactions. In this dissertation, we aim to address some of these gaps, and how they affect machine learning performance, using the mathematical construct of copulas.
Our first contribution deals with accurate probabilistic modeling of real-world data, where the underlying data are both continuous and discrete. We show that even though the copula construct has some limitations with respect to discrete data, it is still amenable to modeling large real-world datasets probabilistically. Our second contribution deals with the analysis of non-linear datasets. Here, we develop a new measure of statistical association that can handle discrete, continuous, or combinations of such random variables that are related by any general association pattern. We show that our new metric satisfies several desirable properties and compare its performance to other measures of statistical association. Our final contribution provides a framework for understanding how an estimator of statistical association affects end-to-end machine learning performance. Here, we develop a new way to characterize the performance of an estimator of statistical association, termed the estimator response curve. We then show that the estimator response curve can help predict how well an estimator performs in algorithms that require statistical associations to be rank-ordered.
697
A Gillespie-Type Algorithm for Particle Based Stochastic Model on Lattice
Liu, Weigang, January 2019
In this thesis, I propose a general stochastic simulation algorithm for particle-based lattice models using the concepts of Gillespie's stochastic simulation algorithm, which was originally designed for well-stirred systems. I describe the details of this method and analyze its complexity in comparison with the StochSim algorithm, another algorithm originally proposed to simulate stochastic lattice models. I compare the performance of both algorithms on two examples: the May-Leonard model and the Ziff-Gulari-Barshad model. Comparison of the simulation results from both algorithms validates our claim that the new algorithm is comparable to StochSim in simulation accuracy. I also compare the efficiency of both algorithms using the CPU cost of each code and conclude that the new algorithm is as efficient as StochSim in most test cases, while performing even better for certain specific cases. / Computer simulation has been developed for almost a century. A stochastic lattice model, which follows the physics concept of a lattice, is a system in which individual entities live on a grid and exhibit random behaviors according to specific rules. Such models are studied mainly through computer simulation. The most widely used simulation method for stochastic lattice systems is the StochSim algorithm, which randomly picks an entity and then determines its behavior based on a set of specific random rules. Our goal is to develop new simulation methods that make it more convenient to simulate and analyze stochastic lattice systems. In this thesis I propose another type of simulation method for stochastic lattice models using entirely different concepts and procedures. I developed a simulation package, applied it to two different examples using both methods, and conducted a series of numerical experiments to compare their performance. I conclude that the two methods are roughly equivalent and that our new method performs better than the old one in certain special cases.
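Gillespie's direct method for a well-stirred system, the starting point that this thesis extends to lattice models, can be sketched compactly: draw an exponential waiting time from the total propensity, then pick one reaction with probability proportional to its propensity. The birth-death example below is illustrative, not one of the thesis's test models.

```python
import numpy as np

def gillespie(x0, stoich, rates, t_end, rng=None):
    """Gillespie's direct method for a well-stirred reaction system.

    x0     : initial count of each species
    stoich : (n_reactions, n_species) array of state-change vectors
    rates  : function mapping the state to an array of propensities
    """
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    t = 0.0
    while True:
        a = rates(x)
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)    # exponential waiting time
        if t > t_end:
            break
        r = rng.choice(len(a), p=a / a0)  # pick a reaction proportionally
        x += stoich[r]
    return x

# Illustrative birth-death process: birth at rate 10, death at rate 0.1 per molecule.
stoich = np.array([[1], [-1]])
final = gillespie([0], stoich, lambda x: np.array([10.0, 0.1 * x[0]]), 200.0, rng=3)
```

On a lattice, the state vector and reaction list grow with the number of sites, which is where the complexity comparison with StochSim in the thesis becomes the central question.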
698
Bridging the Gap between Deterministic and Stochastic Modeling with Automatic Scaling and Conversion
Wang, Pengyuan, 17 June 2008
During the past decade, many successful deterministic models of macromolecular regulatory networks have been built. Deterministic simulations of these models can show only the average dynamics of the systems. However, stochastic simulations of macromolecular regulatory models can account for behaviors that are introduced by the noisy nature of the systems but not revealed by deterministic simulations. Thus, converting a valuable existing model from the most common deterministic formulation to one suitable for stochastic simulation enables further investigation of the regulatory network. Although many different stochastic models can be developed and evolved from deterministic models, a direct conversion is the first step in practice.
This conversion process is tedious and error-prone, especially for complex models. Thus, we seek to automate as much of the conversion process as possible. However, deterministic models often omit key information necessary for a stochastic formulation. Specifically, values in the model have to be scaled before a complete conversion, and the scaling factors are typically not given in the deterministic model. Several features that assist with model scaling and conversion are introduced and implemented in the JigCell modeling environment. Our tool makes it easier for the modeler to include complete details as well as to convert the model.
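The core of the scaling step is the textbook volume conversion from deterministic mass-action rate constants (in concentration units) to stochastic propensity constants (in molecule counts): a reaction of order m picks up a factor of (N_A V)^(1-m). The sketch below illustrates that conversion only; it is not JigCell's implementation, and homodimerization reactions carry an additional combinatorial factor not shown here.

```python
AVOGADRO = 6.02214076e23  # molecules per mole

def stochastic_rate(k_det, order, volume):
    """Scale a deterministic mass-action rate constant to a stochastic one.

    k_det  : deterministic constant in molar concentration units
    order  : reaction order (1 for A -> ..., 2 for A + B -> ..., etc.)
    volume : compartment volume in litres
    """
    omega = AVOGADRO * volume            # molecules per unit concentration
    return k_det * omega ** (1 - order)

# A second-order constant of 1e6 / M / s in a 1 fL compartment:
c = stochastic_rate(1e6, order=2, volume=1e-15)  # per molecule pair per second
```

First-order constants pass through unchanged, which is why a purely first-order model can be simulated stochastically with no scaling at all; the omitted volume information only becomes essential at second order and above.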
Stochastic simulations are known for being computationally intensive, and thus require high performance computing facilities to be practical. With parallel computation on Virginia Tech's System X supercomputer, we are able to obtain the first stochastic simulation results for realistic cell cycle models. Stochastic simulation results for several mutants, which are thought to be biologically significant, are presented. Successful deployment of the enhanced modeling environment demonstrates the power of our techniques. / Master of Science
699
The Stochastic Dynamics of an Array of Micron Scale Cantilevers in Viscous Fluid
Clark, Matthew Taylor, 26 September 2006
The stochastic dynamics of an array of closely spaced micron-scale cantilevers in a viscous fluid is considered. The stochastic cantilever dynamics are due to the constant buffeting by fluid particles undergoing Brownian motion, and the dynamics of adjacent cantilevers are correlated through long-range fluid-dynamic effects. The measurement sensitivity of an experimental setup is limited by the magnitude of this inherent stochastic motion. However, the magnitude of this noise can be decreased using correlated measurements, allowing for improved force resolution. A correlated scheme using two atomic force microscope cantilevers is proposed for analyzing the dynamics of single molecules in real time, a regime that is difficult to observe using current technologies.
Using a recently proposed thermodynamic approach, the hydrodynamic coupling of an array of cantilevers is quantified for precise experimental conditions through deterministic numerical simulations. Results are presented for an array of two readily available micron-scale cantilevers, yielding the possible force sensitivity and time resolution of correlated measurements. This measurement scheme can achieve a force resolution more than threefold more sensitive than that of a single cantilever when the two cantilevers are separated by 200 nm, with a time scale on the order of tens of microseconds. / Master of Science
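The principle behind a correlated measurement, that noise shared between two sensors cancels in a differential signal while a force acting on only one does not, can be shown with a toy model. The numbers below are illustrative and have nothing to do with the actual hydrodynamic correlation spectra computed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
common = rng.standard_normal(n)              # shared (correlated) fluctuation
x1 = common + 0.3 * rng.standard_normal(n)   # cantilever 1: shared + independent noise
x2 = common + 0.3 * rng.standard_normal(n)   # cantilever 2: shared + independent noise

diff = x1 - x2                               # differential (correlated) measurement

# The shared component cancels: Var(x1) ~ 1.09 but Var(diff) ~ 0.18,
# so the differential signal has far less background against which
# a force applied to only one cantilever must be resolved.
```

In the real system the correlated component comes from the long-range hydrodynamic coupling of the fluid, so the achievable noise rejection depends on the cantilever separation, which is why the 200 nm spacing quoted above matters.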
700
Robustness measures for stochastic resource constrained project scheduling
Selim, Basma R., 01 October 2002
No description available.