91

Comparative study of simulation algorithms in mapping spaces of uncertainty

Qureshi, Sumaira Ejaz. January 2002 (has links) (PDF)
Thesis (M. Phil.)--University of Queensland, 2002. / Includes bibliographical references.
92

An effective method of stochastic simulation of complex large-scale transport processes in naturally fractured reservoirs

Hu, Yujie. January 2002 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2002. / Vita. Includes bibliographical references. Available also from UMI Company.
93

Statistical Modelling and the Fokker-Planck Equation

Adesina, Owolabi Abiona January 2008 (has links)
A stochastic process, sometimes called a random process, is the counterpart in probability theory to a deterministic process. A stochastic process is a random field whose domain is a region of space; in other words, a random function whose arguments are drawn from a range of continuously changing values. Instead of dealing with only one possible 'reality' of how the process might evolve in time (as is the case, for example, for solutions of an ordinary differential equation), in a stochastic or random process there is some indeterminacy in its future evolution, described by probability distributions. This means that even if the initial condition (or starting point) is known, there are many paths the process might follow, some more probable than others. In discrete time, a stochastic process amounts to a sequence of random variables known as a time series. Over the past decades, the problems of synergetics have been concerned with the study of macroscopic quantitative changes of systems belonging to various disciplines, such as natural science, physical science and electrical engineering. When such transitions from one state to another take place, fluctuations (i.e. random processes) may play an important role. Fluctuations in this sense are very common in a large number of fields, and nearly every system is subjected to complicated external or internal influences that are often termed noise or fluctuations. The Fokker-Planck equation has turned out to provide a powerful tool with which the effects of fluctuations or noise close to transition points can be adequately treated. For this reason, this thesis carefully treats analytical and numerical methods of solving the Fokker-Planck equation, its derivation and some of its applications. Emphasis is placed on both the one-variable and the N-dimensional cases.
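For orientation, the one-variable Fokker-Planck equation discussed above can be written in a standard form as follows (the notation is the common D^(1)/D^(2) drift-diffusion convention and need not match the thesis's own symbols):

```latex
% One-variable Fokker-Planck equation for the probability density p(x,t),
% with drift coefficient D^{(1)}(x,t) and diffusion coefficient D^{(2)}(x,t).
\[
  \frac{\partial p(x,t)}{\partial t}
  = -\frac{\partial}{\partial x}\Bigl[ D^{(1)}(x,t)\,p(x,t) \Bigr]
  + \frac{\partial^{2}}{\partial x^{2}}\Bigl[ D^{(2)}(x,t)\,p(x,t) \Bigr].
\]
```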
94

Stochastic equilibria in a general class of incomplete Brownian market environments

Zhao, Yingwu 12 July 2012 (has links)
This dissertation is a contribution to the equilibrium theory in incomplete financial markets. It shows that, under appropriate conditions, an equilibrium exists and is unique in a general class of incomplete Brownian market environments either composed of exponential-utility-maximizing agents or populated by a class of convex-risk-measure-minimizing agents. We first use the Dynamic Programming Principle to deduce the Hamilton-Jacobi-Bellman (HJB) equation for each agent and solve the individual optimization problem to identify the optimal control. Using the optimal portfolio, we establish the equivalence between the existence of a stochastic equilibrium in an incomplete Brownian market and the solvability of a non-linearly coupled parabolic PDE system with a homogeneously-quadratic non-linear structure. To solve this PDE system, we work mainly in anisotropic Hölder spaces. There, we construct a proper class of Hölder subspaces where potential solutions to the equilibrium PDE system are expected to “live”. These turn out to be convex and compact under the uniform topology, thanks to an Arzela-Ascoli-type theorem for unbounded domains. We then define an appropriate functional on the subspace and show that, if we choose the parameters associated with the subspace carefully, this functional maps the subspace back to itself. After that, we apply Schauder’s fixed point theorem on a constructed subset of the subspace and establish the existence of solutions to the PDE system and therefore, equivalently, the existence of market equilibria in these general incomplete Brownian market environments. To prove the uniqueness of the solution to the parabolic PDE system, we utilize classical L2-type energy estimates and Gronwall’s inequality. In this way, we also establish the uniqueness of a market equilibrium within a class of smooth Markovian markets.
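For orientation, the HJB equation referred to above has the following shape in its generic one-dimensional form for utility from terminal wealth (the dissertation itself works with a coupled, multi-agent, multi-dimensional analogue, so this is only a schematic reminder):

```latex
% Generic one-dimensional HJB equation for a value function V(t,x) of a
% controlled diffusion with control pi, drift b, volatility sigma, and
% terminal condition V(T,x) = g(x).
\[
  \partial_t V(t,x)
  + \sup_{\pi}\Bigl\{ b(t,x,\pi)\,\partial_x V(t,x)
  + \tfrac{1}{2}\,\sigma^{2}(t,x,\pi)\,\partial_{xx} V(t,x) \Bigr\} = 0,
  \qquad V(T,x) = g(x).
\]
```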
95

Modeling stochastic dependencies and their impact on optimal decision making

Galenko, Alexander Yurievich, 1982- 28 September 2012 (has links)
This research addresses three important questions for solving a general stochastic optimization problem: proper modeling of the uncertainties and their interactions, use of decomposition techniques to solve the resulting optimization problems, and the impact of stochastic dependencies on the optimal solution. In particular, we develop sampling methodologies for scenario generation that preserve the cointegration properties of financial time series, create a new conditional decision-dependent probability model for the lifetime of components in nuclear power plants, define the corresponding stochastic optimization problems, and construct decomposition algorithms to solve them. We investigate the impact of the input (in terms of different stochastic dependencies) on the solution of the corresponding optimization problem. For the last issue we concentrate on the general financial asset allocation problem.
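As a rough illustration of what cointegration-preserving scenario generation can look like, the sketch below samples price paths from a simple bivariate error-correction model so that every generated scenario keeps the spread mean-reverting. All parameters are assumed, illustrative values; this is not the sampling methodology developed in the dissertation.

```python
"""Hypothetical sketch: scenario generation from a bivariate error-correction
model, so that sampled paths preserve the cointegrating relation p1 - beta*p2."""
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative parameters (not estimated from real data).
alpha = np.array([-0.10, 0.05])      # speeds of adjustment of the two (log-)prices
beta = 1.2                           # cointegrating coefficient: spread = p1 - beta*p2
cov = np.array([[0.0004, 0.0001],
                [0.0001, 0.0009]])   # covariance of the joint innovations

def generate_scenarios(p0, n_steps=250, n_scen=1000):
    """Sample scenario paths; the lagged spread feeds back into both returns,
    so the spread stays mean-reverting in every scenario."""
    paths = np.empty((n_scen, n_steps + 1, 2))
    paths[:, 0, :] = p0
    chol = np.linalg.cholesky(cov)
    for t in range(n_steps):
        spread = paths[:, t, 0] - beta * paths[:, t, 1]        # z_t
        shocks = rng.standard_normal((n_scen, 2)) @ chol.T     # correlated innovations
        paths[:, t + 1, :] = paths[:, t, :] + alpha * spread[:, None] + shocks
    return paths

scenarios = generate_scenarios(p0=np.array([4.0, 3.3]))
final_spread = scenarios[:, -1, 0] - beta * scenarios[:, -1, 1]
print("mean terminal spread:", final_spread.mean())  # stays near zero
```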
96

Aggregation, dissemination and filtering: controlling complex information flows in networks

Banerjee, Siddhartha 25 October 2013 (has links)
Modern day networks, both physical and virtual, are designed to support increasingly sophisticated applications based on complex manipulation of information flows. On the flip side, the ever-growing scale of the underlying networks necessitates the use of low-complexity algorithms. Exploring this tension requires an understanding of the relation between these flows and the network structure. In this thesis, we undertake a study of three such processes: aggregation, dissemination and filtering. In each case, we characterize how the network topology imposes limits on these processes, and how one can use knowledge of the topology to design simple yet efficient control algorithms. Aggregation: We study data-aggregation in sensor networks via in-network computation, i.e., via combining packets at intermediate nodes. In particular, we are interested in maximizing the refresh-rate of repeated/streaming aggregation. For a particular class of functions, we characterize the maximum achievable refresh-rate in terms of the underlying graph structure; furthermore, we develop optimal algorithms for general networks, and also a simple distributed algorithm for acyclic wired networks. Dissemination: We consider dissemination processes on networks via intrinsic peer-to-peer transmissions aided by external agents: sources with bounded spreading power, but unconstrained by the network. Such a model captures many static (e.g. long-range links) and dynamic/controlled (e.g. mobile nodes, broadcasting) models for long-range dissemination. We explore the effect of external sources for two dissemination models: spreading processes, wherein nodes once infected remain so forever, and epidemic processes, in which nodes can recover from the infection. The main takeaways from our results are (i) the role of graph structure, and (ii) the power of random strategies. In spreading processes, we show that external agents dramatically reduce the spreading time in networks that are spatially constrained; furthermore, random policies are order-wise optimal. In epidemic processes, we show that for causing long-lasting epidemics, external sources must scale with the number of nodes -- however, the strategies can be random. Filtering: A common phenomenon in modern recommendation systems is the use of user-feedback to infer the 'value' of an item to other users, resulting in an exploration vs. exploitation trade-off. We study this in a simple natural model, where an 'access-graph' constrains which user is allowed to see which item, and the number of items and the number of item-views are of the same order. We want algorithms that recommend relevant content in an online manner (i.e., instantaneously on user arrival). To this end, we consider both finite-population (i.e., with a fixed set of users and items) and infinite-horizon settings (i.e., with user/item arrivals and departures) -- in each case, we design algorithms with guarantees on the competitive ratio for any arbitrary user. Conversely, we also present upper bounds on the competitive ratio, which show that in many settings our algorithms are order-wise optimal.
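A toy illustration of the dissemination setting described above, i.e. intrinsic peer-to-peer spreading on a spatially constrained graph aided by a random external agent (a hypothetical sketch, not the thesis's model or analysis):

```python
"""Illustrative sketch: SI spreading on a ring graph, with and without an
external agent that infects one extra uniformly random node per step, to show
how external infections shorten the spreading time on a spatially constrained
topology."""
import numpy as np

rng = np.random.default_rng(1)

def spreading_time(n=2000, external_rate=0):
    infected = np.zeros(n, dtype=bool)
    infected[0] = True
    t = 0
    while not infected.all():
        # Intrinsic step: each infected node infects its two ring neighbours.
        idx = np.flatnonzero(infected)
        infected[(idx - 1) % n] = True
        infected[(idx + 1) % n] = True
        # External agent: infect `external_rate` uniformly random nodes.
        if external_rate:
            infected[rng.integers(0, n, size=external_rate)] = True
        t += 1
    return t

print("no external agent :", spreading_time())                 # roughly n/2 steps
print("random external   :", spreading_time(external_rate=1))  # much faster
```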
97

Computational methods for stochastic control problems with applications in finance

Mitchell, Daniel Allen 01 July 2014 (has links)
Stochastic control is a broad tool with applications in several areas of academic interest. The financial literature is full of examples of decisions made under uncertainty, and stochastic control is a natural framework for dealing with these problems. Problems such as optimal trading, option pricing and economic policy all fall under the purview of stochastic control. These problems often involve nonlinearities that make analytical solutions infeasible, and thus numerical methods must be employed to find approximate solutions. In this dissertation three types of stochastic control formulations are used to model applications in finance, and numerical methods are developed to solve the resulting nonlinear problems. To begin with, optimal stopping is applied to option pricing. Next, impulse control is used to study the problem of interest rate control faced by a nation's central bank, and finally a new type of hybrid control is developed and applied to an investment decision faced by money managers.
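As a concrete instance of the first formulation mentioned above, optimal stopping applied to option pricing, the sketch below prices an American put by backward induction on a CRR binomial lattice. It is a textbook construction with assumed parameters, not the dissertation's own numerical method.

```python
"""Sketch: American put priced as an optimal-stopping problem on a
Cox-Ross-Rubinstein binomial lattice (illustrative parameters)."""
import numpy as np

def american_put_crr(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, n=500):
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))     # up factor
    d = 1.0 / u                         # down factor
    q = (np.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = np.exp(-r * dt)

    # Terminal stock prices and payoffs.
    S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    V = np.maximum(K - S, 0.0)

    # Backward induction: value = max(exercise now, discounted continuation).
    for step in range(n, 0, -1):
        S = S[:step] * d                # stock prices one time step earlier
        continuation = disc * (q * V[:step] + (1 - q) * V[1:step + 1])
        V = np.maximum(K - S, continuation)
    return V[0]

print("American put value:", round(american_put_crr(), 4))
```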
98

Understanding approximate Bayesian computation (ABC)

Lim, Boram 16 March 2015 (has links)
The Bayesian approach has been developed in various areas and has come to be part of mainstream statistical research. Markov Chain Monte Carlo (MCMC) methods have freed us from computational constraints for a wide class of models, and several MCMC methods are now available for sampling from posterior distributions. However, when data are large, models are complex and the likelihood function is intractable, the use of MCMC is limited, especially in evaluating the likelihood function. As a solution to this problem, researchers have put forward approximate Bayesian computation (ABC), also known as a likelihood-free method. In this report I introduce the ABC algorithm and show an implementation for a stochastic volatility (SV) model. Even though there are alternative methods for analyzing SV models, such as particle filters and other MCMC methods, I apply the ABC method to an SV model and compare it, on the same data and the same SV model, to an approach based on a mixture of normals and MCMC.
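A minimal sketch of ABC rejection sampling for a basic SV model is shown below; the priors, summary statistics and tolerance are illustrative assumptions, not those used in the report.

```python
"""Sketch: ABC rejection sampling for the SV model
h_t = mu + phi*(h_{t-1} - mu) + s_eta*eta_t,   y_t = exp(h_t/2)*eps_t."""
import numpy as np

rng = np.random.default_rng(42)

def simulate_sv(mu, phi, s_eta, n=300):
    """Simulate a return series from the basic SV model."""
    h = np.empty(n)
    h[0] = mu
    for t in range(1, n):
        h[t] = mu + phi * (h[t - 1] - mu) + s_eta * rng.standard_normal()
    return np.exp(h / 2) * rng.standard_normal(n)

def summaries(y):
    """Crude summary statistics: return scale and volatility clustering."""
    y2 = y ** 2
    return np.array([np.std(y), np.corrcoef(y2[:-1], y2[1:])[0, 1]])

# "Observed" data from known parameters, so the output can be sanity-checked.
y_obs = simulate_sv(mu=-1.0, phi=0.95, s_eta=0.3)
s_obs = summaries(y_obs)

accepted = []
for _ in range(5000):
    # Draw from (assumed) priors, simulate a data set, keep the draw if its
    # summaries land within the tolerance of the observed summaries.
    theta = (rng.uniform(-3, 1), rng.uniform(0.5, 0.999), rng.uniform(0.05, 1.0))
    if np.linalg.norm(summaries(simulate_sv(*theta)) - s_obs) < 0.1:
        accepted.append(theta)

post = np.array(accepted)
if len(post):
    print("accepted draws:", len(post))
    print("ABC posterior means (mu, phi, s_eta):", post.mean(axis=0).round(3))
```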
99

A novel subspace identification algorithm and its application in stochastic fault detection

Wang, Jin 28 August 2008 (has links)
Not available
100

Adaptive jackknife estimators for stochastic programming

Partani, Amit, 1978- 29 August 2008 (has links)
Stochastic programming facilitates decision making under uncertainty. It is usually impractical or impossible to find the optimal solution to a stochastic program exactly, and approximations are required. Sampling-based approximations are simple and attractive, but the standard sample-average point estimate of the optimal value is biased as a result of the interplay between optimization and the Monte Carlo approximation. We provide a method to reduce this bias, and hence to provide a better, i.e., tighter, confidence interval on the optimal value and on a candidate solution's optimality gap. Our method requires less restrictive assumptions on the structure of the bias than previously available estimators. Our estimators adapt to problem-specific properties, and we provide a family of estimators, which allows flexibility in choosing the level of aggressiveness for bias reduction. We establish desirable statistical properties of our estimators and empirically compare them with known techniques on test problems from the literature.
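The sketch below illustrates, on a toy newsvendor problem, the optimistic bias of the sample-average (SAA) point estimate of the optimal value and a classical delete-one jackknife correction of it. This is a generic construction with assumed parameters, not the adaptive estimators developed in the thesis.

```python
"""Sketch: bias of the SAA estimate of a newsvendor optimal value, and a
classical delete-one jackknife bias correction (illustrative only)."""
import numpy as np

rng = np.random.default_rng(7)
c, p = 1.0, 2.0                      # unit cost and selling price
x_grid = np.arange(0, 201)           # candidate order quantities

def saa_optimal_value(demand):
    """min over x of the average cost c*x - p*min(x, demand)."""
    cost = c * x_grid[:, None] - p * np.minimum(x_grid[:, None], demand[None, :])
    return cost.mean(axis=1).min()

def jackknife_value(demand):
    """Delete-one jackknife bias correction of the SAA optimal value."""
    n = len(demand)
    v_full = saa_optimal_value(demand)
    v_loo = np.array([saa_optimal_value(np.delete(demand, i)) for i in range(n)])
    return n * v_full - (n - 1) * v_loo.mean()

# True optimal value, approximated with a very large sample.
true_value = saa_optimal_value(rng.exponential(scale=100.0, size=20_000))

n = 30
saa_vals, jack_vals = [], []
for _ in range(200):
    d = rng.exponential(scale=100.0, size=n)
    saa_vals.append(saa_optimal_value(d))
    jack_vals.append(jackknife_value(d))

print("true optimal value      :", round(true_value, 2))
print("mean SAA estimate       :", round(np.mean(saa_vals), 2))   # biased low
print("mean jackknife estimate :", round(np.mean(jack_vals), 2))  # bias reduced
```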
