1

A pair of explicitly solvable impulse control problems

Al Azemi, Fares M. M. S. January 2010 (has links)
This thesis is concerned with the formulation and the explicit solution of two stochastic impulse control problems that are motivated by applications in the area of sequential investment decisions. Each of the two problems considers a stochastic system whose uncontrolled state dynamics are modelled by a general one-dimensional Itô diffusion. In the first of the two problems, the control that can be applied to the system takes the form of one-sided impulsive action, and the associated objective is to maximise a performance criterion that rewards high values of the utility derived from the system's controlled state and penalises the expenditure of any control effort. Potential applications of this model arise in the area of real options, where one has to balance the sunk costs incurred by investment against the uncertain cash flows that investment generates. The second model is concerned with the so-called buy-low and sell-high investment strategies. In this context, an investor aims at maximising the expected discounted cash flow that can be generated by sequentially buying and selling one share of a given asset at fixed transaction costs. Both of the control problems are solved in closed analytic form and the associated optimal control strategies are completely characterised. The main results are illustrated by means of special cases that arise when the uncontrolled system dynamics are a geometric Brownian motion or a mean-reverting square-root process such as the one in the Cox-Ingersoll-Ross interest rate model.
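The buy-low and sell-high problem lends itself to a quick numerical illustration. The sketch below is not the thesis's closed-form solution; it simply simulates one threshold strategy (buy when the price falls to a low level, sell when it rises to a high level, with a fixed cost per trade and discounting at rate r) on a simulated geometric Brownian motion path. All parameter values and threshold levels are assumptions chosen for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Geometric Brownian motion: dX = mu*X dt + sigma*X dW, sampled exactly
    # at the grid points via the log-space solution.
    mu, sigma, x0 = 0.02, 0.3, 1.0   # drift, volatility, initial price (assumed)
    r, c = 0.05, 0.01                # discount rate, fixed transaction cost (assumed)
    dt, n_steps = 1e-2, 20_000

    t = np.arange(1, n_steps + 1) * dt
    dW = rng.normal(0.0, np.sqrt(dt), n_steps)
    X = x0 * np.exp(np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * dW))

    def discounted_cashflow(X, t, buy_level, sell_level):
        # Discounted cash flow of one path under a threshold strategy: buy one
        # share when X <= buy_level, sell it when X >= sell_level, paying cost c
        # per trade and discounting at rate r.
        cash, holding = 0.0, False
        for xi, ti in zip(X, t):
            disc = np.exp(-r * ti)
            if not holding and xi <= buy_level:
                cash -= disc * (xi + c)
                holding = True
            elif holding and xi >= sell_level:
                cash += disc * (xi - c)
                holding = False
        return cash

    print(discounted_cashflow(X, t, buy_level=0.8, sell_level=1.3))

Averaging this payoff over many paths and maximising over the two levels recovers, in Monte Carlo form, the quantity the thesis characterises analytically.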
2

Uniform convergence and learnability

Anthony, Martin Henry George January 1991 (has links)
This thesis analyses some of the more mathematical aspects of the Probably Approximately Correct (PAC) model of computational learning theory. The main concern is with the sample size required for valid learning in the PAC model. A sufficient sample size involving the Vapnik-Chervonenkis (VC) dimension of the hypothesis space is derived; this improves the best previously known bound of this nature. Learnability results and sufficient sample sizes can in many cases be derived from results of Vapnik on the uniform convergence (in probability) of relative frequencies of events to their probabilities, when the collection of events has finite VC dimension. Simple new combinatorial proofs of two of Vapnik's results are given here, and the results are then applied to the theory of learning stochastic concepts, where again improved sample-size bounds are obtained. The PAC model of learning is a distribution-free model; the resulting sample sizes are not permitted to depend on the usually fixed but unknown probability distribution on the input space. Results of Ben-David, Benedek and Mansour are described, presenting a theory for distribution-dependent learnability. The conditions under which a feasible upper bound on sample size can be obtained are investigated, introducing the concept of polynomial X₀-finite dimension. The theory thus far is then applied to the learnability of formal concepts, as defined by Wille, and a learning algorithm is presented for this problem. Extending the theory of learnability to functions which take values in some arbitrary set, learnability results and sample-size bounds, depending on a generalization of the VC dimension, are obtained, and these results are applied to the theory of artificial neural networks. Specifically, a sufficient sample size for valid generalization in multiple-output feedforward linear threshold networks is found.
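For context, bounds of the kind the abstract refers to have the following shape: any consistent learning algorithm over a hypothesis space of VC dimension d is probably approximately correct, to accuracy ε with confidence 1 − δ, once the sample size m is large enough. One classical sufficient condition of this type, due to Blumer, Ehrenfeucht, Haussler and Warmuth (the thesis improves the constants in bounds of this nature, so this is a benchmark rather than the thesis's own result), is

    m \;\ge\; \max\!\left( \frac{4}{\epsilon}\,\log_2\frac{2}{\delta},\;\; \frac{8d}{\epsilon}\,\log_2\frac{13}{\epsilon} \right).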
3

Some problems on random walks

Doney, Ronald A. January 1964 (has links)
No description available.
4

The minimal entropy martingale measure and hedging in incomplete markets

Lee, Young January 2009 (has links)
The intent of these essays is to study the minimal entropy martingale measure, to examine some new martingale representation theorems and to discuss the related Kunita-Watanabe decompositions. Such problems arise in mathematical finance for an investor who is confronted with the issues of pricing and hedging in incomplete markets. We adopt the standpoint of a rational investor who principally endeavours to maximize her expected exponential utility. Resolving this issue within a semimartingale framework leads to a non-trivial martingale problem, equipped with an equation between random variables rather than processes. It is well known that utility maximization admits a dual formulation: maximizing expected utility is equivalent to minimizing some sort of distance to the physical probability measure. In our setting, this corresponds to finding the entropy-minimizing martingale measure, whose density process can be written in a particular form. This minimal entropy martingale model has an information-theoretic interpretation: if the physical probability measure encapsulates some information about how the market behaves, pricing financial instruments with respect to this entropy minimizer corresponds to selecting a martingale measure by adding the least amount of information to the physical model. We present a method of solving the non-trivial martingale problem within models which exhibit a stochastic compensator. Several martingale representation theorems are established to derive an apparent entropy equation. We then verify that the conjectured martingale measure is indeed the entropy minimizer.
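To fix notation (these are the standard definitions and the standard duality, not results specific to the thesis): the relative entropy of a measure Q with respect to the physical measure P, and the minimal entropy martingale measure Q^E over the set M_e of equivalent martingale measures, are

    H(Q \,|\, P) = \mathbb{E}_Q\!\left[\log\frac{dQ}{dP}\right],
    \qquad
    Q^{E} = \operatorname*{arg\,min}_{Q \in \mathcal{M}_e} H(Q \,|\, P),

and in its simplest form (exponential utility with risk aversion α, initial capital x, trading strategy ϑ, no contingent claim) the duality reads

    \sup_{\vartheta}\, \mathbb{E}\!\left[-\exp\!\Big(-\alpha\big(x + \textstyle\int_0^T \vartheta_t \, dS_t\big)\Big)\right]
    \;=\; -\,e^{-\alpha x}\, e^{-H(Q^{E}\,|\,P)}.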
5

Extreme value theory for group actions on homogeneous spaces

Kirsebom, Maxim Solund January 2014 (has links)
In this thesis we study extreme value theory for random walks as well as one-parameter actions on homogeneous spaces. In both cases we investigate the limiting distributions for the maximum of an observable evaluated along a trajectory of the system. In particular we consider asymptotic distributions for closest distance returns to a given point and for maximal excursions to the cusp. For closest returns on the torus we establish an exact extreme value distribution, while for other cases we obtain estimates on the extreme value distributions for sparse sequences. For random walks we also obtain logarithm laws for the maximum. Finally we look into the extreme value statistics of exceedances of high levels in these settings. For the closest returns we establish convergence to a Poisson process for the point process of exceedances. In other cases we obtain estimates on the limiting distribution of the kth largest maximum for sparse sequences.
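The classical i.i.d. benchmark behind these statements is worth recalling (standard extreme value theory, not a result of the thesis): if M_n is the maximum of n i.i.d. random variables with distribution function F, and the threshold sequence u_n satisfies n(1 − F(u_n)) → τ, then

    \mathbb{P}(M_n \le u_n) \;\longrightarrow\; e^{-\tau},

and the point process of exceedances of u_n converges to a Poisson process with intensity τ. The thesis establishes analogues of both statements, with the i.i.d. sequence replaced by an orbit of a group action or random walk on a homogeneous space.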
6

Stability issues in the numerical solution of stochastic differential equations

Bryden, Alan January 2004 (has links)
No description available.
7

Some estimators and properties of the three-parameter Weibull distribution

Cran, Gordon William January 1972 (has links)
No description available.
8

The cover time of random walks on graphs

Abdullah, Mohammed January 2012 (has links)
A simple random walk on a graph is a sequence of movements from one vertex to another in which, at each step, an edge is chosen uniformly at random from the set of edges incident on the current vertex and then traversed to the next vertex. Central to this thesis is the cover time of the walk, that is, the expectation of the number of steps required to visit every vertex, maximised over all starting vertices. In our first contribution, we establish a relation between the cover times of a pair of graphs and the cover time of their Cartesian product. This extends previous work on special cases of the Cartesian product, in particular the square of a graph. We show that when one of the factors is in some sense larger than the other, its cover time dominates, and can be within a logarithmic factor of the cover time of the product as a whole. Our main theorem effectively gives conditions for when this holds. The techniques and lemmas we introduce may be of independent interest. In our second contribution, we determine the precise asymptotic value of the cover time of a random graph with given degree sequence, that is, a graph picked uniformly at random from all simple graphs with that degree sequence. We also show that, with high probability, a structural property of the graph called conductance is bounded below by a constant; this is of independent interest. Finally, we explore random walks with weighted random edge choices. We present a weighting scheme that has a smaller worst-case cover time than a simple random walk. We give an upper bound for a random graph of given degree sequence weighted according to our scheme. We demonstrate that the speed-up (that is, the ratio of cover times) over a simple random walk can be unbounded.
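The definition of cover time admits a direct simulation. The sketch below is illustrative only (the thesis's results are analytic): it estimates the cover time empirically on a cycle, for which the expected cover time is known exactly to be n(n − 1)/2 from any start vertex.

    import random

    def cover_time(adj, start):
        # Number of steps a simple random walk on the graph with adjacency
        # lists `adj` takes to visit every vertex, starting from `start`.
        unvisited = set(adj) - {start}
        v, steps = start, 0
        while unvisited:
            v = random.choice(adj[v])   # edge chosen uniformly at random
            unvisited.discard(v)
            steps += 1
        return steps

    # Cycle on n vertices; by symmetry the starting vertex does not matter,
    # so the maximum over start vertices equals the value from vertex 0.
    n = 50
    cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    trials = 200
    est = sum(cover_time(cycle, 0) for _ in range(trials)) / trials
    print(f"empirical mean: {est:.0f}   exact n(n-1)/2 = {n * (n - 1) // 2}")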
9

Learning curves for Gaussian process regression on random graphs

Urry, Matthew January 2013 (has links)
Gaussian processes are a non-parametric method that can be used to learn both regression and classification rules from examples for arbitrary input spaces using the 'kernel trick'. They are well understood for inputs from Euclidean spaces; however, much less research has focused on other spaces. In this thesis I aim to at least partially resolve this. In particular I focus on the case where inputs are defined on the vertices of a graph and the task is to learn a function defined on the vertices from noisy examples, i.e. a regression problem. A challenging problem in the area of non-parametric learning is to predict the generalisation error as a function of the number of examples, or learning curve. I show that, unlike in the Euclidean case, where predictions are either quantitatively accurate for a few specific cases or only qualitatively accurate for a broader range of situations, I am able to derive accurate learning curves for Gaussian processes on graphs for a wide range of input spaces given by ensembles of random graphs. I focus on the random walk kernel, but my results generalise to any kernel that can be written as a truncated sum of powers of the normalised graph Laplacian. I begin with a discussion of the properties of the random walk kernel, which can be viewed as an approximation of the ubiquitous squared exponential kernel in continuous spaces. I show that, compared to the squared exponential kernel, the random walk kernel has some surprising properties, which include a non-trivial limiting form for some types of graphs. After investigating the limiting form of the kernel I then study its use as a prior; to address the issues that arise, I propose a local normalisation, in which the prior scale at each vertex is normalised as desired. To drive home the point about kernel normalisation I then examine the differences between the two kernels when they are used as a Gaussian process prior over functions defined on the vertices of a graph. I show using numerical simulations that the locally normalised kernel leads to a probabilistically more plausible Gaussian process prior. After investigating the properties of the random walk kernel I then discuss the learning curves of a Gaussian process with a random walk kernel, for both kernel normalisations, in a matched scenario (where student and teacher are both Gaussian processes with matching hyperparameters). I show that by using the cavity method I can derive accurate predictions along the whole length of the learning curve that dramatically improve upon previously derived approximations for continuous spaces, suitably extended to the discrete graph case. The derivation of the learning curve for the locally normalised kernel required an additional approximation in the resulting cavity equations. I therefore subsequently investigate this approximation in more detail using the replica method. I show that the locally normalised kernel leads to a highly non-trivial replica calculation, which eventually shows that the approximation used in the cavity analysis amounts to ignoring some consistency requirements between incoming cavity distributions. To finish this thesis I examine the learning curves for varying degrees of model mismatch, focusing in particular on a teacher distribution given by a Gaussian process with a random walk kernel but different hyperparameters. I show that in this case, by applying the cavity method, I am able once more to calculate accurate predictions of the learning curve. The resulting equations resemble the matched case over an inflated number of variables.
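The kernel construction and the local normalisation discussed above can be sketched in a few lines. The code below assumes the random walk kernel takes the form K ∝ (I − L/a)^p with a ≥ 2, where L is the normalised graph Laplacian (the convention of Smola and Kondor; the thesis's conventions may differ in details), and shows why local normalisation matters: without it the prior variance K_ii varies from vertex to vertex.

    import numpy as np
    import networkx as nx

    # A random 3-regular graph as the input space (assumed for illustration).
    g = nx.random_regular_graph(d=3, n=500, seed=0)
    L = nx.normalized_laplacian_matrix(g).toarray()
    n = L.shape[0]

    # Random walk kernel: a p-th power of (I - L/a); the eigenvalues of L lie
    # in [0, 2], so a >= 2 keeps the kernel positive semi-definite.
    a, p = 2.0, 10
    K = np.linalg.matrix_power(np.eye(n) - L / a, p)

    # Local normalisation: rescale so the prior variance is 1 at every vertex.
    s = 1.0 / np.sqrt(np.diag(K))
    K_local = K * np.outer(s, s)

    print(np.diag(K).min(), np.diag(K).max())              # varies across vertices
    print(np.diag(K_local).min(), np.diag(K_local).max())  # identically 1.0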
10

Theory and applications of multi-dimensional stationary stochastic processes

Schagen, Ian P. January 1981 (has links)
The theory of stationary stochastic processes in several dimensions has been investigated to provide a general model which may be applied to various problems involving unknown functions of several variables. In particular, when values of the function are known only at a finite set of points, treating the unknown function as a realisation of a stationary stochastic process leads to an interpolating function which reproduces the values exactly at the given points. With a suitable choice of auto-correlation for the model, the interpolating function may also be shown to be continuous in all its derivatives everywhere. Only a few parameters need to be found for the interpolator, and these may be estimated from the given data. One problem tackled using such an interpolator is that of automatic contouring of functions of two variables from arbitrarily scattered data points. A "two-stage" model was developed, which incorporates a long-range "trend" component as well as a shorter-range "residual" term. This leads to a contouring algorithm which gives good results with difficult data. The second area of application is that of optimisation, particularly of objective functions which are expensive to compute. Since the interpolator gives an estimate of the derivatives with little work, it is simple to optimise it using conventional techniques, and to re-evaluate the true function at the apparent optimum point. An iterative algorithm along these lines gives good results with test functions, especially with functions of more than two variables. A program has been developed which incorporates both the optimisation and contouring applications into a single package. Finally, the theory of excursions of a stationary process above a fixed level has been applied to the problem of modelling the occurrence of oilfields, with special reference to their spatial distribution and tendency to cluster. An intuitively reasonable model with few parameters has been developed and applied to North Sea data, with interesting results.
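The exact-interpolation property described above is easy to demonstrate. The sketch below is a minimal zero-mean simple-kriging version under assumed choices (a Gaussian auto-correlation with a fixed scale, synthetic data); the thesis's two-stage trend-plus-residual model is richer.

    import numpy as np

    rng = np.random.default_rng(1)

    # Scattered observations of an unknown function of two variables (synthetic).
    pts = rng.uniform(0.0, 1.0, size=(20, 2))
    vals = np.sin(3 * pts[:, 0]) * np.cos(3 * pts[:, 1])

    def gauss_cov(A, B, scale=0.3):
        # Gaussian (squared exponential) auto-correlation: a choice for which
        # the interpolator is continuous in all its derivatives.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * scale**2))

    # Interpolator f(x) = k(x, pts) @ C^{-1} @ vals reproduces the data exactly.
    C = gauss_cov(pts, pts) + 1e-10 * np.eye(len(pts))  # tiny jitter for stability
    w = np.linalg.solve(C, vals)

    def interpolate(x):
        return gauss_cov(np.atleast_2d(np.asarray(x, float)), pts) @ w

    print(np.max(np.abs(interpolate(pts) - vals)))  # ~ 0: exact at the data points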
