  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Use of extreme value theory for making statistical inference about endpoints of distributions, with applications in global optimization and meteorology

Hamilton, Emily January 2008 (has links)
We use extreme value theory to make statistical inference about the endpoints of distributions. First we compare estimators of the endpoint of several distributions, including a distribution that appears in problems of global optimization. These estimators use a fixed number of order statistics (k) from a sample of fixed size (n). Two of the estimators investigated are the optimal linear estimator and the maximum likelihood estimator. We find that the optimal linear estimator often outperforms the maximum likelihood estimator. We next investigate how the order statistics change as the sample size increases. To do this, we define record times: the sample sizes at which the set of k smallest order statistics changes. We give the distributions of several statistics related to order statistics and record times; in particular, we show that records occur according to a nonhomogeneous Poisson process. We show that order statistics can be modeled using a Markov chain, and use this Markov chain to investigate estimators of the endpoint of a distribution. Two additional estimators are derived and investigated using the Markov chain model. Finally, we consider a meteorological application of extreme value theory. In particular, we estimate the maximum and minimum sea levels at several ports in the Netherlands, using a combination of record theory, singular spectrum decomposition and known estimators of the endpoint of a distribution.
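The record-time mechanism described in the abstract is easy to simulate. The sketch below is an illustration only (not the thesis's estimators): it tracks the sample sizes at which the set of the k smallest order statistics changes as observations arrive one by one, and the visible thinning of records as n grows is consistent with the nonhomogeneous Poisson behaviour mentioned above.

```python
import numpy as np

def record_times(sample, k):
    """Return the sample sizes n at which the set of the k smallest
    order statistics changes as observations arrive one by one."""
    times = []
    smallest = []  # current k smallest values, kept sorted
    for n, x in enumerate(sample, start=1):
        if len(smallest) < k:
            smallest.append(x)
            smallest.sort()
            times.append(n)
        elif x < smallest[-1]:
            smallest[-1] = x
            smallest.sort()
            times.append(n)
    return times

rng = np.random.default_rng(0)
sample = rng.uniform(size=10_000)
times = record_times(sample, k=3)
# Records become rarer as n grows, consistent with a
# nonhomogeneous Poisson process of decreasing intensity.
print(len(times), times[:5])
```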

Statistical inference for negative binomial processes with applications to market research

Savani, Vippal January 2006 (has links)
The negative binomial distribution (NBD) and negative binomial processes have been used as natural models for events occurring in fields such as accident proneness, accidents and sickness, market research, and insurance and risk theory. The fitting of negative binomial processes in practice has mainly focussed on fitting the one-dimensional distribution, namely the NBD, to data. In practice, the parameters of the NBD are usually estimated by inefficient moment based methods, because moment based estimators are much easier to compute than maximum likelihood estimators. This thesis develops efficient moment based estimation methods for the NBD that can be easily implemented in practice. These estimators, called power method estimators, are almost as efficient as maximum likelihood estimators when the sample is independent and identically distributed. For dependent NBD samples, the power method estimators are more efficient than the commonly used method of moments and zero term method estimators. Fitting the one-dimensional marginal distribution of a negative binomial process to data gives only partial information about the adequacy of the process being fitted. This thesis therefore further develops methods of statistical inference for data generated by negative binomial processes by comparing the dynamical properties of the process to those of the data. For negative binomial autoregressive processes, the dynamical properties may be checked using the autocorrelation function. The dynamical properties of the gamma Poisson process are considered by deriving the asymptotic covariance and correlation structures of estimators and functionals of the gamma Poisson process and verifying these structures against data. The adequacy of two negative binomial processes, namely the gamma Poisson process and the negative binomial first-order autoregressive process, as models for consumer buying behavior is considered.
The models are fitted to market research data kindly provided by ACNielsen BASES.
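For context, here is a minimal sketch of moment based NBD fitting — the classical method of moments that the thesis improves upon, not the power method itself — using the common parameterization with mean m and shape k, where Var = m + m^2/k. The simulated gamma-Poisson sample below is our own toy data.

```python
import numpy as np

def nbd_method_of_moments(x):
    """Method-of-moments estimates (m, k) for the NBD with
    mean m and variance m + m^2/k (requires overdispersion)."""
    m = x.mean()
    v = x.var(ddof=1)
    if v <= m:
        raise ValueError("sample not overdispersed; NBD fit fails")
    k = m * m / (v - m)
    return m, k

rng = np.random.default_rng(1)
# Simulate a gamma-Poisson (equivalently NBD) sample with m=2, k=0.5
k_true, m_true = 0.5, 2.0
lam = rng.gamma(shape=k_true, scale=m_true / k_true, size=200_000)
x = rng.poisson(lam)
m_hat, k_hat = nbd_method_of_moments(x)
print(round(m_hat, 2), round(k_hat, 2))
```

The zero term method replaces the sample variance with the observed proportion of zeros; the thesis's power method estimators generalize this idea while retaining near maximum-likelihood efficiency.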

Stochastic control under partial information

Kollias-Liapis, Spyridon January 2016 (has links)
In this thesis, we consider the problem of continuous-time stochastic control with full and partial information and quadratic costs. Under some assumptions we reduce the problem of controlling a general diffusion process to controlling a piecewise linear system, called the Linearized system. The Linearized system is defined with respect to a time-partition of the fixed horizon [0,T]. We first prove that the cost functional associated with the Linearized system converges to the cost functional of the original system as the mesh of the partition goes to 0. This in turn implies that an optimal control for the approximating system is also ε-optimal for the original system. Hence we centre our analysis on obtaining the optimal control for the Linearized system. To this end, we present two methodologies: the Perturbation method and the Policy Improvement method. In the first method, by imposing boundedness assumptions on the coefficients of the controlled diffusion, we construct the optimal control in each subinterval of the partition based on the framework of the so-called Linear Quadratic Regulator problem. In the second method we construct the optimal control in each subinterval of the partition by using a criterion under which, starting from an arbitrary control and its associated cost, we eventually obtain, after consecutive steps, the control which minimises the cost functional of the Linearized system.
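The Linear Quadratic Regulator framework mentioned above admits a compact sketch. The following illustrative solver handles the textbook discrete-time, finite-horizon LQR problem via the backward Riccati recursion; it is only the classical building block, not the thesis's continuous-time, partial-information construction.

```python
import numpy as np

def lqr_finite_horizon(A, B, Q, R, Qf, T):
    """Backward Riccati recursion for the discrete-time LQR problem
    x_{t+1} = A x_t + B u_t with cost sum(x'Qx + u'Ru) + x_T' Qf x_T.
    Returns the time-varying feedback gains K_t with u_t = -K_t x_t."""
    P = Qf
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]  # gains[t] is the gain for step t

# Toy double-integrator example (our own illustrative numbers)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2); R = np.array([[1.0]]); Qf = 10 * np.eye(2)
gains = lqr_finite_horizon(A, B, Q, R, Qf, T=50)
print(len(gains), gains[0].shape)
```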

Some infinite dimensional topics in probability and statistics

Blacque-Florentin, Pierre January 2016 (has links)
This thesis comprises two independent parts. In the first part, we develop a pathwise calculus for functionals of integer-valued measures and extend the framework of Functional Itô Calculus to functionals of integer-valued random measures by constructing a 'stochastic derivative' operator with respect to such integer-valued random measures. This allows us to obtain weak martingale representation formulae holding beyond the class of Poisson random measures, and allowing for random and time-dependent compensators. We study the behaviour of this operator and compare it with previous approaches in the literature, providing in passing a review of the various Malliavin approaches for jump processes. Finally, some examples of computations are provided. The second part is oriented towards nonparametric statistics, with a financial application as our main goal: we aim to recover a surface of FX call options on a pegged currency such as the Hong Kong dollar against the U.S. dollar, based on a small number of noisy measurements (the market bid-ask quotes). Drawing inspiration from the Compressed Sensing literature, we develop a methodology for recovering an arbitrage-free call surface. We first apply this methodology, based on a tensor polynomial decomposition of the surface, to a sparse set of call-option prices on the S&P500, recovering the call option prices within the desired tolerance, as well as a smooth local-volatility surface. On a pegged currency such as the HKD/USD, it appears that tensor polynomials may not be an adequate way to model the smiles across maturities. Modifying the methodology in favour of structure-preserving functions, we apply the new methodology to our HKD/USD dataset, recovering the smiles and the corresponding state-price density.
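The no-arbitrage constraint on a call surface has a simple finite-difference form in the strike direction: for a fixed maturity, call prices must be non-increasing and convex in strike, otherwise a butterfly spread yields riskless profit. A toy checker for one maturity slice (an illustration of the constraint, not the thesis's tensor or structure-preserving recovery method):

```python
import numpy as np

def butterfly_arbitrage_free(strikes, calls, tol=1e-12):
    """Check static no-arbitrage in strike for one maturity:
    call prices must be non-increasing and convex in strike."""
    k = np.asarray(strikes, float)
    c = np.asarray(calls, float)
    slopes = np.diff(c) / np.diff(k)
    non_increasing = np.all(slopes <= tol)
    convex = np.all(np.diff(slopes) >= -tol)
    return bool(non_increasing and convex)

# A decreasing, convex toy slice passes ...
print(butterfly_arbitrage_free([90, 100, 110], [12.0, 5.0, 1.5]))  # True
# ... while a concave kink signals butterfly arbitrage.
print(butterfly_arbitrage_free([90, 100, 110], [12.0, 10.0, 5.0]))  # False
```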

Long time behaviour of infinite dimensional stochastic processes

Zak, Frantisek January 2016 (has links)
We study two examples of infinite dimensional stochastic processes. The situations and techniques involved are quite varied, but in both cases we make progress in describing their long time behaviour. The first example concerns an interacting particle system of diffusions. We construct the process rigorously using finite dimensional approximations and the notion of a martingale solution, and prove the existence of an invariant measure for the process. The novelty of the results lies in the fact that our methods allow us to treat examples in which the generator of the diffusion is subelliptic. The second project concerns stochastic partial differential equations and their stability properties. In particular, it is shown that the Robbins-Monro procedure can be extended to the infinite dimensional setting, yielding results on pathwise convergence of solutions. To define the corresponding solutions, we rely on the so-called variational approach to stochastic partial differential equations, as pioneered by E. Pardoux, N. Krylov and B. Rozovskii. Our examples cover situations such as the p-Laplace operator and the porous medium operator.
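In finite dimensions, the Robbins-Monro procedure that the abstract extends can be stated in a few lines: follow noisy evaluations of a function towards its root with stepsizes a_n satisfying sum(a_n) = infinity and sum(a_n^2) < infinity. A minimal scalar sketch (illustrative only; the thesis works in the SPDE setting):

```python
import numpy as np

def robbins_monro(noisy_f, x0, n_steps, rng):
    """Robbins-Monro iteration x_{n+1} = x_n - a_n * Y_n, where Y_n is
    a noisy observation of f(x_n) and a_n = 1/n satisfies the classical
    conditions sum(a_n) = inf, sum(a_n^2) < inf; converges a.s. to
    the root of f under standard assumptions."""
    x = x0
    for n in range(1, n_steps + 1):
        x -= (1.0 / n) * noisy_f(x, rng)
    return x

rng = np.random.default_rng(2)
# Target: the root x=3 of f(x) = 2(x - 3), observed with N(0,1) noise.
root = robbins_monro(lambda x, r: 2 * (x - 3) + r.normal(), 0.0, 50_000, rng)
print(round(root, 2))  # close to 3.0
```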

Multilevel algorithms for the optimization of structured problems

Ho, Chinpang January 2016 (has links)
Although large scale optimization problems are very difficult to solve in general, problems that arise from practical applications often exhibit particular structure. In this thesis we study and improve algorithms that can efficiently solve such structured problems. Three separate settings are considered. The first part concerns singularly perturbed Markov decision processes (MDPs). When an MDP is singularly perturbed, one can construct an aggregate model whose solution is asymptotically optimal. We develop an algorithm that takes advantage of existing results to compute the solution of the original model, achieving a reduction in complexity without any penalty in accuracy. In the second part, the class of empirical risk minimization (ERM) problems is studied. When using a first order method, the Lipschitz constant of the empirical risk plays a crucial role in the convergence analysis and stepsize strategy for these problems. We derive probabilistic bounds for such Lipschitz constants using random matrix theory, and use them to derive the probabilistic complexity and develop a new stepsize strategy for first order methods. The proposed stepsize strategy, the Probabilistic Upper-bound Guided stepsize strategy (PUG), has a strong theoretical guarantee on its performance compared to the standard stepsize strategy. In the third part, we extend existing results on multilevel methods for unconstrained convex optimization. We study a special case where the hierarchy of models is created by approximating the first and second order information of the exact model. This is known as a Galerkin approximation, and we name the corresponding algorithm the Galerkin-based Algebraic Multilevel Algorithm (GAMA). Three case studies show how the structure of a problem affects the convergence of GAMA.
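To illustrate the role the Lipschitz constant L plays in first order methods, here is the standard deterministic 1/L stepsize on a least-squares ERM objective — the baseline strategy, not the PUG strategy proposed in the thesis:

```python
import numpy as np

def gd_least_squares(A, b, n_iter):
    """Gradient descent on f(x) = 0.5*||Ax - b||^2 with the classical
    stepsize 1/L, where L = lambda_max(A^T A) is the Lipschitz
    constant of the gradient."""
    L = np.linalg.eigvalsh(A.T @ A).max()
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x -= (1.0 / L) * (A.T @ (A @ x - b))
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(100, 5))
x_true = np.arange(1.0, 6.0)
b = A @ x_true                       # noiseless, so the minimizer is x_true
x_hat = gd_least_squares(A, b, n_iter=2000)
print(np.allclose(x_hat, x_true, atol=1e-6))  # True
```

Computing L exactly costs an eigendecomposition; the thesis's random matrix bounds replace this with cheap probabilistic upper bounds.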

Gaussian process and functional data methods for mortality modelling

Wu, Ruhao January 2017 (has links)
Modelling demographic mortality trends is of great importance due to their considerable impact on welfare policy, resource allocation and government planning. In this thesis, we propose various statistical methods, including Gaussian processes (GP), principal curves and multilevel functional principal component analysis (MFPCA), for forecasting and clustering human mortality data. The thesis comprises three main topics in mortality modelling. In the first topic, we propose a new Gaussian process regression method and apply it to the modelling and forecasting of age-specific human mortality rates for a single population. The proposed method incorporates a weighted mean function and the spectral mixture covariance function, and hence provides better performance in forecasting long term mortality rates than conventional GPR methods. Its performance is also compared with the Lee-Miller model and the functional data model of Hyndman and Ullah (2007) in the context of forecasting French total mortality rates. In the second topic, we extend mortality modelling from a single population to multiple populations simultaneously, by developing a new framework for coherent modelling and forecasting of mortality rates for multiple subpopulations within one large population. We treat the mortality of subpopulations as multilevel functional data, and propose a weighted multilevel functional principal component approach for modelling and forecasting the mortality rates. The proposed model is applied to sex-specific data for nine developed countries, and the forecasting results suggest that, in terms of overall accuracy, the model outperforms the independent model (Hyndman and Ullah 2007) and is comparable to the Product-Ratio model (Hyndman et al. 2013), with several additional advantages.
Finally, in the third topic, we introduce a clustering method based on principal curves for clustering human mortality data as functional data, and apply it to French total mortality data to explore its potential features.
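A bare-bones sketch of the kind of GP regression underlying the first topic: zero mean, squared-exponential kernel, and synthetic "log mortality" data of our own invention. The thesis's weighted mean function and spectral mixture kernel are not reproduced here.

```python
import numpy as np

def gp_predict(x_train, y_train, x_test, length_scale=1.0, noise=1e-4):
    """Zero-mean GP regression with a squared-exponential kernel:
    posterior mean at x_test given noisy observations y_train."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale**2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    return k(x_test, x_train) @ np.linalg.solve(K, y_train)

# Toy log-mortality curve over age, roughly linear with small noise
ages = np.linspace(0, 100, 51)
rng = np.random.default_rng(4)
log_m = -6 + 0.07 * ages + 0.05 * rng.normal(size=ages.size)
pred = gp_predict(ages, log_m, np.array([50.0]), length_scale=10.0, noise=1e-2)
print(round(float(pred[0]), 1))
```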

Bayesian inference for stochastic differential mixed-effects models

Whitaker, Gavin Andrew January 2016 (has links)
Stochastic differential equations (SDEs) provide a natural framework for modelling intrinsic stochasticity inherent in many continuous-time physical processes. When such processes are observed in multiple individuals or experimental units, SDE driven mixed-effects models allow the quantification of both between and within individual variation. Performing Bayesian inference for such models, using discrete-time data that may be incomplete and subject to measurement error, is a challenging problem and is the focus of this thesis. Since, in general, no closed form expression exists for the transition densities of the SDE of interest, a widely adopted solution works with the Euler-Maruyama approximation, by replacing the intractable transition densities with Gaussian approximations. These approximations can be made arbitrarily accurate by introducing intermediate time-points between observations. Integrating over the uncertainty associated with the process at these time-points necessitates the use of computationally intensive algorithms such as Markov chain Monte Carlo (MCMC). We extend a recently proposed MCMC scheme to include the SDE driven mixed-effects framework. Key to the development of an efficient inference scheme is the ability to generate discrete-time realisations of the latent process between observation times. Such realisations are typically termed diffusion bridges. By partitioning the SDE into two parts, one that accounts for nonlinear dynamics in a deterministic way, and another as a residual stochastic process, we develop a class of novel constructs that bridge the residual process via a linear approximation. In addition, we adapt a recently proposed construct to a partial and noisy observation regime. We compare the performance of each new construct with a number of existing approaches, using three applications: a simple birth-death process, a Lotka-Volterra model and a model for aphid growth.
We incorporate the best performing bridge construct within an MCMC scheme to determine the posterior distribution of the model parameters. This methodology is then applied to synthetic data generated from a simple SDE model of orange tree growth, and real data consisting of observations on aphid numbers recorded under a variety of different treatment regimes. Finally, we provide a systematic comparison of our approach with an inference scheme based on a tractable approximation of the SDE, that is, the linear noise approximation.
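The Euler-Maruyama approximation at the heart of this inference scheme replaces the intractable transition density with a Gaussian step. A minimal path simulator (illustrative; the thesis builds intermediate time-points and diffusion bridges on top of this primitive):

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t_grid, rng):
    """Simulate one path of dX_t = drift(X)dt + diffusion(X)dW_t
    on the grid t_grid via the Euler-Maruyama scheme: each step is
    a Gaussian draw centred at x + drift(x)*dt."""
    x = np.empty(len(t_grid))
    x[0] = x0
    for i in range(1, len(t_grid)):
        dt = t_grid[i] - t_grid[i - 1]
        dw = rng.normal(scale=np.sqrt(dt))
        x[i] = x[i - 1] + drift(x[i - 1]) * dt + diffusion(x[i - 1]) * dw
    return x

# Ornstein-Uhlenbeck toy example: dX = -0.5*X dt + 0.3 dW
rng = np.random.default_rng(5)
t = np.linspace(0, 10, 2001)
path = euler_maruyama(lambda x: -0.5 * x, lambda x: 0.3, 2.0, t, rng)
print(path.shape)
```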

Linking tree shapes to the spread of infection using generalised branching processes

Plazzotta, Giacomo January 2016 (has links)
In this work we look at the shapes of pathogen phylogenetic trees derived from the spread of an infection. The mathematical framework is the general Crump-Mode-Jagers branching process. In an exploratory simulation study we look at how memory affects the general shape of the tree. By general shape we mean measures such as the imbalance of the tree, the average ladder length, and others. Memory is introduced by defining a non-constant infectivity function which, through a non-homogeneous Poisson process, defines the spread of the infection between hosts. We find that memory, in the way we introduce it, has less effect than expected on the overall shape, but has a marked effect on the size of the tree, even if the Malthusian parameter is kept constant. With a more theoretical approach, we investigate the frequency of subshapes in supercritical branching processes. Through characteristic functions we are able to count the number of subshapes within a growing tree, and we prove that the ratio between the number of such shapes and the number of tips converges to a limit as the tree grows. In the case of homogeneous processes, the limit of the cherries-to-tips ratio depends only on a simple function of the basic reproduction number of the pathogen. We use this relation to develop a new method of inference for the basic reproduction number. This method increases in precision for larger sets of taxa, which are becoming more and more available with the advent of next generation DNA sequencing. However, the correctness of trees reconstructed with currently available methods remains dubious, so the number of cherries may be incorrect. To bypass the reconstruction, we develop an algorithm that provides an estimate of the number of cherries directly from the sequences. Its precision is similar to or higher than that of methods which reconstruct the tree first and then count cherries.
Its high level of parallelisability allows linear time complexity, though it is quadratic if not parallelised. This technique, combined with the inference of the basic reproduction number, constitutes the first phylodynamic method that requires no tree. In a side project we evaluate the prevalence of mixed tuberculosis infection, which is likely to be twice as high as the detected 15%.
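Counting cherries — internal nodes whose two children are both tips — is the combinatorial primitive behind the inference method described above. A small sketch on binary trees encoded as nested tuples (the tuple encoding is our illustration, not the thesis's data structure):

```python
def count_cherries(tree):
    """Count cherries (internal nodes whose two children are both
    tips) in a binary tree given as nested tuples; a tip is a string."""
    if isinstance(tree, str):
        return 0
    left, right = tree
    if isinstance(left, str) and isinstance(right, str):
        return 1
    return count_cherries(left) + count_cherries(right)

def count_tips(tree):
    """Count the tips (leaves) of a nested-tuple binary tree."""
    if isinstance(tree, str):
        return 1
    return count_tips(tree[0]) + count_tips(tree[1])

# ((A,B),((C,D),E)) has cherries {A,B} and {C,D} among 5 tips
tree = (("A", "B"), (("C", "D"), "E"))
print(count_cherries(tree), count_tips(tree))  # 2 5
```

The cherries-to-tips ratio of a large reconstructed (or, per the thesis, reconstruction-free) tree then feeds the estimator of the basic reproduction number.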

A Stratonovich-Skorohod integral formula for Gaussian rough paths

Lim, Nengli January 2016 (has links)
Given a Gaussian process $X$, its canonical geometric rough path lift $\mathbf{X}$, and a solution $Y$ to the rough differential equation $\mathrm{d}Y_{t} = V\left (Y_{t}\right ) \circ \mathrm{d} \mathbf{X}_t$, we present a closed-form correction formula for $\int Y_t \circ \mathrm{d} \mathbf{X}_t - \int Y_t \, \mathrm{d} X_t$, i.e. the difference between the rough and Skorohod integrals of $Y$ with respect to $X$. When $X$ is standard Brownian motion, we recover the classical Stratonovich-to-Itô conversion formula, which we generalize to Gaussian rough paths with finite $p$-variation for $1 \leq p < 3$, and Volterra Gaussian rough paths for $3 \leq p < 4$. This encompasses many familiar examples, including fractional Brownian motion with $H > \frac{1}{4}$. Itô's formula can also be recovered in the case when $Y_t = \nabla f(X_t)$ for some smooth $f$.

To prove the formula, we show that $\int Y_t \, \mathrm{d} X_t$ is the $L^2(\Omega)$ limit of its Riemann-sum approximants, and that the approximants can be appended with a suitable compensation term without altering the limit. To show convergence of the Riemann-sum approximants, we utilize a novel characterization of the Cameron-Martin norm using multi-dimensional Young-Stieltjes integrals. For the main theorem, complementary regularity between the Cameron-Martin paths and the covariance function of $X$ is used to show the existence of these integrals. However, this turns out not to be a necessary condition: we provide a new set of conditions for their existence, as well as a new formulation of the classical Itô-Skorohod isometry in terms of these Young-Stieltjes integrals.
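The Brownian case of the correction formula can be checked numerically: the gap between midpoint (Stratonovich) and left-point (Itô) Riemann sums of $\int W \, \mathrm{d}W$ converges to half the quadratic variation, $T/2$. An illustrative simulation:

```python
import numpy as np

rng = np.random.default_rng(6)
T, n = 1.0, 200_000
dw = rng.normal(scale=np.sqrt(T / n), size=n)   # Brownian increments
w = np.concatenate([[0.0], np.cumsum(dw)])      # Brownian path on the grid

ito = np.sum(w[:-1] * dw)                       # left-point (Ito) sums
strat = np.sum(0.5 * (w[:-1] + w[1:]) * dw)     # midpoint (Stratonovich) sums
# The correction term is half the quadratic variation [W]_T = T:
print(round(strat - ito, 3))                    # close to 0.5
```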
