41

Dimensionality reduction in nonparametric conditional density estimation with applications to nonlinear time series

Rosemarin, Roy January 2012 (has links)
Nonparametric methods for estimating conditional density functions when the dimension of the explanatory variable is large are known to suffer from slow convergence rates due to the 'curse of dimensionality'. When estimating the conditional density of a random variable Y given a random d-vector X, a significant reduction in dimensionality can be achieved, for example, by approximating the conditional density by that of Y given θᵀX, where the unit vector θ is chosen to optimise the approximation under the Kullback-Leibler criterion. As a first step, this thesis pursues this 'single-index' approximation by standard kernel methods. Under strong-mixing conditions, we derive a general asymptotic representation for the orientation estimator, and as a result, the approximated conditional density is shown to enjoy the same first-order asymptotic properties as it would have if the optimal θ were known. We then generalise this result to a 'multi-index' approximation of Projection Pursuit (PP) type. We propose a multiplicative PP approximation of the conditional density of the form f(y|x) = f₀(y) ∏_{m=1}^{M} hₘ(y, θₘᵀx), where the projection directions θₘ and the multiplicative elements hₘ, m = 1, ..., M, are chosen to minimise a weighted version of the Kullback-Leibler relative entropy between the true and the estimated conditional densities. We first establish the validity of the approximation by proving some probabilistic properties, and in particular we show that the PP approximation converges weakly to the true conditional density as M approaches infinity. An iterative procedure for estimation is outlined, and in order to terminate the iterative estimation procedure, a variant of the bootstrap information criterion is suggested. Finally, the theory established for the single-index model serves as a building block in deriving the asymptotic properties of the PP estimator under strong-mixing conditions. All methods are illustrated in simulations with nonlinear time-series models, and some applications to prediction of daily exchange-rate data are demonstrated.
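A minimal sketch of the single-index idea described above, assuming Gaussian kernels, illustrative bandwidths and a leave-one-out log-likelihood as a stand-in for the Kullback-Leibler criterion (not the estimator developed in the thesis):

```python
# Sketch: kernel estimate of f(y | theta^T X) and a crude direction search.
import numpy as np
from scipy.optimize import minimize

def cond_density(y_grid, z0, index, y, h=0.3, b=0.3):
    """Kernel estimate of the conditional density f(y | theta^T X = z0)."""
    wz = np.exp(-0.5 * ((z0 - index) / h) ** 2)                 # kernel weights in the index
    ky = np.exp(-0.5 * ((y_grid[:, None] - y[None, :]) / b) ** 2) / (b * np.sqrt(2 * np.pi))
    return ky @ wz / wz.sum()

def neg_loglik(theta, x, y, h=0.3, b=0.3):
    """Leave-one-out log-likelihood surrogate for the Kullback-Leibler criterion."""
    theta = theta / np.linalg.norm(theta)                       # keep theta a unit vector
    index = x @ theta
    ll = 0.0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i                           # leave observation i out
        fi = cond_density(np.array([y[i]]), index[i], index[mask], y[mask], h, b)[0]
        ll += np.log(max(fi, 1e-12))
    return -ll

rng = np.random.default_rng(0)
x = rng.normal(size=(150, 3))
theta_true = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
y = np.sin(x @ theta_true) + 0.3 * rng.normal(size=150)         # nonlinear single-index data

res = minimize(neg_loglik, x0=np.array([1.0, 0.0, 0.0]), args=(x, y), method="Nelder-Mead")
print("estimated direction:", res.x / np.linalg.norm(res.x))
```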
42

A dynamic contagion process for modelling contagion risk in finance and insurance

Zhao, Hongbiao January 2012 (has links)
We introduce a new point process, the dynamic contagion process, by generalising the Hawkes process and Cox process with shot noise intensity. Our process includes both self-excited and externally excited jumps, which could be used to model the dynamics of contagion impact from endogenous and exogenous factors of the underlying system. We systematically analyse the theoretical distributional properties of this new process, based on the piecewise-deterministic Markov process theory developed in Davis (1984), and the extension of the martingale methodology used in Dassios and Embrechts (1989). The analytic expressions of the Laplace transform of the intensity process and probability generating function of the point process are derived. A simulation algorithm is provided for further industrial implementation and statistical analysis. Some extensions of this process and comparison with other similar processes are also investigated. The major object of this study is to produce a general mathematical framework for modelling the dependence structure of arriving events with dynamic contagion, which has the potential to be applicable to a variety of problems in economics, finance and insurance. We apply our research to the default probability of credit risk and ruin probability of risk theory.
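A rough simulation sketch of a dynamic-contagion-style intensity, using a simple time discretisation rather than the exact simulation algorithm the thesis provides; all parameter values are invented for illustration:

```python
# Sketch: intensity reverts to level `a` at rate `delta`, each arrival of the point
# process adds a self-excited jump, and an independent Poisson stream of externally
# excited jumps also feeds the intensity.
import numpy as np

rng = np.random.default_rng(1)
a, delta, lam0 = 0.5, 1.0, 1.0                   # reversion level, decay rate, initial intensity
self_jump, ext_jump, ext_rate = 0.8, 1.0, 0.3
T, dt = 50.0, 0.001

lam, t, events = lam0, 0.0, []
while t < T:
    if rng.uniform() < lam * dt:                 # self-excited arrival of the point process
        events.append(t)
        lam += self_jump
    if rng.uniform() < ext_rate * dt:            # externally excited jump in the intensity
        lam += ext_jump
    lam = a + (lam - a) * np.exp(-delta * dt)    # exponential decay towards the level a
    t += dt

print(f"{len(events)} arrivals on [0, {T}] with final intensity {lam:.3f}")
```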
43

Applied probabilistic forecasting

Binter, Roman January 2012 (has links)
In any actual forecast, the future evolution of the system is uncertain and the forecasting model is mathematically imperfect. Both ontic uncertainty about the future (due to true stochasticity) and epistemic uncertainty about the model (reflecting structural imperfections) complicate the construction and evaluation of probabilistic forecasts. In almost all nonlinear forecast models, the evolution of uncertainty in time is not tractable analytically and Monte Carlo approaches ("ensemble forecasting") are widely used. This thesis advances our understanding of the construction of forecast densities from ensembles, the evolution of the resulting probability forecasts and methods of establishing skill (benchmarks). A novel method of partially correcting the model error is introduced and shown to outperform a competitive approach. The properties of kernel dressing, a method of transforming ensembles into probability density functions, are investigated and the convergence of the approach is illustrated. A connection between forecasting and information theory is examined by demonstrating that kernel dressing via minimization of Ignorance implicitly leads to minimization of the Kullback-Leibler divergence. The Ignorance score is critically examined in the context of other information-theoretic measures. The method of Dynamic Climatology is introduced as a new approach to establishing skill (benchmarking). Dynamic Climatology is a new, relatively simple, nearest-neighbor-based model shown to be of value in benchmarking the global circulation models of the ENSEMBLES project. ENSEMBLES is a project funded by the European Union that brings together all major European weather forecasting institutions in order to develop and test state-of-the-art seasonal weather forecasting models. By benchmarking the seasonal forecasts of the ENSEMBLES models, we demonstrate that Dynamic Climatology can help us better understand the value and forecasting performance of large-scale circulation models. Lastly, a new approach to correcting (improving) an imperfect model is presented, an idea inspired by [63]. The main idea is based on a two-stage procedure in which a second-stage 'corrective' model iteratively corrects systematic parts of the forecasting errors produced by a first-stage 'core' model. The corrector is iterative in nature, so that at a given time t the core model forecast is corrected and then used as an input to the next iteration of the core model to generate a time t + 1 forecast. Using two nonlinear systems we demonstrate that the iterative corrector is superior to alternative approaches based on direct (non-iterative) forecasts. While the choice of the corrector model class is flexible, we use radial basis functions. Radial basis functions are frequently used in statistical learning and surface approximation and involve a number of computational aspects which we discuss in some detail.
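A minimal sketch of kernel dressing and the Ignorance score as described above, assuming Gaussian kernels and a simple plug-in bandwidth (not the thesis code):

```python
# Sketch: turn an ensemble into a forecast density and score it at the verifying outcome.
import numpy as np

def kernel_dress(ensemble, bandwidth=None):
    """Return a forecast density p(y) built from Gaussian kernels on the ensemble members."""
    ensemble = np.asarray(ensemble, dtype=float)
    if bandwidth is None:
        bandwidth = 1.06 * ensemble.std(ddof=1) * len(ensemble) ** (-0.2)  # Silverman-style plug-in
    def density(y):
        z = (y - ensemble[:, None]) / bandwidth
        return np.exp(-0.5 * z ** 2).mean(axis=0) / (bandwidth * np.sqrt(2 * np.pi))
    return density

def ignorance(density, outcome):
    """Ignorance score: minus log2 of the forecast density at the observed outcome."""
    return -np.log2(density(np.array([outcome]))[0])

rng = np.random.default_rng(2)
ensemble = rng.normal(1.0, 0.5, size=24)          # a 24-member ensemble forecast
print("Ignorance at outcome 1.2:", ignorance(kernel_dress(ensemble), 1.2))
```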
44

Quantitative modelling of market booms and crashes

Sheynzon, Ilya January 2012 (has links)
Multiple equilibria models are one of the main categories of theoretical models for stock market crashes. To the best of my knowledge, existing multiple equilibria models have been developed within a discrete time framework and only explain the intuition behind a single crash on the market. The main objective of this thesis is to model multiple equilibria and demonstrate how market prices move from one regime into another in a continuous time framework. As a consequence of this, a multiple jump structure is obtained with both possible booms and crashes, which are defined as points of discontinuity of the stock price process. I consider five different models for stock market booms and crashes, and look at their pros and cons. For all of these models, I prove that the stock price is a cadlag semimartingale process and find conditional distributions for the time of the next jump, the type of the next jump and the size of the next jump, given the public information available to market participants. Finally, I discuss the problem of model parameter estimation and conduct a number of numerical studies.
45

Statistical inference on linear and partly linear regression with spatial dependence : parametric and nonparametric approaches

Thawornkaiwong, Supachoke January 2012 (has links)
The typical assumption made in regression analysis with cross-sectional data is that of independent observations. However, this assumption can be questionable in some economic applications where spatial dependence of observations may arise, for example, from local shocks in an economy, interaction among economic agents and spillovers. The main focus of this thesis is on regression models under three different models of spatial dependence. First, a multivariate linear regression model with the disturbances following a spatial autoregressive (SAR) process is considered. It is shown that the Gaussian pseudo-maximum likelihood estimate of the regression and the spatial autoregressive parameters can be root-n-consistent under strong spatial dependence or explosive variances, provided these are not too strong, without making restrictive assumptions on the parameter space. To achieve efficiency improvements, adaptive estimation, in the sense of Stein (1956), is also discussed, where the unknown score function is nonparametrically estimated by power series estimation. A large section is devoted to an extension of power series estimation for random variables with unbounded supports. Second, linear and semiparametric partly linear regression models with the disturbances following a generalized linear process for triangular arrays proposed by Robinson (2011) are considered. It is shown that instrumental variables estimates of the unknown slope parameters can be root-n-consistent even under some strong spatial dependence. A simple nonparametric estimate of the asymptotic variance matrix of the slope parameters is proposed. An empirical illustration of the estimation technique is also conducted. Finally, linear regression where the random variables follow a marked point process is considered. The focus is on a family of random signed measures, constructed from the marked point process, that are second-order stationary, and their spectral properties are discussed. Asymptotic normality of the least squares estimate of the regression parameters is derived from the associated random signed measures under mixing assumptions. Nonparametric estimation of the asymptotic variance matrix of the slope parameters is discussed, where an algorithm to obtain a positive definite estimate, with faster rates of convergence than the traditional ones, is proposed.
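A small illustrative sketch of the first setting, simulating a linear regression with SAR disturbances and profiling the Gaussian pseudo-log-likelihood over the spatial parameter; the weight matrix and all parameter values are toy choices, not those of the thesis:

```python
# Sketch: y = X*beta + u with u = rho*W*u + eps, Gaussian pseudo-ML profile over rho.
import numpy as np

rng = np.random.default_rng(3)
n, rho_true, beta_true = 200, 0.5, np.array([1.0, -2.0])

# toy spatial weights: each unit's neighbours are the previous and next unit on a ring
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

X = np.column_stack([np.ones(n), rng.normal(size=n)])
eps = rng.normal(size=n)
u = np.linalg.solve(np.eye(n) - rho_true * W, eps)     # SAR disturbances
y = X @ beta_true + u

def profile_loglik(rho):
    A = np.eye(n) - rho * W
    y_t, X_t = A @ y, A @ X                            # transform back to iid errors
    beta = np.linalg.lstsq(X_t, y_t, rcond=None)[0]
    resid = y_t - X_t @ beta
    sigma2 = resid @ resid / n
    _, logdet = np.linalg.slogdet(A)                   # |I - rho*W| term of the likelihood
    return logdet - 0.5 * n * np.log(sigma2), beta

rhos = np.linspace(-0.9, 0.9, 91)
best = max(rhos, key=lambda r: profile_loglik(r)[0])
print("pseudo-ML rho:", round(best, 2), "beta:", profile_loglik(best)[1].round(2))
```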
46

Bayesian inference for indirectly observed stochastic processes : applications to epidemic modelling

Dureau, Joseph January 2013 (has links)
Stochastic processes are mathematical objects that offer a probabilistic representation of how some quantities evolve in time. In this thesis we focus on estimating the trajectory and parameters of dynamical systems in cases where only indirect observations of the driving stochastic process are available. We first explored ways to use weekly recorded numbers of influenza cases to capture how the frequency and nature of contacts with infected individuals evolved in time. The latter was modelled with diffusions and can be used to quantify the impact of varying drivers of epidemics such as holidays, climate, or prevention interventions. Following this idea, we estimated how the frequency of condom use evolved during the intervention of the Gates Foundation against HIV in India. In this setting, the available estimates of the proportion of individuals infected with HIV were not only indirect but also very scarce observations, leading to specific difficulties. Lastly, we developed a methodology for fractional Brownian motion (fBM), here a fractional stochastic volatility model, indirectly observed through market prices. The intractability of the likelihood function, requiring augmentation of the parameter space with the diffusion path, is ubiquitous in this thesis. We aimed for inference methods robust to refinements of the time discretisation, which are needed to ensure the accuracy of Euler schemes. The particle marginal Metropolis-Hastings (PMMH) algorithm exhibits this mesh-free property. We propose the use of fast approximate filters as a pre-exploration tool to estimate the shape of the target density, for a quicker and more robust adaptation phase of the asymptotically exact algorithm. The fBM problem could not be treated with the PMMH, and required an alternative methodology based on reparameterisation and advanced Hamiltonian Monte Carlo techniques on the diffusion path space, which would also be applicable in the Markovian setting.
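A toy sketch of the PMMH idea mentioned above, on a simple Poisson state-space model rather than an epidemic model; the prior, proposal scale and particle numbers are illustrative assumptions:

```python
# Sketch: x_t = x_{t-1} + sigma*w_t, y_t ~ Poisson(exp(x_t)); a bootstrap particle
# filter supplies an unbiased likelihood estimate of sigma inside a Metropolis chain.
import numpy as np

rng = np.random.default_rng(4)
T, sigma_true = 100, 0.3
x = np.cumsum(sigma_true * rng.normal(size=T))
y = rng.poisson(np.exp(x))

def log_marginal_likelihood(sigma, n_particles=300):
    """Bootstrap particle filter estimate of log p(y | sigma)."""
    particles = np.zeros(n_particles)
    ll = 0.0
    for t in range(T):
        particles = particles + sigma * rng.normal(size=n_particles)    # propagate
        logw = y[t] * particles - np.exp(particles)                      # Poisson log-weights (constant dropped)
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())   # resample
        particles = particles[idx]
    return ll

sigma, ll = 0.5, log_marginal_likelihood(0.5)
chain = []
for _ in range(500):                                 # short PMMH chain, for illustration
    prop = abs(sigma + 0.05 * rng.normal())          # random-walk proposal, reflected at 0
    ll_prop = log_marginal_likelihood(prop)
    if np.log(rng.uniform()) < ll_prop - ll:         # flat prior on sigma > 0
        sigma, ll = prop, ll_prop
    chain.append(sigma)

print("posterior mean of sigma:", np.mean(chain[100:]))
```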
47

Temporal and spatial modelling of extreme river flow values in Scotland

Franco Villoria, Maria January 2013 (has links)
Extreme river flows can lead to inundation of floodplains, with consequent impacts for society, the environment and the economy. Flood risk estimates rely on river flow records, hence a good understanding of the patterns in river flow, and, in particular, in extreme river flow, is important to improve estimation of risk. In Scotland, a number of studies suggest a West to East rainfall gradient and increased variability in rainfall and river flow. This thesis presents and develops a number of statistical methods for analysing different aspects of extreme river flows, namely their variability, temporal trend, seasonality and spatial dependence. The methods are applied to a large data set, provided by SEPA, of daily river flow records from 119 gauging stations across Scotland. The records range in length from 10 up to 80 years and are characterized by non-stationarity and long-range dependence. Non-stationarity is examined using wavelets. The results revealed significant changes in the variability of the seasonal pattern over the last 40 years, with periods of high and low variability associated with flood-rich and flood-poor periods, respectively. Results from a wavelet coherency analysis suggest a significant influence of large-scale climatic indices (NAO, AMO) on river flow. A quantile regression model is then developed based on an additive regression framework using P-splines, where the parameters are fitted via weighted least squares. The proposed model includes a trend and a seasonal component, estimated using the back-fitting algorithm. Incorporation of covariates and extension to higher-dimensional data sets are straightforward. The model is applied to a set of eight Scottish rivers to estimate the trend and seasonality in the 95th quantile of river flow. The results suggest differences in the long-term trend between the East and the West and a more variable seasonal pattern in the East. Two different approaches are then considered for modelling spatial extremes. The first approach consists of a conditional probability model and concentrates on small subsets of rivers. Then a spatial quantile regression model is developed, extending the temporal quantile model above to estimate a spatial surface using the tensor product of the marginal B-spline bases. Residual spatial correlation, modelled with a Gaussian correlation function, is incorporated into standard error estimation. Results from the 95th quantile fitted for individual months suggest changes in the spatial pattern of extreme river flow over time. The extension of the spatial quantile model to a fully spatio-temporal model is briefly outlined and the main statistical issues are identified.
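A small sketch of penalised B-spline ("P-spline") quantile regression for the 95th quantile in a purely temporal toy setting, fitted by iteratively reweighted least squares as a stand-in for the weighted-least-squares back-fitting scheme described above; knots, penalty order and smoothing parameter are illustrative choices:

```python
# Sketch: 95th-quantile P-spline fit to heteroscedastic data via reweighted least squares.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(5)
n, tau = 500, 0.95
x = rng.uniform(0.01, 0.99, n)
y = np.sin(2 * np.pi * x) + (0.2 + 0.3 * x) * rng.normal(size=n)   # heteroscedastic data

# cubic B-spline basis with equally spaced knots, plus a second-difference penalty
k, xi = 3, np.linspace(0, 1, 12)
t = np.r_[np.repeat(xi[0], k), xi, np.repeat(xi[-1], k)]
n_basis = len(t) - k - 1
B = np.column_stack([BSpline(t, np.eye(n_basis)[j], k)(x) for j in range(n_basis)])
D = np.diff(np.eye(n_basis), n=2, axis=0)
lam = 1.0

a = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)              # start from a mean fit
for _ in range(50):
    r = y - B @ a
    w = np.where(r >= 0, tau, 1 - tau) / np.maximum(np.abs(r), 1e-6)   # check-loss weights
    a = np.linalg.solve(B.T @ (w[:, None] * B) + lam * D.T @ D, B.T @ (w * y))

b05 = np.array([BSpline(t, np.eye(n_basis)[j], k)(0.5) for j in range(n_basis)])
print("fitted 95th quantile at x = 0.5:", float(b05 @ a))
```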
48

The applicability of statistical techniques to credit portfolios with specific reference to the use of risk theory in banking

O'Connor, Ronan B. January 1996 (has links)
This thesis examines the use of statistical techniques in credit portfolio management, with emphasis on actuarial and risk-theoretic pricing and reserving measures. The bank corporate loan portfolio is envisaged as an insurance collective, with margins charged for credit risk forming premium income, provisions made forming claims outgo, and variation over time in provisioning and profitability producing a need for reserves. The research leads to the computation of portfolio-specific measures of risk, and suggests that a value-at-risk (VAR) based reserve computation has several advantages over current regulatory reserving methodology in bank capital adequacy regulation (CAR) with respect to non-residential private sector loan portfolios. These current CAR practices are invariably used by banks to compute the capital adequacy backing required on loans. A loan pricing model is developed that allocates the capital required by reference to observed provisioning rates across a total of 64 different combinations of rating factors. This represents a statistically rigorous return on risk-adjusted capital (RORAC) approach to loan pricing. The suggested approach is illustrated by reference to a particular portfolio of loans. The reserving and pricing measures computed are portfolio-specific, but the methodology developed and tested on the specific portfolio (dataset) of loans has a wider, more general applicability. A credit market comprising portfolios which are both more and less risky than the original particular portfolio is hypothesised, and existing regulation is compared to VAR regulation in the context of the hypothesised credit market. Fewer insolvencies are observed using the VAR framework than under existing regulation, and problem portfolios are identified earlier than under existing regulation. For the particular portfolio of loans, existing algorithm-based loan pricing is compared with the proposed loan pricing model. Significant differences are observed in loan pricing by reference to gearing and collateral, and the elimination of observed inefficiencies in pricing is recommended. Although the proposed model has some limitations, it is argued to be an improvement on existing regulatory and banking practice.
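A toy sketch of a value-at-risk style reserve for a loan portfolio treated as an insurance collective; default probabilities, exposures and the confidence level are invented, and the aggregate loss is simulated directly rather than derived from provisioning data as in the thesis:

```python
# Sketch: simulate annual portfolio losses and hold a reserve equal to the VaR
# quantile of the loss distribution in excess of the expected loss (the premium).
import numpy as np

rng = np.random.default_rng(6)
n_loans, n_sims, alpha = 1000, 5000, 0.99
pd = rng.uniform(0.005, 0.03, size=n_loans)              # per-loan default probabilities
ead = rng.lognormal(mean=11, sigma=1.0, size=n_loans)    # exposures at default
lgd = 0.45                                               # loss given default

defaults = (rng.uniform(size=(n_sims, n_loans)) < pd).astype(float)
losses = defaults @ (ead * lgd)                          # simulated annual aggregate losses
expected_loss = losses.mean()
var_reserve = np.quantile(losses, alpha) - expected_loss # capital held beyond the premium

print(f"expected loss {expected_loss:,.0f}, VaR-based reserve {var_reserve:,.0f}")
```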
49

Optimal inspection and maintenance for stochastically deteriorating systems

Dagg, Richard Andrew January 1999 (has links)
This thesis concerns the optimisation of maintenance and inspection for stochastically deteriorating systems. The motivation for this thesis is the problem of determining condition based maintenance policies, for systems whose degradation may be modelled by a continuous time stochastic process. Our emphasis is mainly on using the information gained from inspecting the degradation to determine efficient maintenance and inspection policies. The system we shall consider is one in which the degradation is modelled by a Levy process, and in which failure is defined to occur when the degradation reaches a critical level. It is assumed that the system may be inspected or repaired at any time, and that the costs of inspections and repairs may depend on the level of system degradation. Initially we look at determining optimal inspection policies for systems whose degradation may be directly and perfectly observed, before extending this analysis to the case where the degradation is unobservable, and a related covariate process is used to determine maintenance decisions. In both cases it is assumed the replacement policy is fixed and known in advance. Finally we consider the case of joint optimisation of maintenance and inspection, for cases in which the maintenance action has either deterministic or random effect on the degradation level. In all of these cases we use the properties of the Levy process degradation model to form a recursive relationship which allows us to determine integral and functional equations for the maintenance cost of the system. Solutions to these determine optimal periodic and non-periodic inspection and maintenance policies. Throughout the thesis we use the gamma process degradation model as an example. For this model we determine optimal perfect inspection policies for the cases when inspections are periodic and non-periodic. As a special case of a covariate process we consider the optimal imperfect periodic inspection policy. Finally we obtain jointly optimal deterministic-maintenance and periodic-inspection policies.
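A Monte Carlo sketch of a periodic inspection policy under gamma-process degradation, in the spirit of the example above; the costs, thresholds and gamma-process parameters are invented, and the thesis instead evaluates such policies via integral and functional equations:

```python
# Sketch: inspect every tau time units, replace preventively above threshold m,
# correctively (at higher cost) once the failure level L has been crossed.
import numpy as np

rng = np.random.default_rng(7)
shape_rate, scale, L, m = 1.0, 1.0, 10.0, 7.0        # gamma increment over tau ~ Gamma(shape_rate*tau, scale)
c_insp, c_prev, c_corr = 1.0, 10.0, 50.0

def cost_rate(tau, n_cycles=2000):
    """Long-run cost per unit time estimated by renewal-reward Monte Carlo."""
    total_cost, total_time = 0.0, 0.0
    for _ in range(n_cycles):
        x, t = 0.0, 0.0
        while True:
            x += rng.gamma(shape_rate * tau, scale)   # degradation accrued between inspections
            t += tau
            total_cost += c_insp
            if x >= L:                                # failure found at inspection: corrective replacement
                total_cost += c_corr
                break
            if x >= m:                                # preventive replacement threshold reached
                total_cost += c_prev
                break
        total_time += t
    return total_cost / total_time

taus = np.arange(0.5, 6.0, 0.5)
rates = [cost_rate(tau) for tau in taus]
print("approximately optimal inspection interval:", taus[int(np.argmin(rates))])
```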
50

Methods for demographic inference from single-nucleotide polymorphism data

Mair, Colette January 2012 (has links)
The distribution of the current human population is the result of many complex historical and prehistorical demographic events that have shaped variation in the human genome. Genomic dissimilarities between individuals from different geographical regions can potentially unveil something of these processes. The greatest differences lie between, and within, African populations and most research suggests the origin of modern humans lies within Africa. However, differing models have been proposed for the evolutionary processes that led to humans inhabiting most of the world. This thesis develops a hypothesis test shown to be powerful in distinguishing between two such models. The first ("migration") model assumes the population of interest is divided into subpopulations that exchange migrants at a constant rate arbitrarily far back in the past, whilst the second ("isolation") model assumes that an ancestral population iteratively segregates into subpopulations that evolve independently. Although both models are simplistic, they do capture key aspects of the opposing theories of the history of modern humans. Given single nucleotide polymorphism (SNP) data from two subpopulations, the method described here tests a global null hypothesis that the data are from an isolation model. The test takes a parametric bootstrap approach, iteratively simulating data under the null hypothesis and computing a set of summary statistics shown to be able to distinguish between the two models. Each summary statistic forms the basis of a statistical hypothesis test where the observed value of the statistic is compared to the simulated values. The global null hypothesis is accepted if each individual test is accepted. A correction for multiple comparisons is used to control the type I error rate of this compound test. Extensions to this hypothesis test are given which adapt it to deal with SNP ascertainment and to better handle large genomic data sets. After validation by simulation, where the 'true' model is known, the methods are illustrated on data from the HapMap project using two Kenyan populations and the Japanese and Yoruba populations.
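A skeleton of the parametric bootstrap test with a Bonferroni correction, using a crude stand-in for the isolation-model simulator (independent drift from shared ancestral allele frequencies) rather than the coalescent SNP simulation a real analysis would require:

```python
# Sketch: simulate under the null, compare observed summary statistics to the
# bootstrap distribution, and reject the global null if any corrected test rejects.
import numpy as np

rng = np.random.default_rng(8)
n_snps, n_boot, alpha = 500, 200, 0.05

def simulate_isolation(n_snps):
    """Stand-in null model: two populations drifting independently from common frequencies."""
    p_anc = rng.uniform(0.05, 0.95, n_snps)
    p1 = np.clip(p_anc + 0.1 * rng.normal(size=n_snps), 0.01, 0.99)
    p2 = np.clip(p_anc + 0.1 * rng.normal(size=n_snps), 0.01, 0.99)
    return p1, p2

def summaries(p1, p2):
    """A few summary statistics of between-population differentiation."""
    d = p1 - p2
    fst_like = (d ** 2).mean() / (p1 * (1 - p1) + p2 * (1 - p2)).mean()
    return np.array([np.abs(d).mean(), d.var(), fst_like])

obs = summaries(*simulate_isolation(n_snps))          # pretend these are the observed data
boot = np.array([summaries(*simulate_isolation(n_snps)) for _ in range(n_boot)])

# two-sided bootstrap p-value for each statistic, then Bonferroni across statistics
p_vals = np.array([
    2 * min((boot[:, j] <= obs[j]).mean(), (boot[:, j] >= obs[j]).mean())
    for j in range(boot.shape[1])
])
reject = (p_vals < alpha / len(p_vals)).any()         # reject the global null if any test rejects
print("p-values:", p_vals.round(3), "reject isolation model:", reject)
```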
