21

Aspects of population Markov chain Monte Carlo and reversible jump Markov chain Monte Carlo

Bakra, Eleni January 2009 (has links)
This thesis develops two new population Markov chain Monte Carlo algorithms and an automatic proposal mechanism for the reversible jump Markov chain Monte Carlo algorithm.
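For intuition, here is a minimal sketch of the population MCMC idea (a parallel-tempering flavour, not the specific algorithms developed in the thesis): several Metropolis chains target tempered versions of a toy bimodal density, and occasional exchange moves let states migrate between temperatures. The target, temperatures and step size are illustrative assumptions.

```python
import numpy as np

def log_target(x):
    """Log-density (unnormalised) of a bimodal toy target: mixture of N(3,1) and N(-3,1)."""
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

def population_mcmc(n_iter=5000, temps=(1.0, 0.5, 0.25), step=1.0, seed=0):
    """Minimal population MCMC: one random-walk Metropolis chain per temperature,
    plus exchange (swap) moves between adjacent chains."""
    rng = np.random.default_rng(seed)
    k = len(temps)
    x = np.zeros(k)                # current state of each chain
    samples = np.empty(n_iter)
    for it in range(n_iter):
        # Within-chain Metropolis updates against the tempered targets
        for j in range(k):
            prop = x[j] + step * rng.standard_normal()
            log_a = temps[j] * (log_target(prop) - log_target(x[j]))
            if np.log(rng.random()) < log_a:
                x[j] = prop
        # Exchange move between a random adjacent pair of chains
        j = rng.integers(k - 1)
        log_a = (temps[j] - temps[j + 1]) * (log_target(x[j + 1]) - log_target(x[j]))
        if np.log(rng.random()) < log_a:
            x[j], x[j + 1] = x[j + 1], x[j]
        samples[it] = x[0]         # keep only the temperature-1 chain
    return samples

draws = population_mcmc()
print(draws.mean(), draws.std())   # should cover both modes, mean near 0
```

The hotter chains mix freely between the two modes, and the swap moves carry that mobility down to the chain targeting the actual density.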
22

The use of Bayes factors in fine-scale genetic association studies

Young, Robin January 2011 (has links)
The aim of this thesis is to explore and compare methods for finding possible genetic effects in fine-scale genotype-phenotype association studies. Fine-scale genetic association studies present unique challenges, due to the strong linkage that can exist between different variants and to issues arising from multiple testing. However, unlike Genome-Wide Association Studies (GWAS), there is potential to use the information from haplotypes arising from areas of low genetic recombination. To test the effectiveness of these approaches, the PheGe-Sim (Phenotype Genotype Simulation) application has been developed to simulate fine-scale phenotype-genotype data sets under a variety of scenarios. The simulations are based upon the coalescent model, with extensions for population expansion, recombination and finite-sites mutation that allow real data sets to be more closely mirrored. The simulated data sets are subsequently used to assess how effectively each of the methods used in this thesis finds the known simulated causal variants. One method suitable for testing associations in fine-scale genetic association studies is Treescan (Templeton et al., 2005), which uses the relationships between closely related haplotypes to increase the power to find genetic determinants of a phenotype. A haplotype tree is constructed, and each branch can be sequentially tested for any evidence of association from the resultant groups. To provide comparisons with the Treescan method, analogous methods based on each SNP (single nucleotide polymorphism) and on each haplotype have been implemented. As a result of the issues of multiple testing in the context of GWAS, Balding (2006) advocated the use of Bayes factors as an alternative to the standard use of p-values for categorical data sets. In this thesis, Bayes factors are formulated that are suitable for continuous phenotype data and for the context of fine-scale association studies. These Bayes factors are used in a method that follows the Treescan approach of assessing various groupings from a haplotype tree, adapted to take advantage of the flexibility that Bayes factors offer. Single-SNP and haplotype approaches have also been programmed using the same implementation of Bayes factors. The PheGe-Find (Phenotype Genotype-Find) application has been developed to implement the association methods when supplied with the appropriate genotype and phenotype input files. In addition to testing the methods on simulated data, the approaches are also tested on two real data sets. The first concerns genotypes and phenotypes of the Drosophila melanogaster fruit fly, which has previously been assessed using the original Treescan approach of Templeton et al. (2005). This allows comparisons to be made between the different approaches on a data set where there is strong evidence of a causal link between the genotype and the phenotypes concerned. A second data set of genetic variants surrounding the human ADRA1A gene is also assessed for potential causative genetic effects on blood pressure and heart rate phenotype measurements.
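As a flavour of the Bayes-factor approach, here is a minimal sketch — not the thesis's own formulation — that scores a single SNP against a continuous phenotype using the Schwarz (BIC) approximation to the Bayes factor. The simulated genotypes, additive coding and effect size are assumptions made purely for illustration.

```python
import numpy as np

def bic_linear(y, X):
    """BIC of a Gaussian linear model fitted by OLS (MLE of the variance)."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    loglik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)  # log-likelihood at the MLE
    return -2 * loglik + (p + 1) * np.log(n)               # +1 for the variance parameter

def approx_bayes_factor(y, genotype):
    """Schwarz approximation to the Bayes factor for a single-SNP association
    with a continuous phenotype: H1 (additive effect) versus H0 (intercept only)."""
    n = len(y)
    X0 = np.ones((n, 1))                          # null model
    X1 = np.column_stack([np.ones(n), genotype])  # intercept + allele count
    return np.exp(0.5 * (bic_linear(y, X0) - bic_linear(y, X1)))

rng = np.random.default_rng(1)
g = rng.integers(0, 3, size=500)         # genotypes coded as 0/1/2 copies of an allele
y = 0.3 * g + rng.standard_normal(500)   # phenotype with a true additive effect
print(approx_bayes_factor(y, g))         # large values favour an association
```

A tree-based method in the spirit of Treescan would compute such a factor for the two phenotype groups induced by cutting each branch of the haplotype tree, rather than for a single SNP.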
23

Estimating air pollution and its relationship with human health

Powell, Helen Louise January 2012 (has links)
The health impact of short-term exposure to air pollution has been the focus of much recent research, the majority of which is based on time-series studies. A time-series study uses health, pollution and meteorological data from an extended urban area. Aggregate-level data are used to describe the health of the population living within the region, typically a daily count of mortality or morbidity events. Air pollution data are obtained from a number of fixed-site monitors located throughout the study region. These monitors measure background pollution levels at a number of time intervals throughout the day, and a daily average is typically calculated for each site. A number of pollutants are measured, including carbon monoxide (CO), nitrogen dioxide (NO2), particulate matter (PM2.5 and PM10) and sulphur dioxide (SO2). The fixed-site monitors also measure a number of meteorological covariates such as temperature, humidity and solar radiation. In this thesis I present extensions to the methods currently used to estimate the association between air pollution exposure and the risks to human health. Comparing the efficacy of my approaches with those adopted by the majority of researchers highlights some of the deficiencies of the standard approaches to modelling such data. The work presented here is centred on three themes, all of which focus on the air pollution component of the model. The first and second themes concern, respectively, what is used as a spatially representative measure of air pollution, and how to allow for uncertainty in this inherently unknown quantity when estimating the associated health risks. For example, the majority of air pollution and health studies consider only the health effects of a single pollutant rather than those of overall air quality. In addition, the single-pollutant estimate is taken as the average concentration level across the network of monitors, which is unlikely to equal the average concentration across the study region because of the likely non-random placement of the monitoring network. To address these issues I propose two methods for estimating a spatially representative measure of pollution. Both are hierarchical Bayesian methods, as this allows for the correct propagation of uncertainty; the first uses geostatistical methods, while the second is a simple regression model that includes a time-varying coefficient for covariates that are fixed in space. I compare the two approaches in terms of their predictive accuracy using cross-validation. The third theme considers the shape of the estimated concentration-response function between air pollution and health. Currently used modelling techniques place no constraints on this function and can therefore produce unrealistic results, such as decreasing risks to health at high concentrations. I therefore propose a model which imposes three constraints on the concentration-response function in order to produce a more sensibly shaped curve and so eliminate such misinterpretations. The efficacy of this approach is assessed via a simulation study. All of the methods presented in this thesis are illustrated using data from the Greater London area.
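The core of a standard time-series health model is a Poisson regression of daily event counts on pollution and meteorology. The sketch below — synthetic data and statsmodels as an assumed toolchain, not the extended models of the thesis — estimates the relative risk associated with a 10 µg/m³ increase in PM10.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000                                    # days of study
pm10 = 20 + 10 * rng.gamma(2.0, 1.0, n)     # synthetic daily PM10 (µg/m³)
temp = 10 + 8 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n)

# True model: log expected deaths rises by 0.0005 per µg/m³ of PM10
log_mu = np.log(50) + 0.0005 * pm10 - 0.01 * (temp - 10)
deaths = rng.poisson(np.exp(log_mu))

X = sm.add_constant(np.column_stack([pm10, temp]))
fit = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()
beta_pm10 = fit.params[1]
print("RR per 10 µg/m³ of PM10:", np.exp(10 * beta_pm10))  # should be near exp(0.005)
```

In this standard setup the pollution column is simply the network average, plugged in as if known; the thesis's first two themes replace that column with a model-based spatial prediction and carry its uncertainty through to the risk estimate.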
24

Econometric inference in models with nonstationary time series

Stamatogiannis, Michalis P. January 2010 (has links)
We investigate the finite-sample behaviour of the ordinary least squares (OLS) estimator in vector autoregressive (VAR) models. The data generating process is assumed to be a purely nonstationary first-order VAR. Using Monte Carlo simulation and numerical optimization, we derive response surfaces for OLS bias and variance in terms of the VAR dimensions, both under correct model specification and under several types of over-parameterization: we include a constant, a constant and trend, and excess autoregressive lags. Correction factors are introduced that minimise the mean squared error (MSE) of the OLS estimator. Our analysis improves and extends one of the main finite-sample multivariate analytical bias results of Abadir, Hadri and Tzavalis (1999), generalises the univariate variance and MSE results of Abadir (1995) to a multivariate setting, and complements various asymptotic studies. The distribution of unit root test statistics generally contains nuisance parameters that correspond to the correlation structure of the innovation errors, and the presence of such nuisance parameters can lead to serious size distortions. To address this issue, we adopt an approach based on the characterization of the class of asymptotically similar critical regions for the unit root hypothesis and on two new optimality criteria for the choice of a test within this class. The correlation structure of the innovation sequence takes the form of a moving average process, the order of which is determined by an appropriate information criterion. Limit distribution theory for the resulting test statistics is developed, and simulation evidence suggests that our statistics have substantially reduced size distortions while retaining good power properties. Stock return predictability is a fundamental issue in asset pricing. The conclusions of empirical analyses on the existence of stock return predictability vary according to the time-series properties of the economic variables considered as potential predictors. Given the uncertainty about the degree of persistence of these variables, it is important to operate in the most general possible modelling framework. This is provided by the IVX methodology developed by Phillips and Magdalinos (2009) in the context of cointegrated systems with no deterministic components. This method is modified to apply to multivariate systems of predictive regressions with an intercept in the model. The resulting modified IVX approach yields chi-squared inference for general linear restrictions on the regression coefficients that is robust to the degree of persistence of the predictor variables. In addition to extending the class of generating mechanisms for predictive regression, the approach broadens the range of testable hypotheses, assessing the combined effects of different explanatory variables on stock returns rather than the individual effect of each explanatory variable.
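A minimal version of the Monte Carlo experiment underlying the response surfaces might look as follows; the dimension, sample size and replication count are arbitrary assumptions, and only the correctly specified case (no intercept or trend) is shown.

```python
import numpy as np

def ols_bias_unit_root_var(k=2, T=100, n_rep=2000, seed=3):
    """Monte Carlo estimate of the OLS bias in a purely nonstationary VAR(1):
    y_t = A y_{t-1} + e_t with A = I_k, i.e. k independent random walks."""
    rng = np.random.default_rng(seed)
    bias = np.zeros((k, k))
    for _ in range(n_rep):
        e = rng.standard_normal((T, k))
        y = np.cumsum(e, axis=0)                    # random walks started at zero
        Y, X = y[1:], y[:-1]                        # regress y_t on y_{t-1}
        A_hat = np.linalg.solve(X.T @ X, X.T @ Y).T
        bias += A_hat - np.eye(k)
    return bias / n_rep

print(ols_bias_unit_root_var())   # diagonal entries are negative, of order 1/T
```

Repeating this over a grid of (k, T) values and fitting a smooth function of the dimensions to the recorded biases is the response-surface idea; the thesis does this for bias and variance under several specifications.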
25

Whittle estimation of multivariate exponential volatility models

Marchese, Malvina January 2015 (has links)
The aim of this thesis is to offer some insights into two topics of interest in time-series econometric research. The first chapter derives the rates of convergence and the asymptotic normality of the pooled OLS estimators for linear regression panel models with mixed stationary and non-stationary regressors. This work is prompted by the consideration that many economic models of interest feature a mixture of I(1) and I(0) regressors, for example models for the analysis of demand systems or for the assessment of the relationship between growth and inequality. We present results for a model where the regressors and the regressand are cointegrated. We find that the OLS estimator is asymptotically normal, with convergence rates T√n and √(nT) for the non-stationary and the stationary regressors respectively. Phillips and Moon (1990) show that in a cointegrated regression model with non-stationary regressors, the OLS estimator converges at a rate of T√n. We find that the presence of one stationary regressor in the model does not increase the rate of convergence. All the results are derived for sequential limits, with T going to infinity followed by n, and under quite restrictive regularity conditions. Chapters 3-5 focus on parametric multivariate exponential volatility models. It has long been recognized that the volatility of stock returns responds differently to good news and bad news. In particular, while negative shocks tend to increase future volatility, positive ones of the same size will increase it by less or even decrease it. This was in fact one of the chief motivations that led Nelson (1991) to introduce the univariate EGARCH model. More recently, empirical studies have found that this asymmetry is a robust feature of multivariate stock return series as well, and several multivariate volatility models have been developed to capture it. Another important property characterizing the dynamic evolution of volatilities is that squared returns have significant autocorrelations that decay to zero at a slow rate, consistent with the notion of long memory, where the auto-covariances are not absolutely summable. Univariate long-memory volatility models have received a great deal of attention; however, the generalization to a multivariate long-memory volatility model has not been attempted in the literature. Chapter 3 offers a detailed literature review of multivariate volatility models. Chapters 4 and 5 introduce a new multivariate exponential volatility (MEV) model which captures long-range dependence in the volatilities, while retaining the martingale difference assumption and short-memory dependence in mean. Moreover, the model captures cross-asset spillover effects, leverage and asymmetry. The strong consistency and asymptotic normality of the Whittle estimator of the parameters in the multivariate exponential volatility model are established under a variety of parameterizations. The results cover both exponentially and hyperbolically decaying coefficients, allowing for different degrees of persistence of shocks to the conditional variances. It is shown that the rate of convergence and the asymptotic normality of the Whittle estimates do not depend on the degree of persistence implied by the parameterization, as the Whittle function automatically compensates for the possible lack of square integrability of the model's spectral density.
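To illustrate the estimator itself, here is a concentrated Whittle fit for a univariate AR(1) spectral shape — a stand-in sketch, not the multivariate exponential volatility model of the thesis. The periodogram is matched to the candidate spectral density across the Fourier frequencies, with the innovation variance profiled out.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_ar1(x):
    """Concentrated Whittle estimate of phi in an AR(1) model."""
    T = len(x)
    # Periodogram at the Fourier frequencies lambda_j = 2*pi*j/T, j = 1..floor((T-1)/2)
    m = (T - 1) // 2
    lam = 2 * np.pi * np.arange(1, m + 1) / T
    I = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2 * np.pi * T)

    def objective(phi):
        g = 1.0 / np.abs(1 - phi * np.exp(-1j * lam)) ** 2  # AR(1) spectral shape
        # Whittle objective with the scale parameter concentrated out
        return np.log(np.mean(I / g)) + np.mean(np.log(g))

    return minimize_scalar(objective, bounds=(-0.99, 0.99), method="bounded").x

rng = np.random.default_rng(4)
T, phi = 2000, 0.6
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.standard_normal()
print(whittle_ar1(x))   # should be close to 0.6
```

The multivariate analogue replaces the scalar spectral density by a matrix-valued one, but the principle — minimising a frequency-domain approximation to the Gaussian likelihood — is the same, which is why the degree of memory enters only through the shape of the spectral density.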
26

Analysis of multivariate longitudinal categorical data subject to nonrandom missingness : a latent variable approach

Hafez, Mai January 2015 (has links)
Longitudinal data are collected for studying changes across time. In the social sciences, interest is often in theoretical constructs, such as attitudes, behaviour or abilities, which cannot be directly measured. In that case, multiple related manifest (observed) variables, for example survey questions or items in an ability test, are used as indicators for the constructs, which are themselves treated as latent (unobserved) variables. In this thesis, multivariate longitudinal data are considered where multiple observed variables, measured at each time point, are used as indicators for theoretical constructs (latent variables) of interest. The observed items and the latent variables are linked together via statistical latent variable models. A common problem in longitudinal studies is missing data, where missingness can be classified into one of two forms: dropout occurs when subjects exit the study prematurely, while intermittent missingness takes place when subjects miss one or more occasions but show up on a subsequent wave of the study. Ignoring the missingness mechanism can lead to biased estimates, especially when the missingness is nonrandom. The approach proposed in this thesis uses latent variable models to capture the evolution of a latent phenomenon over time, while incorporating a missingness mechanism to account for possibly nonrandom forms of missingness. Two model specifications are presented: the first incorporates dropout only in the missingness mechanism, while the other accounts for both dropout and intermittent missingness, allowing them to be informative by being modelled as functions of the latent variables and possibly of observed covariates. The models developed in this thesis consider ordinal and binary observed items, because such variables are often met in social surveys, while the underlying latent variables are assumed to be continuous. The proposed models are illustrated by analysing people's perceptions of women's work using three questions from five waves of the British Household Panel Survey.
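A small data-generating sketch may help fix ideas: a continuous latent variable evolves over waves, ordinal items are generated from it through thresholds, and dropout is made informative by depending on the current latent value. All loadings, thresholds and the dropout model below are illustrative assumptions, not the thesis's fitted specifications.

```python
import numpy as np

rng = np.random.default_rng(5)
n, waves = 500, 5
loadings = np.array([1.0, 0.8, 1.2])       # one loading per ordinal item
thresholds = np.array([-1.0, 0.0, 1.0])    # cut-points giving 4 ordered categories

# Latent attitude evolving as an AR(1) across waves
z = np.zeros((n, waves))
z[:, 0] = rng.standard_normal(n)
for t in range(1, waves):
    z[:, t] = 0.7 * z[:, t - 1] + 0.5 * rng.standard_normal(n)

# Ordinal items: observed category = number of thresholds the latent response exceeds
items = len(loadings)
y = np.zeros((n, waves, items), dtype=int)
for j in range(items):
    ystar = loadings[j] * z + rng.standard_normal((n, waves))
    y[:, :, j] = (ystar[:, :, None] > thresholds).sum(axis=2)

# Nonrandom (monotone) dropout: leaving the study is more likely when the latent score is low
dropped = np.zeros(n, dtype=bool)
for t in range(1, waves):
    p_drop = 1.0 / (1.0 + np.exp(2.0 + z[:, t]))   # logistic in z_t
    dropped |= rng.random(n) < p_drop
    y[dropped, t, :] = -1                          # -1 marks a missing response
print("still observed at final wave:", np.mean(~dropped))
```

Fitting a model that ignores this mechanism would overstate the positivity of the population at later waves, since the subjects with low latent scores are precisely the ones who leave — the bias the thesis's joint models are designed to avoid.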
27

Brownian excursions in mathematical finance

Zhang, You You January 2014 (has links)
The Brownian excursion is defined as a standard Brownian motion conditioned to start and end at zero and to stay positive in between. The first part of the thesis deals with functionals of the Brownian excursion, including the first hitting time, last passage time, maximum and the time at which it is achieved. Our original contribution to knowledge is the derivation of the joint probability of the maximum and the time at which it is achieved. We include a financial application of our probabilistic results to the Parisian default risk of zero-coupon bonds. In the second part of the thesis the Parisian, occupation and local times of a drifted Brownian motion are considered, using a two-state semi-Markov process. New versions of Parisian options are introduced based on the probabilistic results, and explicit formulae for their prices are presented in the form of Laplace transforms. The main focus of the last part of the thesis is the joint probability of the Parisian and hitting times of Brownian motion. The difficulty here lies in distinguishing between different scenarios for the sample path. Results are achieved by the use of infinitesimal generators on perturbed Brownian motion and are applied to innovative equity exotics that generalize barrier and Parisian options, with the advantage of being highly adaptable to investors' beliefs about the market.
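The object itself is easy to approximate numerically. The following rejection sampler — a crude sketch on a discrete grid, with grid size and sample count chosen arbitrarily — draws Brownian bridges, keeps those that stay positive, and inspects exactly the pair studied in the thesis: the maximum and the time at which it is achieved.

```python
import numpy as np

def excursion_max(n_steps=100, n_samples=1000, seed=6):
    """Rejection sampler for the Brownian excursion on [0,1]: draw Brownian
    bridges and keep those whose interior stays positive."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    t = np.linspace(0.0, 1.0, n_steps + 1)
    maxima, argmax_times = [], []
    while len(maxima) < n_samples:
        incr = rng.standard_normal(n_steps) * np.sqrt(dt)
        w = np.concatenate(([0.0], np.cumsum(incr)))
        bridge = w - t * w[-1]            # Brownian bridge from 0 to 0
        if np.all(bridge[1:-1] > 0):      # positivity of the interior = excursion
            maxima.append(bridge.max())
            argmax_times.append(t[bridge.argmax()])
    return np.array(maxima), np.array(argmax_times)

m, tau = excursion_max()
# Theory: E[max] = sqrt(pi/2) ≈ 1.2533 (discretisation biases this down slightly),
# and the time of the maximum has mean 1/2 by the excursion's time-reversal symmetry.
print("E[max] ≈", m.mean(), " E[argmax] ≈", tau.mean())
```

The joint law of (maximum, argmax) derived in the thesis is what such a simulation estimates empirically; the closed form makes the Parisian bond application tractable without Monte Carlo.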
28

Study of new models for insider trading and impulse control

Shi, Pucheng January 2013 (has links)
This thesis presents the development and study of two stochastic models. The first is an equilibrium model for a market involving risk-averse insider trading. In particular, the static information model is considered under new assumptions: a) the insider is risk-averse, b) the signal received by the insider is not necessarily Gaussian, and c) the price set by the market maker is a function of a weighted signal that is not necessarily Gaussian either. Conditions on the weighting and pricing functions ensuring the existence of equilibrium are discussed. Equilibrium pricing and weighting functions, as well as the insider's optimal trading strategy, are derived, and the influence of risk aversion on the equilibrium outcome is investigated. For the second model, we derive the explicit solution to an impulse control problem with non-linear penalisation of control expenditure. This solution has several features that are not present in impulse control problems with affine penalisation of control effort. The first is the state dependence of the free boundaries characterising the optimal strategy. Further such features are the possibility that the so-called continuation region is not an interval and that the optimal strategy involves multiple simultaneous jumps even though the problem data are convex.
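As a point of reference for the first model, the classical risk-neutral Gaussian static insider model (the Kyle (1985) benchmark, which I am assuming is the baseline being relaxed here) has a closed-form linear equilibrium. This sketch simulates it and checks the insider's expected profit; all parameter values are illustrative.

```python
import numpy as np

# Risk-neutral Gaussian benchmark (Kyle, 1985): the insider submits
# x = beta*(v - p0); the market maker sets p = p0 + lam*(x + u).
rng = np.random.default_rng(7)
p0, sigma_v, sigma_u, n = 0.0, 1.0, 1.0, 200_000

beta = sigma_u / sigma_v        # insider's equilibrium trading intensity
lam = sigma_v / (2 * sigma_u)   # market maker's price-impact coefficient (Kyle's lambda)

v = p0 + sigma_v * rng.standard_normal(n)   # liquidation value, known to the insider
u = sigma_u * rng.standard_normal(n)        # noise-trader order flow
x = beta * (v - p0)                         # insider's order
p = p0 + lam * (x + u)                      # price set from total order flow

# In equilibrium the insider's expected profit is sigma_v*sigma_u/2 = 0.5 here
print("mean insider profit:", np.mean(x * (v - p)))
```

Risk aversion and non-Gaussian signals break the linearity that makes this benchmark so clean, which is why the thesis must instead characterise the weighting and pricing functions through existence conditions.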
29

Investment-consumption model with infinite transaction costs

Zhu, Yedi January 2014 (has links)
This thesis considers optimal intertemporal consumption and investment problems in which the transaction costs on purchases of the risky asset are infinite. Equivalently, the problems can be classified as (infinitely divisible) asset sale problems with the restriction that the asset cannot be (re-)purchased. We first present the classical Merton [41] model, which comprises an agent with constant relative risk aversion (CRRA) who wishes to maximise the expected utility of consumption over an infinite horizon. Further, we introduce the extension of the single-asset Merton model to proportional transaction costs due to Davis and Norman [13]. After discussing two preliminary optimal consumption and asset-sale problems, we consider the special case of the Davis and Norman model in which the transaction costs on purchases are infinite: effectively, the asset cannot be purchased but only sold. We provide a complete and thorough analysis of the problem, with rigorous proofs, via a new solution technique which reduces the problem to a first-crossing problem. Based on this technique, we conduct comparative statics to analyse the optimal strategies and the indifference price, especially their dependence on the model parameters. Some surprising results are found and discussed further. We then consider the optimal consumption and investment problem with multiple risky assets and infinite transaction costs. We make significant progress towards an analytical solution and completely characterise the different possible behaviours of the agent by understanding the existence and finiteness of a first-crossing problem. The monotonicity of the indifference price in the model parameters is proved and comparative statics are conducted.
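For reference, the classical frictionless Merton solution mentioned above is fully explicit for CRRA utility. A minimal sketch with illustrative parameters (the reference numbers [41] and [13] are from the thesis's bibliography):

```python
def merton_solution(mu, r, sigma, R, rho):
    """Classical frictionless Merton solution for CRRA utility with relative
    risk aversion R and discount rate rho: returns the constant fraction of
    wealth held in the risky asset and the consumption-to-wealth rate."""
    pi_star = (mu - r) / (R * sigma ** 2)   # the Merton ratio
    # Optimal consumption-to-wealth ratio; must be positive for well-posedness
    m = (rho - (1 - R) * (r + (mu - r) ** 2 / (2 * R * sigma ** 2))) / R
    if m <= 0:
        raise ValueError("parameters give an ill-posed (infinite-value) problem")
    return pi_star, m

pi_star, m = merton_solution(mu=0.08, r=0.02, sigma=0.2, R=2.0, rho=0.05)
print(f"risky fraction: {pi_star:.2f}, consume {m:.2%} of wealth per year")
```

With transaction costs the constant-fraction policy is no longer optimal; with infinite costs on purchases, as studied here, the agent can only let the risky position run down through sales, which is what makes the first-crossing reformulation natural.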
30

Optimal static and sequential design : a critical review

Ford, Ian January 1976 (has links)
The aim of this thesis is to review and augment the theory and methods of optimal experimental design. In Chapter 1 the scene is set by considering the possible aims of an experimenter prior to an experiment, the statistical methods one might use to achieve those aims, and how experimental design might aid this procedure. It is indicated that, given a criterion for design, a priori optimal design will only be possible in certain instances and that, otherwise, some form of sequential procedure would seem to be indicated. In Chapter 2 an exact experimental design problem is formulated mathematically and compared with its continuous analogue. Motivation is provided for the solution of this continuous problem, and the remainder of the chapter concerns it. A necessary and sufficient condition for the optimality of a design measure is given. Problems which might arise in testing this condition are discussed, in particular with respect to possible non-differentiability of the criterion function at the design being tested. Several examples are given of optimal designs which may be found analytically and which illustrate the points discussed earlier in the chapter. In Chapter 3 numerical methods for solving the continuous optimal design problem are reviewed. A new algorithm is presented, with illustrations of how it should be used in practice. It is shown that, for reasonably large sample sizes, continuously optimal designs may be approximated well by an exact design. For situations where this is not satisfactory, algorithms for improving the design are reviewed. Chapter 4 consists of a discussion of sequentially designed experiments, with regard both to the underlying philosophies and to the application of the methods of statistical inference. In Chapter 5 we constructively criticise previous suggestions for fully sequential design procedures. Alternative suggestions are made, along with conjectures as to how these might improve performance. Chapter 6 presents a simulation study whose aim is to investigate the conjectures of Chapter 5; the results provide empirical support for these conjectures. In Chapter 7 examples are analysed. These suggest aids to sequential experimentation by means of reducing the dimension of the design space, and the possibility of experimenting semi-sequentially. Further examples are considered which stress the importance of using prior information in situations of this type. Finally we consider the design of experiments when semi-sequential experimentation is mandatory because batches of observations must be taken at the same time. In Chapter 8 we look at some of the assumptions which have been made and indicate what may go wrong when these assumptions no longer hold.
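Many of the continuous-design ideas of Chapters 2 and 3 can be illustrated in a few lines. The sketch below runs the standard multiplicative algorithm — a generic method, not necessarily the new algorithm of Chapter 3 — for a D-optimal design for quadratic regression on [-1, 1], and checks the equivalence-theorem condition that the variance function d(x, ξ) is bounded by the number of parameters p at the optimum.

```python
import numpy as np

def d_optimal_weights(F, n_iter=1000):
    """Multiplicative algorithm for a D-optimal continuous design measure:
    F is the (n_points x p) matrix of regression vectors f(x_i)."""
    n, p = F.shape
    w = np.full(n, 1.0 / n)                     # start from the uniform design
    for _ in range(n_iter):
        M = F.T @ (w[:, None] * F)              # information matrix M(xi)
        d = np.einsum("ij,jk,ik->i", F, np.linalg.inv(M), F)  # variance function d(x, xi)
        w = w * d / p                           # update; weights remain normalised
    return w, d

# Quadratic regression f(x) = (1, x, x^2) on a grid over [-1, 1]
x = np.linspace(-1.0, 1.0, 201)
F = np.column_stack([np.ones_like(x), x, x ** 2])
w, d = d_optimal_weights(F)
print("support:", x[w > 0.01])   # mass concentrates near {-1, 0, 1}, weight 1/3 each
print("max d(x):", d.max())      # equivalence theorem: at most p = 3 at optimality
```

The necessary and sufficient optimality condition discussed in Chapter 2 is exactly the bound max over x of d(x, ξ) ≤ p for D-optimality, so the final print line serves as a convergence check for the algorithm.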
