21

Sampling properties of tests for goodness-of-fit, normality and stationarity of time series

Chanda, K. C. January 1958
The last three decades have seen many developments towards the solution of various problems of statistical inference concerning stochastic processes. The formulation of inference procedures when the sampled observations are no longer independent, and hence the classical Fisher-Neyman-Pearson-Wald theory does not apply, has been found to be rather difficult. As a consequence, large-sample procedures have been suggested, and a parallel development in probability limit theorems has made it possible to establish various useful asymptotic properties of these inference procedures, with special applications to stationary stochastic processes.
22

Analysis of clustered data when the cluster size is informative

Pavlou, M. January 2012
Clustered data arise in many scenarios. We may wish to fit a marginal regression model relating outcome measurements to covariates for cluster members. Often the cluster size (the number of members) varies. Informative cluster size (ICS) has been defined to arise when the outcome depends on the cluster size conditional on covariates. If the clusters are considered complete, then the population of all cluster members and the population of typical cluster members have been proposed as suitable targets for inference; under ICS, inference differs between these two populations. However, if the variation in cluster size arises from missing data, then the clusters are considered incomplete and we seek inference for the population of all members of all complete clusters. We define informative covariate structure to arise when, for a particular member, the outcome is related to the covariates for other members in the cluster, conditional on the covariates for that member and the cluster size. In this case the proposed populations for inference may be inappropriate and, just as under ICS, standard estimation methods are unsuitable. We propose two further populations and weighted independence estimating equations (WIEE) for estimation. An adaptation of GEE was previously proposed to provide inference for the population of typical cluster members and to increase efficiency, relative to WIEE, by incorporating the intra-cluster correlation. We propose an alternative adaptation which can provide superior efficiency. For each adaptation we explain how bias can arise; this bias was not clearly described when the first adaptation was originally proposed. Several authors have vaguely related ICS to the violation of the 'missing completely at random' assumption. We investigate which missing data mechanisms can cause ICS, which might lead to similar inference for the populations of typical cluster members and all members of all complete clusters, and we discuss the implications for estimation.
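For orientation, a standard form of the inverse-cluster-size-weighted independence estimating equation often used to target the population of typical cluster members (the notation here is generic, not necessarily the thesis's own) is

\[ \sum_{i=1}^{N} \frac{1}{n_i} \sum_{j=1}^{n_i} \frac{\partial \mu_{ij}(\beta)}{\partial \beta}\, v_{ij}^{-1} \{ Y_{ij} - \mu_{ij}(\beta) \} = 0 , \]

where n_i is the size of cluster i, \mu_{ij}(\beta) is the marginal mean of Y_{ij} given that member's covariates, and v_{ij} is a working variance. The 1/n_i weights give each cluster, rather than each member, equal influence; setting the weights to 1 instead targets the population of all cluster members.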
23

Investment decision making under uncertainty: the impact of risk aversion, operational flexibility, and competition

Chronopoulos, M. January 2011
Traditional real options analysis addresses investment under uncertainty assuming a risk-neutral decision maker and complete markets. In reality, however, decision makers are often risk averse and markets are incomplete. Additionally, capital projects are seldom now-or-never investments and can be abandoned, suspended, and resumed at any time. In this thesis, we develop a utility-based framework in order to examine the impact of operational flexibility, via suspension and resumption options, on optimal investment policies and option values. Assuming a risk-averse decision maker with perpetual options to suspend and resume a project costlessly, we confirm that risk aversion lowers the probability of investment and demonstrate how this effect can be mitigated by incorporating operational flexibility. Also, we illustrate how increased risk aversion may facilitate the abandonment of a project while delaying its temporary suspension prior to permanent resumption. Besides timing, a firm may have the freedom to scale the investment's installed capacity. We extend the traditional real options approach to investment under uncertainty with discretion over capacity by allowing for a constant relative risk aversion utility function and operational flexibility in the form of suspension and resumption options. We find that, with the option to delay investment, increased risk aversion facilitates investment and decreases the required investment threshold price by reducing the amount of installed capacity. We explore strategic aspects of decision making under uncertainty by examining how duopolistic competition affects the entry decisions of risk-averse investors. Depending on the discrepancy between the market shares of the leader and the follower, greater uncertainty may increase or decrease the discrepancy in the non-pre-emptive leader's relative value. Furthermore, risk aversion does not affect the loss in the leader's value in the pre-emptive duopoly setting, but the relative loss in value is smaller for the leader in a non-pre-emptive duopoly setting.
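For reference, the constant relative risk aversion utility function referred to above has the standard form

\[ U(x) = \frac{x^{1-\gamma}}{1-\gamma} \quad (\gamma \neq 1), \qquad U(x) = \ln x \quad (\gamma = 1), \]

where \gamma \geq 0 is the coefficient of relative risk aversion; \gamma = 0 recovers the risk-neutral case, and larger \gamma corresponds to greater risk aversion.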
24

Investigating whether the Johns Hopkins ACG case-mix system explains variation in UK general practice

O'Sullivan, C. January 2011
This thesis describes the first large-scale studies in the United Kingdom to adjust for diagnostic-based morbidity when examining variation in home visits, specialist referrals and prescribing patterns in general practice. The Johns Hopkins ACG Case-Mix System was used since each patient's overall morbidity is a better predictor of health service resource use than individual diseases. A literature review showed large variations in resource use measures such as consultations, referrals and prescribing patterns in general practice, both in the UK and elsewhere, and highlighted inappropriate use of statistical methodology with the potential to produce misleading and erroneous conclusions. The review presents a strong argument for adjusting for diagnostic-based morbidity when comparing variation in general practice outcomes in the UK. Multilevel models were used to take account of clustering within general practices and to partition variation in general practice outcomes into between-practice and within-practice variation. Statistical measures for appropriately dealing with the challenging methodological issues were explored, with the aim of producing results that could be more easily communicated to policy makers, clinicians and other healthcare professionals. The datasets used contained detailed patient demographic, social class and diagnostic information from the Morbidity Statistics in General Practice Survey and the General Practice Research Database. This research shows that a combination of measures is required to quantify the effect of model covariates on variability between practices. Compared to age and sex, morbidity explains a small proportion of the total variation between general practices for the home visit and referral outcomes but substantially more for the prescribing outcome. Most of the variation was within rather than between practices.
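A minimal sketch of the kind of two-level model described here, with a practice-level random intercept (the exact specifications used in the thesis will differ), is

\[ \operatorname{logit} \Pr(Y_{ij} = 1) = \beta_0 + \beta^{\top} x_{ij} + u_j, \qquad u_j \sim N(0, \sigma_u^2), \]

where i indexes patients and j practices. For a binary outcome, the variance partition coefficient \sigma_u^2 / (\sigma_u^2 + \pi^2/3) is one common summary of the proportion of total variation lying between practices.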
25

Single-site point process-based rainfall models in a nonstationary climate

Kaczmarska, J. M. January 2013
Long series of simulated rainfall are required at point locations for a range of applications, including hydrological studies. Clustered point-process based rainfall models have been used for generating such simulations for many decades. One of their main advantages is the fact that they generate simulations in continuous time, allowing aggregation to different timescales in a consistent way, and such models generally perform well in representing rainfall at hourly to daily timescales. An important disadvantage, however, is their stationarity. Although seasonality can be allowed for by fitting separate models for each calendar month or season, the models are unsuitable in their basic form for climate impact studies. In this thesis we develop new methodology to address this limitation. We extend the current fitting approach by replacing the discrete covariate, calendar month, with continuous covariates which are more directly related to the incidence and nature of rainfall. The covariate-dependent model parameters are estimated for each time interval using a kernel-based nonparametric approach within a Generalised Method of Moments framework. An empirical study using the new methodology is undertaken using a time series of five-minute rainfall data. In addition to addressing the need for temporal non-stationarity, which is our main focus, we also carry out a systematic comparison of a number of key variants of the basic model, in order to identify which features are required for an optimal fit at sub-hourly resolution. This generates some new insights into the models, leading to the development of a new model extension, which introduces dependence between rainfall intensity and duration in a simple way. The new model retains the “rectangular pulses” (i.e. rain cells with a constant intensity) of the original clustered point-process model, which had previously been considered inappropriate for fine-scale data, obviating the need for a computationally more intensive “instantaneous pulse” model.
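To convey the flavour of a kernel-weighted Generalised Method of Moments fit of the kind described above (the notation is illustrative, not the thesis's own), the parameters at covariate value x could be estimated as

\[ \hat{\theta}(x) = \arg\min_{\theta} \sum_{m} w_m \{ \hat{T}_m(x) - \tau_m(\theta) \}^2, \qquad \hat{T}_m(x) = \frac{\sum_i K_h(x_i - x)\, T_m(y_i)}{\sum_i K_h(x_i - x)}, \]

where T_m(y_i) are sample moments of the aggregated rainfall in interval i (e.g. mean, variance, autocorrelation), \tau_m(\theta) their model counterparts, x_i the covariate value for that interval, and K_h a kernel with bandwidth h.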
26

Hybrid algorithms for efficient Cholesky decomposition and matrix inverse using multicore CPUs with GPU accelerators

Macindoe, G. I. January 2013
The use of linear algebra routines is fundamental to many areas of computational science, yet their implementation in software still forms the main computational bottleneck in many widely used algorithms. In machine learning and computational statistics, for example, the use of Gaussian distributions is ubiquitous, and routines for calculating the Cholesky decomposition, matrix inverse and matrix determinant must often be called many thousands of times by common algorithms such as Markov chain Monte Carlo. These linear algebra routines consume most of the total computational time of a wide range of statistical methods, and any improvements in this area will therefore greatly increase the overall efficiency of algorithms used in many scientific application areas. The importance of linear algebra algorithms is clear from the substantial effort that has been invested over the last 25 years in producing low-level software libraries such as LAPACK, which generally optimise these linear algebra routines by breaking a large problem into smaller problems that may be computed independently. The performance of such libraries is, however, strongly dependent on the specific hardware available. LAPACK was originally developed for single-core processors with a memory hierarchy, whereas modern computers often consist of mixed architectures, with large numbers of parallel cores and graphics processing units (GPUs) being used alongside traditional CPUs. The challenge lies in making optimal use of these different types of computing units, which generally have very different processor speeds and types of memory. In this thesis we develop novel low-level algorithms that may be generally employed in blocked linear algebra routines and that automatically optimise themselves to take full advantage of the variety of heterogeneous architectures that may be available. We present a comparison of our methods with MAGMA, the state-of-the-art open source implementation of LAPACK designed specifically for hybrid architectures, and demonstrate increases in speed of up to 400% using our novel algorithms, specifically when running the commonly used Cholesky decomposition, matrix inverse and matrix determinant routines.
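To make the blocking idea concrete, below is a minimal serial sketch of a right-looking blocked Cholesky factorisation in Python/NumPy. In a hybrid implementation of the kind described above, the panel solves and trailing-matrix updates would be scheduled across CPU cores and GPUs rather than executed in sequence; the function name and block size here are purely illustrative.

```python
import numpy as np

def blocked_cholesky(A, block_size=64):
    """Right-looking blocked Cholesky: returns lower-triangular L with A = L @ L.T.

    A must be symmetric positive definite. Each iteration factors a diagonal
    block, solves a triangular system for the panel below it, and applies a
    symmetric rank-k update to the trailing submatrix -- the three kernels
    that a hybrid CPU/GPU scheduler would distribute across devices.
    """
    n = A.shape[0]
    L = np.tril(A).astype(float)          # work on the lower triangle only
    for k in range(0, n, block_size):
        end = min(k + block_size, n)
        # 1. Factor the diagonal block (LAPACK potrf under the hood).
        L[k:end, k:end] = np.linalg.cholesky(L[k:end, k:end])
        if end < n:
            # 2. Triangular solve for the panel: L_panel = A_panel * L_kk^{-T}.
            L[end:, k:end] = np.linalg.solve(L[k:end, k:end],
                                             L[end:, k:end].T).T
            # 3. Symmetric rank-k update of the trailing submatrix.
            L[end:, end:] -= L[end:, k:end] @ L[end:, k:end].T
    return np.tril(L)

# Quick check on a random symmetric positive-definite matrix:
# X = np.random.randn(500, 500); A = X @ X.T + 500 * np.eye(500)
# assert np.allclose(blocked_cholesky(A), np.linalg.cholesky(A))
```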
27

Stochastic models for head lice infections

Stone, P. M. January 2010
Outbreaks of head lice are a persistent problem in schools in the UK and elsewhere, and it is widely reported that the prevalence of head lice infections is increasing, especially since the 1990s. Research has largely focused on clinical trials of insecticidal treatments. Our research aims to construct stochastic models for the infection process that allow the investigation of typical properties of outbreaks of infection, and that might assist in examining the effectiveness of alternative strategies in controlling the spread of infection. We investigate the dynamics of head lice infections in schools, by considering models for endemic infection based on a stochastic SIS (susceptible-infected-susceptible) epidemic model. Firstly we consider the SIS model with the addition of an external source of infection, and deduce a range of properties of the model relating to a single outbreak of infection. We use the stationary distribution of the number of infected individuals, in conjunction with data from a recent study carried out in Welsh schools on the prevalence of head lice infections, to obtain estimates of the model parameters and thus to arrive at numerical estimates for various quantities of interest, such as the mean length of an outbreak. Secondly, we consider the structured nature of the school population, namely its division into classes, and examine the effect of this population structure on the various properties of an outbreak of head lice infection. Estimation of the parameters in a structured model presents certain challenges, due to the complexity of the model and the potentially enormous number of states in the Markov chain. We examine the feasibility of finding reasonable estimates for the parameters in the full structured model (for example, that of a population of seven classes within a school), by considering simpler versions which utilise only subsets or pooled versions of the data.
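As a minimal sketch of how an SIS model with an external source of infection can be simulated exactly (Gillespie-style), assuming a simple parameterisation that is illustrative rather than the one fitted in the thesis:

```python
import numpy as np

def simulate_sis_external(N=30, beta=0.5, gamma=0.3, eps=0.01,
                          t_max=1000.0, I0=1, seed=None):
    """Gillespie simulation of an SIS epidemic with external infection.

    Illustrative rates:
      infection: (beta * I / N + eps) * (N - I)   # within-school + external
      recovery : gamma * I
    The external rate eps prevents absorption at I = 0, so a stationary
    distribution for the number infected exists.
    """
    rng = np.random.default_rng(seed)
    t, I = 0.0, I0
    times, infected = [t], [I]
    while t < t_max:
        rate_inf = (beta * I / N + eps) * (N - I)
        rate_rec = gamma * I
        total = rate_inf + rate_rec
        if total == 0.0:          # only possible if eps == 0 and I == 0
            break
        t += rng.exponential(1.0 / total)
        I += 1 if rng.random() < rate_inf / total else -1
        times.append(t)
        infected.append(I)
    return np.array(times), np.array(infected)
```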
28

Weighted composite likelihoods

Harden, S. J. January 2013
For analysing complex multivariate data, the use of composite surrogates is a well-established tool. Composite surrogates involve the creation of a surrogate likelihood that is the product of low-dimensional margins of a complex model, and they result in acceptable parameter estimators that are relatively inexpensive to calculate. Some work has taken place in adjusting these composite surrogates to restore desirable features of the data generating mechanism, but the adjustments are not specific to the composite world: they could be applied to any surrogate. An issue that has received less attention is the determination of the weights to be attached to each marginal component of a composite surrogate. This issue is the main focus of this thesis. We propose a weighting scheme derived analytically by minimising the Kullback-Leibler divergence (KLD) between the data generating mechanism and the composite surrogate, treating the latter as a bona fide density, which requires consideration of a normalising constant (a feature that is usually ignored). We demonstrate the effect of these weights in a simulation study. We also derive an explicit formulation for the weights when the composite components are multivariate normal and, in certain cases, show how they can be used to restore the original data generating mechanism.
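For orientation, a weighted composite likelihood built from low-dimensional margins is typically written as

\[ L_C(\theta; y) = \prod_{k=1}^{K} f_k(y \in A_k; \theta)^{w_k}, \]

with non-negative weights w_k attached to the K marginal components; the approach described above chooses the w_k by minimising the Kullback-Leibler divergence between the data generating mechanism and the normalised composite surrogate.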
29

Time-series and real options analysis of energy markets

Heydari, B. S. January 2010
After the deregulation of electricity industries on the premise of increasing economic efficiency, market participants have been exposed to financial risks due to uncertain energy prices. Using time-series analysis and the real options approach, we focus on modelling energy prices and optimal decision-making in energy projects. Since energy prices are highly volatile with unexpected spikes, capturing this feature in reduced-form models leads to more informed decision-making in energy investments. In this thesis, non-linear regime-switching models and models with mean-reverting stochastic volatility are compared with ordinary linear models. Our numerical examples suggest that with the aim of valuing a gas-fired power plant, non-linear models with stochastic volatility, specifically for logarithms of electricity prices, provide better out-of-sample forecasts. Among a comprehensive scope of mitigation measures for climate change, CO2 capture and sequestration (CCS) plays a potentially significant role in industrialised countries. Taking the perspective of a coal-fired power plant owner that may decide to invest in either full CCS or partial CCS retrofits given uncertain electricity, CO2, and coal prices, we develop an analytical real options model that values the choice between the two technologies. Our numerical examples show that neither retrofit is optimal immediately, and the optimal stopping boundaries are highly sensitive to CO2 price volatility. Taking the perspective of a load-serving entity (LSE), on the other hand, we value a multiple-exercise interruptible load contract that allows the LSE to curtail electricity provision to a representative consumer multiple times for a specified duration at a defined capacity payment given uncertain wholesale electricity price. Our numerical examples suggest that interruption is desirable at relatively high electricity prices and that uncertainty favours a delay in interrupting. Moreover, we show that a deterministic approximation captures most of the value of the interruptible load contract if the volatility is low and the exercise constraints are not too severe.
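One generic form of a mean-reverting model with mean-reverting stochastic volatility for the logarithm of the electricity price X_t, of the kind compared above (the exact specifications in the thesis may differ), is

\[ dX_t = \kappa(\mu - X_t)\,dt + \sigma_t\, dW_t^{(1)}, \qquad d\ln\sigma_t^2 = \alpha(m - \ln\sigma_t^2)\,dt + \xi\, dW_t^{(2)}, \]

where \kappa and \alpha are mean-reversion speeds and the two Brownian motions may be correlated; regime-switching variants instead let the drift and volatility parameters switch according to a latent Markov chain.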
30

Bayesian nonparametric clustering based on Dirichlet processes

Murugiah, S. January 2010
Following a review of some traditional methods of clustering, we describe the Bayesian nonparametric framework for modelling object attribute differences. We focus on Dirichlet Process (DP) mixture models, in which the observed clusters in any particular data set are not viewed as belonging to a fixed set of clusters but rather as representatives of a latent structure in which clusters belong to one of a potentially infinite number of clusters. As more information about attribute differences is revealed, the number of inferred clusters is allowed to grow. We begin by studying DP mixture models for normal data and show how to adapt one of the most widely used conditional methods for computation to improve sampling efficiency. This scheme is then generalized, followed by an application to discrete data. The DP's dispersion parameter is critical in controlling the number of clusters. We propose a framework for the specification of the hyperparameters for this parameter, using a percentile-based method. This research was motivated by the analysis of product trials at the magazine Which?, where brand attributes are usually assessed on a 5-point preference scale by experts or by a random selection of Which? subscribers. We conclude with a simulation study, where we replicate some of the standard trials at Which? and compare the performance of our DP mixture models against various other popular frequentist and Bayesian multiple comparison routines adapted for clustering.
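To illustrate how the DP's dispersion (concentration) parameter governs the number of clusters, here is a minimal sketch that draws cluster labels from the Chinese restaurant process representation of the DP; the function name and defaults are illustrative.

```python
import numpy as np

def crp_labels(n, alpha, seed=None):
    """Draw cluster labels for n items from a Chinese restaurant process
    with concentration (dispersion) parameter alpha."""
    rng = np.random.default_rng(seed)
    labels, counts = [0], [1]          # the first item starts cluster 0
    for _ in range(1, n):
        # Join existing cluster k with probability proportional to its size,
        # or open a new cluster with probability proportional to alpha.
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)
        else:
            counts[k] += 1
        labels.append(k)
    return np.array(labels)

# The expected number of clusters grows roughly like alpha * log(1 + n / alpha),
# so larger alpha produces more (and smaller) clusters on average.
```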
