31

Investment-consumption model with infinite transaction costs

Zhu, Yedi January 2014 (has links)
This thesis considers optimal intertemporal consumption and investment problems in which the transaction costs on purchases of the risky asset are infinite. Equivalently, the problems can be classified as (infinitely divisible) asset sale problems with the restriction that the asset cannot be (re)purchased. We first present the classical Merton [41] model, which comprises an agent with constant relative risk aversion (CRRA) who wishes to maximise the expected utility of consumption over an infinite horizon. We then introduce the extension of the single-asset Merton model with proportional transaction costs by Davis and Norman [13]. After discussing two preliminary optimal consumption and asset sale problems, we consider the special case of the Davis and Norman model in which the transaction costs on purchases are infinite. Effectively, the asset cannot be purchased but only sold. We provide a complete and thorough analysis of the problem, with rigorous proofs, via a new solution technique which reduces the problem to a first crossing problem. Based on this technique, we conduct comparative statics to analyse the optimal strategies and the indifference price, in particular their dependence on model parameters. Some surprising results are found and discussed further. We then consider the optimal consumption and investment problem with multiple risky assets and infinite transaction costs. We make significant progress towards an analytical solution and completely characterise the different possible behaviours of the agent by understanding the existence and finiteness of a first crossing problem. The monotonicity of the indifference price in model parameters is proved and a comparative statics analysis is conducted.
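For orientation, the classical Merton benchmark that the abstract starts from has a closed-form solution: with CRRA utility, no transaction costs and an infinite horizon, the optimal risky-asset weight and the consumption-to-wealth ratio are constants. The sketch below is a minimal illustration of that benchmark only (not the thesis's infinite-transaction-cost model); all parameter values are assumptions chosen for the example.

```python
# Illustrative sketch (not from the thesis): the classical Merton solution
# with CRRA utility and no transaction costs, for comparison with the
# infinite-transaction-cost case discussed in the abstract.

def merton_policy(mu, r, sigma, gamma, rho):
    """Return (optimal risky fraction, optimal consumption/wealth ratio).

    mu, r  : risky drift and risk-free rate
    sigma  : risky-asset volatility
    gamma  : coefficient of relative risk aversion (gamma != 1)
    rho    : subjective discount rate
    """
    pi_star = (mu - r) / (gamma * sigma ** 2)  # constant portfolio weight
    # Consumption-to-wealth ratio for the infinite-horizon CRRA problem
    c_star = (rho - (1 - gamma) * (r + (mu - r) ** 2 / (2 * gamma * sigma ** 2))) / gamma
    return pi_star, c_star

# Example with assumed parameters
print(merton_policy(mu=0.08, r=0.03, sigma=0.2, gamma=2.0, rho=0.05))
```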
32

Optimal static and sequential design : a critical review

Ford, Ian January 1976 (has links)
The aim of this thesis is to review and augment the theory and methods of optimal experimental design. In Chapter 1 the scene is set by considering the possible aims of an experimenter prior to an experiment, the statistical methods one might use to achieve those aims, and how experimental design might aid this procedure. It is indicated that, given a criterion for design, a priori optimal design will only be possible in certain instances and that, otherwise, some form of sequential procedure would seem to be indicated. In Chapter 2 an exact experimental design problem is formulated mathematically and compared with its continuous analogue. Motivation is provided for the solution of this continuous problem, and the remainder of the chapter concerns it. A necessary and sufficient condition for optimality of a design measure is given. Problems which might arise in testing this condition are discussed, in particular with respect to possible non-differentiability of the criterion function at the design being tested. Several examples are given of optimal designs which may be found analytically and which illustrate the points discussed earlier in the chapter. In Chapter 3 numerical methods of solution of the continuous optimal design problem are reviewed. A new algorithm is presented with illustrations of how it should be used in practice. It is shown that, for reasonably large sample sizes, continuously optimal designs may be approximated well by an exact design. In situations where this is not satisfactory, algorithms for improvement of this design are reviewed. Chapter 4 discusses sequentially designed experiments, with regard both to the underlying philosophies and to the application of the methods of statistical inference. In Chapter 5 we constructively criticise previous suggestions for fully sequential design procedures. Alternative suggestions are made, along with conjectures as to how these might improve performance. Chapter 6 presents a simulation study, the aim of which is to investigate the conjectures of Chapter 5. The results of this study provide empirical support for these conjectures. In Chapter 7 examples are analysed. These suggest aids to sequential experimentation by means of reducing the dimension of the design space and the possibility of experimenting semi-sequentially. Further examples are considered which stress the importance of the use of prior information in situations of this type. Finally, we consider the design of experiments when semi-sequential experimentation is mandatory because of the necessity of taking batches of observations at the same time. In Chapter 8 we look at some of the assumptions which have been made and indicate what may go wrong when these assumptions no longer hold.
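One standard form of the "necessary and sufficient condition for optimality of a design measure" mentioned above is, for D-optimality, the general equivalence theorem: the standardised variance d(x, ξ) = f(x)ᵀM(ξ)⁻¹f(x) is bounded by the number of parameters p over the design region, with equality at the support points. The following is a minimal sketch of checking that condition on a grid for a simple straight-line model; it is illustrative only and is not the algorithm developed in the thesis.

```python
# Hedged sketch (an assumption-based illustration, not the thesis's algorithm):
# checking the D-optimality equivalence condition
# d(x, xi) = f(x)' M(xi)^{-1} f(x) <= p for a candidate design on a finite grid.
import numpy as np

def info_matrix(xs, ws, f):
    """M(xi) = sum_i w_i f(x_i) f(x_i)' for design points xs with weights ws."""
    return sum(w * np.outer(f(x), f(x)) for x, w in zip(xs, ws))

def standardised_variance(x, M_inv, f):
    fx = f(x)
    return fx @ M_inv @ fx

# Simple straight-line model f(x) = (1, x) on [-1, 1]; the D-optimal design
# puts weight 1/2 on each endpoint.
f = lambda x: np.array([1.0, x])
xs, ws = [-1.0, 1.0], [0.5, 0.5]
M_inv = np.linalg.inv(info_matrix(xs, ws, f))

grid = np.linspace(-1, 1, 201)
d = np.array([standardised_variance(x, M_inv, f) for x in grid])
print(d.max())  # equals p = 2, attained at the support points => design is D-optimal
```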
33

Interpretable models of genetic drift applied especially to human populations

McIntosh, Alasdair January 2018 (has links)
This thesis aims to develop and implement population genetic models that are directly interpretable in terms of events such as population fission and admixture. Two competing methods of approximating the Wright–Fisher model of genetic drift are critically examined, one due to Balding and Nichols and another to Nicholson and colleagues. The model of population structure consisting of all present-day subpopulations arising from a common ancestral population at a single fission event (first described by Nicholson et al.) is reimplemented and applied to single-nucleotide polymorphism data from the HapMap project. This Bayesian hierarchical model is then elaborated to allow general phylogenetic representations of the genetic heritage of present-day subpopulations and the performance of this model is assessed on simulated and HapMap data. The drift model of Balding and Nichols is found to be problematic for use in this context as the need for allele fixation to be modelled becomes apparent. The model is then further developed to allow the inclusion of admixture events. This new model is, again, demonstrated using HapMap data and its performance compared to that of the TreeMix model of Pickrell and Pritchard, which is also critically evaluated.
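As a rough illustration of the two drift approximations contrasted in the abstract: the Balding-Nichols model draws subpopulation allele frequencies from a Beta distribution and so never places mass at fixation, whereas the Nicholson et al. model uses a truncated normal with spikes at 0 and 1, which is why fixation can be modelled. The sketch below simulates both under commonly cited parameterisations; it is an assumption-laden illustration, not code from the thesis.

```python
# Hedged sketch of the two drift approximations contrasted in the abstract
# (the parameterisations are the commonly cited ones, stated here as assumptions).
import numpy as np

rng = np.random.default_rng(0)

def balding_nichols(p, F, size):
    """Beta approximation: freq ~ Beta(p(1-F)/F, (1-p)(1-F)/F); never fixes at 0 or 1."""
    a = p * (1 - F) / F
    b = (1 - p) * (1 - F) / F
    return rng.beta(a, b, size)

def nicholson(p, c, size):
    """Truncated-normal approximation: Normal(p, c p(1-p)) with mass piled at 0 and 1,
    so allele fixation can be modelled explicitly."""
    x = rng.normal(p, np.sqrt(c * p * (1 - p)), size)
    return np.clip(x, 0.0, 1.0)

p, drift = 0.2, 0.05
bn = balding_nichols(p, drift, 10_000)
ni = nicholson(p, drift, 10_000)
print((bn == 0).mean(), (ni == 0).mean())  # only the Nicholson model puts mass at fixation
```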
34

A study of the use of linked, routinely collected, administrative data at the local level to count and profile populations

Harper, Gill January 2017 (has links)
There is increasing evidence that official population statistics are inaccurate at the local authority level, the fundamental administrative unit of the UK. The main source of official population statistics in the UK is the decennial census, last undertaken in 2011. The methodology and results of official population counts have been criticised and described as unfit for purpose. The three main purposes of population statistics are resource allocation, population ratios, and local planning and intelligence. Administrative data are data collected routinely for administrative purposes by organisations, government departments or companies, and not for statistical or research purposes. This contrasts with surveys, which are designed and carried out as a specific information-gathering exercise. This thesis describes a methodology for linking routinely collected administrative data for counting and profiling populations, and for other purposes, at the local level. The benefits of this methodology are that it produces results more quickly than the decennial census, in a format that is more suitable for accurate and detailed analyses. Utilising existing datasets in this way reduces costs and adds value. The need for, and the evolution of, this innovative methodology are set out, and the success and impact it has had are discussed, including how it has helped shape thinking on statistics in the UK. This research preceded the current paradigm shift in the UK for research and national statistics to move towards the use of linked administrative data. Future censuses after 2021 may no longer be in the traditional survey format, and the Office for National Statistics is exploring the use of a similar administrative data method at the national level as an alternative. The research in this thesis has been part of this inevitable evolution and has helped pave the way for it.
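As a toy illustration of deterministic record linkage of the kind such a methodology builds on (the actual datasets, identifiers and matching rules used in the thesis are not reproduced here), the sketch below links two hypothetical administrative extracts, first on a shared identifier and then on a fallback surname and date-of-birth key.

```python
# Illustrative sketch only: deterministic linkage of two invented administrative
# extracts, first on a shared identifier, then on a fallback surname + date-of-birth
# key. The column names and data are hypothetical assumptions.
import pandas as pd

gp = pd.DataFrame({
    "nhs_no": ["A1", "A2", None],
    "surname": ["Smith", "Jones", "Patel"],
    "dob": ["1980-01-02", "1975-06-30", "1990-12-11"],
})
council_tax = pd.DataFrame({
    "nhs_no": ["A1", None, None],
    "surname": ["Smith", "Jones", "Patel"],
    "dob": ["1980-01-02", "1975-06-30", "1990-12-11"],
})

# Pass 1: exact match on the shared identifier
linked = gp.dropna(subset=["nhs_no"]).merge(
    council_tax.dropna(subset=["nhs_no"]), on="nhs_no", suffixes=("_gp", "_ct"))

# Pass 2: fallback match on surname + date of birth for records lacking the identifier
fallback = gp[gp["nhs_no"].isna()].merge(
    council_tax[council_tax["nhs_no"].isna()], on=["surname", "dob"], suffixes=("_gp", "_ct"))

print(len(linked), len(fallback))  # records linked in each pass
```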
35

Limitations to seasonal weather prediction and crop forecasting due to nonlinearity and model inadequacy

Higgins, Sarah January 2015 (has links)
This thesis examines the main issues surrounding crop modelling through detailed studies of (i) multi-model ensemble forecasting using a simple dynamical system as a proxy for seasonal weather forecasting, (ii) probabilistic forecasts for crop models and (iii) an analysis of changes in US yield. The ability to forecast crop yield accurately on a seasonal time frame would be hugely beneficial to society, in particular to farmers, governments and the insurance industry. In addition, advance warning of severe weather patterns that could devastate large areas of crops would allow contingency plans to be put in place before the onset of a widespread famine, potentially averting a humanitarian disaster. There is little experience in the experimental design of ensembles for seasonal weather forecasting. Exploring the stability of the results while varying, for example, the sample size aids understanding. To this end, a series of numerical experiments is conducted in an idealised world based around the Moran Ricker Map. The idealised world is designed to replicate the multi-model ensemble forecasting methods used in seasonal weather forecasting. Given the complexity of physical weather systems, experiments are instead conducted on the Moran Ricker Map [56, 70]. Additionally, experiments examine whether including climatology as a separate model, or blending with climatology, can increase the skill. A method to create probabilistic forecasts from a crop model, the Crop Environment Resource Synthesis Maize model (CERES-Maize) [19, 37], is proposed. New empirical models are created using historical US maize yield. The skill gained from equally weighting the crop model with a simple empirical model is investigated. Background reviews of weather and yield data are presented in new ways for the largest maize-growing state, Iowa. A new method for separating the impacts of favourable weather from technology increases in a crop yield time series is explored.
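The Moran Ricker Map used as the idealised proxy evolves a state x by x_{t+1} = x_t·exp(r(1 − x_t)). A minimal sketch of an initial-condition ensemble run against an imperfect model (the source of model inadequacy discussed above) follows; the parameter values and ensemble design are illustrative assumptions, not those of the thesis.

```python
# Hedged sketch of an idealised ensemble experiment on the Moran Ricker Map
# x_{t+1} = x_t * exp(r * (1 - x_t)); parameter values and ensemble design are
# illustrative assumptions, not those used in the thesis.
import numpy as np

rng = np.random.default_rng(1)

def ricker(x0, r, n_steps):
    xs = [x0]
    for _ in range(n_steps):
        xs.append(xs[-1] * np.exp(r * (1 - xs[-1])))
    return np.array(xs)

# "Truth" run with r = 2.9; the forecast model has a slightly wrong r (model inadequacy)
truth = ricker(x0=0.5, r=2.9, n_steps=20)

# Initial-condition ensemble started from an imperfect observation of x0
ensemble = np.array([
    ricker(x0=0.5 + rng.normal(0, 0.01), r=2.85, n_steps=20)
    for _ in range(100)
])
print(truth[-1], ensemble[:, -1].mean(), ensemble[:, -1].std())
```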
36

Four essays in quantitative analysis : artificial intelligence and statistical inference

Hassanniakalager, Arman January 2018 (has links)
This thesis consists of four essays exploring quantitative methods for investment analysis. Chapter 1 is an introduction to the topic, where the background, motivations and contributions of the thesis are discussed. This chapter proposes an expert system paradigm which accommodates the methodology for all four empirical studies presented in Chapters 2 to 5. In Chapter 2 the profitability of technical analysis and Bayesian statistics in trading the EUR/USD, GBP/USD and USD/JPY exchange rates is examined. For this purpose, seven thousand eight hundred and forty-six technical rules are generated, and their profitability is assessed through a novel data snooping procedure. Then, the most promising rules are combined with a Naïve Bayes (NB), a Relevance Vector Machine (RVM), a Dynamic Model Averaging (DMA), a Dynamic Model Selection (DMS) and a Bayesian regularised Neural Network (BNN) model. The findings show that technical analysis has value in Foreign eXchange (FX) trading, but the profit margins are small. On the other hand, Bayesian statistics seems to increase the profitability of technical rules by up to four times. Chapter 3 introduces the concept of Conditional Fuzzy (CF) inference. The proposed approach is able to deduce Fuzzy Rules (FRs) conditional on a set of restrictions. This conditional rule selection discards weak rules, and the generated forecasts are based only on the most powerful ones. To achieve this, an RVM is used to extract the most relevant subset of predictors as the CF inputs. Through this process, the approach achieves higher forecasting performance and improves the interpretability of the underlying system. The CF concept is applied in a betting application on football games from three main European championships. CF's performance in terms of accuracy and profitability over the In-Sample (IS) and Out-Of-Sample (OOS) periods is benchmarked against a single RVM, an Adaptive Neuro-Fuzzy Inference System (ANFIS) fed with the same CF inputs, and an Ordered Probit (OP) fed with the full set of predictors. The results demonstrate that the CF provides higher statistical accuracy than its benchmarks, while offering substantial profits in the designed betting simulation. Chapter 4 proposes the Discrete False Discovery Rate (DFDR+/-) as an approach to comparing a large number of hypotheses at the same time. The presented method limits the probability of lucky findings and accounts for the dependence between candidate models. The performance of this approach is assessed by backtesting the predictive power of technical analysis in stock markets. A pool of twenty-one thousand technical rules is tested for a positive Sharpe ratio. The surviving technical rules are used to construct dynamic portfolios. Twelve categorical and country-specific Morgan Stanley Capital International (MSCI) indexes are examined over ten years (2006-2015). There are three main findings. First, the proposed method has high power in detecting profitable trading strategies and time-related anomalies across the chosen financial markets. Second, the emerging and frontier markets are more profitable than the developed markets, despite having higher transaction costs. Finally, for successful portfolio management, it is vital to rebalance the portfolios monthly or more frequently. Chapter 5 undertakes an extensive investigation of volatility models for six securities in the FX, stock index and commodity markets, using daily one-step-ahead forecasts over five years. A discrete false discovery controlling procedure is employed to study one thousand five hundred and twelve volatility models from twenty classes of the Generalized AutoRegressive Conditional Heteroskedasticity (GARCH), Exponentially Weighted Moving Average (EWMA), Stochastic Volatility (SV) and Heterogeneous AutoRegressive (HAR) families. The results indicate significant differences in forecasting conditional variance. The most accurate models vary across the three market categories and depend on the study period and measurement scale. Time-varying means, Integrated GARCH (IGARCH) and SV specifications, as well as fat-tailed innovation distributions, dominate among the outperforming models compared to three benchmarks: ARCH(1), GARCH(1,1), and the volatility pool's 90th percentile. Finally, Chapter 6 brings together the main findings from the four essays and presents the concluding remarks.
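The common thread of Chapters 2 and 4 is testing thousands of trading rules while controlling for data snooping. A minimal sketch of that idea follows, using a plain Benjamini-Hochberg step-up over one-sided tests of positive mean return as a stand-in for the DFDR+/- procedure; the simulated returns and thresholds are assumptions for illustration only.

```python
# Hedged sketch of the multiple-testing idea behind Chapters 2 and 4: test many
# trading rules and control the false discovery rate. This uses a plain
# Benjamini-Hochberg step-up, not the DFDR+/- procedure itself.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_rules, n_days = 1_000, 2_500

# Simulated daily rule returns; a small minority of rules have genuine skill
skill = np.zeros(n_rules)
skill[:50] = 0.0005
returns = rng.normal(loc=skill[:, None], scale=0.01, size=(n_rules, n_days))

# One-sided t-test of mean return > 0 for each rule (proxy for a positive Sharpe ratio)
t_stat, p_two_sided = stats.ttest_1samp(returns, 0.0, axis=1)
p_vals = np.where(t_stat > 0, p_two_sided / 2, 1 - p_two_sided / 2)

def benjamini_hochberg(p, q=0.10):
    """Reject the k hypotheses with the smallest p-values, where k is the largest
    rank whose ordered p-value lies below q * rank / m (step-up rule)."""
    order = np.argsort(p)
    thresh = q * np.arange(1, len(p) + 1) / len(p)
    passed = p[order] <= thresh
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    keep = np.zeros(len(p), dtype=bool)
    keep[order[:k]] = True
    return keep

survivors = benjamini_hochberg(p_vals)
print(survivors.sum(), survivors[:50].sum())  # rules kept overall vs. truly skilled rules kept
```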
37

Stochastic models for triangular tables with applications to cohort data and claims reserving

Verrall, Richard John January 1989 (has links)
Stochastic models for triangular data are derived and applied to claims reserving data. The standard actuarial technique, the so-called "chain-ladder technique", is given a sound statistical foundation and considered as a linear model. This linear model, the "Chain Ladder Linear Model", is extended to encompass Bayesian, empirical Bayes and dynamic estimation. The empirical Bayes results are given a credibility theory interpretation, and the advantages and disadvantages of the various approaches are highlighted. Finally, the methods are extended to two-dimensional systems and results based on classical time series and Kalman filtering theory are produced. The empirical Bayes estimation results are very useful in practice and can be compared with the Kalman filter estimates. They have the advantage that no prior information is required, whereas the Kalman filter method requires the state and observation variances to be specified. For illustration purposes the estimates from the empirical Bayes procedure are used. The empirical Bayes results can also be compared with credibility theory estimators, although they retain the general statistical advantages of the linear modelling approach. For the classical theory, unbiased estimates of outstanding claims, reserves and variances are derived, and prediction intervals for total outstanding claims are produced. Maximum likelihood theory is used to derive the distributions of quantities relating to the column parameters, which have actuarial interpretations. The row totals are also considered. Bayesian estimates of similar quantities are derived for the methods based on Bayes theory.
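For readers unfamiliar with the chain-ladder technique that the thesis places on a statistical footing, the sketch below shows the basic deterministic calculation: volume-weighted development factors estimated from a cumulative run-off triangle and used to complete the lower triangle. The triangle is invented for illustration; the thesis's stochastic models are not reproduced here.

```python
# Hedged sketch of the basic deterministic chain-ladder calculation; the
# run-off triangle below is made up for illustration only.
import numpy as np

# Cumulative claims run-off triangle (rows = origin years, cols = development years)
tri = np.array([
    [1000., 1800., 2100., 2200.],
    [1100., 1950., 2250., np.nan],
    [1200., 2100., np.nan, np.nan],
    [1300., np.nan, np.nan, np.nan],
])

n = tri.shape[1]
factors = []
for j in range(n - 1):
    mask = ~np.isnan(tri[:, j + 1])
    factors.append(tri[mask, j + 1].sum() / tri[mask, j].sum())  # volume-weighted link ratio

# Complete the lower triangle by repeated application of the development factors
filled = tri.copy()
for j in range(n - 1):
    missing = np.isnan(filled[:, j + 1])
    filled[missing, j + 1] = filled[missing, j] * factors[j]

reserves = filled[:, -1] - np.nanmax(tri, axis=1)  # ultimate minus latest observed
print(np.round(factors, 3), np.round(reserves, 1))
```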
38

Labour supply problems and solutions : econometric model for the State of Bahrain

Kaiksow, Wedad A. January 1999 (has links)
Despite the intentions of the State of Bahrain to implement significant economic and social changes, and its full or partial sponsorship of intensive training programmes, neither have the available human resources been exploited to their full potential, nor have the vocational training programmes managed to equip unemployed workers with the skills needed to compete effectively in the labour market. Indeed, the picture is hardly encouraging for future generations, and cries out for fundamental changes. Bahrain has poured investment into projects aimed at reaping the benefits of its relative advantages: into exploring and transforming its oil and gas resources, into developing value-added products, and into human resources. But changes since have created a whole new world, with no market to speak of but a global one. Any decision on investment should now be guided by the notion that geographical location is no longer a real issue, and that competition is won more by conceptual than by material advantage. A worker's personal opinion of his job has as much impact on competition as the extent and quality of his training. An entirely new system is therefore needed, one that provides social incentives and traditional rewards for the creation of new concepts, ideas and perspectives. The aim of this thesis is, firstly, to investigate labour supply in Bahrain in a dynamic setting in the light of the neoclassical theory, which rests mainly on the premise that labour supply is largely a function of the real wage; secondly, to diagnose the problems; and finally, to suggest solutions. An econometric model of labour supply for different groups (Bahrainis and non-Bahrainis, primary and secondary workers) is introduced, making use of cross-section time series data. The econometric contribution of this thesis is the testing of relative wage theory, alongside the estimation and identification of labour supply elasticities that can serve as the basis for policy decisions. Unemployment, as the most serious economic problem facing governments, is also considered in this thesis. An unemployment model is presented and analysed as a function of specific factors that may cause structural unemployment in the economy of Bahrain, making use of secondary data. Unemployment policies are then discussed, and finally the conclusions of the thesis, together with its prospects, are presented.
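As a schematic illustration of the kind of elasticity estimation mentioned (the thesis's actual specification, groups and data are not reproduced here), a log-linear labour-supply equation ln(hours) = a + b·ln(real wage) + e can be fitted by ordinary least squares, with b read directly as the wage elasticity. The sketch below uses simulated data as a stand-in.

```python
# Illustrative sketch (not the thesis's model or data): estimating a labour-supply
# wage elasticity from a log-linear specification ln(hours) = a + b*ln(real wage) + e,
# fitted by OLS on simulated data; b is the elasticity.
import numpy as np

rng = np.random.default_rng(3)
n = 500
log_wage = rng.normal(2.0, 0.3, n)
true_elasticity = 0.15
log_hours = 3.0 + true_elasticity * log_wage + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), log_wage])       # intercept + log real wage
beta, *_ = np.linalg.lstsq(X, log_hours, rcond=None)
print(f"estimated wage elasticity: {beta[1]:.3f}")
```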
39

Pension fund valuation

Fujiki, Maso-Hiko January 1994 (has links)
The thesis discusses various actuarial aspects of the management of a pension fund, in particular those related to the valuation of the pension fund. The investigation covers three main areas: the funding mechanism; investment and matching; and control of the fund. In the part dealing with the funding mechanism, a model is introduced to assist analysis of the mechanism of a pension fund. The model notionally separates the fund into individual pots and a common pool, and notional moves of assets between them, the X-functions, are defined. Using this model, various events in the pension fund are analysed. In particular, the model is shown to be useful for explaining the financial impact of withdrawals and the problem of cross-subsidy. In the next part, investment and matching are discussed with reference to a collection of papers and books written by actuaries and economists. Two different types of matching are defined according to the definition of risk, named V-matching and S-matching. Based on this discussion of matching, the implications of using a particular portfolio for valuation purposes are analysed. Finally, various means of controlling a pension fund are discussed in the light of control theory. A particular focus is the choice of the valuation basis as a means of control, and an extensive series of long-term cashflow projections is carried out to explore the optimum way to choose the valuation basis under various scenarios of changing experience. The projections are carried out separately for three different aspects of the experience: the real rate of investment return; the dividend growth rate and the dividend yield; and the withdrawal rates. The results suggest that the use of averages of past experience over a long period works best across different circumstances, and that delaying changes in the valuation basis after the corresponding changes in experience is useful for identifying more clearly the trend in the actual experience.
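A minimal sketch of the kind of long-term cashflow projection described, in which the valuation basis is reset each year to an average of past investment experience, is given below. The fund size, contribution and benefit figures and the averaging rule are illustrative assumptions, not those examined in the thesis.

```python
# Hedged sketch of a long-term fund projection in the spirit of the abstract:
# the valuation interest rate is reset each year to a long-run average of past
# investment returns. All figures and rules here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
years = 40
fund = 100.0
contribution, benefit = 5.0, 4.0
past_returns = [0.04] * 10                 # seed history for the averaging rule

history = []
for t in range(years):
    valuation_rate = np.mean(past_returns)  # basis = long-run average of experience
    actual_return = rng.normal(0.04, 0.08)  # return actually experienced this year
    fund = fund * (1 + actual_return) + contribution - benefit
    past_returns.append(actual_return)
    history.append((t, round(valuation_rate, 4), round(fund, 1)))

print(history[-1])
```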
40

Markov type models for large-valued interbank payment systems

Che, Xiaonan January 2011 (has links)
Due to the reform of payment systems around the world in recent years from netting settlement to Real Time Gross Settlement (RTGS), there has been a dramatic increase in interest in modelling large-valued interbank payment systems. Recently, queueing facilities have been introduced in response to liquidity management within RTGS systems. Stochastic process models have been widely applied to social networks, some aspects of which have statistical properties similar to those of payment systems; therefore, building on the existing empirical research, a Markov-type model for an RTGS payment system with queueing and collateral borrowing facilities was developed. We analysed the effect on the performance of the payment system of parameters such as the probabilities of payment delay, the initial cash position of participating banks and the probabilities of cross-bank payments. Two models were proposed: the simplest model, in which payments are assumed to be equally distributed among participating banks, and a so-called "cluster" model, in which payment flows are concentrated between a few banks, in line with evidence from empirical studies. We found that the performance of the system depends on these parameters. A modest amount of total initial liquidity required by the banks would achieve the desired performance, namely minimising the number of unsettled payments by the end of a business day with a negligible average lifetime of the debts. Because of the change in large-valued interbank payment systems, the concern has shifted from credit risk to liquidity risk, and payment systems around the world have started considering, or have already implemented, different liquidity saving mechanisms to reduce the high demand for liquidity while maintaining a low risk of default. We proposed a specific queueing facility for the "cluster" model, modified to take account of features of the UK RTGS payment system, CHAPS. Some of the payments are submitted to an external queue according to certain rules and are settled by an algorithm of bilateral or multilateral offsetting, while participating banks' posted liquidity is reserved for "important" payments only. Experiments using simulated data showed that the liquidity saving mechanism was not equally beneficial to every bank: the banks that dominated most of the payment flow even suffered higher levels of debt at the end of a business day compared with a pure RTGS system without any queueing facility. The stability of the structure of the central queue was verified. There is evidence that banks in the UK payment system set up limits for other members to prevent unexpected credit exposure, and with these limits banks also achieved a moderate liquidity saving in CHAPS. Both the central bank and the participating banks are interested in the probability that the limits are exceeded. The problem can be reduced to the calculation of a boundary crossing probability for a Brownian motion with stochastic boundaries. Boundary crossing problems arise in many fields of statistics. Using powerful tools such as martingales and the infinitesimal generator of Brownian motion, we present an alternative method and derive a set of theorems on boundary crossing probabilities for a Brownian motion with different kinds of stochastic boundaries, especially compound Poisson process boundaries. Both numerical results and simulation experiments are studied. A variation of the method is discussed for application to other stochastic boundaries, for instance Gamma process, Inverse Gaussian process and Telegraph process boundaries. Finally, we provide a brief survey of approximations of Lévy processes. The boundary crossing probability theorems derived earlier can be extended to a fairly general setting with Lévy process boundaries, using an appropriate approximation.
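The final part of the abstract concerns the probability that a Brownian motion crosses a stochastic boundary, for example a compound Poisson process boundary. A Monte Carlo sketch of that quantity follows as a sanity-check illustration; it is not the martingale/infinitesimal-generator method of the thesis, and all parameters are assumptions.

```python
# Hedged Monte Carlo sketch of the quantity discussed at the end of the abstract:
# the probability that a Brownian motion crosses a compound Poisson boundary
# before time T. This is a simulation check, not the thesis's analytical method.
import numpy as np

rng = np.random.default_rng(5)

def crossing_probability(T=1.0, n_steps=1_000, n_paths=5_000,
                         start_boundary=2.0, jump_rate=1.0, jump_mean=0.3):
    dt = T / n_steps
    # Brownian motion paths built from independent Gaussian increments
    w = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
    # Compound Poisson boundary, discretised as at most one downward jump per step
    jump_occurs = rng.random((n_paths, n_steps)) < jump_rate * dt
    jump_sizes = rng.exponential(jump_mean, size=(n_paths, n_steps))
    boundary = start_boundary - np.cumsum(jump_occurs * jump_sizes, axis=1)
    # A path counts as crossing if it meets or exceeds the boundary at any step
    return float(np.mean((w >= boundary).any(axis=1)))

print(crossing_probability())
```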
