31

Interpretable models of genetic drift applied especially to human populations

McIntosh, Alasdair January 2018 (has links)
This thesis aims to develop and implement population genetic models that are directly interpretable in terms of events such as population fission and admixture. Two competing methods of approximating the Wright-Fisher model of genetic drift are critically examined, one due to Balding and Nichols and another to Nicholson and colleagues. The model of population structure consisting of all present-day subpopulations arising from a common ancestral population at a single fission event (first described by Nicholson et al.) is reimplemented and applied to single-nucleotide polymorphism data from the HapMap project. This Bayesian hierarchical model is then elaborated to allow general phylogenetic representations of the genetic heritage of present-day subpopulations and the performance of this model is assessed on simulated and HapMap data. The drift model of Balding and Nichols is found to be problematic for use in this context as the need for allele fixation to be modelled becomes apparent. The model is then further developed to allow the inclusion of admixture events. This new model is, again, demonstrated using HapMap data and its performance compared to that of the TreeMix model of Pickrell and Pritchard, which is also critically evaluated.
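As a hedged illustration of the two drift approximations contrasted above (not code from the thesis), the sketch below uses the standard parameterisations: Balding-Nichols draws the descendant allele frequency from a Beta distribution with mean p and variance c·p(1−p), while the Nicholson et al. model uses a normal distribution with the same mean and variance, with mass falling outside [0, 1] collapsed to the boundaries to represent allele loss or fixation.

```python
import numpy as np

rng = np.random.default_rng(0)

def balding_nichols(p, c, size, rng):
    """Beta approximation to drift: mean p, variance c*p*(1-p)."""
    a = p * (1 - c) / c
    b = (1 - p) * (1 - c) / c
    return rng.beta(a, b, size)

def nicholson(p, c, size, rng):
    """Normal approximation with mass outside [0, 1] collapsed to the
    boundaries, so allele loss and fixation have positive probability."""
    x = rng.normal(p, np.sqrt(c * p * (1 - p)), size)
    return np.clip(x, 0.0, 1.0)

p, c = 0.1, 0.05   # ancestral allele frequency and drift parameter
bn = balding_nichols(p, c, 100_000, rng)
ni = nicholson(p, c, 100_000, rng)
print("P(allele lost): Balding-Nichols %.4f, Nicholson %.4f"
      % ((bn == 0.0).mean(), (ni == 0.0).mean()))
```

The Beta draw never returns exactly 0 or 1, which is essentially the fixation issue the thesis identifies with the Balding-Nichols model in this context.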
32

A study of the use of linked, routinely collected, administrative data at the local level to count and profile populations

Harper, Gill January 2017 (has links)
There is increasing evidence that official population statistics are inaccurate at the local authority level, the fundamental administrative unit of the UK. The main source of official population statistics in the UK is the decennial census, last undertaken in 2011. The methodology and results of official population counts have been criticised and described as unfit for purpose. The three main purposes of population statistics are resource allocation, population ratios, and local planning and intelligence. Administrative data are data routinely collected for administrative purposes by organisations, government departments or companies, rather than for statistical or research purposes; this is in contrast with surveys, which are designed and carried out as a specific information-gathering exercise. This thesis describes a methodology for linking routinely collected administrative data to count and profile populations, and for other purposes, at the local level. The benefits of this methodology are that it produces results more quickly than the decennial census and in a format that is more suitable for accurate and detailed analyses. Utilising existing datasets in this way reduces costs and adds value. The need for and the evolution of this innovative methodology are set out, and the success and impact it has had are discussed, including how it has helped shape thinking on statistics in the UK. This research preceded the current paradigm shift in the UK for research and national statistics to move towards the use of linked administrative data. Future censuses after 2021 may no longer be in the traditional survey format, and the Office for National Statistics is exploring the use of a similar administrative data method at the national level as an alternative. The research in this thesis has been part of this inevitable evolution and has helped pave the way for it.
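As a purely illustrative sketch of the general linkage-and-count idea (not the thesis's actual methodology or data), deterministic linkage of two hypothetical administrative extracts on a shared pseudonymised identifier, followed by deduplication, yields small-area population counts; all field names here are invented.

```python
import pandas as pd

# Hypothetical administrative extracts; person_id stands in for a shared
# pseudonymised key linking records for the same individual.
gp = pd.DataFrame({"person_id": [1, 2, 3, 4, 4],
                   "postcode":  ["E1", "E1", "E2", "E3", "E3"]})
council_tax = pd.DataFrame({"person_id": [2, 3, 5],
                            "postcode":  ["E1", "E2", "E4"]})

linked = (pd.concat([gp, council_tax])
            .drop_duplicates(subset="person_id"))   # one record per person
counts = linked.groupby("postcode").size()          # small-area population counts
print(counts)
```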
33

Limitations to seasonal weather prediction and crop forecasting due to nonlinearity and model inadequacy

Higgins, Sarah January 2015 (has links)
This thesis examines the main issues surrounding crop modelling through detailed studies of (i) multi-model ensemble forecasting using a simple dynamical system as a proxy for seasonal weather forecasting, (ii) probabilistic forecasts for crop models and (iii) an analysis of changes in US yield. The ability to forecast crop yield accurately on a seasonal time frame would be hugely beneficial to society, in particular farmers, governments and the insurance industry. In addition, advance warning of severe weather patterns that could devastate large areas of crops would allow contingency plans to be put in place before the onset of a widespread famine, potentially averting a humanitarian disaster. There is little experience in the experimental design of ensembles for seasonal weather forecasting, so exploring the stability of the results while varying, for example, the sample size aids understanding. For this, a series of numerical experiments is conducted in an idealised world based around the Moran Ricker Map. The idealised world is designed to replicate the multi-model ensemble forecasting methods used in seasonal weather forecasting. Given the complexity of physical weather systems, experiments are instead conducted on the Moran Ricker Map [56, 70]. Additionally, experiments examine whether including climatology as a separate model, or blending with climatology, can increase the skill. A method to create probabilistic forecasts from a crop model, the Crop Environment Resource Synthesis Maize model (CERES-Maize) [19, 37], is proposed. New empirical models are created using historical US maize yield. The skill from equally weighting the crop model with a simple empirical model is investigated. Background reviews of weather and yield data are presented in new ways for Iowa, the largest maize-growing state. A new method for separating the impacts of favourable weather from technology increases in a crop yield time series is explored.
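A minimal sketch of the kind of idealised-world experiment described above, assuming the standard Ricker map x_{t+1} = x_t·exp(r(1 − x_t)) as the 'true' system and a perturbed-parameter version as the imperfect forecast model; the ensemble size, lead time and blending weight are illustrative, not the experimental design used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def ricker(x, r):
    """One step of the Ricker map."""
    return x * np.exp(r * (1.0 - x))

r_true, r_model = 2.9, 2.8   # the imperfect model mis-specifies the growth rate
x0, lead = 0.5, 8            # initial state and forecast lead time (steps)

# 'Truth' trajectory
truth = x0
for _ in range(lead):
    truth = ricker(truth, r_true)

# Ensemble forecast: perturbed initial conditions run through the imperfect model
ens = x0 + rng.normal(0.0, 0.01, size=512)
for _ in range(lead):
    ens = ricker(ens, r_model)

# 'Climatology': long-run samples from the true map
x = rng.uniform(0.1, 1.5, size=512)
for _ in range(500):
    x = ricker(x, r_true)
clim = x

# Blend the model ensemble with climatology using an illustrative weight
alpha = 0.8
blended = np.concatenate([rng.choice(ens, int(alpha * 512)),
                          rng.choice(clim, 512 - int(alpha * 512))])
print("truth %.3f | ensemble mean %.3f | blended mean %.3f"
      % (truth, ens.mean(), blended.mean()))
```

Varying the ensemble size, the perturbation scale and the blending weight in such a toy setting is one way to probe the stability questions raised above.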
34

Four essays in quantitative analysis : artificial intelligence and statistical inference

Hassanniakalager, Arman January 2018 (has links)
This thesis consists of four essays exploring quantitative methods for investment analysis. Chapter 1 is an introduction to the topic, where the background, motivations and contributions of the thesis are discussed. This chapter proposes an expert system paradigm which accommodates the methodology for all four empirical studies presented in Chapters 2 to 5. In Chapter 2 the profitability of technical analysis and Bayesian statistics in trading the EUR/USD, GBP/USD, and USD/JPY exchange rates is examined. For this purpose, seven thousand eight hundred forty-six technical rules are generated, and their profitability is assessed through a novel data snooping procedure. Then, the most promising rules are combined with a Naïve Bayes (NB), a Relevance Vector Machine (RVM), a Dynamic Model Averaging (DMA), a Dynamic Model Selection (DMS) and a Bayesian regularised Neural Network (BNN) model. The findings show that technical analysis has value in Foreign eXchange (FX) trading, but the profit margins are small. On the other hand, Bayesian statistics seem to increase the profitability of technical rules by up to four times. Chapter 3 introduces the concept of Conditional Fuzzy (CF) inference. The proposed approach is able to deduce Fuzzy Rules (FRs) conditional on a set of restrictions. This conditional rule selection discards weak rules, and the generated forecasts are based only on the most powerful ones. To achieve this, an RVM is used to extract the most relevant subset of predictors as the CF inputs. Through this process, the approach achieves higher forecasting performance and improves the interpretability of the underlying system. The CF concept is applied in a betting application on football games from three main European championships. CF's performance in terms of accuracy and profitability In-Sample (IS) and Out-Of-Sample (OOS) is benchmarked against the single RVM, an Adaptive Neuro-Fuzzy Inference System (ANFIS) fed with the same CF inputs, and an Ordered Probit (OP) fed with the full set of predictors. The results demonstrate that the CF provides higher statistical accuracy than its benchmarks, while offering substantial profits in the designed betting simulation. Chapter 4 proposes the Discrete False Discovery Rate (DFDR+/-) as an approach to comparing a large number of hypotheses at the same time. The presented method limits the probability of lucky findings and accounts for the dependence between candidate models. The performance of this approach is assessed by backtesting the predictive power of technical analysis in stock markets. A pool of twenty-one thousand technical rules is tested for a positive Sharpe ratio. The surviving technical rules are used to construct dynamic portfolios. Twelve categorical and country-specific Morgan Stanley Capital International (MSCI) indexes are examined over ten years (2006-2015). There are three main findings. First, the proposed method has high power in detecting profitable trading strategies and time-related anomalies across the chosen financial markets. Second, the emerging and frontier markets are more profitable than the developed markets despite having higher transaction costs. Finally, for successful portfolio management, it is vital to rebalance the portfolios monthly or more frequently. Chapter 5 undertakes an extensive investigation of volatility models for six securities in the FX, stock index and commodity markets, using daily one-step-ahead forecasts over five years.
A discrete false discovery rate controlling procedure is employed to study one thousand five hundred and twelve volatility models from twenty classes of the Generalized AutoRegressive Conditional Heteroskedasticity (GARCH), Exponentially Weighted Moving Average (EWMA), Stochastic Volatility (SV), and Heterogeneous AutoRegressive (HAR) families. The results indicate significant differences in forecasting conditional variance. The most accurate models vary across the three market categories and depend on the study period and measurement scale. Time-varying means, Integrated GARCH (IGARCH) and SV specifications, as well as fat-tailed innovation distributions, dominate among the outperforming models when compared to three benchmarks: ARCH(1), GARCH(1,1), and the volatility pool's 90th percentile. Finally, Chapter 6 draws together the main findings from the four essays and presents the concluding remarks.
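As a hedged illustration of the multiple-testing idea behind Chapter 4 (screening a large pool of technical rules while limiting lucky findings), the sketch below generates moving-average crossover rules on synthetic prices, tests each for a positive mean return, and applies a standard Benjamini-Hochberg false discovery rate correction; the thesis's DFDR+/- procedure, which additionally handles discreteness and dependence, is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
prices = np.cumprod(1 + rng.normal(0.0002, 0.01, 2500))   # synthetic daily prices

def crossover_returns(prices, fast, slow):
    """Daily returns of a long/flat moving-average crossover rule."""
    rets = np.diff(np.log(prices))
    ma_f = np.convolve(prices, np.ones(fast) / fast, mode="valid")
    ma_s = np.convolve(prices, np.ones(slow) / slow, mode="valid")
    signal = (ma_f[slow - fast:] > ma_s).astype(float)[:-1]   # position held next day
    return signal * rets[slow - 1:]

pvals = []
for fast in range(2, 20):
    for slow in range(fast + 5, 120, 5):
        r = crossover_returns(prices, fast, slow)
        if r.std() == 0:          # rule never trades: skip
            continue
        pvals.append(stats.ttest_1samp(r, 0.0, alternative="greater").pvalue)

# Benjamini-Hochberg step-up procedure at a 10% false discovery rate
p = np.sort(np.array(pvals))
crit = 0.10 * np.arange(1, len(p) + 1) / len(p)
below = np.nonzero(p <= crit)[0]
n_reject = below[-1] + 1 if below.size else 0
print("rules tested:", len(p), " rules surviving the FDR correction:", n_reject)
```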
35

Stochastic models for triangular tables with applications to cohort data and claims reserving

Verrall, Richard John January 1989 (has links)
Stochastic models for triangular data are derived and applied to claims reserving data. The standard actuarial technique, the so-called "chain-ladder technique", is given a sound statistical foundation and considered as a linear model. This linear model, the "Chain Ladder Linear Model", is extended to encompass Bayesian, empirical Bayes and dynamic estimation. The empirical Bayes results are given a credibility theory interpretation, and the advantages and disadvantages of the various approaches are highlighted. Finally, the methods are extended to two-dimensional systems and results based on classical time series and Kalman filtering theory are produced. The empirical Bayes estimation results are very useful in practice and can be compared to the Kalman filter estimates. They have the advantage that no prior information is required, whereas the Kalman filter method requires the state and observation variances to be specified. For illustration purposes the estimates from the empirical Bayes procedure are used. The empirical Bayes results can also be compared with credibility theory estimators, although they retain the general statistical advantages of the linear modelling approach. For the classical theory, unbiased estimates of outstanding claims, reserves and variances are derived, and prediction intervals for total outstanding claims are produced. Maximum likelihood theory is utilised to derive the distributions of quantities relating to the column parameters, which have actuarial interpretations. The row totals are also considered. Bayesian estimates of similar quantities are derived for the methods based on Bayes theory.
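For readers unfamiliar with the chain-ladder technique that the thesis reformulates as a linear model, here is a minimal sketch of the classical deterministic calculation on a small, invented cumulative run-off triangle; it shows the standard algorithm only, not the stochastic formulation developed in the thesis.

```python
import numpy as np

# Cumulative claims triangle: rows = origin years, columns = development years.
# NaN marks future (unobserved) cells.
C = np.array([[100., 180., 210., 220.],
              [110., 200., 230., np.nan],
              [120., 215., np.nan, np.nan],
              [130., np.nan, np.nan, np.nan]])

n = C.shape[0]
# Volume-weighted development factors f_j = sum_i C[i, j+1] / sum_i C[i, j]
f = []
for j in range(n - 1):
    rows = ~np.isnan(C[:, j + 1])
    f.append(C[rows, j + 1].sum() / C[rows, j].sum())

# Project the lower triangle forward with the development factors
full = C.copy()
for i in range(n):
    for j in range(n - 1):
        if np.isnan(full[i, j + 1]):
            full[i, j + 1] = full[i, j] * f[j]

# Outstanding reserves = projected ultimate claims minus latest observed claims
latest = np.array([C[i, ~np.isnan(C[i])][-1] for i in range(n)])
reserves = full[:, -1] - latest
print("development factors:", np.round(f, 3))
print("outstanding reserves by origin year:", np.round(reserves, 1))
```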
36

Labour supply problems and solutions : econometric model for the State of Bahrain

Kaiksow, Wedad A. January 1999 (has links)
Despite the intentions of the State of Bahrain to implement significant economic and social changes, and its full or partial sponsorship of intensive training programmes, neither have the available human resources been exploited to their full potential, nor have the vocational training programmes managed to equip unemployed workers with the skills needed to compete effectively in the labour market. Indeed, the picture is hardly encouraging for future generations, and cries out for fundamental changes. Bahrain has poured investments into projects aimed at reaping the benefits of its relative advantages, into exploring and transforming its oil and gas resources, into developing value-added products, and into human resources. But changes since have created a whole new world, with no market to speak of but a global one. Any decisions on investment should now be guided by the notion that geographical location is no longer a real issue, and that competition is won more by conceptual than by material advantage. A worker's personal opinion of his job has as much impact on competition as the extent and quality of his training. An entirely new system is therefore needed, one that provides social incentives and traditional rewards for the creation of new concepts, ideas and perspectives. The aim of this thesis is, firstly, to investigate labour supply in Bahrain in a dynamic setting in the light of neoclassical theory, which rests mainly on the premise that labour supply is largely a function of the real wage; secondly, to diagnose the problems; and finally, to suggest solutions. An econometric model of labour supply for different groups (Bahrainis and non-Bahrainis, primary and secondary workers) is introduced, making use of cross-section time series data. The econometric contribution of this thesis is the testing of relative wage theory, together with the estimation and identification of labour supply elasticities that can serve as the basis for policy decisions. Unemployment, as the most serious economic problem facing governments, is also considered in this thesis. An unemployment model is presented and analysed as a function of specific factors that may cause structural unemployment in the economy of Bahrain, making use of secondary data. Unemployment policies are then discussed, and finally the conclusions of the thesis, together with future prospects, are presented.
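As a generic illustration of the kind of estimation the thesis builds on (not its actual model or data), a log-linear labour supply equation gives the wage elasticity directly as the coefficient on the log real wage; the data below are simulated and the variable names are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
log_wage = rng.normal(1.0, 0.3, n)              # log real wage
log_other_income = rng.normal(0.5, 0.2, n)      # log non-labour income
# Simulated hours worked with a 'true' wage elasticity of 0.4
log_hours = 3.0 + 0.4 * log_wage - 0.1 * log_other_income + rng.normal(0, 0.05, n)

# Ordinary least squares: the coefficient on log_wage is the elasticity
X = np.column_stack([np.ones(n), log_wage, log_other_income])
beta, *_ = np.linalg.lstsq(X, log_hours, rcond=None)
print("estimated wage elasticity of labour supply: %.3f" % beta[1])
```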
37

Pension fund valuation

Fujiki, Maso-Hiko January 1994 (has links)
The thesis discusses various actuarial aspects of the management of a pension fund, in particular those related to valuation of the pension fund. The investigation covers three main areas: the funding mechanism; investment and matching; and control of the fund. In the part dealing with the funding mechanism, a model is introduced to assist analysis of the mechanism of a pension fund. The model notionally separates the fund into individual pots and a common pool, and notional moves of assets between them, the X-functions, are defined. Using this model, various events in the pension fund are analysed. In particular, the model is shown to be useful for explaining the financial impact of withdrawals and the problem of cross-subsidy. In the next part, investment and matching are discussed with reference to a collection of papers and books written by actuaries and economists. Two different types of matching, named V-matching and S-matching, are defined according to the definition of risk. Based on this discussion of matching, the implications of using a particular portfolio for valuation purposes are analysed. Finally, various means of controlling a pension fund are discussed in the light of control theory. A particular focus is placed on the choice of the valuation basis as a means of control, and an extensive series of long-term cashflow projections is carried out to explore the optimum way to choose the valuation basis under various scenarios of changing experience. The projections are carried out separately for three different aspects of the experience: the real rate of investment return; the dividend growth rate and the dividend yield; and the withdrawal rates. The results suggest that the use of averages of past experience over a long period performs best across different circumstances, and that delaying changes in the valuation basis until after the corresponding changes in experience is useful for identifying more clearly the trend in the actual experience.
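A heavily simplified sketch of the final idea above (taking a long moving average of past investment experience as the valuation basis and letting contributions respond to the revealed surplus or deficit); the fund mechanics and all parameters are invented for illustration and bear no relation to the projections in the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
years, window = 50, 10
returns = rng.normal(0.04, 0.08, years)        # simulated real investment returns
outgo = 5.0                                     # level annual benefit outgo

fund, history, contribs = 120.0, [0.04], []
for t in range(years):
    # Valuation basis: average of past investment experience over a long window
    val_rate = max(np.mean(history[-window:]), 0.01)
    reserve = outgo / val_rate                  # perpetuity-style value of liabilities
    deficit = reserve - fund
    contribution = outgo + 0.2 * deficit        # spread any deficit over ~5 years
    contribs.append(contribution)
    fund = (fund + contribution - outgo) * (1 + returns[t])
    history.append(returns[t])

print("final fund %.1f, contribution volatility %.2f" % (fund, np.std(contribs)))
```

A longer averaging window smooths the valuation basis and hence the contribution rate, which is the trade-off the projections above are designed to explore.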
38

Markov type models for large-valued interbank payment systems

Che, Xiaonan January 2011 (has links)
Due to the worldwide reform of payment systems in recent years from netting settlement systems to Real Time Gross Settlement (RTGS) systems, there has been a dramatic increase in interest in modelling large-valued interbank payment systems. Recently, queueing facilities have been introduced in response to liquidity management within RTGS systems. Stochastic process models have been widely applied to social networks, some aspects of which have statistical properties similar to those of payment systems. Therefore, based on the existing empirical research, a Markov type model for an RTGS payment system with queueing and collateral borrowing facilities was developed. We analysed the effect on the performance of the payment system of parameters such as the probability of payment delay, the initial cash position of participating banks and the probability of cross-bank payments. Two models were proposed: the simplest model, where payments were assumed to be equally distributed among participating banks, and a so-called "cluster" model, in which there is a concentration of payment flow between a few banks, in line with the evidence from empirical studies. We found that the performance of the system depends on these parameters. A modest amount of total initial liquidity held by banks achieves the desired performance, namely minimising the number of unsettled payments by the end of a business day and keeping the average lifetime of debts negligible. Because of the changes to large-valued interbank payment systems, the concern has shifted from credit risk to liquidity risk, and payment systems around the world have started considering, or have already implemented, different liquidity saving mechanisms to reduce the high demand for liquidity while maintaining a low risk of default. We added a specified queueing facility to a modified "cluster" model, taking into consideration the features of the UK RTGS payment system, CHAPS. Some of the payments are submitted to an external queue according to certain rules and are settled according to an algorithm of bilateral or multilateral offsetting, while participating banks' posted liquidity is reserved for "important" payments only. Experiments using simulated data showed that the liquidity saving mechanism was not equally beneficial to every bank: the banks that dominated most of the payment flow even suffered a higher level of debt at the end of a business day compared with a pure RTGS system without any queueing facility. The stability of the structure of the central queue was verified. There is evidence that banks in the UK payment system set limits for other members to prevent unexpected credit exposure, and with these limits banks also achieved moderate liquidity savings in CHAPS. Both the central bank and participating banks are interested in the probability that the limits are exceeded. The problem can be reduced to the calculation of the boundary crossing probability of a Brownian motion with stochastic boundaries. Boundary crossing problems arise in many fields of statistics. Using powerful tools such as martingales and the infinitesimal generator of Brownian motion, we present an alternative method and derive a set of theorems on boundary crossing probabilities for a Brownian motion with various kinds of stochastic boundaries, especially compound Poisson process boundaries. Both numerical results and simulation experiments are studied. A variation of the method is discussed for application to other stochastic boundaries, for instance Gamma process, Inverse Gaussian process and Telegraph process boundaries. Finally, we provide a brief survey of approximations of Lévy processes. The boundary crossing probability theorems derived earlier can be extended to a fairly general situation with Lévy process boundaries by using an appropriate approximation.
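A toy sketch of the queueing-and-offsetting mechanism described above: payments that cannot be settled immediately from a bank's liquidity are queued, and queued payments between each pair of banks are periodically netted. The four banks, weights and amounts are illustrative and are not calibrated to CHAPS.

```python
import random
from collections import defaultdict

random.seed(0)
banks = ["A", "B", "C", "D"]
liquidity = {b: 50.0 for b in banks}
queue = []                      # queued payments: (sender, receiver, amount)

def try_settle(sender, receiver, amount):
    """Settle gross in RTGS if the sender has enough liquidity, else queue."""
    if liquidity[sender] >= amount:
        liquidity[sender] -= amount
        liquidity[receiver] += amount
    else:
        queue.append((sender, receiver, amount))

def bilateral_offset():
    """Net queued payments between each pair of banks and settle the residual."""
    net = defaultdict(float)
    for s, r, a in queue:
        net[(s, r)] += a
    queue.clear()
    done = set()
    for (s, r), a in list(net.items()):
        if (s, r) in done:
            continue
        done.update({(s, r), (r, s)})
        residual = a - net.get((r, s), 0.0)
        payer, payee = (s, r) if residual > 0 else (r, s)
        try_settle(payer, payee, abs(residual))

# 'Cluster' model: banks A and B exchange most of the payment flow
for _ in range(500):
    s = random.choices(banks, weights=[4, 4, 1, 1])[0]
    r = random.choice([b for b in banks if b != s])
    try_settle(s, r, random.uniform(1, 20))
    if len(queue) >= 20:
        bilateral_offset()

bilateral_offset()
print("unsettled payments at end of day:", len(queue))
print("end-of-day liquidity:", {b: round(v, 1) for b, v in liquidity.items()})
```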
39

Stochastic models and methods for the assessment of earthquake risk in insurance

Jiménez-Huerta, Diego January 2009 (has links)
The problem of earthquake risk assessment and management in insurance is a challenging one at the interface of geophysics, engineering seismology, stochastics, insurance mathematics and economics. In this work, I propose stochastic models and methods for the assessment of earthquake risk from an insurer's point of view, where the aim is not to address problems in the financial mathematics and economics of risk selection, pricing, portfolio management, and risk transfer strategies such as reinsurance and securitisation, but to enable the latter through the characterisation of the foundation of any risk management consideration in insurance: the distribution of losses over a period of time for a portfolio of risks. Insurance losses are assumed to be generated by a loss process that is in turn governed by an earthquake process, a point process marked with the earthquake's hypocentre and magnitude, and a conditional loss distribution for an insurance portfolio, governing the loss size given the hypocentre and magnitude of the earthquake, and the physical characteristics of the portfolio as described in the individual policy records. From the modeling perspective, I examine the (non-trivial) minutiae around the infrastructure underpinning the loss process. A novel model of the earthquake process, a Poisson marked point process with spatial gamma intensity measure on the hypocentral space, and extensions of the Poisson and stress release models through the inclusion of hypocentral location in the mark, are proposed. I discuss the general architectural considerations for constructing the conditional loss distribution, and propose a new model as an alternative to the traditional ground motion attenuation and seismic vulnerability approach in engineering risk assessment. On the actuarial mathematics front, given a fully specified loss process, I address the problem of constructing simulation based and, where possible, analytical approximations to the distribution of portfolio losses over a period of time. I illustrate the applicability of the stochastic models and methods proposed in this work through the analysis of a residential homeowners property catastrophe portfolio exposed to earthquake risk in California. I construct approximations to the distribution of portfolio losses over a period of time under each of the three models of the earthquake process that I propose, and discuss their relative merits.
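A skeletal Monte Carlo sketch of the final step (approximating the distribution of portfolio losses over a year by simulating the loss process), assuming a homogeneous Poisson event rate, exponential Gutenberg-Richter-style magnitude excesses and an ad hoc conditional loss model; none of these choices reproduce the models proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate_annual_loss(rate=2.0, b=1.0, m_min=5.0, exposure=1e9, n_sims=100_000):
    """Aggregate portfolio loss per year: Poisson event count, exponential
    magnitude excesses, and a toy conditional loss given magnitude."""
    losses = np.zeros(n_sims)
    counts = rng.poisson(rate, n_sims)
    for i, n in enumerate(counts):
        if n == 0:
            continue
        mags = m_min + rng.exponential(1.0 / (b * np.log(10)), n)
        # Toy damage ratio rising with magnitude, capped at 100% of exposure
        damage_ratio = np.minimum(1e-4 * np.exp(1.5 * (mags - m_min)), 1.0)
        losses[i] = (damage_ratio * exposure * rng.uniform(0.5, 1.5, n)).sum()
    return losses

losses = simulate_annual_loss()
print("mean annual loss %.2e, 99.5th percentile %.2e"
      % (losses.mean(), np.quantile(losses, 0.995)))
```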
40

The methodology of flowgraph models

Ren, Yu January 2011 (has links)
Flowgraph models are directed graph models for describing the dynamic changes in a stochastic process. They are one class of multistate models applied to analyse time-to-event data. The main motivation of flowgraph models is to determine the distribution of the total waiting time until an event of interest occurs in a stochastic process that progresses through various states. This thesis applies the methodology of flowgraph models to the study of Markov and semi-Markov processes. The underlying approach of the thesis is that access to the moment generating function (MGF) and cumulant generating function (CGF), provided by Mason's rule, enables us to use the Method of Moments (MM), which depends on moments and cumulants. We give a new derivation of Mason's rule to compute the total waiting time MGF based on the internode transition matrix of a flowgraph. Next, we demonstrate methods to determine and approximate the distribution of the total waiting time based on inversion of the MGF, including an alternative approach using the Padé approximation of the MGF, which always yields a closed-form density. For parameter estimation, we extend the Expectation-Maximization (EM) algorithm to estimate parameters in the mixture of negative-weight exponential densities. Our second contribution is to develop a bias correction method in the Method of Moments (BCMM). By investigating methods for tail area approximation, we propose a new way to estimate the total waiting time density function and survival
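A small symbolic sketch of the MGF machinery described above, for a three-state flowgraph with a feedback loop (0 → 1, then on to the absorbing state 2 with probability p or back to 0 with probability 1 − p), assuming exponential waiting times; Mason's rule gives the total-waiting-time MGF in closed form and the mean follows by differentiation at s = 0. This is a textbook-style example, not a result from the thesis.

```python
import sympy as sp

s = sp.Symbol("s")
p, lam0, lam1 = sp.symbols("p lambda0 lambda1", positive=True)

# Branch transmittances: transition probability times the waiting-time MGF
M01 = lam0 / (lam0 - s)             # 0 -> 1, exponential(lambda0) waiting time
M12 = p * lam1 / (lam1 - s)         # 1 -> 2 with probability p
M10 = (1 - p) * lam1 / (lam1 - s)   # 1 -> 0 with probability 1 - p (feedback loop)

# Mason's rule: one forward path 0->1->2, one first-order loop 0->1->0
M_total = sp.simplify(M01 * M12 / (1 - M01 * M10))

mean_wait = sp.simplify(sp.diff(M_total, s).subs(s, 0))   # first moment E[T]
print(M_total)
print(mean_wait)   # reduces to (lambda0 + lambda1) / (p * lambda0 * lambda1)
```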
