  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Modern stochastic claims reserving methods in insurance and their comparison

Vosáhlo, Jaroslav January 2013 (has links)
This thesis deals with the issue of claims reserving in non-life insurance, approached through both analytical calculation and stochastic modelling. First, the Chain-ladder, Bornhuetter-Ferguson, Benktander-Hovinen and Cape-Cod methods are introduced. In the following chapters, we seek the related underlying stochastic models, including generalized linear models and Mack's distribution-free approach; we analyze the second moments of the claims estimates for each of the methods and examine the alternative Merz-Wüthrich approach to reserve risk measurement. Finally, a bootstrap algorithm and estimates are suggested, and the simulation results are compared with the analytic ones.
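The chain-ladder calculation that anchors this comparison can be sketched in a few lines; the triangle below is illustrative and not taken from the thesis data:

```python
# Minimal chain-ladder sketch on an illustrative 4x4 cumulative
# development triangle (rows: accident years, columns: development years).
# None marks cells that lie in the future.
tri = [
    [100.0, 150.0, 165.0, 170.0],
    [110.0, 168.0, 185.0, None],
    [120.0, 175.0, None, None],
    [130.0, None, None, None],
]
n = len(tri)

# Volume-weighted development factors f_j = sum_i C[i][j+1] / sum_i C[i][j],
# taken over the accident years where both cells are observed.
factors = []
for j in range(n - 1):
    rows = range(n - j - 1)
    factors.append(sum(tri[i][j + 1] for i in rows) / sum(tri[i][j] for i in rows))

# Roll each open accident year forward to ultimate; the reserve is the
# difference between the projected ultimate and the latest observed cumulative.
reserve = 0.0
for i in range(1, n):
    latest = ultimate = tri[i][n - 1 - i]
    for j in range(n - 1 - i, n - 1):
        ultimate *= factors[j]
    reserve += ultimate - latest

print(round(reserve, 2))  # total estimated outstanding claims
```

The stochastic methods the thesis compares (Mack, GLM, bootstrap) all attach a distribution around exactly this point estimate.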
2

A comparison of stochastic claim reserving methods

Mann, Eric M. January 1900 (has links)
Master of Science / Department of Statistics / Haiyan Wang / Estimating unpaid liabilities is an extremely important aspect of insurance operations. Consistent underestimation can force companies to strengthen reserves, which can lead to lower profits, downgraded credit ratings and, in the worst case, insurance company insolvency. Consistent overestimation can lead to inefficient capital allocation and a higher overall cost of capital. Because these estimates are so important and the underlying liabilities so variable, a multitude of methods have been developed to estimate these amounts. This paper compares several actuarial and statistical methods to determine which are relatively better at producing accurate estimates of unpaid liabilities. To begin, the Chain Ladder Method is introduced for those unfamiliar with it. Then several Generalized Linear Model (GLM) methods, various Generalized Additive Model (GAM) methods, the Bornhuetter-Ferguson Method, and a Bayesian method that links the Chain Ladder and Bornhuetter-Ferguson methods together are presented, all of them connected in some way to the Chain Ladder Method. Historical data from multiple lines of business compiled by the National Association of Insurance Commissioners is used to compare the methods across different loss functions, to gain insight into which methods produce estimates with the minimum loss and a better understanding of the relative strengths and weaknesses of the methods.
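The Bornhuetter-Ferguson method compared here blends a prior estimate of the ultimate with the development pattern; a minimal sketch, with illustrative prior ultimates and cumulative development factors rather than any figures from the paper:

```python
# Bornhuetter-Ferguson sketch: reserve_i = U_prior_i * (1 - 1 / CDF_i),
# where CDF_i is the cumulative development factor to ultimate for
# accident year i. All values are illustrative, not from the paper.
prior_ultimates = [200.0, 210.0, 220.0]   # a priori ultimates per accident year
cdfs = [1.05, 1.18, 1.60]                 # older years are more developed (CDF near 1)

reserves = [u * (1 - 1 / f) for u, f in zip(prior_ultimates, cdfs)]
total = sum(reserves)
print([round(r, 2) for r in reserves], round(total, 2))
```

Unlike the pure chain-ladder, the reserve here does not depend on the claims reported to date, which is what makes the method robust for green accident years.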
3

Applying high performance computing to profitability and solvency calculations for life assurance contracts

Tucker, Mark January 2018 (has links)
Throughout Europe, the introduction of Solvency II is forcing companies in the life assurance and pensions provision markets to change how they estimate their liabilities. Historically, each solvency assessment required that the estimation of liabilities was performed once, using actuaries' views of economic and demographic trends. Solvency II requires that each assessment of solvency implies a 1-in-200 chance of not being able to meet the liabilities. The underlying stochastic nature of these requirements has introduced significant challenges if the required calculations are to be performed correctly, without resorting to excessive approximations, within practical timescales. Currently, practitioners within UK pension provision companies consider the calculations required to meet new regulations to be outside the realms of anything which is achievable. This project brings the calculations within reach: this thesis shows that it is possible to perform the required calculations in manageable time scales, using entirely reasonable quantities of hardware. This is achieved through the use of several techniques: firstly, a new algorithm has been developed which reduces the computational complexity of the reserving algorithm from O(T²) to O(T) for T projection steps, and is sufficiently general to be applicable to a wide range of non-unit-linked policies; secondly, efficient ab-initio code, which may be tuned to optimise its performance on many current architectures, has been written; thirdly, approximations which do not change the result by a significant amount have been introduced; and, finally, high performance computers have been used to run the code. This project demonstrates that the calculations can be completed in under three minutes when using 12,000 cores of a supercomputer, or in under eight hours when using 80 cores of a moderately sized cluster.
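The thesis's actual reserving algorithm is not reproduced here, but the flavour of an O(T²) to O(T) reduction over T projection steps can be illustrated with a backward recursion for discounted future cashflows (interest rate and cashflows are assumed purely for illustration):

```python
# Illustrative only: not the thesis's algorithm. Computing the reserve
# (discounted value of future cashflows) at every projection step t
# naively costs O(T^2); a single backward recursion costs O(T).
v = 1 / 1.03                      # annual discount factor (assumed 3% interest)
cashflows = [50.0, 52.0, 54.0, 57.0, 60.0]
T = len(cashflows)

# O(T^2): for each t, discount every future cashflow from scratch.
naive = [sum(cf * v ** (s - t) for s, cf in enumerate(cashflows) if s >= t)
         for t in range(T)]

# O(T): reserve_t = cashflow_t + v * reserve_{t+1}, swept backwards once.
fast = [0.0] * T
fast[T - 1] = cashflows[T - 1]
for t in range(T - 2, -1, -1):
    fast[t] = cashflows[t] + v * fast[t + 1]

# Both routes agree at every projection step.
assert all(abs(a - b) < 1e-9 for a, b in zip(naive, fast))
print(round(fast[0], 2))
```

The quadratic version is harmless for one policy, but inside a stochastic solvency run the reserve is re-evaluated for every policy and every simulated scenario, which is where the complexity reduction pays off.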
4

Granulární modely škod v rezervování / Granular loss models in reserving

Bílková, Kristýna January 2014 (has links)
Claims reserving methods usually use data aggregated into development triangles, so much of the information that insurance companies possess remains unused. This thesis presents a triangle-free approach using granular information from a claim-by-claim database. A statistical model for claims development is built which can further be used for the estimation of reserves. It consists of a counting process that drives claims occurrence, a distribution of the reporting delay, and a distribution of claims severity. Several suitable distributions are presented, together with methods for estimating their parameters from data. The theoretical apparatus is then applied to real data. The thesis also compares the IBNR reserve estimates obtained with the triangle-free approach and with the distribution-free chain-ladder method, both for real data and for simulated data sets. For the data used in this thesis, the added complexity and data requirements of the triangle-free approach are outweighed by its greater precision and versatility.
5

Volatilita škodních rezerv a bootstrap s aplikací na historická data s trendem ve vývoji škod / Claims reserve volatility and bootstrap with application to historical data with a trend in claims development

Malíková, Kateřina January 2019 (has links)
This thesis deals with the application of stochastic claims reserving methods to data exhibiting trends in claims development. It describes the chain-ladder method and generalized linear models as its stochastic framework. Simple functions are suggested for smoothing the origin- and development-period coefficients of the estimated model, and extrapolation is considered for estimating the unobserved tail values. A residual bootstrap of the reparameterized model is used to obtain the predictive distribution of the estimated reserve, together with its standard deviation as a measure of volatility. The solvency capital requirement over a one-year time horizon is also calculated.
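A residual bootstrap of the chain-ladder of the kind the abstract describes can be sketched as follows, resampling Pearson residuals under an over-dispersed Poisson assumption; the triangle, seed and simulation count are illustrative, not the thesis's:

```python
import numpy as np

# ODP-style residual bootstrap of the chain-ladder (England-Verrall flavour).
rng = np.random.default_rng(7)

C = np.array([                      # illustrative cumulative triangle, nan = future
    [100.0, 150.0, 165.0, 170.0],
    [110.0, 168.0, 185.0, np.nan],
    [120.0, 175.0, np.nan, np.nan],
    [130.0, np.nan, np.nan, np.nan],
])
n = C.shape[0]

def factors(C):
    f = np.empty(n - 1)
    for j in range(n - 1):
        obs = ~np.isnan(C[:, j + 1])
        f[j] = C[obs, j + 1].sum() / C[obs, j].sum()
    return f

def reserve(C):
    f = factors(C)
    return sum(C[i, n - 1 - i] * (np.prod(f[n - 1 - i:]) - 1) for i in range(1, n))

# Fitted triangle: keep each diagonal cell, back-fill by dividing out factors.
f = factors(C)
fit = np.full_like(C, np.nan)
for i in range(n):
    k = n - 1 - i
    fit[i, k] = C[i, k]
    for j in range(k - 1, -1, -1):
        fit[i, j] = fit[i, j + 1] / f[j]

def to_inc(A):                      # cumulative -> incremental
    X = A.copy()
    X[:, 1:] = A[:, 1:] - A[:, :-1]
    return X

X, M = to_inc(C), to_inc(fit)
obs = ~np.isnan(X)
r = ((X - M) / np.sqrt(M))[obs]     # Pearson residuals of the observed cells

sims = []
for _ in range(500):
    # Pseudo incrementals: fitted value plus a resampled, rescaled residual.
    Xb = np.where(obs, M + rng.choice(r, size=C.shape) * np.sqrt(M), np.nan)
    Cb = np.nancumsum(Xb, axis=1) + 0 * Xb   # back to cumulative, restoring nans
    sims.append(reserve(Cb))
sims = np.array(sims)
print(round(sims.mean(), 1), round(sims.std(), 1))
```

The empirical standard deviation of `sims` is the bootstrap analogue of the reserve volatility measure discussed in the abstract; the thesis additionally reparameterizes and smooths the model before bootstrapping, which this sketch omits.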
6

Three Essays in Finance and Actuarial Science

Luca, Regis 25 March 2011 (has links) (PDF)
This thesis consists of three chapters. The first part of my Ph.D. dissertation develops a Bayesian stochastic model for computing the reserves of a non-life insurance company. The first chapter is the product of my research experience as an intern at the Risk Management Department of Fondiaria-Sai S.p.A. I present a short review of the deterministic and stochastic claims reserving methods currently applied in practice and develop a (standard) Over-Dispersed Poisson (ODP) Bayesian model for the estimation of the Outstanding Loss Liabilities (OLLs) of a line of business (LoB). I present the model, illustrate the theoretical foundations of the MCMC (Markov Chain Monte Carlo) method and the Metropolis-Hastings algorithm used to generate the non-standard posterior distributions, and apply the model to the Motor Third Party Liability LoB of Fondiaria-Sai S.p.A. Moreover, I explore the problem of computing the prudential reserve level of a multi-line non-life insurance company. In the second chapter, I present a full Bayesian model for assessing the reserve requirement of multi-line non-life insurance companies. The model combines the Bayesian approach for the estimation of the marginal distributions of the single lines of business with a Bayesian copula procedure for their aggregation. First, I consider standard copula aggregation for different copula choices. Second, I present the Bayesian copula technique; to my knowledge, this approach is entirely new to stochastic claims reserving. The model allows one to "mix" a company's own assessment of the dependence between LoBs with market-wide estimates. I present an application to an Italian multi-line insurance company and compare the results obtained by aggregating with standard copulas and with a Bayesian Gaussian copula.
In the second part of my dissertation I propose a theoretical model that studies the optimal capital and organizational structure choices of financial groups which incorporate two or more business units. The group faces a VaR-type regulatory capital requirement. Financial conglomerates incorporate activities in different sectors either into a unique integrated entity, into legally separated divisions, or into ownership-linked holding company/subsidiary structures. I model these different arrangements in a structural framework through different coinsurance links between units, in the form of conditional guarantees issued by the equityholders of a firm towards the debtholders of a unit of the same group. I study the effects of the use of such guarantees on optimal capital structure and organizational form choices. I calibrate the model parameters to observed financial institutions' characteristics. I study how capital is optimally held, the costs and benefits of limiting undercapitalization in some units, and I address the issues of diversification at the holding level and regulatory capital arbitrage. The last part of my Ph.D. dissertation studies the hedging problem of life insurance policies when the mortality rate is stochastic. The field developed recently, adapting well-established techniques widely used in finance to describe the evolution of mortality rates. The chapter is joint work with my supervisor, prof. Elisa Luciano, and Elena Vigna. It studies the hedging problem of life insurance policies when the mortality and interest rates are stochastic. We focus primarily on stochastic mortality. We represent death arrival as the first jump time of a doubly stochastic process, i.e. a jump process with stochastic intensity, and we propose a Delta-Gamma hedging technique for mortality risk in this context.
The risk factor against which to hedge is the difference between the actual mortality intensity in the future and its "forecast" today, the instantaneous forward intensity. We specialize the hedging technique first to the case in which survival intensities are affine, then to Ornstein-Uhlenbeck and Feller processes, providing actuarial justifications for this restriction. We show that, without imposing no arbitrage, we can get equivalent probability measures under which the HJM condition for no arbitrage is satisfied. Last, we extend our results to the presence of both interest rate and mortality risk, when the forward interest rate follows a constant-parameter Hull and White process. We provide a UK calibrated example of Delta and Gamma Hedging of both mortality and interest rate risk.
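The Metropolis-Hastings machinery described in the first chapter can be illustrated in miniature: a random-walk sampler on the log scale for the posterior of a Poisson mean under a lognormal prior. Data, prior parameters and proposal scale are all assumed for illustration, not taken from the thesis:

```python
import math
import random

random.seed(1)

# Toy posterior: Poisson likelihood for observed counts, lognormal prior
# on the mean lam (a deliberately non-conjugate pairing, so MCMC is needed).
counts = [3, 5, 4, 6, 2]

def log_post(lam):
    if lam <= 0:
        return -math.inf
    log_lik = sum(k * math.log(lam) - lam for k in counts)   # up to a constant
    mu, sigma = math.log(4.0), 0.5                            # lognormal prior
    log_prior = -math.log(lam) - (math.log(lam) - mu) ** 2 / (2 * sigma ** 2)
    return log_lik + log_prior

lam, chain = 4.0, []
for _ in range(20000):
    prop = lam * math.exp(random.gauss(0.0, 0.2))   # log-scale random walk
    # Hastings correction for the multiplicative proposal: q-ratio = prop/lam.
    if math.log(random.random()) < log_post(prop) - log_post(lam) + math.log(prop / lam):
        lam = prop
    chain.append(lam)

burned = chain[5000:]                               # discard burn-in
print(round(sum(burned) / len(burned), 2))          # posterior mean estimate
```

The thesis runs the same idea at scale: the ODP model's posterior has no closed form, so reserve summaries are read off a long MCMC chain exactly as the posterior mean is here.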
7

Analysis of Pricing and Reserving Risks with Applications in Risk-Based Capital Regulation for Property/Casualty Insurance Companies

Kerdpholngarm, Chayanin 06 December 2007 (has links)
The subject of this dissertation is the relationship between pricing and reserving risks for property-casualty insurance companies. Since the risk characteristics of insurers differ according to their structure, objectives and incentives, segmenting insurers into subgroups allows for a better understanding of group-specific risks. Based on this approach to analyzing insurer financial risks, we find that, in a given accident year, pricing and reserving errors are positively correlated, especially in long-tailed lines of business. Large insurers, stock insurers, and multi-state insurers in general exhibit a strong correlation between accident-year price and reserve errors. However, only insurer size appears to influence the interaction between price changes and calendar-year loss reserve adjustments. Furthermore, we find that pricing risk and reserving risk are marginally more homogeneous within a market segment when size, type and number of states are employed as criteria for market segmentation; hence insurance regulators should consider these refined market segments for the RBC formula. The empirical results also indicate that, in general, the Chain-Ladder reserving method contributes to loss reserve errors when there is a change in the loss development pattern, and that the magnitude of the errors is worse for large insurers. Finally, we find that our proposed measurement method for the product diversification benefit supports the notion that the diversification benefit on incurred losses increases with the number of lines in the portfolio, though diminishing returns tend to reduce it for insurers that write business in more than six of the selected lines.
By contrast, our proposed measure does not provide clear evidence that writing business in many product lines increases the product diversification benefit with respect to adverse loss development. We do find that the diversification benefit for both incurred losses and loss development is higher for larger insurers. Hence, for risk management and regulatory purposes, a stronger case can be made for considering firm size than product diversification.
8

Stochastic claims reserving in non-life insurance : Bootstrap and smoothing models

Björkwall, Susanna January 2011 (has links)
In practice there is a long tradition of actuaries calculating reserve estimates according to deterministic methods, without explicit reference to a stochastic model. The chain-ladder, for instance, was originally a deterministic reserving method. Moreover, actuaries often make ad hoc adjustments to the methods, for example smoothing the chain-ladder development factors, in order to fit the data set under analysis. However, stochastic models are needed in order to assess the variability of the claims reserve. The standard statistical approach would be to first specify a model, then find an estimate of the outstanding claims under that model, typically by maximum likelihood, and finally use the model to assess the precision of the estimate. As a compromise between this approach and the actuary's way of working without reference to a model, the object of this research area has often been to first construct a model and a method that reproduce the actuary's estimate, and then use that model to assess the uncertainty of the estimate. A drawback of this approach is that the suggested models have been constructed to give a measure of the precision of the reserve estimate without the possibility of changing the estimate itself. The starting point of this thesis is the inconsistency between the deterministic approaches used in practice and the stochastic ones suggested in the literature. On the one hand, the purpose of Paper I is to develop a bootstrap technique which easily enables the actuary to use development factor methods other than the pure chain-ladder, while relying on as few model assumptions as possible. This bootstrap technique is then extended and applied to the separation method in Paper II. On the other hand, the purpose of Paper III is to create a stochastic framework which imitates the ad hoc deterministic smoothing of chain-ladder development factors that is frequently used in practice.
9

Useknutá data a stochastické rezervování škod / Truncated data and stochastic claims reserving

Marko, Dominik January 2018 (has links)
In this thesis, stochastic claims reserving under the model of randomly truncated data is presented. Claims are modelled by a compound Poisson process. Introducing a random variable representing the delay between the occurrence and the reporting of a claim, a probability model of IBNR claims is built. The fact that some claims are incurred but not yet reported leads to truncated data. Basic results of non-parametric statistical estimation under the model of randomly truncated data are shown, which can be used to obtain an estimate of the IBNR claims reserve. This theoretical background is then applied to real data from the Czech Insurers' Bureau.
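In such a model the expected number of IBNR claims has a closed form when the occurrence process is homogeneous Poisson: E[IBNR] = λ ∫₀^τ S(u) du, where S is the survival function of the reporting delay and τ the valuation time. A small numeric sketch, with an assumed rate and an assumed exponential delay distribution:

```python
import math

# Claims occur at rate lam per day on [0, tau]; a claim occurred at time t
# is still unreported at tau with probability S(tau - t). Substituting
# u = tau - t gives E[IBNR] = lam * integral_0^tau S(u) du.
lam, mean_delay, tau = 2.0, 30.0, 365.0   # illustrative values

# For an exponential delay, integral_0^tau e^(-u/mean) du = mean * (1 - e^(-tau/mean)).
expected_ibnr = lam * mean_delay * (1 - math.exp(-tau / mean_delay))
print(round(expected_ibnr, 2))
```

With τ much larger than the mean delay, the integral saturates and the expectation approaches λ times the mean delay, which is the steady-state stock of unreported claims.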
10

Micro-Level Loss Reserving in Economic Disability Insurance / Reservsättning för ekonomisk invaliditet på mikronivå

Borgman, Robin, Hellström, Axel January 2018 (has links)
In this thesis we construct a micro-level reserving model for an economic disability insurance portfolio, based on the mathematical framework developed by Norberg (1993). The data considered is provided by Trygg-Hansa. The micro model tracks the development of each individual claim throughout its lifetime. The model setup is straightforward and in line with the insurance contract for economic disability, with levels of disability categorized as 50%, 75% and 100%. Model parameters are estimated from the reported claim development data, up to the valuation time Τ. Using the estimated model parameters, the development of RBNS and IBNR claims is simulated. The results of the simulations are presented at several levels and compared with Mack Chain-Ladder estimates. The distributions of end states and times to settlement from the simulations follow patterns that are representative of the reported data. The estimated ultimate of the micro model is considerably lower than the Mack Chain-Ladder estimate. The difference can partly be explained by a lower claim occurrence intensity for recent accident years, which is a consequence of the decreasing number of reported claims in the data. Furthermore, the standard error of the micro model is lower than the standard error produced by Mack Chain-Ladder. However, no conclusion regarding the accuracy of the two reserving models can be drawn. Finally, it is concluded that the opportunities of micro modelling are promising, though tempered by some concerns regarding data and parameter estimation.
