1

Statistical disclosure control for frequency tables

Antal, Laszlo January 2016
Disclosure risk assessment of statistical data, such as frequency tables, is a prerequisite for data dissemination. This thesis investigates the problem of disclosure risk assessment of frequency tables from the perspective of a statistical institute. In the research reported here, disclosure risk is measured by a mathematical function designed for the data according to a disclosure risk scenario. Such functions are called disclosure risk measures. A disclosure risk measure is defined for frequency tables based on the entire population using information theory. If the disclosure risk of a population-based frequency table is high, a statistical institute will apply a statistical disclosure control (SDC) method, possibly perturbing the table. It is known that the application of any SDC method lowers the disclosure risk. However, measuring the disclosure risk of the perturbed frequency table is a difficult problem. The disclosure risk measure proposed in the first paper of the thesis is also extended to assess the disclosure risk of perturbed frequency tables. SDC methods can be applied either to the microdata from which the frequency table is generated or directly to the frequency table. The two classes of methods are called pre- and post-tabular methods accordingly. It is shown that the two classes are closely related and that the proposed disclosure risk measure can account for both methods. In the second paper, the disclosure risk measure is extended to assess the disclosure risk of sample-based frequency tables. Probabilistic models are used to estimate the population frequencies from sample frequencies, which can then be used in the proposed disclosure risk measures. In the final paper of the thesis, we investigate an application to building a flexible table generator where disclosure risk and data utility measures must be calculated on the fly. We show that the proposed disclosure risk measure and a related information loss measure are adaptable to these settings. An example implementation of the disclosure risk and data utility assessment using the proposed disclosure risk measure is given.
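The exact functional form of the entropy-based measure is not reproduced in this abstract, so the following snippet is only an illustrative sketch of an information-theoretic risk summary for a population frequency table: it reports the Shannon entropy of the cell distribution together with the share of small cells, the classic source of disclosure risk. All names and thresholds are assumptions of the example, not the thesis's actual formula.

```python
import numpy as np

def risk_summary(counts):
    """Toy disclosure-risk summary for a population frequency table.

    Small cells (counts of 1 or 2) are the classic disclosure problem; the
    entropy term is included only to echo the information-theoretic flavour
    of the measure described above. This is not the thesis's formula.
    """
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    p = counts[counts > 0] / n
    entropy = -np.sum(p * np.log(p))       # Shannon entropy of the cell distribution
    small_cells = np.mean(counts <= 2)     # share of cells with counts of 1 or 2
    return {"entropy": entropy, "share_of_small_cells": small_cells}

print(risk_summary([120, 85, 43, 2, 1, 1]))
```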
2

Modelování finančních rizik pomocí kopul / Financial risks with copulas

Prelecová, Natália January 2014
The aim of this thesis is a thorough description of the copula theory. It deals with the theory's basic definitions, classes and characteristics. In addition, relations between copulas and dependence measures are explained. Furthermore, we evaluate the possibilities for estimating copula parameters and for selecting the right copula for real data. The copula theory is then connected with basic risk measures in finance. We describe the elementary categorization of financial risks and standard risk measurement approaches. We also define basic risk measures, with an emphasis on value at risk. Lastly, we present a real-data case study of a selected portfolio.
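As a rough illustration of how a fitted copula feeds into a risk measure, the sketch below samples two losses linked by a Gaussian copula with Student-t margins and reads off an empirical value at risk; the copula family, the margins and every parameter are assumptions of the example, not choices taken from the thesis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Assumed setup: two asset losses with Student-t margins linked by a Gaussian copula.
rho, dof, n_sim = 0.6, 5.0, 100_000
corr = np.array([[1.0, rho], [rho, 1.0]])

# Sample from the Gaussian copula: correlated normals -> uniforms via the normal CDF.
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=corr, size=n_sim)
u = stats.norm.cdf(z)

# Map uniforms through the (assumed) marginal loss distributions.
losses = stats.t.ppf(u, df=dof)                  # shape (n_sim, 2)
portfolio_loss = losses @ np.array([0.5, 0.5])   # equally weighted portfolio

var_99 = np.quantile(portfolio_loss, 0.99)       # 99% value at risk of the loss distribution
print(f"99% VaR: {var_99:.3f}")
```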
3

Optimální portfolia / Optimal portfolios

Vacek, Lukáš January 2018
In this diploma thesis, selected techniques for the construction of optimal portfolios are presented. Risk measures and other criteria (the Markowitz approach, value at risk, conditional value at risk, mean absolute deviation, spectral risk measures and the Kelly criterion) are defined in the first part. We derive analytical solutions for some of the optimization problems; in other cases only a numerical solution exists. Advantages and disadvantages, theoretical properties and practical aspects of the software implementation in Wolfram Mathematica are also discussed. Simulation methods suitable for portfolio optimization are briefly presented, with their motivation, in the second part. Multivariate distributions (normal, t and skewed t) are presented in the third part in connection with portfolio optimization under the assumption of a multivariate distribution of financial losses. The optimization methods are illustrated on real data in the fourth part of the thesis, where analytical methods are compared with numerical ones.
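To make the first-part criteria concrete, here is a minimal sketch of a closed-form minimum-variance portfolio together with the empirical VaR and conditional VaR of the resulting loss; the thesis's own implementation is in Wolfram Mathematica, so this Python version and its simulated data are only illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: simulated daily losses for three assets (a stand-in for real return data).
losses = rng.multivariate_normal(
    mean=[-0.0004, -0.0002, -0.0003],
    cov=[[4e-4, 1e-4, 5e-5], [1e-4, 9e-4, 2e-4], [5e-5, 2e-4, 2.5e-4]],
    size=5000,
)

# Closed-form minimum-variance weights: w proportional to inverse(Sigma) * 1 (no short-sale constraint).
sigma = np.cov(losses, rowvar=False)
ones = np.ones(sigma.shape[0])
w = np.linalg.solve(sigma, ones)
w /= w.sum()

# Empirical VaR and CVaR (expected shortfall) of the portfolio loss at the 95% level.
pl = losses @ w
var_95 = np.quantile(pl, 0.95)
cvar_95 = pl[pl >= var_95].mean()
print("weights:", np.round(w, 3), "VaR95:", round(var_95, 5), "CVaR95:", round(cvar_95, 5))
```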
4

Hurricane Loss Modeling and Extreme Quantile Estimation

Yang, Fan 26 January 2012
This thesis reviewed various heavy-tailed distributions and Extreme Value Theory (EVT) to estimate the catastrophic losses simulated from the Florida Public Hurricane Loss Projection Model (FPHLPM). We compared risk measures such as Probable Maximum Loss (PML) and Tail Value at Risk (TVaR) of the selected distributions with empirical estimates to capture the characteristics of the loss data as well as its tail distribution. The Generalized Pareto Distribution (GPD) is the main focus for modeling the tail losses in this application. We found that the hurricane loss data generated from the FPHLPM were consistent with historical losses and were not as heavy-tailed as expected. The tail of the stochastic annual maximum losses can be explained by an exponential distribution. This thesis also touched on the philosophical implications of small-probability, high-impact events such as Black Swans and discussed the limitations of quantifying catastrophic losses for future inference using statistical methods.
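A minimal peaks-over-threshold sketch in the spirit of the approach described above, fitting a GPD to exceedances and reading off a tail VaR and TVaR; the loss sample, threshold choice and parameters are placeholders, since the FPHLPM output itself is not available here.

```python
import numpy as np
from scipy import stats

# Stand-in loss sample (the actual FPHLPM output is not reproduced here).
losses = stats.lognorm(s=1.2, scale=1e6).rvs(size=20_000, random_state=2)

# Peaks-over-threshold: fit a GPD to exceedances above a high threshold.
u = np.quantile(losses, 0.95)
exceed = losses[losses > u] - u
xi, loc, beta = stats.genpareto.fit(exceed, floc=0.0)  # location fixed at 0 for exceedances

# Tail VaR at level q from the POT representation (assumes xi != 0),
# then TVaR by averaging the simulated losses beyond that quantile.
q = 0.995
p_u = np.mean(losses > u)                      # probability of exceeding the threshold
var_q = u + beta / xi * (((1 - q) / p_u) ** (-xi) - 1)
tvar_q = losses[losses > var_q].mean()
print(f"threshold={u:,.0f}  VaR99.5={var_q:,.0f}  TVaR99.5={tvar_q:,.0f}")
```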
5

Coherent Beta Risk Measures for Capital Requirements

Wirch, Julia Lynn January 1999
This thesis compares insurance premium principles with current financial risk paradigms and uses distorted probabilities, a recent development in premium principle literature, to synthesize the current models for financial risk measures in banking and insurance. This work attempts to broaden the definition of value-at-risk beyond the percentile measures. Examples are used to show how the percentile measure fails to give consistent results, and how it can be manipulated. A new class of consistent risk measures is investigated.
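A minimal sketch of a distortion risk measure with a beta distortion, the construction the title refers to: a concave beta CDF is applied to the empirical survival function and the distorted expectation is returned. The parameter values and the loss sample are assumptions of the example, not values from the thesis.

```python
import numpy as np
from scipy import stats

def beta_distortion_risk(losses, a=0.1, b=2.0):
    """Distortion risk measure with beta distortion g(u) = BetaCDF(u; a, b).

    With a <= 1 <= b the distortion is concave, which is what makes the
    resulting measure coherent. Parameters here are illustrative only;
    losses are assumed non-negative.
    """
    x = np.sort(np.asarray(losses, dtype=float))
    n = len(x)
    surv = 1.0 - np.arange(1, n + 1) / n           # empirical survival just beyond each order statistic
    g = stats.beta.cdf(np.concatenate(([1.0], surv)), a, b)
    weights = g[:-1] - g[1:]                        # distorted probability mass on each order statistic
    return float(np.sum(weights * x))

sample = stats.pareto(b=3, scale=10).rvs(size=10_000, random_state=0)
print("mean:", sample.mean().round(2), "beta-distorted risk:", round(beta_distortion_risk(sample), 2))
```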
6

Solvabilité 2 : une réelle avancée ? / Solvency 2 : an improvement ?

Derien, Anthony 30 September 2010
Les futures normes de solvabilité pour l'industrie de l'assurance, Solvabilité 2, ont pour buts d'améliorer la gestion des risques au travers de l'identification de différentes classes et modules de risque, et en autorisant les compagnies à utiliser des modèles internes pour estimer leur capital réglementaire. La formule standard définit ce capital comme étant égal à une VaR à 99.5% sur un horizon d'un an pour chaque module de risque. Puis, à chaque niveau de consolidation intermédiaire, les différentes VaR sont agrégées au travers d'une matrice de corrélation. Plusieurs problèmes apparaissent avec cette méthode : – Le régulateur utilise le terme de “VaR” sans communiquer de distributions marginales ni globale. Cette mesure de risque multi-variée n'est pertinente que si chaque risque suit une distribution normale. – L'horizon temporel à un an ne correspond pas à celui des engagements d'une compagnie d'assurance, et pose des problèmes dès lors qu'il faut déterminer la fréquence de mises à jour des modèles internes. – La structure de dépendance proposée par la formule standard ne correspond pas à celle habituellement mise en place par les compagnies et est difficilement utilisable dans un modèle interne. La première partie présentera en détail les points clés de la réforme et donnera des axes de réflexion sur son application dans la gestion des risques. Dans une deuxième partie, il sera montré que cette mesure de risque multi-variée ne satisfait pas aux principaux axiomes d'une mesure de risque. De plus, elle ne permet pas de comparer les exigences de capital entre compagnies, puisqu'elle n'est pas universelle. La troisième partie démontrera que pour évaluer un capital à un point intermédiaire avant l'échéance, une mesure de risque doit pouvoir s'ajuster à différentes périodes, et donc être multi-périodique. Enfin, la quatrième partie mettra l'accent sur une alternative à la matrice de corrélation pour modéliser la dépendance, à savoir les copules. / The new rules of solvency for the insurance industry, Solvency II, aim to improve risk management in the insurance industry by identifying different classes / modules of risk, and by allowing insurance companies to use an internal model to estimate their capital. The standard formula sets the capital requirement at a 99.5% VaR over a one-year horizon for each sub-risk module. Then, at each consolidation level, the different VaRs are aggregated through a correlation matrix. Some problems appear with this method: – The regulator uses the term “VaR” while providing neither the marginal distributions nor the joint one. This multivariate risk measure is relevant only if each risk follows a normal distribution. – The short one-year horizon does not match the time horizon of the liabilities of an insurance company, and leads to problems in updating the capital requirement during the year. – The dependence structure given in the standard formula does not correspond to the ones used in practice, and cannot be used in an internal model. The first part will present a detailed discussion of the reform and give some examples of its application from a risk management point of view. In the second part, it will be established that this multivariate risk measure does not satisfy the main axioms that a risk measure should fulfil. Moreover, since it is not universal, it does not allow capital requirements to be compared across insurance companies. The third part will demonstrate that, to evaluate capital at an intermediate point before maturity, a risk measure must be able to adjust to different periods, that is, it must be multi-period. Finally, the fourth part will focus on an alternative to the correlation matrix for aggregating risks, namely copulas.
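The contrast between the standard-formula aggregation and a copula-based alternative can be sketched as follows; the module charges, correlation entries and marginal distributions are purely illustrative and are not the actual Solvency II calibration.

```python
import numpy as np
from scipy import stats

# Illustrative stand-alone 99.5% capital charges for three risk modules (not the real calibration).
scr_modules = np.array([100.0, 60.0, 40.0])
corr = np.array([[1.0, 0.25, 0.25],
                 [0.25, 1.0, 0.5],
                 [0.25, 0.5, 1.0]])

# Standard-formula style aggregation: SCR = sqrt(v' C v).
scr_standard = float(np.sqrt(scr_modules @ corr @ scr_modules))

# Copula-style aggregation: simulate joint losses with a Gaussian copula and
# lognormal margins scaled so their 99.5% quantiles match the module charges.
rng = np.random.default_rng(3)
z = rng.multivariate_normal(np.zeros(3), corr, size=200_000)
u = stats.norm.cdf(z)
margins = stats.lognorm(s=1.0)
scale = scr_modules / margins.ppf(0.995)        # match each module's stand-alone 99.5% quantile
losses = scale * margins.ppf(u)
scr_copula = float(np.quantile(losses.sum(axis=1), 0.995))

# With non-normal margins the two numbers differ, which is the point made above.
print(f"standard formula: {scr_standard:.1f}   copula simulation: {scr_copula:.1f}")
```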
7

Inférence statistique des modèles conditionnellement hétéroscédastiques avec innovations stables, contraste non gaussien et volatilité mal spécifiée / Statistical inference of conditionally heteroskedastic models with stable innovations, non-Gaussian contrast and misspecified volatility

Lepage, Guillaume 13 December 2012
Dans cette thèse, nous nous intéressons à l'estimation de modèles conditionnellement hétéroscédastiques (CH) sous différentes hypothèses. Dans une première partie, en modifiant l'hypothèse d'identification usuelle du modèle, nous définissons un estimateur de quasi-maximum de vraisemblance (QMV) non gaussien et nous montrons que, sous certaines conditions, cet estimateur est plus efficace que l'estimateur du quasi maximum de vraisemblance gaussien. Nous étudions dans une deuxième partie l'inférence d'un modèle CH dans le cas où le processus des innovations est distribué selon une loi alpha stable. Nous établissons la consistance et la normalité asymptotique de l'estimateur du maximum de vraisemblance. La loi alpha stable n'apparaissant que comme loi limite, nous étudions ensuite le comportement de ce même estimateur dans le cas où la loi du processus des innovations n'est plus une loi alpha stable mais est dans le domaine d'attraction d'une telle loi. Dans la dernière partie, nous étudions l'estimation d'un modèle GARCH lorsque le processus générateur de données est un modèle CH dont les coefficients sont sujets à des changements de régimes markoviens. Nous montrons que cet estimateur, dans un cadre mal spécifié, converge vers une pseudo vraie valeur et nous établissons sa loi asymptotique. Nous étudions cet estimateur lorsque le processus observé est stationnaire mais nous détaillons également ses propriétés asymptotiques lorsque ce processus est non stationnaire et explosif. Par des simulations, nous étudions les capacités prédictives du modèle GARCH mal spécifié. Nous déterminons ainsi la robustesse de ce modèle et de l'estimateur du QMV à une erreur de spécification de la volatilité. / In this thesis, we focus on the inference of conditionally heteroskedastic models under different assumptions. This thesis consists of three parts and an introductory chapter. In the first part, we use an alternative identification assumption for the model and define a non-Gaussian quasi-maximum likelihood estimator. We show that, under certain conditions, this estimator is more efficient than the Gaussian quasi-maximum likelihood estimator. In the second part, we study the inference of a conditionally heteroskedastic model when the innovation process is distributed as an alpha-stable law. We establish the consistency and the asymptotic normality of the maximum likelihood estimator. Since alpha-stable laws appear in general as a limit, we then focus on the behavior of this same estimator when the law of the innovation process is not stable but lies in the domain of attraction of a stable law. In the last part of this thesis, we study the estimation of a GARCH model when the data-generating process is a conditionally heteroskedastic model whose coefficients are subject to Markov switching regimes. We show that, in a misspecified framework, this estimator converges toward a pseudo-true value and we establish its asymptotic law; its asymptotic properties are also detailed when the observed process is nonstationary and explosive. Through simulations, we investigate the predictive ability of the misspecified GARCH model and thereby assess the robustness of the model and of the quasi-maximum likelihood estimator to misspecification of the volatility.
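A minimal sketch of Gaussian quasi-maximum likelihood estimation of a GARCH(1,1) on simulated data, to make the estimation setting concrete; the non-Gaussian contrast, the alpha-stable innovations and the Markov-switching data-generating process studied in the thesis are not reproduced, and all parameter values are assumptions of the example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

def simulate_garch(omega, alpha, beta, n=3000):
    """Simulate a GARCH(1,1) with (here) Gaussian innovations."""
    eps, sig2 = np.empty(n), np.empty(n)
    sig2[0] = omega / (1 - alpha - beta)
    eps[0] = np.sqrt(sig2[0]) * rng.standard_normal()
    for t in range(1, n):
        sig2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sig2[t - 1]
        eps[t] = np.sqrt(sig2[t]) * rng.standard_normal()
    return eps

def neg_gaussian_qll(params, eps):
    """Negative Gaussian quasi-log-likelihood of a GARCH(1,1)."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf
    sig2 = np.empty_like(eps)
    sig2[0] = eps.var()
    for t in range(1, len(eps)):
        sig2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sig2[t - 1]
    return 0.5 * np.sum(np.log(sig2) + eps ** 2 / sig2)

eps = simulate_garch(0.1, 0.1, 0.85)
fit = minimize(neg_gaussian_qll, x0=[0.05, 0.05, 0.8], args=(eps,), method="Nelder-Mead")
print("QML estimates (omega, alpha, beta):", np.round(fit.x, 3))
```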
9

A comparative analysis of generic models to an individualised approach in portfolio selection

Van Niekerk, Melissa January 2021
The portfolio selection problem has been widely understood and practised for millennia, but it was first formalised by Markowitz (1952) with the proposition of a risk-reward trade-off model. Since then, portfolio selection models have continued to evolve. The general consensus is that three objectives should be considered: to maximise the uncertain Rate Of Return (ROR), to maximise liquidity and to minimise risk. It was found that there are opportunities for improvement within the existing portfolio selection models. This can be attributed to three gaps within the existing models: they are generally generic, especially in how they incorporate risk; they generally do not incorporate Socially Responsible Investing (SRI); and they are generally considered to be unvalidated. This dissertation set out to address these gaps and compare the real-world performance of generic and individualised portfolio selection models. A new method of accounting for risk was developed that consolidates the portfolio's market risk with the investor's financial risk tolerance. Two portfolio selection models that incorporate individualised risk and SRI objectives were developed, called the risk-adjusted and social models respectively. These individualised models were compared to an existing generic Markowitz model. The models were formulated using stochastic goal programming. A sample of 208 JSE Limited companies was selected and two independent datasets were extracted for these companies: a training dataset (2010/01/01 - 2016/12/31) and a testing dataset (2017/01/01 - 2019/12/31). The models were solved in LINGO using the training dataset and tested on an unknown future using the testing dataset. It was found that in the training period, the individualised risk-adjusted model outperformed the generic Markowitz model and the individualised social model. Furthermore, it was found that it would not be beneficial for an investor to be Socially Responsible (SR). Nevertheless, investors invest to achieve their ROR and SRI goals in the future, not in the present. Thus, it was necessary to evaluate how the portfolios selected by all three models would have performed in an unknown future. In the testing period, both the generic Markowitz model and the risk-adjusted model had dismal performance and were significantly outperformed by the South African market and unit trusts. Thus, these models are not useful or suitable for their intended purpose. On the contrary, the social model portfolios achieved high ROR values, were SR, and outperformed the market and the unit trusts. Thus, this model was useful and suitable for its intended purpose. The individualised social model significantly outperformed the other two models. Thus, it was concluded that an individualised approach that incorporates SRI outperforms a generic portfolio selection approach. Given its unparalleled performance and novel model formulation, the social model makes a contribution to the field of portfolio selection. This dissertation also highlighted the importance of testing portfolio selection models on an unknown future and demonstrated the potentially horrific consequences of neglecting this analysis. / Dissertation (MEng (Industrial Engineering))--University of Pretoria 2021. / Industrial and Systems Engineering / MEng (Industrial Engineering) / Unrestricted
10

Value at Risk Models for a Nonlinear Hedged Portfolio

Liu, Guochun 30 April 2004
This thesis addresses some practical issues similar to those a risk manager would face. To protect a portfolio against unexpected sharp drops, risk managers might use options to hedge the portfolio. Since the price of an option is not a linear function of the price of the underlying security or index, the value of an option-hedged portfolio is not a linear combination of the market prices of the underlying securities. Three Value-at-Risk (VaR) models, a traditional estimate-based Monte Carlo model, a GARCH-based Monte Carlo model, and a resampling model, are developed to estimate the risk of non-linear portfolios. The results from the models under different levels of hedging are useful for evaluating and comparing these strategies, and may therefore assist risk managers in making practical decisions in risk management.
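A minimal sketch of Monte Carlo VaR with full repricing for a put-hedged stock position, illustrating why a linear (delta-only) approximation is not enough; the Black-Scholes repricing, the portfolio composition and all parameters are assumptions of the example, and the GARCH-based and resampling variants from the thesis are not shown.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

def bs_put(s, k, r, sigma, tau):
    """Black-Scholes price of a European put."""
    d1 = (np.log(s / k) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return k * np.exp(-r * tau) * norm.cdf(-d2) - s * norm.cdf(-d1)

# Illustrative setup: long 100 shares plus one protective put on 100 shares.
s0, k, r, sigma, t_mat = 100.0, 95.0, 0.03, 0.25, 0.5
horizon = 10 / 252                                    # 10-day VaR horizon
v0 = 100 * s0 + 100 * bs_put(s0, k, r, sigma, t_mat)

# Simulate the underlying over the horizon and fully reprice the put (no delta approximation).
z = rng.standard_normal(100_000)
s_h = s0 * np.exp((r - 0.5 * sigma**2) * horizon + sigma * np.sqrt(horizon) * z)
v_h = 100 * s_h + 100 * bs_put(s_h, k, r, sigma, t_mat - horizon)

loss = v0 - v_h
print(f"10-day 99% VaR of the hedged position: {np.quantile(loss, 0.99):.2f}")
```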
