11.
Adaptive risk management. Chen, Ying, 13 February 2007.
Over recent years, the study of risk management has been prompted by the Basel Committee's requirements for regular banking supervision. Many multivariate risk management methods, however, suffer from three limitations: 1) covariance estimation relies on a time-invariant form, 2) the models rest on unrealistic distributional assumptions, and 3) numerical problems appear when they are applied to high-dimensional portfolios. The primary aim of this dissertation is to propose adaptive methods that overcome these limitations and measure the risk exposures of multivariate portfolios accurately and quickly. The basic idea is to first retrieve stochastically independent components (ICs) from a high-dimensional time series and then identify the distributional behavior of each resulting IC in univariate space. More specifically, two local parametric approaches, the local moving window average (MWA) method and the local exponential smoothing (ES) method, are used to estimate the volatility process of each IC under a heavy-tailed distributional assumption, namely that the ICs are generalized hyperbolic (GH) distributed. This speeds up the computation of risk measures and achieves much better accuracy than many popular risk management methods.
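As a rough illustration of the pipeline this abstract describes (ICA first, then a local variance estimate per component), here is a minimal Python sketch. The toy data, the RiskMetrics-style decay factor 0.94, and the empirical resampling of the devolatilized components (in place of the generalized hyperbolic fit used in the thesis) are all illustrative assumptions, not the thesis's calibrated choices.

```python
# Minimal sketch: ICA + local exponential smoothing of IC variances + VaR.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Toy data: 3 "assets" driven by mixed heavy-tailed independent sources.
T, d = 2000, 3
sources = rng.standard_t(df=4, size=(T, d))
returns = sources @ rng.normal(size=(d, d)).T

# Step 1: retrieve stochastically independent components (ICs).
ica = FastICA(n_components=d, random_state=0)
ics = ica.fit_transform(returns)                      # T x d

# Step 2: local exponential smoothing (ES) of each IC's variance.
lam = 0.94                                            # RiskMetrics-style decay
var = np.empty_like(ics)
var[0] = ics.var(axis=0)
for t in range(1, T):
    var[t] = lam * var[t - 1] + (1 - lam) * ics[t - 1] ** 2

# Step 3: 1% VaR of an equally weighted portfolio. The thesis fits a GH law
# to the devolatilized ICs; here they are resampled empirically to keep the
# sketch self-contained.
z = ics / np.sqrt(var)                                # devolatilized ICs
idx = rng.integers(0, T, size=(10_000, d))
sims = z[idx, np.arange(d)] * np.sqrt(var[-1])        # independent resampling
port = (sims @ ica.mixing_.T).mean(axis=1)            # map ICs back to assets
print("1-day 1% VaR:", -np.quantile(port, 0.01))
```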
12.
Intervalos de confiança para altos quantis oriundos de distribuições de caudas pesadas / Confidence intervals for high quantiles from heavy-tailed distributions. Montoril, Michel Helcias, 10 March 2009.
In this work, confidence intervals for high quantiles from heavy-tailed distributions are computed using four methods: the normal approximation method, the likelihood ratio method, the data tilting method, and the generalised gamma method. A simulation study with data generated from the Weibull distribution shows that the generalised gamma method attains coverage probabilities closest to the nominal confidence level, with the smallest average interval lengths of the four methods. For data generated from the Fréchet distribution, however, the likelihood ratio method gives the best intervals. Finally, the methods are applied to a real data set of 1758 fire insurance claim payments, in Brazilian reais, made by a group of insurers in Brazil in 2003.
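Of the four methods compared, the normal-approximation interval is the easiest to sketch: the sample p-quantile is asymptotically normal with variance p(1-p)/(n f(x_p)^2), where f is the underlying density. The snippet below is a minimal illustration under assumed choices (Weibull shape 0.8, a kernel estimate of f, 95% level); it is not the thesis's implementation.

```python
# Normal-approximation confidence interval for a high quantile (sketch).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p, conf = 500, 0.99, 0.95
x = rng.weibull(0.8, size=n)                 # heavy-tailed-ish Weibull sample

q_hat = np.quantile(x, p)                    # empirical high quantile
f_hat = stats.gaussian_kde(x)(q_hat)[0]      # density estimate at the quantile
se = np.sqrt(p * (1 - p) / n) / f_hat        # asymptotic standard error
z = stats.norm.ppf(0.5 + conf / 2)
print(f"{conf:.0%} CI for the {p:.0%} quantile: "
      f"({q_hat - z * se:.3f}, {q_hat + z * se:.3f})")
print("true quantile:", stats.weibull_min.ppf(p, 0.8))
```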
13.
Statistické odhady a chvosty jejich rozdělení pravděpodobností / Statistical estimators and their tail behavior. Veverková, Jana, January 2012.
This master's thesis, Statistical estimators and their tail behavior, describes two characteristics of estimator robustness: tail behavior and breakdown point. The description covers translation-equivariant estimators in general as well as several concrete estimators: the sample mean, the sample median, the trimmed mean, the Huber estimator, and the Hodges-Lehmann estimator. The tail behavior of these estimators is illustrated for random samples drawn from the t-distribution with 1 to 5 degrees of freedom, based on simulations carried out in Mathematica.
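A small simulation in the same spirit (the thesis used Mathematica; Python here) compares the tail heaviness of three of the five estimators, the sample mean, sample median, and 10% trimmed mean, on t-distributed samples. Sample size, replication count, and trimming fraction are arbitrary choices.

```python
# Tail heaviness of estimators' sampling distributions under t samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, reps = 20, 20000

for df in range(1, 6):
    samples = rng.standard_t(df, size=(reps, n))
    estimates = {
        "mean": samples.mean(axis=1),
        "median": np.median(samples, axis=1),
        "10% trimmed": stats.trim_mean(samples, 0.1, axis=1),
    }
    # The 99.9% quantile of |estimate| gives a crude picture of how heavy
    # the tail of each estimator's sampling distribution is.
    row = "  ".join(f"{k}: {np.quantile(np.abs(v), 0.999):8.2f}"
                    for k, v in estimates.items())
    print(f"df={df}:  {row}")
```

For df=1 (the Cauchy case) the mean's extreme quantile blows up while the median's and trimmed mean's stay moderate, which is exactly the contrast in tail behavior the thesis describes.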
14.
Stochastic volatility: maximum likelihood estimation and specification testing. White, Scott Ian, January 2006.
Stochastic volatility (SV) models provide a means of tracking and forecasting the variance of financial asset returns. While SV models have a number of theoretical advantages over competing variance modelling procedures, they are notoriously difficult to estimate. The distinguishing feature of the SV estimation literature is that those algorithms that provide accurate parameter estimates are conceptually demanding and require significant computational resources to implement. Furthermore, although a significant number of distinct SV specifications exist, little attention has been paid to how one should choose the appropriate specification for a given data series. Motivated by these facts, a likelihood-based joint estimation and specification testing procedure for SV models is introduced that significantly reduces the operational issues surrounding existing estimators. The estimation and specification testing procedures in this thesis are made possible by the introduction of a discrete nonlinear filtering (DNF) algorithm. This procedure uses the nonlinear filtering equations to provide maximum likelihood estimates for the general class of nonlinear latent variable problems, which includes the SV model class. The DNF algorithm provides a fast and accurate implementation of the nonlinear filtering equations by treating the continuously valued state variable as if it were a discrete Markov variable with a large number of states. When the DNF procedure is applied to the standard SV model, very accurate parameter estimates are obtained. Since its accuracy is comparable to that of other procedures, the advantages of the DNF are ease and speed of implementation and the provision of online filtering (prediction) of variance. Additionally, the DNF procedure is very flexible and can be used for any dynamic latent variable problem with closed-form likelihood and transition functions. Likelihood-based specification testing for non-nested SV specifications is undertaken by formulating and estimating an encompassing model that nests two competing SV models. Likelihood ratio statistics are then used to make judgements regarding the optimal SV specification. The proposed framework is applied to SV models that incorporate either extreme returns or asymmetries.
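A minimal sketch of the grid idea behind the DNF, applied to the standard SV model y_t = exp(h_t/2) eps_t with AR(1) log-volatility h_t: the continuous state is discretized onto a fine grid so the filtering recursions become matrix-vector operations, and the likelihood falls out of the normalizing constants. Grid size and parameter values below are illustrative assumptions, not the thesis's settings.

```python
# Grid-based (discrete) nonlinear filter likelihood for the standard SV model.
import numpy as np
from scipy import stats

def dnf_loglik(y, mu, phi, sigma_eta, n_states=100):
    """Log-likelihood of y under h_t = mu + phi*(h_{t-1}-mu) + sigma_eta*eta_t,
    y_t = exp(h_t/2)*eps_t, via a grid approximation of the filter."""
    sd_h = sigma_eta / np.sqrt(1 - phi**2)          # stationary sd of h
    grid = mu + np.linspace(-4, 4, n_states) * sd_h
    # Transition matrix: row i = density of h_t given h_{t-1} = grid[i].
    trans = stats.norm.pdf(grid[None, :],
                           mu + phi * (grid[:, None] - mu), sigma_eta)
    trans /= trans.sum(axis=1, keepdims=True)
    pi = stats.norm.pdf(grid, mu, sd_h)             # stationary prior
    pi /= pi.sum()
    loglik = 0.0
    for yt in y:
        pi = pi @ trans                             # prediction step
        like = stats.norm.pdf(yt, 0.0, np.exp(grid / 2))
        c = np.sum(pi * like)                       # marginal likelihood of y_t
        loglik += np.log(c)
        pi = pi * like / c                          # update (filtering) step
    return loglik

# Simulate a short series and evaluate the likelihood at the true values.
rng = np.random.default_rng(3)
mu, phi, sigma_eta, T = -1.0, 0.95, 0.2, 500
h = np.empty(T); h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t-1] - mu) + sigma_eta * rng.standard_normal()
y = np.exp(h / 2) * rng.standard_normal(T)
print("log-likelihood at true parameters:", dnf_loglik(y, mu, phi, sigma_eta))
```

Maximizing dnf_loglik over (mu, phi, sigma_eta) with a numerical optimizer would give the maximum likelihood estimates; the filtering vector pi also provides the online variance prediction mentioned in the abstract.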
16.
Studies on Asymptotic Analysis of GI/G/1-type Markov Chains / GI/G/1型マルコフ連鎖の漸近解析に関する研究. Kimura, Tatsuaki, 23 March 2017.
Kyoto University / 0048 / New system, doctorate by coursework / Doctor of Informatics / Degree no. 甲第20517号 / 情博第645号 / 新制||情||111 (University Library) / Kyoto University, Graduate School of Informatics, Department of Systems Science / Examining committee: Professor 髙橋 豊 (chair), Professor 太田 快人, Professor 大塚 敏之, Associate Professor 増山 博之 / Eligible under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
17.
Velké odchylky a jejich aplikace v pojistné matematice / Large deviations and their applications in insurance mathematics. Fuchsová, Lucia, January 2011.
Title: Large deviations and their applications in insurance mathematics. Author: Lucia Fuchsová. Department: Department of Probability and Mathematical Statistics. Supervisor: RNDr. Zbyněk Pawlas, Ph.D. Supervisor's e-mail address: Zbynek.Pawlas@mff.cuni.cz. Abstract: In the present work we study large deviations theory. We discuss heavy-tailed distributions, which describe the probability of large claim occurrence, and we are interested in the use of large deviations theory in insurance. We simulate claim sizes and their arrival times for the Cramér-Lundberg model; first we analyze how the probability of ruin depends on the parameters of the model for Pareto-distributed claim sizes, and then we compare the ruin probability for other claim size distributions. For real-life data, we model the probability of large claim occurrence by the generalized Pareto distribution.
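A crude Monte Carlo version of the experiment described, finite-horizon ruin in the Cramér-Lundberg model with Pareto claims, might look as follows. All parameter values (initial reserve, premium rate, claim intensity, tail index, horizon) are illustrative assumptions, not those of the thesis.

```python
# Finite-time ruin probability in the Cramér-Lundberg model (Monte Carlo).
import numpy as np

rng = np.random.default_rng(4)

def ruin_prob(u=10.0, c=2.2, lam=1.0, alpha=2.0, horizon=100.0, n_sims=5000):
    """Estimate P(ruin before `horizon`) for surplus u + c*t - sum of claims,
    with Poisson(lam) arrivals and Pareto(alpha, x_m = 1) claim sizes."""
    ruined = 0
    for _ in range(n_sims):
        t, surplus = 0.0, u
        while True:
            w = rng.exponential(1.0 / lam)      # time until the next claim
            t += w
            if t > horizon:
                break
            surplus += c * w                    # premiums earned meanwhile
            surplus -= 1.0 + rng.pareto(alpha)  # heavy-tailed claim payment
            if surplus < 0:
                ruined += 1
                break
    return ruined / n_sims

# Net profit condition holds: c = 2.2 > lam * E[claim] = 2 for alpha = 2.
print("estimated finite-time ruin probability:", ruin_prob())
```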
18.
The Principle of Scaling of Geographic Space and its Application in Urban Studies. Liu, Xintao, January 2012.
Geographic space is the large-scale and continuous space that encircles the earth and in which human activities occur. The study of geographic space has drawn attention in many different fields and has been applied in a variety of studies, including those on cognition, urban planning and navigation systems. A scaling property indicates that small objects are far more numerous than large ones, i.e., the sizes of objects are extremely diverse. The concept of scaling resembles a fractal in geometric terms and a power law distribution from the perspective of statistical physics, but it differs from both in terms of application. Combining the concepts of geographic space and scaling, this thesis proposes the concept of the scaling of geographic space, which refers to the phenomenon that small geographic objects or representations are far more numerous than large ones. From the perspectives of statistics and mathematics, the scaling of geographic space can be characterized by the fact that the sizes of geographic objects follow heavy-tailed distributions, i.e., special non-linear relationships between variables and their probability. In this thesis, the heavy-tailed distributions comprise the power law, the lognormal, the exponential, the power law with an exponential cutoff, and the stretched exponential. The first three are basic distributions, and the last two are their degenerate versions. If the measurements of geographic objects follow a heavy-tailed distribution, then their mean value divides them into two groups: large ones (a low percentage) whose values lie above the mean and small ones (a high percentage) whose values lie below it. This regularity is termed the head/tail division rule; that is, a two-tier hierarchical structure is obtained naturally. The scaling property of geographic space and the head/tail division rule are verified at the city and country levels from the perspectives of axial lines and blocks, respectively. In the study of geographic space, the most important concept is geographic representation, which represents or partitions a large-scale geographic space into numerous small pieces, e.g., vector and raster data in conventional spatial analysis. In different contexts, each geographic representation possesses different geographic implications and a rich partial knowledge of space. The emergence of geographic information science (GIScience) and volunteered geographic information (VGI) greatly enables the generation of new types of geographic representations. In addition to the old axial lines, this thesis generates several types of representations of geographic space: (a) blocks that are decomposed from road segments, each of which forms a minimum cycle, such as city and field blocks; (b) natural streets that are generated from street center lines using the Gestalt principle of good continuity; (c) new axial lines that are defined as the least number of individual straight line segments mutually intersected along natural streets; (d) the fewest-turn map directions (routes), which possess a hierarchical structure and indicate the scaling of geographic space; and (e) spatio-temporal clusters of the stop points in the trajectories of large-scale floating car data. Based on the generated geographic representations, this thesis further applies the scaling property and the head/tail division rule to these representations for urban studies.
First, all of the above geographic representations demonstrate the scaling property, which indicates the scaling of geographic space. Furthermore, the head/tail division rule performs well in obtaining the hierarchical structures of geographic objects; in a sense, the scaling property reveals those hierarchical structures. Based on these findings, several urban studies are performed: (1) new axial lines are generated based on natural streets for a better understanding of urban morphologies; (2) the fewest-turn and shortest map directions are computed; (3) urban sprawl patches are identified based on the statistics of blocks and natural cities; (4) spatio-temporal clusters of long stop points are categorized into hotspots and traffic jams; and (5) an across-country comparison of hierarchical spatial structures is performed. The overall contribution of this thesis is, first, to propose the principle of scaling of geographic space as well as the head/tail division rule, which provide a new and quantitative perspective for efficiently reducing the high degree of complexity and effectively solving issues in urban studies. Several successful applications show that the scaling of geographic space and the head/tail division rule can be applied as a universal law, in particular to urban studies and other fields. The data sets generated via an intensive geo-computation process are as large as hundreds of gigabytes and will be of great value to further data mining studies. / Hägerstrand project entitled "GIS-based mobility information for sustainable urban planning and design"
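The head/tail division rule lends itself to a few lines of code: split the values at their mean, keep the minority head, and recurse. In this sketch the lognormal test data and the 40% head threshold are assumptions, not the thesis's choices.

```python
# Head/tail division rule: recursive mean splits for heavy-tailed data.
import numpy as np

def head_tail_breaks(values, head_limit=0.4):
    """Return class boundaries (successive means) for heavy-tailed data."""
    values = np.asarray(values, dtype=float)
    breaks = []
    while len(values) > 1:
        m = values.mean()
        head = values[values > m]
        if len(head) == 0 or len(head) / len(values) > head_limit:
            break                      # head is no longer a clear minority
        breaks.append(m)
        values = head                  # recurse into the head
    return breaks

rng = np.random.default_rng(5)
sizes = rng.lognormal(mean=3.0, sigma=1.2, size=10000)   # e.g. block sizes
b = head_tail_breaks(sizes)
print(f"{len(b)}-level hierarchy, break points:", np.round(b, 1))
```

Each mean split yields a small head of large objects above it, so the successive break points trace out the hierarchical levels described above.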
19.
Dépendance et événements extrêmes en théorie de la ruine : étude univariée et multivariée, problèmes d'allocation optimale / Dependence and extreme events in ruin theory: univariate and multivariate study, optimal allocation problems. Biard, Romain, 7 October 2010.
This PhD thesis presents new models and new results in ruin theory for the case where claim amounts are heavy-tailed. The classical assumptions of independence and stationarity, and univariate analysis, are sometimes too restrictive to describe the complex evolution of the reserves of an insurance company. In a dependence context, asymptotics of the univariate finite-time ruin probability are computed. This dependence, and the other model parameters, are modulated by a Markovian environment process to take possible correlation crises into account. We then introduce models describing dependence between claim amounts and claim inter-arrival times, as found in earthquake and flooding risks. In a multivariate framework, we present several risk criteria, such as the multivariate ruin probability and the expectation of the time-integrated negative part of the risk process. We solve optimal allocation problems for these risk measures, and we study the impact of the dangerousness of the risks and of the dependence between lines of business on the optimal allocation.
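One of the multivariate risk criteria mentioned, the expectation of the time-integrated negative part of the risk process, E[integral over [0, T] of max(-R_t, 0) dt], can be approximated by simulation on a time grid. The compound Poisson model with Pareto claims and every parameter value below are illustrative assumptions, not the thesis's model.

```python
# Monte Carlo estimate of the expected time-integrated negative part.
import numpy as np

rng = np.random.default_rng(6)

def expected_negative_part(u=5.0, c=2.2, lam=1.0, alpha=2.0,
                           horizon=50.0, dt=0.05, n_sims=5000):
    steps = int(horizon / dt)
    total = 0.0
    for _ in range(n_sims):
        claims = np.zeros(steps)
        n = rng.poisson(lam * horizon)                      # number of claims
        idx = rng.integers(0, steps, size=n)                # claim instants
        np.add.at(claims, idx, 1.0 + rng.pareto(alpha, n))  # Pareto sizes
        surplus = u + c * dt * np.arange(1, steps + 1) - np.cumsum(claims)
        total += np.maximum(-surplus, 0.0).sum() * dt       # grid integral
    return total / n_sims

print("E[time-integrated negative part]:", expected_negative_part())
```

Unlike the ruin probability, this criterion also measures how deep and how long the process stays below zero, which is what makes it useful for comparing allocations across lines.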
20.
Quantile-based inference and estimation of heavy-tailed distributions. Dominicy, Yves, 18 April 2014.
This thesis is divided into four chapters. The first two chapters introduce a parametric quantile-based estimation method for univariate heavy-tailed distributions and for elliptical distributions, respectively. For estimating the tail index without imposing a parametric form on the entire distribution function, but only on the tail behaviour, chapter three proposes a multivariate Hill estimator for elliptical distributions. The first three chapters assume an independent and identically distributed setting; as a first step towards a dependent setting, the last chapter proves, using quantiles, the asymptotic normality of marginal sample quantiles for stationary processes under the S-mixing condition.

The first chapter introduces a quantile- and simulation-based estimation method, which we call the Method of Simulated Quantiles, or simply MSQ. Since it is based on quantiles, it is a moment-free approach, and since it is based on simulations, it needs no closed-form expression of any function representing the probability law of the process. It is therefore useful when the probability density function has no closed form and/or moments do not exist. The method is based on a vector of functions of quantiles: the principle consists in matching functions of theoretical quantiles, which depend on the parameters of the assumed probability law, with those of empirical quantiles, which depend on the data. Since the theoretical functions of quantiles may not have closed-form expressions, we rely on simulations.

The second chapter deals with the estimation of the parameters of elliptical distributions by means of a multivariate extension of MSQ, proposing inference for vast-dimensional elliptical distributions. Estimation is based on quantiles, which always exist regardless of the thickness of the tails, and testing is based on the geometry of the elliptical family. The multivariate extension of MSQ faces the difficulty of constructing a function of quantiles that is informative about the covariation parameters. We show that the interquartile range of a projection of pairwise random variables onto the 45 degree line is very informative about the covariation.

The third chapter constructs a multivariate tail index estimator. In the univariate case, the most popular estimator of the tail exponent is the Hill estimator, introduced by Bruce Hill in 1975. The aim of this chapter is to propose an estimator of the tail index in a multivariate context, more precisely for regularly varying elliptical distributions. Since, for univariate random variables, our estimator boils down to the Hill estimator, we name it after Bruce Hill. Our estimator is based on the distance between an elliptical probability contour and the exceedance observations.

Finally, the fourth chapter investigates the asymptotic behaviour of the marginal sample quantiles for p-dimensional stationary processes, obtaining the asymptotic normality of the empirical quantile vector. We assume that the processes are S-mixing, a recently introduced and widely applicable notion of dependence. A remarkable property of S-mixing is that it requires no higher-order moment assumptions to be verified, which is of particular interest since we deal with quantiles and processes that are probably heavy-tailed. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
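A minimal sketch of the MSQ principle with a Student-t stand-in: match moment-free functions of quantiles (location, interquartile range, a tail-thickness ratio) computed on the data against the same functions computed on draws from the candidate law, using common random numbers so the objective is smooth in the parameters. The chosen quantile functions, the t example, and the optimizer are assumptions for illustration; they are not the thesis's specification.

```python
# Method of Simulated Quantiles (MSQ), toy version for a Student-t law.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(7)
data = 2.0 + 1.5 * rng.standard_t(4, size=5000)     # "observed" sample

LEVELS = [0.05, 0.25, 0.50, 0.75, 0.95]

def quantile_stats(x):
    q = np.quantile(x, LEVELS)
    # Location, scale and a tail-thickness measure: all moment-free.
    return np.array([q[2], q[3] - q[1], (q[4] - q[0]) / (q[3] - q[1])])

target = quantile_stats(data)
u = rng.uniform(size=20_000)        # common random numbers for the simulator

def objective(theta):
    loc, log_scale, log_df = theta
    # For the Student-t the quantile function happens to exist in closed
    # form; for the laws MSQ targets one would simulate instead.
    sim = loc + np.exp(log_scale) * stats.t.ppf(u, np.exp(log_df))
    return np.sum((quantile_stats(sim) - target) ** 2)

res = optimize.minimize(objective, x0=[0.0, 0.0, np.log(10.0)],
                        method="Nelder-Mead")
loc, scale, df = res.x[0], np.exp(res.x[1]), np.exp(res.x[2])
print(f"MSQ estimates: loc={loc:.2f}, scale={scale:.2f}, df={df:.1f}")
```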