1

A distribuição normal-valor extremo generalizado para a modelagem de dados limitados no intervalo unitário (0,1) / The normal-generalized extreme value distribution for the modeling of data restricted in the unit interval (0,1)

Benites, Yury Rojas 28 June 2019 (has links)
In this research a new statistical model is introduced for data restricted to the continuous interval (0,1). The proposed model is constructed via a transformation of variables, in which the transformed variable combines a variable with standard normal distribution and the cumulative distribution function of the generalized extreme value (GEV) distribution. The structural properties of the new model are studied. The family is extended to regression models, in which the model is reparametrized in terms of the median of the response variable; the median, together with the dispersion parameter, is related to covariates through link functions. Inferential procedures are developed from both classical and Bayesian perspectives: classical inference is based on maximum likelihood theory, and Bayesian inference on the Markov chain Monte Carlo method. In addition, simulation studies were performed to evaluate the performance of the classical and Bayesian estimates of the model parameters. Finally, a colorectal cancer data set is considered to show the applicability of the model.
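One plausible reading of the construction above (a sketch, not the thesis code): if Z is standard normal and G is the GEV cumulative distribution function, then Y = G(Z) always lands in the open unit interval. The GEV parameters below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import genextreme

# Sketch: transform a standard normal variable through the GEV CDF so the
# result is restricted to the unit interval (0, 1).
rng = np.random.default_rng(42)

shape, loc, scale = 0.1, 0.0, 1.0            # illustrative GEV parameters
z = rng.standard_normal(10_000)              # standard normal draws
y = genextreme.cdf(z, shape, loc=loc, scale=scale)  # transformed to (0, 1)

print(y.min(), y.max())                      # both strictly inside (0, 1)
```

Regression modelling would then relate the median of Y (and the dispersion parameter) to covariates through link functions, as the abstract describes.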
2

Modely a statistická analýza procesu rekordů / Models and statistical analysis of record processes

Tůmová, Alena January 2011 (has links)
In this work we model the historical development of best performances in the men's 100 m, 200 m, 400 m and 800 m running events. We assume that the yearly best performances are independent random variables following the generalized extreme value distribution for minima, with a decreasing trend in the location parameter. The model parameters are estimated by maximum likelihood. For some years the yearly best performance is missing; we treat these as right-censored observations, censored at the world record valid at that time. The graphical tools used for model diagnostics are adjusted for the censoring. The fitted models are used to estimate ultimate records and to predict new records in coming years. At the end of the work we estimate several models describing the historical development of yearly best performances for several events jointly.
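A minimal sketch of the minima-GEV step above (simulated data, no trend or censoring): a maxima-oriented library can fit the GEV for minima by negating the data, fitting the usual GEV, and mapping the location back.

```python
import numpy as np
from scipy.stats import genextreme

# Sketch: GEV for minima via negation.  The "yearly best times" are simulated
# as the minimum of many race results per year; parameters are assumptions.
rng = np.random.default_rng(0)

races = rng.normal(loc=10.5, scale=0.15, size=(60, 200))  # 60 years x 200 races
yearly_best = races.min(axis=1)                           # one minimum per year

# Negate so the minima become maxima, fit the standard GEV, negate back.
c, loc_neg, scale = genextreme.fit(-yearly_best)
loc = -loc_neg   # location of the minima model on the original time scale

print(loc, scale)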
3

Metody modelování a statistické analýzy procesu extremálních hodnot / Methods of modelling and statistical analysis of an extremal value process

Jelenová, Klára January 2012 (has links)
In the present work we deal with extreme values of time series, especially maxima. We study the times and values of maxima through a point-process approach, and we model the distribution of extreme values by statistical methods. We estimate the distribution parameters using different methods, namely graphical methods of data analysis, and subsequently test the estimated distributions with goodness-of-fit tests. We study the stationary case as well as cases with a trend. In connection with the distribution of excesses and exceedances over a threshold, we work with the generalized Pareto distribution.
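The excess-over-threshold step mentioned above can be sketched as follows (simulated data; threshold choice is an assumption for illustration):

```python
import numpy as np
from scipy.stats import genpareto

# Sketch: fit the generalized Pareto distribution to excesses over a threshold.
rng = np.random.default_rng(1)

x = rng.exponential(scale=2.0, size=20_000)   # illustrative data
u = np.quantile(x, 0.95)                      # threshold at the 95% quantile
excesses = x[x > u] - u                       # excesses over u

# For exponential data the excesses are again exponential (memorylessness),
# i.e. GPD with shape near 0 and scale near 2, which the fit should recover.
shape, _, scale = genpareto.fit(excesses, floc=0.0)
print(shape, scale)
```

In practice the threshold is chosen with diagnostics such as the mean excess plot, and a goodness-of-fit test is applied to the fitted GPD.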
4

Teorie extrémních hodnot v aktuárských vědách / Extreme Value Theory in Actuarial Sciences

Jamáriková, Zuzana January 2013 (has links)
This thesis focuses on models based on extreme value theory and their practical applications. Specifically, it describes block maxima models and models based on threshold exceedances. Both methods are treated theoretically and then illustrated with practical calculations on simulated and real data. The applications of block maxima models focus on the choice of block size, the suitability of the models for specific data, and the possibilities of extreme data analysis. The applications of threshold-exceedance models focus on the choice of threshold and on the suitability of the models. An example shows a model used to calculate the reinsurance premium for extreme claims in the case of non-proportional reinsurance.
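The block maxima method described above can be sketched in a few lines (simulated data; the block size of 365 is an assumed "one year of daily observations"):

```python
import numpy as np
from scipy.stats import genextreme

# Sketch: split the series into blocks, take each block's maximum,
# and fit the GEV distribution to those maxima.
rng = np.random.default_rng(7)

data = rng.gumbel(loc=0.0, scale=1.0, size=365 * 100)  # 100 "years" of daily data
block_size = 365
maxima = data.reshape(-1, block_size).max(axis=1)      # one maximum per block

c, loc, scale = genextreme.fit(maxima)
# Gumbel block maxima are again Gumbel (GEV shape 0), with the location
# shifted by roughly log(block_size) ~ 5.9 for these parameters.
print(c, loc, scale)
```

The choice of block size trades bias (blocks too short for the GEV limit to apply) against variance (too few maxima to fit), which is exactly the choice the thesis studies.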
5

Modelování operačního rizika / Operational risk modelling

Mináriková, Eva January 2013 (has links)
In the present thesis we first introduce the term operational risk and its definition in the Basel II and Solvency II directives, together with the methods these directives set out for calculating capital requirements for operational risk. In the second part we concentrate on methods for modelling operational loss data. We introduce extreme value theory, which describes approaches to modelling data with significant values that occur infrequently, the typical characteristic of operational loss data. We mainly focus on the model for threshold exceedances, which uses the generalized Pareto distribution to model the distribution of the excesses. The theoretical results and the corresponding modelling techniques are applied to simulated loss data. Finally, we test the ability of the presented methods to model loss data distributions.
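A capital-requirement-style calculation with the threshold-exceedance model can be sketched via the standard peaks-over-threshold quantile formula, q_p = u + (sigma/xi)·(((1-p)/zeta_u)^(-xi) - 1), where zeta_u is the fraction of losses exceeding the threshold u. The simulated losses and the 90% threshold are assumptions for illustration.

```python
import numpy as np
from scipy.stats import genpareto

# Sketch: high-quantile (e.g. 99.5%) estimation from a GPD fitted to excesses.
rng = np.random.default_rng(3)

losses = rng.pareto(a=3.0, size=50_000) + 1.0   # simulated heavy-tailed losses
u = np.quantile(losses, 0.90)                   # threshold
exc = losses[losses > u] - u                    # excesses over u
zeta_u = exc.size / losses.size                 # exceedance fraction (~0.10)

xi, _, sigma = genpareto.fit(exc, floc=0.0)
p = 0.995
q_hat = u + (sigma / xi) * (((1 - p) / zeta_u) ** (-xi) - 1.0)

q_emp = np.quantile(losses, p)                  # empirical sanity check
print(q_hat, q_emp)
```

The formula assumes a nonzero fitted shape; for xi near zero the exponential limit of the GPD would be used instead.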
6

Fitting extreme value distributions to the Zambezi River flood water levels recorded at Katima Mulilo in Namibia (1965-2003)

Kamwi, Innocent Silibelo January 2005 (has links)
Magister Scientiae - MSc / This study sought to identify and fit the appropriate extreme value distribution to flood data using the method of maximum likelihood, to examine the uncertainty of the estimated parameters, and to evaluate the goodness of fit of the identified model. The study revealed that the three-parameter Weibull and the generalised extreme value (GEV) distributions fit the data very well. Standard errors for the estimated parameters were calculated from the empirical information matrix. An upper limit to the flood levels followed from the fitted distribution.
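The GEV part of the analysis above can be sketched with simulated water levels (illustrative numbers, not the Zambezi record). In SciPy's `genextreme` parametrization, a positive shape c implies a finite upper endpoint at loc + scale/c, which is one way an "upper limit to the flood levels" can follow from the fitted distribution.

```python
import numpy as np
from scipy.stats import genextreme

# Sketch: maximum-likelihood GEV fit and the implied upper endpoint.
rng = np.random.default_rng(5)
levels = genextreme.rvs(c=0.25, loc=5.0, scale=0.8, size=400, random_state=rng)

c, loc, scale = genextreme.fit(levels)   # maximum likelihood estimates

if c > 0:
    upper_limit = loc + scale / c        # finite upper endpoint of the fit
    print(c, upper_limit)
```

Standard errors would come from the observed (empirical) information matrix, i.e. the inverse of the negative Hessian of the log-likelihood at the MLE; a three-parameter Weibull fit proceeds analogously with `scipy.stats.weibull_min`.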
7

Modeling Extreme Values / Modelování extrémních hodnot

Shykhmanter, Dmytro January 2013 (has links)
Modeling extreme events is a challenging statistical task: firstly, there is always a limited number of observations, and secondly, therefore, no experience with which to back-test the result. One way of estimating higher quantiles is to fit a theoretical distribution to the data and extrapolate into the tail. The shortcoming of this approach is that the tail estimate is based on observations in the center of the distribution. An alternative approach is based on the idea of splitting the data into two sub-populations and modeling the body of the distribution separately from the tail. This methodology is applied to non-life insurance losses, where extremes are particularly important for risk management. Nevertheless, even this approach is not a conclusive solution to heavy-tail modeling: in either case, the estimated 99.5% percentiles have such high standard errors that their reliability is very low. On the other hand, the approach is theoretically valid and deserves consideration as one possible method of extreme value analysis.
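The splicing idea above can be sketched as follows. The choice of a lognormal body is an assumption for illustration; the tail above the threshold is modeled with a generalized Pareto distribution, and the two pieces are glued at the threshold with the empirical tail weight.

```python
import numpy as np
from scipy.stats import lognorm, genpareto

# Sketch: spliced model with a parametric body and a GPD tail above u.
rng = np.random.default_rng(11)

losses = lognorm.rvs(s=1.0, scale=1.0, size=30_000, random_state=rng)
u = np.quantile(losses, 0.90)                  # splicing threshold (assumed)

body = losses[losses <= u]
tail = losses[losses > u] - u

s_hat, _, sc_hat = lognorm.fit(body, floc=0.0)     # body model (approximate)
xi, _, beta = genpareto.fit(tail, floc=0.0)        # tail model

def spliced_sf(x):
    """Survival function of the spliced model above u (10% tail weight)."""
    return 0.10 * genpareto.sf(x - u, xi, scale=beta)

# 99.5% quantile from the tail piece: solve 0.10 * sf(x - u) = 0.005.
q995 = u + genpareto.ppf(1 - 0.005 / 0.10, xi, scale=beta)
print(q995)
```

Even with a good fit, the sampling error of such a high quantile remains large, which is the reliability concern the abstract raises.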
8

Développement d'un outil statistique pour évaluer les charges maximales subies par l'isolation d'une cuve de méthanier au cours de sa période d'exploitation / Development of a statistical tool to determine sloshing loads to be applied on cargo containment system of a LNG carrier for structural strength assessment

Fillon, Blandine 19 December 2014 (has links)
This thesis focuses on statistical tools for assessing maximum sloshing loads in LNG tanks. Depending on the ship's features, tank cargo and sailing conditions, a sloshing phenomenon is observed inside the tanks. The loads supported by the tank structure are derived from impact pressure measurements performed on a test rig, and the pressure maxima per impact extracted from these measurements are investigated. A test duration is equivalent to 5 hours at full scale, which is not sufficient to determine pressure maxima associated with long return periods (40 years); a probabilistic model is therefore needed to extrapolate the pressure maxima. Usually a Weibull model is used. As we focus on the extreme values of the samples, fits are also performed with the generalized extreme value distribution and the generalized Pareto distribution, using the block maximum and peaks-over-threshold methods. The originality of this work lies in the use of an alternative measurement system, more relevant than the usual one for capturing pressure maxima, and of 480 hours of measured data available for the same test conditions. This provides a reference distribution for the pressure maxima, which is used to assess the relevance of the selected probabilistic models. Particular attention is paid to assessing the quality of the fits with statistical tests and to quantifying the uncertainties of the estimated values. The methodology has been implemented in a software tool called Stat_R, which makes the manipulation and treatment of results easier.
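The extrapolation step above can be sketched as a return-level calculation (all numbers are assumptions for illustration, not the test-rig data): if each sample is the maximum of one 5-hour test, the 40-year return level is the quantile exceeded on average once per m·T tests, where m is the number of tests per "year".

```python
import numpy as np
from scipy.stats import genextreme

# Sketch: 40-year return level from a GEV fitted to per-test pressure maxima.
rng = np.random.default_rng(21)

# 96 simulated test maxima (480 h of tests at 5 h each); parameters assumed.
test_maxima = genextreme.rvs(c=-0.1, loc=100.0, scale=15.0, size=96,
                             random_state=rng)
c, loc, scale = genextreme.fit(test_maxima)

m = 1752   # assumed tests per "year" (8760 h / 5 h), illustrative only
T = 40
return_level = genextreme.ppf(1.0 - 1.0 / (m * T), c, loc, scale)
print(return_level)
```

With 480 hours of reference measurements available, such extrapolations from a single 5-hour test can be checked directly against the reference distribution, which is the central idea of the thesis.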
