161

Estimation des limites d'extrapolation par les lois de valeurs extrêmes. Application à des données environnementales / Estimation of extrapolation limits based on extreme-value distributions. Application to environmental data.

Albert, Clément 17 December 2018 (has links)
Cette thèse se place dans le cadre de la Statistique des valeurs extrêmes. Elle y apporte trois contributions principales. L'estimation des quantiles extrêmes se fait dans la littérature en deux étapes. La première étape consiste à utiliser une approximation des quantiles basée sur la théorie des valeurs extrêmes. La deuxième étape consiste à estimer les paramètres inconnus de l'approximation en question, et ce en utilisant les valeurs les plus grandes du jeu de données. Cette décomposition mène à deux erreurs de nature différente, la première étant une erreur systématique de modèle, dite d'approximation ou encore d'extrapolation, la seconde constituant une erreur d'estimation aléatoire. La première contribution de cette thèse est l'étude théorique de cette erreur d'extrapolation mal connue. Cette étude est menée pour deux types d'estimateurs différents, tous deux cas particuliers de l'approximation dite de la "loi de Pareto généralisée" : l'estimateur Exponential Tail dédié au domaine d'attraction de Gumbel et l'estimateur de Weissman dédié à celui de Fréchet. Nous montrons alors que l'erreur en question peut s'interpréter comme un reste d'ordre un d'un développement de Taylor. Des conditions nécessaires et suffisantes sont alors établies de telle sorte que l'erreur tende vers zéro quand la taille de l'échantillon augmente. De manière originale, ces conditions mènent à une division du domaine d'attraction de Gumbel en trois parties distinctes. En comparaison, l'erreur d'extrapolation associée à l'estimateur de Weissman présente un comportement unifié sur tout le domaine d'attraction de Fréchet. Des équivalents de l'erreur sont fournis et leur comportement est illustré numériquement. La deuxième contribution est la proposition d'un nouvel estimateur des quantiles extrêmes. Le problème est abordé dans le cadre du modèle "log Weibull-tail" généralisé, où le logarithme de l'inverse du taux de hasard cumulé est supposé à variation régulière étendue.
Après une discussion sur les conséquences de cette hypothèse, nous proposons un nouvel estimateur des quantiles extrêmes basé sur ce modèle. La normalité asymptotique dudit estimateur est alors établie et son comportement en pratique est évalué sur données réelles et simulées. La troisième contribution de cette thèse est la proposition d'outils permettant en pratique de quantifier les limites d'extrapolation d'un jeu de données. Dans cette optique, nous commençons par proposer des estimateurs des erreurs d'extrapolation associées aux approximations Exponential Tail et Weissman. Après avoir évalué les performances de ces estimateurs sur données simulées, nous estimons les limites d'extrapolation associées à deux jeux de données réelles constitués de mesures journalières de variables environnementales. Dépendant de l'aléa climatique considéré, nous montrons que ces limites sont plus ou moins contraignantes. / This thesis takes place in the extreme value statistics framework. It provides three main contributions to this area. Extreme quantile estimation is a two-step approach. First, it consists in proposing an extreme-value-based quantile approximation. Then, estimators of the unknown quantities are plugged into this approximation, leading to an extreme quantile estimator. The first contribution of this thesis is the study of this approximation error. These investigations are carried out using two different kinds of estimators, both based on the well-known Generalized Pareto approximation: the Exponential Tail estimator dedicated to the Gumbel maximum domain of attraction and the Weissman estimator dedicated to the Fréchet one. It is shown that the extrapolation error can be interpreted as the remainder of a first-order Taylor expansion. Necessary and sufficient conditions are then provided such that this error tends to zero as the sample size increases.
Interestingly, in the case of the so-called Exponential Tail estimator, these conditions lead to a subdivision of the Gumbel maximum domain of attraction into three subsets. In contrast, the extrapolation error associated with the Weissman estimator has a common behavior over the whole Fréchet maximum domain of attraction. First-order equivalents of the extrapolation error are then derived and their accuracy is illustrated numerically. The second contribution is the proposition of a new extreme quantile estimator. The problem is addressed in the framework of the so-called "log-Generalized Weibull tail limit", where the logarithm of the inverse cumulative hazard rate function is supposed to be of extended regular variation. Based on this model, a new estimator of extreme quantiles is proposed. Its asymptotic normality is established and its behavior in practice is illustrated on both real and simulated data. The third contribution of this thesis is the proposition of new mathematical tools allowing the quantification of extrapolation limits associated with a real dataset. To this end, we propose estimators of the extrapolation errors associated with the Exponential Tail and the Weissman approximations. We then study on simulated data how these two estimators perform. We finally use these estimators on real datasets to show that, depending on the climatic phenomena, the extrapolation limits can be more or less stringent.
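The Weissman-type extrapolation discussed in this abstract can be sketched in a few lines. This is an illustrative implementation, not the thesis's code: it combines the classical Hill estimator of the tail index with the Weissman quantile formula, on synthetic Pareto data (sample size, threshold level k, and target probability p are arbitrary choices):

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the tail index gamma from the k largest observations."""
    xs = np.sort(x)
    top = xs[-k:]                         # k largest order statistics
    return np.mean(np.log(top) - np.log(xs[-k - 1]))

def weissman_quantile(x, k, p):
    """Weissman extrapolation of the quantile of exceedance probability p << k/n."""
    n = len(x)
    xs = np.sort(x)
    gamma = hill_estimator(x, k)
    # Extrapolate from the intermediate quantile xs[-k-1] into the far tail
    return xs[-k - 1] * (k / (n * p)) ** gamma

rng = np.random.default_rng(0)
x = rng.pareto(2.0, size=5000) + 1.0      # classical Pareto tail, gamma = 0.5
q = weissman_quantile(x, k=200, p=1e-4)   # quantile well beyond the sample range
```

Since p = 1e-4 is far smaller than 1/n = 2e-4, the estimate relies entirely on the extrapolation step whose error the thesis analyzes: the true quantile here is 100, and the estimate's quality degrades as p shrinks relative to k/n.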
162

Contribuições em inferência e modelagem de valores extremos / Contributions to extreme value inference and modeling.

Eliane Cantinho Pinheiro 04 December 2013 (has links)
A teoria do valor extremo é aplicada em áreas de pesquisa tais como hidrologia, estudos de poluição, engenharia de materiais, controle de tráfego e economia. A distribuição valor extremo ou Gumbel é amplamente utilizada na modelagem de valores extremos de fenômenos da natureza e no contexto de análise de sobrevivência para modelar o logaritmo do tempo de vida. A modelagem de valores extremos de fenômenos da natureza tais como velocidade de vento, nível da água de rio ou mar, altura de onda ou umidade é importante em estatística ambiental pois o conhecimento de valores extremos de tais eventos é crucial na prevenção de catástrofes. Ultimamente esta teoria é de particular interesse pois fenômenos extremos da natureza têm sido mais comuns e intensos. A maioria dos artigos sobre teoria do valor extremo para modelagem de dados considera amostras de tamanho moderado ou grande. A distribuição Gumbel é frequentemente incluída nas análises mas a qualidade do ajuste pode ser pobre em função da presença de outliers. Investigamos modelagem estatística de eventos extremos com base na teoria de valores extremos. Consideramos um modelo de regressão valor extremo introduzido por Barreto-Souza & Vasconcellos (2011). Os autores trataram da questão de corrigir o viés do estimador de máxima verossimilhança para pequenas amostras. Nosso primeiro objetivo é deduzir ajustes para testes de hipótese nesta classe de modelos. Derivamos a estatística da razão de verossimilhanças ajustada de Skovgaard (2001) e cinco ajustes da estatística da razão de verossimilhanças sinalizada, que foram propostos por Barndorff-Nielsen (1986, 1991), DiCiccio & Martin (1993), Skovgaard (1996), Severini (1999) e Fraser et al. (1999). As estatísticas ajustadas são aproximadamente distribuídas como uma distribuição $\chi^2$ e normal padrão com alto grau de acurácia. Os termos dos ajustes têm formas compactas simples que podem ser facilmente implementadas em softwares disponíveis.
Comparamos a performance do teste da razão de verossimilhanças, do teste da razão de verossimilhanças sinalizada e dos testes ajustados obtidos neste trabalho em amostras pequenas. Ilustramos uma aplicação dos testes usuais e suas versões modificadas em conjuntos de dados reais. As distribuições das estatísticas ajustadas são mais próximas das respectivas distribuições limites comparadas com as distribuições das estatísticas usuais quando o tamanho da amostra é relativamente pequeno. Os resultados de simulação indicaram que as estatísticas ajustadas são recomendadas para inferência em modelo de regressão valor extremo quando o tamanho da amostra é moderado ou pequeno. Parcimônia é importante quando os dados são escassos, mas flexibilidade também é crucial pois um ajuste pobre pode levar a uma conclusão completamente errada. Uma revisão da literatura foi feita para listar as distribuições que são generalizações da distribuição Gumbel. Nosso segundo objetivo é avaliar a parcimônia e flexibilidade destas distribuições. Com este propósito, comparamos tais distribuições através de momentos, coeficientes de assimetria e de curtose e índice da cauda. As famílias mais amplas obtidas pela inclusão de parâmetros adicionais, que têm a distribuição Gumbel como caso particular, apresentam assimetria e curtose flexíveis enquanto a distribuição Gumbel apresenta tais características constantes. Dentre estas distribuições, a distribuição valor extremo generalizada é a única com índice da cauda que pode ser qualquer número real positivo enquanto os índices da cauda das outras distribuições são zero. Observamos que algumas generalizações da distribuição Gumbel estudadas na literatura são não identificáveis. Portanto, para estes modelos a interpretação e estimação de parâmetros individuais não é factível. Selecionamos as distribuições identificáveis e as ajustamos a um conjunto de dados simulado e a um conjunto de dados reais de velocidade de vento.
Como esperado, tais distribuições se ajustaram bastante bem ao conjunto de dados simulados de uma distribuição Gumbel. A distribuição valor extremo generalizada e a mistura de duas distribuições Gumbel produziram melhores ajustes aos dados do que as outras distribuições na presença não desprezível de observações discrepantes que não podem ser acomodadas pela distribuição Gumbel e, portanto, sugerimos que tais distribuições devem ser utilizadas neste contexto. / Extreme value theory is applied in research fields such as hydrology, pollution studies, materials engineering, traffic management, economics and finance. The Gumbel distribution is widely used in statistical modeling of extreme values of a natural process such as rainfall and wind. Also, the Gumbel distribution is important in the context of survival analysis for modeling lifetime in logarithmic scale. The statistical modeling of extreme values of a natural process such as wind or humidity is important in environmental statistics; for example, understanding extreme wind speed is crucial in catastrophe/disaster protection. Lately this is of particular interest as extreme natural phenomena/episodes are more common and intense. The majority of papers on extreme value theory for modeling extreme data are based on moderate or large sample sizes. The Gumbel distribution is often considered but the resulting fit may be poor in the presence of outliers since its skewness and kurtosis are constant. We deal with statistical modeling of extreme events data based on extreme value theory. We consider a general extreme-value regression model family introduced by Barreto-Souza & Vasconcellos (2011). The authors addressed the issue of correcting the bias of the maximum likelihood estimators in small samples. Here, our first goal is to derive hypothesis test adjustments in this class of models.
We derive Skovgaard's (2001) adjusted likelihood ratio statistic and five adjusted signed likelihood ratio statistics, which have been proposed by Barndorff-Nielsen (1986, 1991), DiCiccio & Martin (1993), Skovgaard (1996), Severini (1999) and Fraser et al. (1999). The adjusted statistics are approximately distributed as $\chi^2$ and standard normal with high accuracy. The adjustment terms have simple compact forms which may be easily implemented by readily available software. We compare the finite sample performance of the likelihood ratio test, the signed likelihood ratio test and the adjusted tests obtained in this work. We illustrate the application of the usual tests and their modified versions in real datasets. The adjusted statistics are closer to the respective limiting distribution compared to the usual ones when the sample size is relatively small. Simulation results indicate that the adjusted statistics can be recommended for inference in extreme value regression models with small or moderate sample size. Parsimony is important when data are scarce, but flexibility is also crucial since a poor fit may lead to a completely wrong conclusion. A literature review was conducted to list distributions which nest the Gumbel distribution. Our second goal is to evaluate their parsimony and flexibility. For this purpose, we compare such distributions regarding moments, skewness, kurtosis and tail index. The larger families obtained by introducing additional parameters, which have the Gumbel distribution embedded in them, present flexible skewness and kurtosis while the Gumbel distribution's skewness and kurtosis are constant. Among these distributions the generalized extreme value is the only one with a tail index that can be any positive real number, while the tail indices of the other distributions investigated here are zero. We notice that some generalizations of the Gumbel distribution studied in the literature are not identifiable.
Hence, for these models meaningful interpretation and estimation of individual parameters are not feasible. We select the identifiable distributions and fit them to a simulated dataset and to real wind speed data. As expected, such distributions fit the Gumbel simulated data quite well. The generalized extreme value distribution and the two-component extreme value distribution fit the data better than the others in the non-negligible presence of outliers that cannot be accommodated by the Gumbel distribution, and therefore we suggest that they be applied in this context.
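The abstract's central contrast — the Gumbel distribution's constant skewness and kurtosis versus the extra flexibility of the nesting GEV family — can be checked numerically. A small sketch with `scipy.stats` (the sample size and seed are arbitrary choices for illustration):

```python
import numpy as np
from scipy.stats import gumbel_r, genextreme

# Gumbel skewness and excess kurtosis are the same for every loc/scale:
# skewness ~ 1.1395 and excess kurtosis 12/5 = 2.4
s1, k1 = gumbel_r.stats(loc=0.0, scale=1.0, moments="sk")
s2, k2 = gumbel_r.stats(loc=10.0, scale=5.0, moments="sk")

# The GEV family nests Gumbel (shape parameter -> 0 in scipy's
# parametrization), providing the flexible tail behavior discussed above
rng = np.random.default_rng(1)
data = gumbel_r.rvs(size=500, random_state=rng)
c, loc, scale = genextreme.fit(data)   # fitted shape should be near 0 here
```

Fitting the GEV to Gumbel-simulated data, as in the abstract's experiment, recovers a shape parameter close to zero; on data with outliers the shape moves away from zero, which is exactly where the GEV outperforms the plain Gumbel fit.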
163

The Two-Sample t-test and the Influence of Outliers: A simulation study on how the type I error rate is impacted by outliers of different magnitude.

Widerberg, Carl January 2019 (has links)
This study investigates how outliers of different magnitude impact the robustness of the two-sample t-test. A simulation study approach is used to analyze the behavior of type I error rates when outliers are added to generated data. Outliers may distort parameter estimates such as the mean and variance and cause misleading test results. Previous research has shown that Welch's t-test performs better than the traditional Student's t-test when group variances are unequal. Therefore these two alternative statistics are compared in terms of type I error rates when outliers are added to the samples. The results show that control of type I error rates can be maintained in the presence of a single outlier. Depending on the magnitude of the outlier and the sample size, there are scenarios where the t-test is robust. However, the sensitivity of the t-test is illustrated by deteriorating type I error rates when more than one outlier is included. The comparison between Welch's t-test and Student's t-test shows that the former is marginally more robust against outlier influence.
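The simulation design described above can be sketched as follows. This is an illustrative reimplementation, not the thesis's code; the sample sizes, outlier magnitude, and replication count are placeholder choices, not the study's actual settings:

```python
import numpy as np
from scipy import stats

def type1_error(outlier=0.0, n=30, reps=1000, alpha=0.05, seed=0):
    """Estimate type I error rates of Student's and Welch's t-tests
    when a single outlier of the given magnitude contaminates one sample."""
    rng = np.random.default_rng(seed)
    rej_student = rej_welch = 0
    for _ in range(reps):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(0.0, 1.0, n)        # H0 is true: equal means
        if outlier != 0.0:
            a = np.append(a, outlier)      # add one outlier to sample a
        _, p_s = stats.ttest_ind(a, b, equal_var=True)    # Student's t-test
        _, p_w = stats.ttest_ind(a, b, equal_var=False)   # Welch's t-test
        rej_student += p_s < alpha
        rej_welch += p_w < alpha
    return rej_student / reps, rej_welch / reps

baseline = type1_error(outlier=0.0)       # both rates should be near 0.05
contaminated = type1_error(outlier=8.0)   # rates shift under contamination
```

Comparing the rejection rates with and without the outlier, and varying its magnitude, reproduces the kind of robustness comparison the study reports.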
164

Tail Estimation for Large Insurance Claims, an Extreme Value Approach.

Nilsson, Mattias January 2010 (has links)
In this thesis, extreme value theory is used to estimate the probability that large insurance claims exceed a certain threshold. The expected claim size, given that the claim has exceeded a certain limit, is also estimated. Two different models are used for this purpose. The first model is based on maximum domain of attraction conditions. A Pareto distribution is used in the other model. Different graphical tools are used to check the validity of both models. Länsförsäkring Kronoberg has provided us with insurance data to perform the study. The conclusions drawn are that both models seem to be valid and that the results from both models are essentially equal. / I detta arbete används extremvärdesteori för att uppskatta sannolikheten att stora försäkringsskador överträffar en viss nivå. Även den förväntade storleken på skadan, givet att skadan överstiger ett visst belopp, uppskattas. Två olika modeller används. Den första modellen bygger på antagandet att underliggande slumpvariabler tillhör maximat av en extremvärdesfördelning. I den andra modellen används en Paretofördelning. Olika grafiska verktyg används för att besluta om modellernas giltighet. För att kunna genomföra studien har Länsförsäkring Kronoberg ställt upp med försäkringsdata. Slutsatser som dras är att båda modellerna verkar vara giltiga och att resultaten är likvärdiga.
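The Pareto-based model for claims above a threshold can be sketched as a peaks-over-threshold fit. This is an illustrative sketch only: synthetic heavy-tailed data stands in for the Länsförsäkring Kronoberg claims, which are not reproduced here, and the threshold choice is arbitrary:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
claims = rng.pareto(2.5, size=10_000) * 10.0   # synthetic heavy-tailed claims

u = np.quantile(claims, 0.95)                  # threshold choice (arbitrary)
excesses = claims[claims > u] - u

# Fit a generalized Pareto distribution to the excesses, location fixed at 0
xi, _, beta = genpareto.fit(excesses, floc=0.0)

def tail_prob(x):
    """P(claim > x) for x >= u, via the peaks-over-threshold approximation."""
    p_u = np.mean(claims > u)
    return p_u * genpareto.sf(x - u, xi, loc=0.0, scale=beta)

def mean_excess(x):
    """Expected claim size above x (mean excess), finite when xi < 1."""
    return (beta + xi * (x - u)) / (1.0 - xi)
```

`tail_prob` gives the exceedance probability the thesis estimates, and `mean_excess` the expected claim size given exceedance; diagnostic plots (mean excess plot, QQ plot of the excesses) would play the role of the graphical validity checks mentioned in the abstract.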
165

金融風險測度與極值相依之應用─以台灣金融市場為例 / Measuring financial risk and extremal dependence between financial markets in Taiwan

劉宜芳 Unknown Date (has links)
This paper links two applications of Extreme Value Theory (EVT) to analyze Taiwanese financial markets: 1. computation of Value at Risk (VaR) and Expected Shortfall (ES); 2. estimation of cross-market dependence under extreme events. Daily data from the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) and the foreign exchange rate, USD/NTD, are employed to analyze the behavior of each return series and the dependence structure between the foreign exchange market and the equity market. In the univariate case, when computing risk measures, EVT provides a more accurate way to estimate VaR. In the bivariate case, when measuring extremal dependence, the results for the whole sample period show that the two markets are asymptotically independent in the extremes, and the analyses of subperiods illustrate that the relation is slightly dependent in specific periods. Therefore, there is no significant evidence that extreme events in one market (the equity market or the foreign exchange market) will affect the other in Taiwan.
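The univariate EVT risk measures mentioned here are typically computed with the peaks-over-threshold VaR and ES formulas. A hedged sketch on synthetic heavy-tailed returns (the actual TAIEX/USD-NTD data are not reproduced, and the confidence level and threshold quantile are placeholder choices):

```python
import numpy as np
from scipy.stats import genpareto

def evt_var_es(losses, p=0.99, u_quantile=0.90):
    """Peaks-over-threshold estimates of VaR_p and ES_p for a loss sample."""
    losses = np.asarray(losses)
    u = np.quantile(losses, u_quantile)         # threshold
    excesses = losses[losses > u] - u
    xi, _, beta = genpareto.fit(excesses, floc=0.0)
    n, n_u = len(losses), len(excesses)
    # Standard POT quantile formula: invert the GPD tail approximation
    var = u + (beta / xi) * ((n / n_u * (1 - p)) ** (-xi) - 1.0)
    # Corresponding expected shortfall, valid for xi < 1
    es = var / (1.0 - xi) + (beta - xi * u) / (1.0 - xi)
    return var, es

rng = np.random.default_rng(3)
losses = rng.standard_t(4, size=20_000)         # heavy-tailed returns proxy
var99, es99 = evt_var_es(losses, p=0.99)
```

Because the GPD captures tail curvature that a normal approximation misses, these estimates track extreme quantiles more accurately on heavy-tailed return data, which is the point the abstract makes for the univariate case.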
166

An Asymptotic Approach to Progressive Censoring

Hofmann, Glenn, Cramer, Erhard, Balakrishnan, N., Kunert, Gerd 10 December 2002 (has links) (PDF)
Progressive Type-II censoring was introduced by Cohen (1963) and has since been the topic of much research. The question remains whether it is sensible to use this sampling plan by design, instead of regular Type-II right censoring. We introduce an asymptotic progressive censoring model, and find optimal censoring schemes for location-scale families. Our optimality criterion is the determinant of the 2×2 covariance matrix of the asymptotic best linear unbiased estimators. We present an explicit expression for this criterion, and conditions for its boundedness. By means of numerical optimization, we determine optimal censoring schemes for the extreme value, the Weibull and the normal distributions. In many situations, it is shown that these progressive schemes significantly improve upon regular Type-II right censoring.
167

Peak Sidelobe Level Distribution Computation for Ad Hoc Arrays using Extreme Value Theory

Krishnamurthy, Siddhartha 25 February 2014 (has links)
Extreme Value Theory (EVT) is used to analyze the peak sidelobe level distribution for array element positions with arbitrary probability distributions. Computations are discussed in the context of linear antenna arrays using electromagnetic energy. The results also apply to planar arrays of random elements that can be transformed into linear arrays. / Engineering and Applied Sciences
168

Extreme-Value Analysis of Self-Normalized Increments / Extremwerteigenschaften der normierten Inkremente

Kabluchko, Zakhar 23 April 2007 (has links)
No description available.
169

Statistische Multiresolutions-Schätzer in linearen inversen Problemen - Grundlagen und algorithmische Aspekte / Statistical Multiresolution Estimators in Linear Inverse Problems - Foundations and Algorithmic Aspects

Marnitz, Philipp 27 October 2010 (has links)
No description available.
170

台灣銀行業系統重要性之衡量 / Measuring Systemic Importance of Taiwan’s Banking System

林育慈, Lin, Yu Tzu Unknown Date (has links)
本文利用Gravelle and Li (2013)提出之系統重要性指標來衡量國內九家上市金控銀行對於系統風險之貢獻程度。此種衡量方法係將特定銀行之系統重要性定義為該銀行發生危機造成系統風險增加的幅度,並以多變量極值理論進行機率的估算。實證結果顯示:一、系統重要性最高者為第一銀行;最低者為中國信託銀行。其中除中國信託銀行之重要性顯著低於其他銀行外,其餘銀行之系統重要性均無顯著差異。二、經營期間較長之銀行其系統重要性較高;具公股色彩之銀行對於系統風險之貢獻程度平均而言高於民營銀行。三、銀行規模與其對系統風險之貢獻大致呈現正向關係,即規模越大之銀行其重要性越高。在此情況下可能會有銀行大到不能倒的問題發生。四、存放比較低之銀行系統重要性亦較低,而資本適足率與系統重要性間並無明顯關係。 / In this thesis, we apply the measure proposed by Gravelle and Li (2013) to examine the systemic importance of certain Taiwanese banks. The systemic importance is defined as the increase in systemic risk conditioned on the crash of a particular bank, and is estimated by multivariate extreme value theory. Our empirical evidence shows that the most systemically important bank is First Commercial Bank, and that CTBC Bank is significantly less important than the other banks, while the differences among the remaining banks are not significant. Second, banks established earlier have higher systemic importance, and the contribution to systemic risk of public banks is, on average, higher than that of private banks. Third, we also find that the size of a bank and its risk contribution have a positive relationship; that is, the bigger a bank is, the more important it is. Under these circumstances, the too-big-to-fail problem may occur. Last, a bank with a lower loan-to-deposit ratio is less systemically important than those with higher ones, while the relation between capital adequacy ratio and systemic importance is unclear.
