111

Aplicação da Teoria dos valores extremos em estratégias "Long-Short" / Application of Extreme Value Theory to Long-Short Strategies

Monte-mor, Danilo Soares 17 December 2010 (has links)
Absolute-return funds (hedge funds), whose main objective is to improve performance through arbitrage strategies such as long-short strategies, have increasingly appeared in the investment market. It is the disproportionate, and even antagonistic, evolution of asset prices that allows players to structure strategies that generate additional returns, higher than the opportunity cost and independent of market movements. In this work we used Extreme Value Theory (EVT), an important branch of probability theory, to model the series of the direct price relation between two pairs of assets. The quantiles obtained from this modeling, together with the quantiles given by the normal distribution, were superimposed on the data for periods subsequent to the period analyzed. From the comparison of these data we created a new quantitative long-short arbitrage strategy, which we call the GEV Long-Short Strategy.
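A rough sketch of the core idea in this abstract, fitting a GEV to block maxima of a pair's price ratio and comparing its extreme quantiles with normal ones, might look as follows in Python; the data, block size and quantile level are hypothetical and not taken from the thesis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ratio = np.exp(np.cumsum(rng.standard_t(df=4, size=2000) * 0.01))  # synthetic price ratio of a pair

# Block maxima (20-observation blocks, an assumed choice) for the GEV fit
maxima = ratio[: len(ratio) // 20 * 20].reshape(-1, 20).max(axis=1)
shape, loc, scale = stats.genextreme.fit(maxima)
q_gev = stats.genextreme.ppf(0.99, shape, loc=loc, scale=scale)

# Normal quantile fitted to the whole series, for comparison
q_norm = stats.norm.ppf(0.99, loc=ratio.mean(), scale=ratio.std(ddof=1))

print(f"99% quantile, GEV fit on block maxima : {q_gev:.4f}")
print(f"99% quantile, normal fit on all data  : {q_norm:.4f}")
# A hypothetical long-short trigger would compare out-of-sample ratio values
# with these superimposed quantile bands.
```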
112

Value at Risk no mercado financeiro internacional: avaliação da performance dos modelos nos países desenvolvidos e emergentes / Value at Risk in international finance: evaluation of the models performance in developed and emerging countries

Luiz Eduardo Gaio 01 April 2015 (has links)
Given the requirements stipulated by regulatory agencies and international agreements, in view of the numerous financial crises of the last centuries, financial institutions have developed several tools to measure and control the risk inherent in their business. Despite the ongoing evolution of risk calculation and measurement methodologies, Value at Risk (VaR) has become the reference tool for estimating market risk. In recent years new techniques for calculating Value at Risk (VaR) have been developed; however, none has been considered the one that best fits the risks of different markets at different times, and the literature offers no model that is concise and coherent across the diversity of markets. Thus, the general objective of this work is to evaluate the market risk estimates generated by models based on Value at Risk (VaR), applied to the indices of the main stock exchanges of developed and emerging countries, for normal and financial-crisis periods, in order to determine which are most effective in that role. The study considered unconditional VaR models, both the traditional models (Historical Simulation, Delta-Normal and Student's t) and models based on Extreme Value Theory; conditional VaR, comparing the ARCH family of models and RiskMetrics; and multivariate VaR, with bivariate GARCH models (VECH, BEKK and CCC), copula functions (Student's t, Clayton, Frank and Gumbel) and Artificial Neural Networks. The database consists of daily returns of the main stock indices of developed countries (Germany, United States, France, United Kingdom and Japan) and emerging countries (Brazil, Russia, India, China and South Africa) from 1995 to 2013, covering the crises of 1997 and 2008. The results were somewhat different from the initial premises established by the research hypotheses. Across more than a thousand estimated models, the conditional models were superior to the unconditional ones in the majority of cases. In particular the GARCH(1,1) model, a standard in the literature, fitted adequately in 93% of the cases. For the multivariate analysis it was not possible to identify a single best model: the VECH, BEKK and Clayton-copula models had similar performance, with good fits in 100% of the tests. Contrary to expectations, no significant differences were found between the fits for developed and emerging countries or between crisis and normal periods. The study contributed to the perception that the models used by financial institutions are not those with the best performance in estimating market risk, even though they are recommended by renowned institutions. A deeper analysis of the performance of the risk estimators, using simulations with the portfolios of each financial institution, is warranted.
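As an illustration of two of the unconditional estimators named above (historical simulation and delta-normal), a minimal Python sketch with a naive violation-rate check could look like this; the returns are synthetic and the 99% confidence level is an assumption, not the thesis's full experimental design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
returns = rng.standard_t(df=5, size=3000) * 0.01   # synthetic daily index returns
alpha = 0.01                                       # 99% VaR level (assumed)

train, test = returns[:2000], returns[2000:]

var_hist = -np.quantile(train, alpha)                                    # historical simulation
var_norm = -(train.mean() + stats.norm.ppf(alpha) * train.std(ddof=1))   # delta-normal

for name, var in [("historical", var_hist), ("delta-normal", var_norm)]:
    violation_rate = np.mean(test < -var)   # share of days the loss exceeded the VaR
    print(f"{name:>12}: VaR = {var:.4f}, violation rate = {violation_rate:.2%} (target {alpha:.0%})")
```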
113

Théorie des valeurs extrêmes et applications en environnement / Extreme value theory and applications in environment

Rietsch, Théo 14 November 2013 (has links)
In the first two chapters, we try to answer two questions that are critical in climatology. The first is whether a change in the behaviour of temperature extremes occurred between the beginning of the century and today. We use a version of the Kullback-Leibler divergence tailored to the extreme value context, and provide theoretical and simulation results to justify our approach. The second question is where to remove stations from a meteorological network so as to lose the least information about the behaviour of the extremes. An algorithm called Query By Committee is developed and applied to real data. The last chapter of the thesis deals with a more theoretical subject: the robust estimation of a Weibull-type tail index in the presence of random covariates. We propose a robust estimator based on a criterion of minimization of the divergence between two densities and study its properties.
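The exact adaptation of the Kullback-Leibler divergence is specific to the thesis; as a loose illustration of the general idea, one could compare generalized Pareto fits to the threshold exceedances of two samples and estimate the divergence between the fits by Monte Carlo. Data, threshold level and parameters below are assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
early = rng.gumbel(loc=30.0, scale=2.0, size=5000)   # "beginning of the century" maxima
today = rng.gumbel(loc=31.0, scale=2.3, size=5000)   # "today" maxima

def gpd_tail_fit(sample, q=0.95):
    """Generalized Pareto fit to exceedances above the q-quantile."""
    u = np.quantile(sample, q)
    shape, _, scale = stats.genpareto.fit(sample[sample > u] - u, floc=0.0)
    return shape, scale

c1, s1 = gpd_tail_fit(early)
c2, s2 = gpd_tail_fit(today)

# Monte Carlo estimate of KL(P1 || P2) between the two fitted exceedance laws
x = stats.genpareto.rvs(c1, scale=s1, size=100_000, random_state=rng)
lp1 = stats.genpareto.logpdf(x, c1, scale=s1)
lp2 = stats.genpareto.logpdf(x, c2, scale=s2)
ok = np.isfinite(lp2)                  # guard against a support mismatch of the fits
print(f"estimated KL divergence between tail fits: {np.mean(lp1[ok] - lp2[ok]):.4f}")
```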
114

Novelty detection with extreme value theory in vital-sign monitoring

Hugueny, Samuel Y. January 2013 (has links)
Every year in the UK, tens of thousands of hospital patients suffer adverse events, such as unplanned transfers to Intensive Therapy Units or unexpected cardiac arrests. Studies have shown that in a large majority of cases, significant physiological abnormalities can be observed within the 24-hour period preceding such events. Such warning signs may go unnoticed if they occur between observations by the nursing staff, or are simply not identified as such. Timely detection of these warning signs and appropriate escalation schemes have been shown to improve both patient outcomes and the use of hospital resources, most notably by reducing patients’ length of stay. Automated real-time early-warning systems appear to be cost-efficient answers to the need for continuous vital-sign monitoring. Traditionally, a limitation of such systems has been their sensitivity to noisy and artefactual measurements, resulting in false-alert rates that made them unusable in practice, or earned them the mistrust of clinical staff. Tarassenko et al. (2005) and Hann (2008) proposed a novelty detection approach to the problem of continuous vital-sign monitoring, which, in a clinical trial, was shown to yield clinically acceptable false-alert rates. In this approach, an observation is compared to a data fusion model, and its “normality” assessed by comparing a chosen statistic to a pre-set threshold. The method, while informed by large amounts of training data, has a number of heuristic aspects. This thesis proposes a principled approach to multivariate novelty detection in stochastic time-series, where novelty scores have a probabilistic interpretation and are explicitly linked to the starting assumptions made. Our approach stems from the observation that novelty detection using complex multivariate, multimodal generative models is generally an ill-defined problem when attempted in the data space. In situations where “novel” is equivalent to “improbable with respect to a probability distribution”, formulating the problem in a univariate probability space allows us to use classical results of univariate statistical theory. Specifically, we propose a multivariate extension to extreme value theory and, more generally, order statistics, suitable for performing novelty detection in time-series generated from a multivariate, possibly multimodal model. All the methods introduced in this thesis are applied to a vital-sign monitoring problem and compared to the existing method of choice. We show that it is possible to outperform the existing method while retaining a probabilistic interpretation. In addition to their application to novelty detection for vital-sign monitoring, the contributions in this thesis to existing extreme value theory and order statistics are also valid in the broader context of data modelling, and may be useful for analysing data from other complex systems.
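A minimal sketch of the underlying idea, with a toy bivariate Gaussian standing in for the multivariate, possibly multimodal generative model, sets the alert threshold from the Monte Carlo distribution of the minimum model density within a monitoring window; the window size and false-alert rate below are assumptions, not the thesis's values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
model = stats.multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.3], [0.3, 1.0]])

n = 60          # observations per monitoring window (assumed)
trials = 5000   # Monte Carlo draws of the per-window minimum density

min_density = np.empty(trials)
for t in range(trials):
    window = model.rvs(size=n, random_state=rng)
    min_density[t] = model.pdf(window).min()

threshold = np.quantile(min_density, 0.01)   # ~1% false-alert rate per window
print(f"density threshold for a novelty alert: {threshold:.3e}")

# Scoring a new window: alert if its least probable point falls below the threshold
test_window = model.rvs(size=n, random_state=rng)
print("alert:", bool(model.pdf(test_window).min() < threshold))
```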
115

Doba nezaměstnanosti v České republice pohledem analýzy přežití / Unemployment Duration in the Czech Republic Through the Lens of Survival Analysis

Čabla, Adam January 2017 (has links)
In the presented thesis the aim is to apply methods of survival analysis to data from the Labour Force Survey, which are interval-censored. With regard to this type of data, I use methods specifically designed to handle them, especially the Turnbull estimate, the weighted log-rank test and the AFT model. Another objective of the work is the design and application of a methodology for creating a model of unemployment duration, depending on the available factors, and its interpretation. Further aims are to evaluate the evolution of the probability distribution of unemployment duration and, last but not least, to create a more accurate estimate of the tail using extreme value theory. The main benefits of the thesis include the creation of a methodology for examining data from the Labour Force Survey based on standard techniques of survival analysis. Since the data are internationally comparable, the methodology is applicable at the level of European Union countries and several others. Another benefit of this work is the estimation of the parameters of the generalized Pareto distribution on interval-censored data and the creation and comparison of models of piecewise connected distribution functions, with a solution to the connection problem. The work produced empirical results, the most important of which is the comparison of results from three different data approaches and the specific relationship between selected factors and the time to find a job or the spell of unemployment.
116

Approche algébrique et théorie des valeurs extrêmes pour la détection de ruptures : Application aux signaux biomédicaux / Algebraic approach and extreme value theory for change-point detection : Application to the biomedical signals

Debbabi, Nehla 14 December 2015 (has links)
This work develops unsupervised techniques for on-line detection and location of change-points in signals recorded in a noisy environment. These techniques are based on the combination of an algebraic approach with Extreme Value Theory (EVT). The algebraic approach offers an easy identification of the change-points: it characterizes them in terms of delayed Dirac distributions and their derivatives, which are easily handled via operational calculus. This algebraic characterization, giving rise to an explicit expression of the change-point locations, is completed with a probabilistic interpretation in terms of extremes: a change-point is seen as a rare event whose associated amplitude is relatively large. Within the EVT framework, these events are modeled by a Generalized Pareto Distribution. Several hybrid multi-component models are proposed in this work, modeling at the same time the mean behavior (noise) and the extreme behavior (change-points) of the signal after an algebraic processing. Fully unsupervised algorithms are developed to estimate these hybrid models, in contrast to the classical techniques used for the estimation problems in question, which are heuristic and manual. The change-point detection algorithms developed in this thesis were validated on generated data and then applied to real data stemming from different phenomena, where the information to be extracted is signaled by the appearance of change-points.
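As a loose illustration only, the sketch below replaces the algebraic processing with a simple differencing filter and keeps the extreme-value ingredient, a generalized Pareto fit to the filter's exceedances that supplies the detection threshold; the signal, threshold choices and exceedance probability are all hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
signal = np.concatenate([np.zeros(400), 4.0 * np.ones(400), np.zeros(400)]) \
         + rng.normal(scale=0.5, size=1200)

stat = np.abs(np.diff(signal))       # crude stand-in for the algebraic detection statistic
u = np.quantile(stat, 0.95)          # high threshold for the peaks-over-threshold fit
c, _, s = stats.genpareto.fit(stat[stat > u] - u, floc=0.0)

# Detection threshold for a small exceedance probability (hypothetical choice)
p = 1e-3
zeta = np.mean(stat > u)
thr = u + stats.genpareto.ppf(1 - p / zeta, c, scale=s)

change_points = np.where(stat > thr)[0] + 1
print("detected change-point indices:", change_points)
```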
117

Investigating Systematics In The Cosmological Data And Possible Departures From Cosmological Principle

Gupta, Shashikant 08 1900 (has links) (PDF)
This thesis contributes to the field of dark energy and observational cosmology. We have investigated possible direction-dependent systematic signals and non-Gaussian features in the Type Ia supernovae (SNe Ia) data. To detect these effects we propose a new method of analysis; although we have used this technique on SNe Ia data, it is quite general and can be applied to other data sets as well. SNe Ia are the most precise known distance indicators at cosmological distances. Their constant peak luminosity (after correction) makes them standard candles, and hence one can measure distances in the universe using SNe Ia. Such distance measurements can determine various cosmological parameters, such as the Hubble constant, the various components of the matter density, and dark energy, from the SNe Ia observations. Recent SNe Ia observations have shown that the expansion of the universe is currently accelerating. This recent acceleration is explained by invoking a component of the universe having negative pressure, termed dark energy. It can be described by a homogeneous and isotropic fluid with the equation of state P = wρ, where w is allowed to be negative. A constant (Λ) in the Einstein equation (known as the cosmological constant) can explain the acceleration; in the fluid model it corresponds to w = -1. Other models of dark energy with w ≠ -1 can also explain the acceleration; however, the precise nature of this mysterious component remains unknown. Although there exists a wide range of dark energy models, the cosmological constant provides the simplest explanation for the acceleration of the expansion of the Universe. The equation-of-state parameter w has been investigated by recent surveys, but the results are still consistent with a wide range of dark energy models. In order to discriminate among various cosmological models we need even more precise measurements of distances and error bars in the SNe Ia data. From the central limit theorem we expect Gaussian errors in any experiment that is free from systematic noise. However, in astronomy we do not have control over the observed phenomena and thus cannot control the systematic errors (due to some physical processes in the Universe) in the observed data. The only possible way to deal with such data is by using appropriate statistical techniques. Among these systematic features, direction-dependent features are the more dangerous ones, since they may indicate a preferred direction in the Universe. To address the issue of direction-dependent features we have developed a new technique (the Δ statistic henceforth), which is based on extreme value theory. We have applied this technique to the available high-z SNe Ia data from Riess et al. (2004) and Riess et al. (2007). In addition we have applied it to the HST data from the HST Key Project for the H0 measurement. Below we summarize the material presented in the thesis.

Chapter-wise summary of the thesis: In the first chapter we present an introductory discussion of various basic cosmological notions, e.g. the Cosmological Principle (CP), observational evidence in support of CP and departures from it, distance measures, and large-scale structure. The observed departures from the CP could be present due to systematic errors and/or non-Gaussian error bars in the data. We discuss the errors involved in the measurement process.

Basics of statistical techniques: In the next two chapters we discuss the basics of the statistical techniques used in this thesis and extreme value theory. Extreme value theory describes how to calculate the distribution of extreme events. The simplest of the distributions of the extremes is known as the Gumbel distribution. We discuss features of the Gumbel distribution since it is used extensively in our analysis.

Δ statistic and features in the SNe data: In the fourth chapter we derive the Δ statistic and apply it to the SNe Ia data sets. An outline of the Δ statistic is as follows: a) We define a plane which cuts the sky into hemispheres. This plane divides the data into two subsets, one in each hemisphere. b) We calculate the χ² in each hemisphere for an FRW universe assuming a flat geometry. c) The difference of the χ² in the two hemispheres is calculated and maximized by rotating the plane. This maximum should follow the Gumbel distribution. Since it is difficult to calculate the analytic form of the Gumbel distribution, we calculate it numerically assuming Gaussian error bars. This gives the theoretical distribution for the maximum of the χ² difference calculated above. The results indicate that GD04 shows systematic effects as well as non-Gaussian features, while the set GD07 is better in terms of systematic effects and non-Gaussian features.

Non-Gaussian features in the H0 data: The HST Key Project measures the value of the Hubble constant at the level of 10% accuracy, which requires precise measurement of distances. It uses various methods to measure distance, for instance SNe Ia, the Tully-Fisher relation, surface-brightness fluctuations, etc. In the fifth chapter we apply the Δ statistic to the HST Key Project data in order to check for the presence of non-Gaussian and direction-dependent features. Our results show that although this data set seems to be free of direction-dependent features, it is inconsistent with Gaussian errors.

Analytic marginalization: The quantities of real interest in cosmology are ΩM and ΩΛ; the Hubble constant can in principle be treated as a nuisance parameter. It would be useful to marginalize over the nuisance parameter. Although this can be done numerically using a Bayesian method, the Δ statistic does not allow it. In chapter six we propose a method to marginalize over H0 analytically. The χ² in this case is a complicated function of the errors in the data. We compare this analytic method with the Bayesian marginalization method, and the results show that the two methods are quite consistent. We apply the Δ statistic to the SNe data after the analytic marginalization. The results do not change much, indicating the insensitivity of the direction-dependent features to the Hubble constant.

A variation of the Δ statistic: As discussed earlier, it is difficult to calculate the theoretical distribution of Δ in general. However, if the parent distribution satisfies certain conditions it is possible to derive the analytic form of the Gumbel distribution for Δ. In the seventh chapter we derive a variation of the Δ statistic in a way that allows us to calculate the analytic distribution. The results in this case are different from those presented earlier, but they confirm the same direction-dependent and non-Gaussian features in the data.
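A schematic rendering of steps a) to c) might look as follows, with the cosmological fit replaced by standardized Hubble-diagram residuals assumed Gaussian and with synthetic sky positions; this is not the thesis's pipeline, only the shape of the statistic:

```python
import numpy as np

rng = np.random.default_rng(5)
n_sne = 200

# Synthetic supernova sky positions (unit vectors) and standardized residuals
pos = rng.normal(size=(n_sne, 3))
pos /= np.linalg.norm(pos, axis=1, keepdims=True)
resid = rng.normal(size=n_sne)          # (mu_obs - mu_model)/sigma under the null

def delta_statistic(positions, residuals, n_planes=400):
    """Maximum over plane orientations of the chi^2 difference between hemispheres."""
    best = 0.0
    for _ in range(n_planes):
        normal = rng.normal(size=3)
        normal /= np.linalg.norm(normal)
        mask = positions @ normal > 0
        chi2_up = np.sum(residuals[mask] ** 2)
        chi2_down = np.sum(residuals[~mask] ** 2)
        best = max(best, abs(chi2_up - chi2_down))
    return best

obs = delta_statistic(pos, resid)

# Null distribution of the maximum (expected to be Gumbel-like), by Monte Carlo
null = np.array([delta_statistic(pos, rng.normal(size=n_sne)) for _ in range(200)])
p_value = np.mean(null >= obs)
print(f"Delta = {obs:.1f}, Monte Carlo p-value = {p_value:.3f}")
```

In the thesis the χ² in each hemisphere comes from a flat FRW fit to the actual distance moduli; the sketch skips that fit and works directly with standardized residuals to keep the statistic visible.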
118

Evaluation et optimisation des performances de fonctions pour la surveillance de turboréacteurs / Evaluation and optimization of function performances for the monitoring of turbojet engines

Hmad, Ouadie 06 December 2013 (has links)
This thesis deals with monitoring systems for turbojet engines. The development of such systems requires a performance evaluation and optimization phase prior to their introduction into operation. The work focused on this phase, and more specifically on the performance of the detection and prognostic functions of two systems. Performance metrics related to each of these functions, as well as their estimation, have been defined. The monitored systems are, on the one hand, the start sequence for the detection function and, on the other hand, the oil consumption for the prognostic function. Since the available data come from flights in operation without degradation, simulations of degradations were necessary for the performance assessment. Optimization of the detection performance was obtained by tuning a threshold on the decision statistic, taking into account the airlines' requirements in terms of good detection rate and false alarm rate. Two approaches have been considered and their performances have been compared in their best configurations. The prognostic performance for oil over-consumption, simulated using Gamma processes, was assessed on the basis of the relevance of the maintenance decision induced by the prognostic. This thesis has allowed quantifying and improving the performance of the two considered functions to meet the airlines' requirements. Other possible improvements are proposed as prospects to conclude this thesis.
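A hedged sketch of the kind of Gamma-process simulation mentioned for the oil-consumption prognostic could look like this; the shape, scale and maintenance threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

shape_per_flight = 0.4   # Gamma shape accumulated per flight (assumed)
scale = 0.05             # scale of each Gamma increment (assumed)
threshold = 1.5          # excess-consumption level triggering maintenance (assumed)
n_paths, horizon = 10_000, 400

increments = rng.gamma(shape=shape_per_flight, scale=scale, size=(n_paths, horizon))
paths = np.cumsum(increments, axis=1)   # monotone degradation paths

# First flight at which each simulated degradation path crosses the threshold
crossed = paths >= threshold
first_passage = np.where(crossed.any(axis=1), crossed.argmax(axis=1) + 1, horizon + 1)

print(f"median flights to threshold: {np.median(first_passage):.0f}")
print(f"10% quantile (pessimistic) : {np.quantile(first_passage, 0.10):.0f}")
```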
119

Využití teorie extrémních hodnot při řízení operačních rizik / Extreme Value Theory in Operational Risk Management

Vojtěch, Jan January 2009 (has links)
Currently, financial institutions are required to analyze and quantify a new type of banking risk, known as operational risk, to which they are exposed in their everyday activities. The main objective of this work is to construct an acceptable statistical model for computing the capital requirement. Such a model must respect the specificity of losses arising from operational risk events. The fundamental task is to find a suitable distribution which describes the probabilistic behavior of losses arising from this type of risk. The work makes strong use of the Pickands-Balkema-de Haan theorem from extreme value theory: roughly speaking, the distribution of a random variable exceeding a given high threshold converges in distribution to the generalized Pareto distribution. The theorem is subsequently used in estimating a high percentile of a simulated distribution. The simulated distribution is a compound model for the aggregate loss random variable, constructed as a combination of a frequency distribution for the number of losses and a so-called severity distribution for the individual loss random variable. The proposed model is then used to estimate a final quantile, which represents the required amount of capital. This capital requirement constitutes the amount of funds the bank is supposed to retain in order to make up for a projected lack of funds; there is a given probability, commonly quite small, that the capital charge will be exceeded. Although combining a frequency distribution with a severity distribution is the common way to deal with the described problem, the final application is often problematic. Typically, the severity distribution is a combination of two or three, for instance, lognormal distributions with different location and scale parameters. Models like these usually lack any theoretical background and, in particular, the connecting of the distribution functions is often not conducted in the proper way. In this work, we deal with both problems. In addition, maximum likelihood estimates are derived for a lognormal distribution for which F_LN(u) = p holds, where u and p are given. The results achieved can be used in the everyday practice of financial institutions for operational risk quantification. They can also be used for the analysis of a variety of sample data with so-called heavy tails, where standard distributions do not offer any help. As an integral part of this work, a CD with the source code of each function used in the model is included. All of these functions were created in the statistical programming language of the S-PLUS software. The fourth annex gives a complete description of each function, its purpose, and the general syntax for possible usage in solving different kinds of problems.
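The compound construction described above can be sketched as follows: Poisson frequency, a lognormal body spliced with a generalized Pareto tail above a high threshold, and the capital requirement read off as a high quantile of the simulated aggregate annual loss. All parameters are illustrative, and the splice shown is one simple way to connect the distribution functions, not necessarily the thesis's solution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

lam = 25                       # expected number of loss events per year (assumed)
mu, sigma = 9.0, 1.2           # lognormal body, log-scale parameters (assumed)
u_prob = 0.95                  # splice point as a body quantile (assumed)
xi, beta = 0.4, 40_000.0       # generalized Pareto tail shape and scale (assumed)

u = stats.lognorm.ppf(u_prob, s=sigma, scale=np.exp(mu))   # splice threshold

def draw_severity(size):
    """Lognormal losses below the threshold, GPD exceedances above it."""
    x = stats.lognorm.rvs(s=sigma, scale=np.exp(mu), size=size, random_state=rng)
    tail = x > u
    x[tail] = u + stats.genpareto.rvs(xi, scale=beta, size=tail.sum(), random_state=rng)
    return x

years = 50_000
counts = rng.poisson(lam, size=years)                      # frequency distribution
severities = draw_severity(counts.sum())                   # severity distribution
annual = np.bincount(np.repeat(np.arange(years), counts),
                     weights=severities, minlength=years)  # aggregate loss per year

capital = np.quantile(annual, 0.999)                       # 99.9% quantile as capital charge
print(f"simulated capital requirement: {capital:,.0f}")
```

Replacing body draws above the threshold with GPD exceedances keeps the exceedance probability at 1 - u_prob, which is one simple way of handling the connection problem mentioned in the abstract.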
120

Exchange market pressure: an evaluation using extreme value theory / Napětí na devizovém trhu: měření pomocí teorie extrémních hodnot

Zuzáková, Barbora January 2013 (has links)
This thesis discusses the phenomenon of currency crises; in particular it is devoted to the empirical identification of crisis periods. As a crisis indicator we use an exchange market pressure index, which has proven to be a very powerful tool for quantifying exchange market pressure. Since the computation of the exchange market pressure index is crucial for the further analysis, we pay special attention to different approaches to its construction. In the majority of the existing literature on exchange market pressure models, a currency crisis is defined as a period of time in which the exchange market pressure index exceeds a predetermined level. In contrast to this, we incorporate a probabilistic approach using extreme value theory. Our goal is to prove that stochastic methods are more accurate, in other words that they are more reliable instruments for crisis identification. We illustrate the application of the proposed method on a selected sample of four central European countries, namely the Czech Republic, Hungary, Poland and Slovakia, over the period 1993-2012, or 1993-2008 respectively. The choice of the sample is motivated by the fact that these countries underwent transition reforms to market economies at the beginning of the 1990s and could therefore have been exposed to speculative attacks on their newly arisen currencies. These countries are often assumed to form a relatively homogeneous group at a similar stage of the integration process; thus a similar development of exchange market pressure, particularly during the last third of the estimation period, would not be surprising.
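To contrast the two identification schemes discussed above, a small sketch could compare a conventional mean-plus-two-standard-deviations cut-off with a threshold taken from a generalized Pareto fit to the upper tail of a synthetic exchange market pressure index; the index weights, data and tail probability are assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
T = 240   # monthly observations, e.g. 1993-2012

d_fx = rng.standard_t(df=4, size=T) * 1.5    # exchange-rate depreciation (%)
d_res = rng.standard_t(df=4, size=T) * 2.0   # change in reserves (%)
d_ir = rng.standard_t(df=4, size=T) * 0.8    # interest-rate change (p.p.)

# Precision-weighted EMP index, a common construction in this literature
emp = d_fx / d_fx.std() - d_res / d_res.std() + d_ir / d_ir.std()

# (1) conventional deterministic threshold
thr_classic = emp.mean() + 2.0 * emp.std()

# (2) EVT threshold: generalized Pareto fit to exceedances above a high quantile
u = np.quantile(emp, 0.90)
xi, _, beta = stats.genpareto.fit(emp[emp > u] - u, floc=0.0)
zeta = np.mean(emp > u)
p_crisis = 0.05                                  # target tail probability (assumed)
thr_evt = u + stats.genpareto.ppf(1 - p_crisis / zeta, xi, scale=beta)

print(f"classic threshold: {thr_classic:.2f}, EVT threshold: {thr_evt:.2f}")
print("crisis months flagged by the EVT rule:", np.where(emp > thr_evt)[0])
```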
