61

Modélisation de la dynamique des rentabilités des hedge funds : dépendance, effets de persistance et problèmes d’illiquidité / Hedge Funds return modelling: Serial correlation, persistence effects and liquidity problems

Limam, Mohamed-Ali 15 December 2015 (has links)
Dans cette thèse nous combinons les processus à mémoire longue ainsi que les modèles à changement de régime markovien afin d’étudier la dynamique non linéaire des rentabilités des hedge funds et leur exposition au risque de marché. L’attractivité des hedge funds réside dans leur capacité à générer des rentabilités décorrélées avec celles des actifs traditionnels tout en permettant d’améliorer les rentabilités et/ou de réduire le risque, indépendamment des conditions de marché. Cependant, certaines spécificités des rentabilités des hedge funds (non linéarité, asymétrie et présence d’une forte autocorrélation émanant des problèmes d’illiquidités) remettent en cause cet aspect qui n’est valable que dans un univers gaussien. Nous adoptons de ce fait une approche économétrique permettant de réconcilier la notion de mémoire longue et celle de la persistance pure des performances. Nous mettons l’accent sur le risque de confusion entre vraie mémoire longue et mémoire longue fallacieuse dans la mesure où certains processus peuvent générer des caractéristiques similaires à celles des processus à mémoire longue. Il ressort de cette étude non seulement l’insuffisance des modèles standards à prendre en compte les caractéristiques des séries des rentabilités financières mais aussi la pertinence du recours aux modèles mixtes pour mieux cerner l’ensemble de ces spécificités dans un cadre unifié. Le modèle Beta Switching ARFIMA-FIGARCH que nous proposons révèle la complexité de la dynamique des rentabilités des hedge funds. Il est donc nécessaire de mieux appréhender cette dynamique afin d'expliquer convenablement les interactions qui existent entre les hedge funds eux-mêmes et entre les hedge funds et les marchés standards. La composante mémoire longue est prise en compte à la fois au niveau de la moyenne conditionnelle à travers le processus ARFIMA ainsi qu’au niveau de la variance conditionnelle à travers plusieurs spécifications des processus hétéroscédastiques fractionnaires notamment les processus FIGARCH, FIAPARCH et HYGARCH. Cette modélisation mieux adaptée aux spécificités des hedge funds met en évidence le risque caché de ces derniers et représente une nouvelle perspective vers laquelle les gérants et les responsables d’agence pourraient s’orienter. / In this thesis we combine long memory processes and regime switching models to study the nonlinear dynamics of hedge fund returns and their exposure to market risk. The attractiveness of hedge funds lies in their ability to generate returns uncorrelated with those of traditional assets while improving returns and/or reducing risk, regardless of market conditions. However, certain features of hedge fund returns, such as their nonlinear and asymmetric nature and the strong autocorrelation arising from illiquidity problems, call this property into question, since it only holds in a Gaussian framework. We therefore adopt an econometric approach that reconciles the notion of long memory with that of pure performance persistence. In this regard, we emphasize the risk of confusing true long memory with spurious long memory, since certain processes can generate characteristics similar to those of long memory processes. This study shows not only that standard models fail to account for the characteristics of financial return series, but also that mixed models are well suited to capturing all of these features within a unified framework. The Beta Switching ARFIMA-FIGARCH model we propose reveals the complexity of hedge fund return dynamics and underlines the need to understand these dynamics better in order to explain the interactions between hedge funds themselves and between hedge funds and standard markets. The long memory component is taken into account both in the conditional mean, through the ARFIMA process, and in the conditional variance, through several specifications of fractional heteroscedastic processes, notably the FIGARCH, FIAPARCH and HYGARCH models. This model captures several features of hedge fund returns, highlights their hidden risks and offers a new perspective towards which managers could orient themselves.
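The thesis's Beta Switching ARFIMA-FIGARCH specification has no off-the-shelf implementation, but the FIGARCH variance component described above can be fitted with standard tools. A minimal sketch, assuming the Python "arch" package and illustrative simulated data in place of hedge fund returns:

```python
# Fit a plain FIGARCH(1,d,1) with Student-t innovations; this is only the
# variance building block of the abstract's model, not the full Beta
# Switching ARFIMA-FIGARCH specification.
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(0)
returns = pd.Series(rng.standard_t(df=5, size=2000))  # placeholder data

# Constant mean + FIGARCH volatility; the fractional memory parameter of
# the variance appears as "d" in the fitted volatility parameters.
am = arch_model(returns, mean="Constant", vol="FIGARCH", p=1, q=1, dist="t")
res = am.fit(disp="off")
print(res.summary())
```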
62

Modelos de memória longa, GARCH e GARCH com memória longa para séries financeiras / Long memory, GARCH and long memory GARCH models for financial time series

Grazielle Yumi Solda 10 April 2008 (has links)
O objetivo deste trabalho é apresentar e comparar diferentes métodos de modelagem da volatilidade (variância condicional) de séries temporais financeiras. O modelo ARFIMA é empregado para capturar o comportamento de memória longa observado na volatilidade de séries financeiras. Por sua vez, o modelo GARCH é utilizado para modelar a volatilidade variando no tempo destas séries. Finalmente, o modelo FIGARCH é utilizado para modelar a dinâmica dos retornos de séries temporais financeiras juntamente com sua volatilidade. Serão apresentados alguns estimadores para os parâmetros dos modelos estudados. Foram realizadas simulações dos três tipos de modelos com o objetivo de comparar o comportamento dos estimadores para diferentes valores dos parâmetros. Por fim, serão apresentadas aplicações em séries reais. / The goal of this work is to present and compare different methods of modeling the volatility (conditional variance) of financial time series. The ARFIMA model is applied to capture the long memory behavior observed in the volatility of financial series. In turn, the GARCH model is used to model the time-varying volatility of these series. Finally, the FIGARCH model is used to model the dynamics of financial time series returns together with their volatility. We present some estimators for the parameters of the models studied, and the behavior of these estimators under different parameter values is assessed through a simulation study of the three types of models. Finally, applications to real data are presented.
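For reference, the ARFIMA(0,d,0) series underlying this kind of simulation study can be generated directly from its MA(infinity) representation, psi_k = psi_{k-1} (k-1+d)/k. A minimal NumPy sketch with illustrative parameter values, not the dissertation's experimental design:

```python
# Simulate ARFIMA(0,d,0): x_t = (1-B)^(-d) eps_t, via a truncated MA
# expansion. For 0 < d < 0.5 the autocorrelations decay hyperbolically.
import numpy as np

def arfima0d0(n, d, trunc=1000, rng=None):
    rng = np.random.default_rng(rng)
    k = np.arange(1, trunc + 1)
    # psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k
    psi = np.concatenate(([1.0], np.cumprod((k - 1 + d) / k)))
    eps = rng.standard_normal(n + trunc)
    # x_t = sum_j psi_j * eps_{t-j}, keeping only fully "warmed up" values
    return np.convolve(eps, psi, mode="full")[trunc:trunc + n]

x = arfima0d0(5000, d=0.3, rng=42)
# Sample autocorrelations decay slowly, the signature of long memory:
acf = [np.corrcoef(x[:-h], x[h:])[0, 1] for h in (1, 10, 50, 100)]
print(np.round(acf, 3))
```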
63

Blackovy-Scholesovy modely oceňování opcí / Black-Scholes models of option pricing

Čekal, Martin January 2013 (has links)
Title: Black-Scholes Models of Option Pricing Author: Martin Cekal Department: Department of Probability and Mathematical Statistics Supervisor: prof. RNDr. Bohdan Maslowski, DrSc., Charles University in Prague, Faculty of Mathematics and Physics, Department of Probability and Mathematical Statistics. Abstract: In this master's thesis we study a generalization of the Black-Scholes model using fractional Brownian motion and jump processes. The main goal is the derivation of the price of a call option in a fractional jump market model. The first chapter introduces long memory and its modelling by discrete and continuous time models. In the second chapter fractional Brownian motion is defined, the appropriate stochastic analysis is developed, and we generalize the notion of Lévy and jump processes. The third chapter introduces the fractional Black-Scholes model. In the fourth chapter, tools developed in the second chapter are used for the construction of the jump fractional Black-Scholes model and the derivation of an explicit formula for the price of a European call option. In the fifth chapter, we analyze the long memory contained in simulated and empirical time series. Keywords: Black-Scholes model, fractional Brownian motion, fractional jump process, long memory, option pricing.
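The fractional Brownian motion at the heart of this model can be simulated exactly on a finite grid by factorizing its covariance function, cov(B_H(s), B_H(t)) = (s^(2H) + t^(2H) - |t-s|^(2H)) / 2. A minimal sketch, assuming only NumPy; the O(n^3) Cholesky method shown is the simplest option, while FFT-based methods such as Davies-Harte scale better for long paths:

```python
# Exact simulation of fractional Brownian motion B_H on a grid by Cholesky
# factorization of its covariance matrix. H > 1/2 gives positively
# correlated (long memory) increments; H = 1/2 recovers Brownian motion.
import numpy as np

def fbm(n, H, T=1.0, rng=None):
    rng = np.random.default_rng(rng)
    t = np.linspace(T / n, T, n)                 # grid, excluding t = 0
    s, u = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov)                  # cov = L L^T
    return t, L @ rng.standard_normal(n)         # Gaussian vector with cov

t, path = fbm(500, H=0.7, rng=1)                 # a long-memory sample path
```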
64

Asymptotiques et fluctuations des plus grandes valeurs propres de matrices de covariance empirique associées à des processus stationnaires à longue mémoire / Asymptotics and fluctuations of largest eigenvalues of empirical covariance matrices associated with long memory stationary processes

Tian, Peng 10 December 2018 (has links)
Les grandes matrices de covariance constituent certainement l’un des modèles les plus utiles pour les applications en statistiques en grande dimension, en communication numérique, en biologie mathématique, en finance, etc. Les travaux de Marcenko et Pastur (1967) ont permis de décrire le comportement asymptotique de la mesure spectrale de telles matrices formées à partir de N copies indépendantes de n observations d’une suite de variables aléatoires iid et sa convergence vers une distribution de probabilité déterministe lorsque N et n convergent vers l’infini à la même vitesse. Plus récemment, Merlevède et Peligrad (2016) ont démontré que dans le cas de grandes matrices de covariance issues de copies indépendantes d’observations d’un processus strictement stationnaire centré, de carré intégrable et satisfaisant des conditions faibles de régularité, presque sûrement, la distribution spectrale empirique convergeait étroitement vers une distribution non aléatoire ne dépendant que de la densité spectrale du processus sous-jacent. En particulier, si la densité spectrale est continue et bornée (ce qui est le cas des processus linéaires dont les coefficients sont absolument sommables), alors la distribution spectrale limite a un support compact. Par contre si le processus stationnaire exhibe de la longue mémoire (en particulier si les covariances ne sont pas absolument sommables), le support de la loi limite n'est plus compact et des études plus fines du comportement des valeurs propres sont alors nécessaires. Ainsi, cette thèse porte essentiellement sur l’étude des asymptotiques et des fluctuations des plus grandes valeurs propres de grandes matrices de covariance associées à des processus stationnaires à longue mémoire. Dans le cas où le processus stationnaire sous-jacent est Gaussien, l’étude peut être simplifiée via un modèle linéaire dont la matrice de covariance de population sous-jacente est une matrice de Toeplitz hermitienne. On montrera ainsi que dans le cas de processus stationnaires gaussiens à longue mémoire, les fluctuations des plus grandes valeurs propres de la grande matrice de covariance empirique convenablement renormalisées sont gaussiennes. Ce comportement indique une différence significative par rapport aux grandes matrices de covariance empirique issues de processus à courte mémoire, pour lesquelles les fluctuations de la plus grande valeur propre convenablement renormalisée suivent asymptotiquement la loi de Tracy-Widom. Pour démontrer notre résultat de fluctuations gaussiennes, en plus des techniques usuelles de matrices aléatoires, une étude fine du comportement des valeurs propres et vecteurs propres de la matrice de Toeplitz sous-jacente est nécessaire. On montre en particulier que dans le cas de la longue mémoire, les m plus grandes valeurs propres de la matrice de Toeplitz convergent vers l’infini et satisfont une propriété de type « trou spectral multiple ». Par ailleurs, on démontre une propriété de délocalisation de leurs vecteurs propres associés. Dans cette thèse, on s’intéresse également à l’universalité de nos résultats dans le cas du modèle simplifié ainsi qu’au cas de grandes matrices de covariance lorsque les matrices de Toeplitz sont remplacées par des matrices diagonales par blocs. / Large covariance matrices play a fundamental role in multivariate analysis and high-dimensional statistics. Since the pioneering work of Marcenko and Pastur (1967), the asymptotic behavior of the spectral measure of such matrices, associated with N independent copies of n observations of a sequence of iid random variables, is known: almost surely, it converges in distribution to a deterministic law when N and n tend to infinity at the same rate. More recently, Merlevède and Peligrad (2016) proved that, in the case of large covariance matrices associated with independent copies of observations of a strictly stationary centered process which is square integrable and satisfies some weak regularity assumptions, almost surely, the empirical spectral distribution converges weakly to a nonrandom distribution depending only on the spectral density of the underlying process. In particular, if the spectral density is continuous and bounded (which is the case for linear processes with absolutely summable coefficients), the limiting spectral distribution has compact support. However, if the underlying stationary process exhibits long memory, the support of the limiting distribution is no longer compact, and studying the limiting behavior of the eigenvalues and eigenvectors of the associated large covariance matrices can give more information on the underlying process. This thesis goes in that direction, studying the asymptotics and the fluctuations of the largest eigenvalues of large covariance matrices associated with stationary processes exhibiting long memory. In the case where the underlying stationary process is Gaussian, the study can be simplified via a linear model whose underlying population covariance matrix is a Hermitian Toeplitz matrix. For stationary Gaussian processes exhibiting long memory, we then show that the fluctuations of the largest eigenvalues, suitably renormalized, are Gaussian. This limiting behavior differs from that observed for large covariance matrices associated with short memory processes, for which the fluctuations of the largest eigenvalue, suitably renormalized, asymptotically follow the Tracy-Widom law. To prove our results on Gaussian fluctuations, in addition to the usual techniques of random matrix analysis, a fine study of the behavior of the eigenvalues and eigenvectors of the underlying Toeplitz matrix is necessary. In particular, we show that in the long memory case the m largest eigenvalues of the Toeplitz matrix converge to infinity and satisfy a property of "multiple spectral gaps". Moreover, we prove a delocalization property of their associated eigenvectors. In this thesis, we are also interested in the universality of our results for the simplified model, and in the case of large covariance matrices when the Toeplitz matrices are replaced by block diagonal matrices.
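The objects studied here are easy to reproduce numerically. A minimal sketch, assuming NumPy/SciPy, using the fractional Gaussian noise autocovariance as an illustrative long-memory specification, not necessarily the thesis's exact model:

```python
# N independent copies of n observations of a long-memory Gaussian process:
# build the Hermitian Toeplitz population covariance from the fractional
# Gaussian noise autocovariance (H > 1/2 means the covariances are not
# absolutely summable), then look at the largest sample eigenvalues.
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
n, N, H = 400, 800, 0.85
k = np.arange(n)
gamma = 0.5 * (np.abs(k + 1)**(2 * H) - 2 * np.abs(k)**(2 * H)
               + np.abs(k - 1)**(2 * H))       # fGn autocovariance, gamma(0)=1
T = toeplitz(gamma)                            # Toeplitz population covariance

C = np.linalg.cholesky(T)
X = C @ rng.standard_normal((n, N))            # each column ~ N(0, T)
S = X @ X.T / N                                # empirical covariance matrix

print(np.linalg.eigvalsh(S)[-5:])              # five largest eigenvalues;
# these grow without bound as n increases in the long memory regime
```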
65

Essays on long memory processes. / Ensaios sobre processos de memória longa.

Fernandes Neto, Fernando 28 November 2016 (has links)
The present work aims at discussing the main theoretical aspects related to the occurrence of long memory processes and their application in economics and finance. In order to discuss the main theoretical aspects of their occurrence, it is worth starting from the complex systems approach and emergent phenomena, keeping in mind that many of these are computationally irreducible. In other words, the current state of the system depends on all previous states, in such a way that any change in the initial configuration must cause a significant difference in all posterior states. That is, there is a persistence of information over time, a concept directly related to long memory processes. Hence, based on complex systems simulations, three factors (possibly among many others) were related to the rise of long memory processes: agents' heterogeneity, the occurrence of large deviations from the steady states (in conjunction with the motion laws of each system) and spatial complexity (which influences information propagation and the dynamics of agent competition). In relation to the applied knowledge, it is first recognized that the explanatory factors for the rise of long memory processes are common to the structures/characteristics of real markets, and that it is possible to identify potential stylized facts when filtering the long memory components from time series: a considerable part of the information present in time series is a consequence of the autocorrelation structure, which is directly related to the specificities of each market. Given that, this thesis develops a new risk contagion technique that requires no further intervention. This technique is given by the calculation of rolling correlations between long-memory-filtered series of the conditional variances for different economies, such that these filtered series contain the stylized facts (risk peaks), free from possible overreactions caused by market idiosyncrasies. Then, based on the identification of risk contagion episodes related to the 2007/2008 Subprime Crisis in the U.S. and its contagion to the Brazilian economy, the conditional variance of Brazilian assets (an uncertainty measure) was filtered to eliminate the contagion episodes and, consequently, a counterfactual projection was made of what would have happened to the Brazilian economy if the risk contagion episodes had not occurred. Combined with the growth trend of the Brazilian economy prior to the crisis, this makes it possible to conclude that 70% of the economic crisis following the 2008 events was caused by macroeconomic policies and only 30% is due to the effects of risk contagion episodes from the U.S. / O presente trabalho tem como objetivo discutir os principais aspectos teóricos ligados à ocorrência dos processos de memória longa e sua respectiva aplicação em economia e finanças. Para discutir os principais aspectos teóricos da sua ocorrência, recorre-se primeiramente à abordagem de sistemas complexos e fenômenos emergentes, tendo em vista que muitos destes são irredutíveis computacionalmente, ou seja, o estado atual do sistema depende de todos os estados anteriores, tal que, qualquer mudança nos instantes iniciais deve causar significativa diferença nos estados posteriores. Em outras palavras, há uma persistência da informação - conceito este intimamente ligado à memória longa.
Portanto, com base em simulações de sistemas complexos computacionais, três fatores (podendo haver outros mais) foram relacionados ao surgimento de processos de memória longa: heterogeneidade dos agentes, ocorrência de grandes desvios do equilíbrio do sistema (em consonância com as respectivas leis do movimento de cada sistema estudado) e a complexidade espacial (que deve influenciar na propagação da informação e na dinâmica competitiva dos agentes). Em relação à aplicação do conhecimento, primeiro é reconhecido que os fatores explicativos para o surgimento de processos de memória longa são inerentes a estruturas/características de mercados reais e que é possível identificar potenciais fatos estilizados, ao filtrar as componentes de memória longa de séries temporais - grande parte da informação presente nas séries é função da estrutura de autocorrelação que advém das especificidades de cada mercado. Com base nisso, nesta tese foi desenvolvida uma nova técnica de estimação de contágio de risco, que não necessita intervenções adicionais, tendo em vista a identificação prévia de potenciais fatos estilizados em diferentes economias, utilizando as séries filtradas de variância condicional, tal que a partir destas séries filtradas é calculada uma correlação com horizonte móvel de observações entre choques (picos de risco) de curto prazo livres de possíveis reações causadas por idiossincrasias de cada mercado. Posteriormente, com base na identificação dos episódios ligados à Crise do Subprime de 2007/2008 nos Estados Unidos e seu respectivo contágio para a economia brasileira, filtrou-se a variância condicional dos ativos brasileiros (que é uma medida de incerteza), objetivando-se eliminar os eventos de contágio e, consequentemente, foi feita uma projeção contrafactual da evolução da economia, caso os episódios da crise não tivessem ocorrido. Com base nestes dados e com uma análise da tendência evolutiva da economia brasileira no período anterior à crise, constatou-se que 70% da crise econômica vivenciada no Brasil no período pós-2008 é decorrente de falhas na condução da política macroeconômica e somente 30% decorre dos efeitos do cenário externo na economia.
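A heavily simplified sketch of the rolling-correlation contagion measure described above, using plain GARCH(1,1) conditional variances from the Python "arch" package as a stand-in for the thesis's long-memory-filtered series, and simulated placeholder data instead of U.S. and Brazilian index returns:

```python
# Contagion proxy: rolling correlation between the conditional variance
# series of two markets. The thesis additionally filters long memory
# components before correlating; that step is omitted here.
import numpy as np
import pandas as pd
from arch import arch_model

def cond_variance(returns):
    res = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
    return pd.Series(np.asarray(res.conditional_volatility) ** 2,
                     index=returns.index)

rng = np.random.default_rng(0)
idx = pd.bdate_range("2006-01-02", periods=1500)
us = pd.Series(rng.standard_t(6, 1500), index=idx)   # placeholder returns
br = pd.Series(rng.standard_t(6, 1500), index=idx)   # placeholder returns

h_us, h_br = cond_variance(us), cond_variance(br)
rolling_corr = h_us.rolling(window=250).corr(h_br)   # one-year window
print(rolling_corr.dropna().tail())                  # spikes suggest contagion
```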
66

Estimação do índice de memória em processos estocásticos com memória longa: uma abordagem via ABC / Estimation of the memory index of stochastic processes with long memory: an ABC approach

Andrade, Plinio Lucas Dias 28 March 2016 (has links)
Neste trabalho propomos o uso de um método Bayesiano para estimar o parâmetro de memória de um processo estocástico com memória longa quando sua função de verossimilhança é intratável ou não está disponível. Esta abordagem fornece uma aproximação para a distribuição a posteriori sobre a memória e outros parâmetros e é baseada numa aplicação simples do método conhecido como computação Bayesiana aproximada (ABC). Alguns estimadores populares para o parâmetro de memória serão revisados e comparados com esta abordagem. O emprego de nossa proposta viabiliza a solução de problemas complexos sob o ponto de vista Bayesiano e, embora aproximativa, possui um desempenho muito satisfatório quando comparada com métodos clássicos. / In this work we propose the use of a Bayesian method for estimating the memory parameter of a stochastic process with long memory when its likelihood function is intractable or unavailable. This approach provides an approximation to the posterior distribution of the memory and other parameters, and is based on a simple application of the method known as approximate Bayesian computation (ABC). Some popular estimators for the memory parameter are reviewed and compared with this approach. Our proposal makes it possible to solve complex problems from a Bayesian point of view and, although approximate, performs very satisfactorily when compared with classical methods.
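The ABC rejection idea can be sketched in a few lines: draw the memory parameter d from a prior, simulate the process, and keep draws whose summary statistics are close to those of the observed series. The summary statistic (a handful of sample autocorrelations) and the tolerance below are illustrative choices, not the dissertation's:

```python
# ABC rejection sampler for the memory parameter d of an ARFIMA(0,d,0)
# process, bypassing the likelihood entirely.
import numpy as np

def simulate_arfima(n, d, trunc=500, rng=None):
    rng = np.random.default_rng(rng)
    k = np.arange(1, trunc + 1)
    psi = np.concatenate(([1.0], np.cumprod((k - 1 + d) / k)))
    eps = rng.standard_normal(n + trunc)
    return np.convolve(eps, psi, mode="full")[trunc:trunc + n]

def acf(x, lags):
    x = x - x.mean()
    return np.array([np.dot(x[:-h], x[h:]) / np.dot(x, x) for h in lags])

rng = np.random.default_rng(1)
lags = [1, 5, 10, 20, 50]
x_obs = simulate_arfima(2000, d=0.3, rng=rng)     # "observed" series
s_obs = acf(x_obs, lags)                          # observed summary statistic

accepted = []
for _ in range(3000):
    d = rng.uniform(0.0, 0.5)                     # prior on the memory
    s = acf(simulate_arfima(2000, d, rng=rng), lags)
    if np.linalg.norm(s - s_obs) < 0.08:          # tolerance (illustrative)
        accepted.append(d)

# Accepted draws approximate the posterior of d:
print(len(accepted), np.mean(accepted))
```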
67

Tests de l'efficience faible à partir des ondelettes de Haar / Tests of weak form efficiency with Haar wavelet

Belsuz, Autran 24 November 2017 (has links)
Cette thèse proposée utilise les ondelettes de Haar à créer de nouveaux indicateurs techniques, d’en évaluer leurs performances afin de tester la validité de l’efficience faible des marchés financiers. L’approche choisie vise à mettre en œuvre les capacités des indicateurs techniques à capter la mémoire longue présente dans les indices boursiers américains et européens à travers l’estimation de la tendance par le processus de lissage. De plus, cette dernière est une composante importante dans les séries économiques et financières. En effet, elle a fait l’objet d’innombrables investigations tant en analyse technique, qu’en traitement du signal et dans la théorie des cycles économiques. Toutefois, sa présence n’entre pas en ligne de compte dans la théorie classique de la finance, car les principaux modèles utilisés se focalisent sur les variations des cours boursiers. À cet effet, la tendance constitue une source de non-stationnarité entraînant des difficultés majeures pour la modélisation économétrique ou financière. Exploiter cette tendance s’affranchit, dans ce cas, des hypothèses de non-stationnarité tendancielle ou de racine unitaire. En plus, à l’issue des résultats que nous avons obtenus à partir du modèle à changement de régime. Nous confirmons qu’il est possible d’exploiter la présence de mémoire longue dans les cours, et également de battre le marché en présence de coûts de transactions sur les marchés américains et européens. / This thesis uses Haar wavelets to create new technical indicators and evaluates their performance in order to test the validity of the weak form of the efficient market hypothesis. The chosen approach aims to use the ability of technical indicators to capture the long memory present in US and European stock indices through estimation of the trend by a smoothing process. Moreover, the trend is an important component of economic and financial series. Indeed, it has been the subject of innumerable investigations in technical analysis, in signal processing and in business cycle theory. However, its presence is not taken into account in classical financial theory, because the main models used focus on changes in stock prices. The trend therefore constitutes a source of non-stationarity, leading to major difficulties for econometric or financial modeling. Exploiting the trend, in this case, frees us from assumptions of trend non-stationarity or unit roots. In addition, on the basis of the results obtained from the regime-switching model, we confirm that it is possible to exploit the presence of long memory in prices, and also to beat the market in the presence of transaction costs on the American and European markets.
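A minimal sketch of the underlying trend-extraction step, assuming the PyWavelets package: zeroing the fine-scale Haar detail coefficients and reconstructing leaves a smoothed trend, from which a simple price-versus-trend signal can be derived. The indicator below is an illustration, not one of the thesis's indicators:

```python
# Haar wavelet smoothing of a price series: keep the coarse approximation,
# discard the detail scales, reconstruct the trend.
import numpy as np
import pywt

rng = np.random.default_rng(0)
price = 100 + np.cumsum(rng.standard_normal(1024))   # placeholder price path

level = 5
coeffs = pywt.wavedec(price, "haar", level=level)
coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]  # zero all detail scales
trend = pywt.waverec(coeffs, "haar")[: len(price)]   # Haar-smoothed trend

# A naive indicator: long (+1) when price is above its Haar trend, short (-1)
# when below.
signal = np.sign(price - trend)
```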
68

Théorèmes limites pour des processus à longue mémoire saisonnière / Limit theorems for processes with seasonal long memory

Ould Mohamed Abdel Haye, Mohamedou 30 December 2001 (has links) (PDF)
We study the asymptotic behavior of statistics and functionals related to processes with seasonal long memory. We focus on Donsker lines and on the empirical process. The sequences considered are of the form $G(X_n)$, where $(X_n)$ is a Gaussian or linear process. We show that the results obtained by Taqqu and Dobrushin for long memory processes whose covariance is regularly varying at infinity can fail in the presence of seasonal effects. The differences concern both the normalization coefficient and the nature of the limit process. In particular, we show that the limit of the bi-indexed empirical process, although still degenerate, is no longer determined by the Hermite rank of the distribution function of the data. In particular, when this rank equals 1, the limit is no longer necessarily Gaussian; for example, one can obtain a combination of independent Rosenblatt processes. These results are applied to some statistical problems such as the asymptotic behavior of U-statistics, density estimation and change-point detection.
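The Hermite rank that drives these limit theorems is itself easy to compute numerically: it is the smallest k >= 1 with E[G(X) He_k(X)] != 0 for standard Gaussian X. A small illustrative sketch (not taken from the thesis) using Gauss-Hermite quadrature:

```python
# Numerically determine the Hermite rank of a function G with respect to a
# standard Gaussian, via probabilists' Hermite polynomials He_k and
# Gauss-Hermite quadrature.
import numpy as np
from numpy.polynomial import hermite_e as He

def hermite_rank(G, kmax=8, deg=200, tol=1e-10):
    x, w = He.hermegauss(deg)          # nodes/weights for weight exp(-x^2/2)
    norm = w / np.sqrt(2 * np.pi)      # so that sum(norm * f(x)) = E[f(X)]
    for k in range(1, kmax + 1):
        Hk = He.hermeval(x, [0] * k + [1])          # He_k evaluated at nodes
        if abs(np.sum(norm * G(x) * Hk)) > tol:     # first nonzero coefficient
            return k
    return None

print(hermite_rank(lambda x: x**3))    # rank 1: E[X^3 * He_1(X)] = 3
print(hermite_rank(lambda x: x**2))    # rank 2: even function, rank-1 term 0
print(hermite_rank(np.cos))            # rank 2
```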
69

Detection of long-range dependence : applications in climatology and hydrology

Rust, Henning January 2007 (has links)
It is desirable to reduce the potential threats that result from the variability of nature, such as droughts or heat waves that lead to food shortage, or the other extreme, floods that lead to severe damage. To prevent such catastrophic events, it is necessary to understand, and to be capable of characterising, nature's variability. Typically one aims to describe the underlying dynamics of geophysical records with differential equations. There are, however, situations where this does not support the objectives, or is not feasible, e.g., when little is known about the system, or it is too complex for the model parameters to be identified. In such situations it is beneficial to regard certain influences as random, and describe them with stochastic processes. In this thesis I focus on such a description with linear stochastic processes of the FARIMA type and concentrate on the detection of long-range dependence. Long-range dependent processes show an algebraic (i.e. slow) decay of the autocorrelation function. Detecting such dependence is important with respect to, e.g., trend tests and uncertainty analyses. Aiming to provide a reliable and powerful strategy for the detection of long-range dependence, I suggest a way of addressing the problem which is somewhat different from standard approaches. Commonly used methods are based either on investigating the asymptotic behaviour (e.g., log-periodogram regression), or on finding a suitable potentially long-range dependent model (e.g., FARIMA[p,d,q]) and testing the fractional difference parameter d for compatibility with zero. Here, I suggest rephrasing the problem as a model selection task, i.e., comparing the most suitable long-range dependent and the most suitable short-range dependent model. Approaching the task this way requires a) a suitable class of long-range and short-range dependent models, along with suitable means for parameter estimation, and b) a reliable model selection strategy, capable of discriminating also between non-nested models. With the flexible FARIMA model class together with the Whittle estimator, the first requirement is fulfilled. Standard model selection strategies, e.g., the likelihood-ratio test, are frequently not powerful enough for a comparison of non-nested models. Thus, I suggest extending this strategy with a simulation-based model selection approach suitable for such a direct comparison. The approach follows the procedure of a statistical test, with the likelihood ratio as the test statistic. Its distribution is obtained via simulations using the two models under consideration. For two simple models and different parameter values, I investigate the reliability of p-value and power estimates obtained from the simulated distributions. The result turned out to depend on the model parameters. However, in many cases the estimates allow an adequate model selection to be established. An important feature of this approach is that it immediately reveals the ability or inability to discriminate between the two models under consideration. Two applications, a trend detection problem in temperature records and an uncertainty analysis for flood return level estimation, accentuate the importance of having reliable methods at hand for the detection of long-range dependence. In the case of trend detection, falsely concluding long-range dependence implies an underestimation of a trend and possibly leads to a delay of the measures needed to counteract the trend. Ignoring long-range dependence, although present, leads to an underestimation of confidence intervals and thus to an unjustified belief in safety, as is the case for the return level uncertainty analysis. A reliable detection of long-range dependence is thus highly relevant in practical applications. Examples related to extreme value analysis are not limited to hydrological applications. The increased uncertainty of return level estimates is a potential problem for all records from autocorrelated processes; an interesting example in this respect is the assessment of the maximum strength of wind gusts, which is important for designing wind turbines. The detection of long-range dependence is also a relevant problem in the exploration of financial market volatility. By rephrasing the detection problem as a model selection task and suggesting refined methods for model comparison, this thesis contributes to the discussion on and development of methods for the detection of long-range dependence. / Die potentiellen Gefahren und Auswirkungen der natürlichen Klimavariabilitäten zu reduzieren ist ein wünschenswertes Ziel. Solche Gefahren sind etwa Dürren und Hitzewellen, die zu Wasserknappheit führen oder, das andere Extrem, Überflutungen, die einen erheblichen Schaden an der Infrastruktur nach sich ziehen können. Um solche katastrophalen Ereignisse zu vermeiden, ist es notwendig die Dynamik der Natur zu verstehen und beschreiben zu können. Typischerweise wird versucht die Dynamik geophysikalischer Datenreihen mit Differentialgleichungssystemen zu beschreiben. Es gibt allerdings Situationen in denen dieses Vorgehen nicht zielführend oder technisch nicht möglich ist. Dieses sind Situationen in denen wenig Wissen über das System vorliegt oder es zu komplex ist um die Modellparameter zu identifizieren. Hier ist es sinnvoll einige Einflüsse als zufällig zu betrachten und mit Hilfe stochastischer Prozesse zu modellieren. In dieser Arbeit wird eine solche Beschreibung mit linearen stochastischen Prozessen der FARIMA-Klasse angestrebt. Besonderer Fokus liegt auf der Detektion von langreichweitigen Korrelationen. Langreichweitig korrelierte Prozesse sind solche mit einer algebraisch, d.h. langsam, abfallenden Autokorrelationsfunktion. Eine verläßliche Erkennung dieser Prozesse ist relevant für Trenddetektion und Unsicherheitsanalysen. Um eine verläßliche Strategie für die Detektion langreichweitig korrelierter Prozesse zur Verfügung zu stellen, wird in der Arbeit ein anderer als der Standardweg vorgeschlagen. Gewöhnlich werden Methoden eingesetzt, die das asymptotische Verhalten untersuchen, z.B. Regression im Periodogramm. Oder aber es wird versucht ein passendes potentiell langreichweitig korreliertes Modell zu finden, z.B. aus der FARIMA Klasse, und den geschätzten fraktionalen Differenzierungsparameter d auf Verträglichkeit mit dem trivialen Wert Null zu testen. In der Arbeit wird vorgeschlagen das Problem der Detektion langreichweitiger Korrelationen als Modellselektionsproblem umzuformulieren, d.h. das beste kurzreichweitig und das beste langreichweitig korrelierte Modell zu vergleichen. Diese Herangehensweise erfordert a) eine geeignete Klasse von lang- und kurzreichweitig korrelierten Prozessen und b) eine verläßliche Modellselektionsstrategie, auch für nichtgenestete Modelle. Mit der flexiblen FARIMA-Klasse und dem Whittleschen Ansatz zur Parameterschätzung ist die erste Voraussetzung erfüllt. Hingegen sind Standardansätze zur Modellselektion, wie z.B. der Likelihood-Ratio-Test, für nichtgenestete Modelle oft nicht trennscharf genug. Es wird daher vorgeschlagen diese Strategie mit einem simulationsbasierten Ansatz zu ergänzen, der insbesondere für die direkte Diskriminierung nichtgenesteter Modelle geeignet ist. Der Ansatz folgt einem statistischen Test mit dem Quotienten der Likelihood als Teststatistik. Ihre Verteilung wird über Simulationen mit den beiden zu unterscheidenden Modellen ermittelt. Für zwei einfache Modelle und verschiedene Parameterwerte wird die Verläßlichkeit der Schätzungen für p-Wert und Power untersucht. Das Ergebnis hängt von den Modellparametern ab. Es konnte jedoch in vielen Fällen eine adäquate Modellselektion etabliert werden. Eine wichtige Eigenschaft dieser Strategie ist, dass unmittelbar offengelegt wird, wie gut sich die betrachteten Modelle unterscheiden lassen. Zwei Anwendungen, die Trenddetektion in Temperaturzeitreihen und die Unsicherheitsanalyse für Bemessungshochwasser, betonen den Bedarf an verläßlichen Methoden für die Detektion langreichweitiger Korrelationen. Im Falle der Trenddetektion führt ein fälschlicherweise gezogener Schluß auf langreichweitige Korrelationen zu einer Unterschätzung eines Trends, was wiederum zu einer möglicherweise verzögerten Einleitung von Maßnahmen führt, die diesem entgegenwirken sollen. Im Fall von Abflußzeitreihen führt die Nichtbeachtung von vorliegenden langreichweitigen Korrelationen zu einer Unterschätzung der Unsicherheit von Bemessungsgrößen. Eine verläßliche Detektion von langreichweitig korrelierten Prozessen ist somit von hoher Bedeutung in der praktischen Zeitreihenanalyse. Beispiele mit Bezug zu extremem Ereignissen beschränken sich nicht nur auf die Hochwasseranalyse. Eine erhöhte Unsicherheit in der Bestimmung von extremen Ereignissen ist ein potentielles Problem von allen autokorrelierten Prozessen. Ein weiteres interessantes Beispiel ist hier die Abschätzung von maximalen Windstärken in Böen, welche bei der Konstruktion von Windrädern eine Rolle spielt. Mit der Umformulierung des Detektionsproblems als Modellselektionsfrage und mit der Bereitstellung geeigneter Modellselektionsstrategie trägt diese Arbeit zur Diskussion und Entwicklung von Methoden im Bereich der Detektion von langreichweitigen Korrelationen bei.
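The core of the proposed comparison, Whittle estimation of the best long-range dependent FARIMA(0,d,0) and a short-range dependent alternative, fits in a short script. A condensed sketch, assuming NumPy/SciPy, with an AR(1) as the short-range dependent competitor and placeholder data; the full procedure would additionally simulate the likelihood-ratio distribution under each fitted model to calibrate the test:

```python
# Whittle estimation: minimize the frequency-domain contrast
# log(mean(I/g)) + mean(log g) over the spectral shape g, with the
# innovation variance profiled out; then compare the two fitted models.
import numpy as np
from scipy.optimize import minimize_scalar

def periodogram(x):
    n = len(x)
    lam = 2 * np.pi * np.arange(1, n // 2 + 1) / n      # Fourier frequencies
    I = np.abs(np.fft.fft(x - x.mean())[1:n // 2 + 1])**2 / (2 * np.pi * n)
    return lam, I

def whittle(x, shape, bounds):
    lam, I = periodogram(x)
    def contrast(theta):
        g = shape(lam, theta)
        return np.log(np.mean(I / g)) + np.mean(np.log(g))
    res = minimize_scalar(contrast, bounds=bounds, method="bounded")
    return res.x, -res.fun          # estimate, log-likelihood up to constants

farima = lambda lam, d: np.abs(2 * np.sin(lam / 2.0))**(-2 * d)   # FARIMA(0,d,0)
ar1 = lambda lam, phi: 1.0 / (1.0 - 2 * phi * np.cos(lam) + phi**2)  # AR(1)

rng = np.random.default_rng(0)
x = rng.standard_normal(2048)       # placeholder for a runoff/temperature record

d_hat, ll_lrd = whittle(x, farima, (0.01, 0.49))
phi_hat, ll_srd = whittle(x, ar1, (-0.99, 0.99))
print(d_hat, phi_hat, ll_lrd - ll_srd)   # likelihood-ratio statistic to be
# compared against its simulated distribution under each candidate model
```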
