81

Théorèmes limites pour des processus à longue mémoire saisonnière / Limit theorems for processes with seasonal long memory

Ould Mohamed Abdel Haye, Mohamedou 30 December 2001 (has links) (PDF)
We study the asymptotic behaviour of statistics and functionals of processes with seasonal long memory, concentrating on Donsker lines and on the empirical process. The sequences considered are of the form $G(X_n)$, where $(X_n)$ is a Gaussian or linear process. We show that the results obtained by Taqqu and Dobrushin for long-memory processes whose covariance is regularly varying at infinity can break down in the presence of seasonal effects. The differences concern both the normalisation coefficient and the nature of the limit process. In particular, we show that the limit of the doubly indexed empirical process, although still degenerate, is no longer determined by the Hermite rank of the distribution function of the data: when this rank equals 1, the limit need not be Gaussian, and one can obtain, for example, a combination of independent Rosenblatt processes. These results are applied to several statistical problems, such as the asymptotic behaviour of U-statistics, density estimation, and change-point detection.
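The seasonal-covariance mechanism the abstract describes can be made concrete with a small simulation. The sketch below is an illustration only, assuming a toy covariance r(k) = cos(λk)(1+k)^(-α) rather than the thesis's exact model: it draws a stationary Gaussian series whose autocovariance decays hyperbolically while oscillating at a seasonal frequency, the setting in which the Taqqu-Dobrushin normalisations can fail.

```python
import numpy as np
from scipy.linalg import toeplitz, cholesky

rng = np.random.default_rng(0)

def seasonal_lm_cov(n, alpha=0.3, lam=np.pi / 6):
    """Toy covariance r(k) = cos(lam*k) * (1+k)^(-alpha): hyperbolic decay
    modulated by a seasonal oscillation (a stand-in for the covariances
    studied in the thesis, not its exact model)."""
    k = np.arange(n)
    return np.cos(lam * k) * (1.0 + k) ** (-alpha)

def sample_gaussian(r):
    """Draw one path of a stationary Gaussian series with autocovariance r
    via the Cholesky factor of its Toeplitz covariance matrix."""
    S = toeplitz(r) + 1e-10 * np.eye(len(r))   # jitter for numerical PSD
    L = cholesky(S, lower=True)
    return L @ rng.standard_normal(len(r))

x = sample_gaussian(seasonal_lm_cov(1024))
# Partial sums of G(X_n) with G = identity (Hermite rank 1); under seasonal
# long memory their normalisation and limit can differ from the regular-
# variation case discussed in the abstract.
print(x[:5])
```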
82

Detection of long-range dependence : applications in climatology and hydrology

Rust, Henning January 2007 (has links)
It is desirable to reduce the potential threats resulting from the variability of nature, such as droughts or heat waves that lead to food shortage, or, at the other extreme, floods that cause severe damage. To prevent such catastrophic events it is necessary to understand, and to be able to characterise, nature's variability. Typically one aims to describe the underlying dynamics of geophysical records with differential equations. There are, however, situations where this is not feasible or does not serve the objectives, e.g., when little is known about the system or when it is too complex for the model parameters to be identified. In such situations it is beneficial to regard certain influences as random and to describe them with stochastic processes. In this thesis I focus on such a description with linear stochastic processes of the FARIMA type and concentrate on the detection of long-range dependence. Long-range dependent processes show an algebraic (i.e., slow) decay of the autocorrelation function, and detecting this decay matters for, e.g., trend tests and uncertainty analysis. Aiming to provide a reliable and powerful strategy for the detection of long-range dependence, I suggest an approach somewhat different from the standard ones. Commonly used methods either investigate the asymptotic behaviour (e.g., log-periodogram regression) or find a suitable, potentially long-range dependent model (e.g., FARIMA[p,d,q]) and test the fractional difference parameter d for compatibility with zero. Here, I suggest rephrasing the problem as a model selection task, i.e., comparing the most suitable long-range dependent model with the most suitable short-range dependent one. Approaching the task this way requires (a) a suitable class of long-range and short-range dependent models, along with suitable means of parameter estimation, and (b) a reliable model selection strategy capable of discriminating between non-nested models. The flexible FARIMA model class together with the Whittle estimator fulfils the first requirement. Standard model selection strategies, such as the likelihood-ratio test, are frequently not powerful enough for comparing non-nested models. I therefore suggest extending this strategy with a simulation-based model selection approach suited to such a direct comparison. The approach follows the procedure of a statistical test, with the likelihood ratio as the test statistic; its distribution is obtained via simulations using the two models under consideration. For two simple models and different parameter values, I investigate the reliability of the p-value and power estimates obtained from the simulated distributions. The results turn out to depend on the model parameters, but in many cases the estimates allow an adequate model selection to be established. An important feature of this approach is that it immediately reveals the ability, or inability, to discriminate between the two models under consideration. Two applications, a trend detection problem in temperature records and an uncertainty analysis for flood return level estimation, accentuate the importance of having reliable methods for the detection of long-range dependence at hand. In the case of trend detection, falsely concluding long-range dependence implies an underestimation of the trend and possibly delays the measures needed to counteract it. Ignoring long-range dependence, although present, leads to an underestimation of confidence intervals and thus to an unjustified belief in safety, as is the case for the return level uncertainty analysis. A reliable detection of long-range dependence is thus highly relevant in practical applications. Examples related to extreme value analysis are not limited to hydrology: the increased uncertainty of return level estimates is a potential problem for all records from autocorrelated processes; an interesting example in this respect is the assessment of the maximum strength of wind gusts, which is important for designing wind turbines. The detection of long-range dependence is also relevant in the exploration of financial market volatility. By rephrasing the detection problem as a model selection task and suggesting refined methods for model comparison, this thesis contributes to the discussion on, and development of, methods for the detection of long-range dependence.
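The core of the proposed strategy, fitting the best long-range dependent and the best short-range dependent model by Whittle estimation and comparing them via a likelihood-ratio statistic, can be sketched in a few lines. The following is a minimal illustration, not the thesis's code: it profiles the Whittle likelihood for an FARIMA(0,d,0) and an AR(1) spectral shape (both simplifications of the FARIMA[p,d,q] class used in the thesis) and forms the likelihood-ratio-type statistic whose reference distribution the thesis obtains by simulation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def periodogram(x):
    """Periodogram at positive Fourier frequencies lambda_j = 2*pi*j/n."""
    n = len(x)
    j = np.arange(1, (n - 1) // 2 + 1)
    lam = 2 * np.pi * j / n
    I = np.abs(np.fft.fft(x - x.mean())[j]) ** 2 / (2 * np.pi * n)
    return lam, I

def whittle_obj(I, g):
    """Whittle objective with the innovation variance profiled out."""
    return np.log(np.mean(I / g)) + np.mean(np.log(g))

def fit_fd(lam, I):
    """Best FARIMA(0,d,0): spectral shape |2 sin(lam/2)|^(-2d)."""
    obj = lambda d: whittle_obj(I, np.abs(2 * np.sin(lam / 2)) ** (-2 * d))
    res = minimize_scalar(obj, bounds=(-0.49, 0.49), method='bounded')
    return res.x, res.fun

def fit_ar1(lam, I):
    """Best AR(1): spectral shape 1/|1 - phi*exp(-i*lam)|^2."""
    obj = lambda p: whittle_obj(I, 1.0 / np.abs(1 - p * np.exp(-1j * lam)) ** 2)
    res = minimize_scalar(obj, bounds=(-0.99, 0.99), method='bounded')
    return res.x, res.fun

rng = np.random.default_rng(1)
x = rng.standard_normal(2048)            # placeholder; use a real record
lam, I = periodogram(x)
d_hat, q_fd = fit_fd(lam, I)
phi_hat, q_ar = fit_ar1(lam, I)
lr = 2 * len(lam) * (q_ar - q_fd)        # LR-type statistic, up to constants
print(d_hat, phi_hat, lr)
```

The simulation step would then generate many series from each fitted candidate, recompute `lr` on each, and read p-value and power estimates off the two resulting reference distributions.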
83

Essays on long memory processes. / Ensaios sobre processos de memória longa.

Fernando Fernandes Neto 28 November 2016 (has links)
The present work discusses the main theoretical aspects of the occurrence of long memory processes and their application in economics and finance. To discuss the theoretical side, it is natural to start from the complex-systems approach and emergent phenomena, keeping in mind that many of these are computationally irreducible: the current state of the system depends on all previous states, so that any change in the initial configuration causes a significant difference in all later states. That is, information persists over time, a notion directly related to long memory processes. Based on complex-systems simulations, three factors (there may be others) were related to the emergence of long memory processes: heterogeneity of agents, occurrence of large deviations from the steady states (in conjunction with the motion laws of each system), and spatial complexity (which influences information propagation and the dynamics of agent competition). On the applied side, it is first recognised that these explanatory factors are common to the structures and characteristics of real markets, and that potential stylized facts can be identified by filtering the long memory components out of time series: a considerable part of the information in a time series is a consequence of its autocorrelation structure, which is directly related to the specificities of each market. On this basis, the thesis develops a new risk-contagion technique that needs no further intervention. The technique computes rolling correlations between long-memory-filtered series of the conditional variances of different economies, so that the filtered series contain the stylized facts (risk peaks) free from possible overreactions caused by market idiosyncrasies. Then, based on the identification of risk contagion episodes related to the 2007/2008 subprime crisis in the U.S. and its contagion to the Brazilian economy, the contagion episodes were filtered out of the conditional variance of Brazilian assets (an uncertainty measure), and a counterfactual projection was made of what would have happened to the Brazilian economy had the contagion episodes not occurred. Combined with the evolutionary trend of the Brazilian economy prior to the crisis, this leads to the conclusion that 70% of the economic crisis following the 2008 events was caused by failures in domestic macroeconomic policy and only 30% by the risk contagion episodes from the U.S.
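The contagion technique described above can be illustrated with a short sketch. Everything here is hypothetical scaffolding — the variable names, the placeholder variance series, and the memory estimates d_us and d_br are not from the thesis: long memory is removed by fractional differencing, and a rolling correlation between the filtered series is used to flag common risk peaks.

```python
import numpy as np
import pandas as pd

def frac_diff(x, d, trunc=1000):
    """Apply (1-L)^d with the binomial weights pi_0 = 1,
    pi_k = pi_{k-1} * (k-1-d)/k, truncated at `trunc` lags; with d set to an
    estimated memory parameter this strips the long-memory component."""
    k = np.arange(1, min(trunc, len(x)))
    pi = np.concatenate(([1.0], np.cumprod((k - 1 - d) / k)))
    return np.convolve(x, pi, mode='full')[:len(x)]

# Placeholder conditional-variance series for two markets (in practice,
# outputs of GARCH-type fits) and assumed memory estimates.
rng = np.random.default_rng(2)
h_us, h_br = np.exp(0.02 * rng.standard_normal((2, 1500)).cumsum(axis=1))
d_us = d_br = 0.4
f_us = frac_diff(h_us, d_us)
f_br = frac_diff(h_br, d_br)

# Rolling correlation between the filtered series: peaks flag episodes where
# short-run risk shocks move together, i.e. candidate contagion episodes.
roll_corr = pd.Series(f_us).rolling(250).corr(pd.Series(f_br))
print(roll_corr.dropna().tail())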
84

Modelos lineares generalizadas para series temporais com memoria longa / Generalized linear models for long memory time series

Borges, Cristiano Amâncio Vieira 15 August 2018 (has links)
Advisor: Mauricio Enrique Zevallos Herencia / Master's dissertation - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: Non-Gaussian time series modeling is a highly relevant issue in time series analysis. Using partial likelihood estimation, Kedem and Fokianos (2002) systematically extended the Generalized Linear Models (GLM) methodology to time series in which both the response and the covariates are stochastically dependent. However, the statistical analysis of series with long memory (LM), whether in the response or in the covariates, is not discussed in detail. The first purpose of this dissertation is to investigate, via simulations, the properties of the partial maximum likelihood estimators of the GLM coefficients when the model is used for LM time series. The second purpose is to assess the quality of the forecasts obtained from several models fitted to LM series data, using the methodology proposed by Kedem and Fokianos (2002). The models considered are models for count series, binary series, and ordinal categorical series. Finally, the methodologies are illustrated with applications to real data sets from finance and air pollution. / Master's in Statistics
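As a toy illustration of the partial-likelihood GLM fit that the dissertation builds on (a sketch of the Kedem-Fokianos approach, not the dissertation's own code), one can regress a count series on a function of its own past with a Poisson GLM; because lagged responses enter the design matrix, maximising this Poisson likelihood is exactly the partial-likelihood fit.

```python
import numpy as np
import statsmodels.api as sm

# Toy count series; in practice this would be the observed counts plus any
# stochastically dependent covariates.
rng = np.random.default_rng(3)
y = rng.poisson(5, size=500).astype(float)

# Partial-likelihood GLM: regress y_t on its own past via log(1 + y_{t-1}).
lag = np.log1p(y[:-1])
X = sm.add_constant(lag)
model = sm.GLM(y[1:], X, family=sm.families.Poisson())
fit = model.fit()
print(fit.params, fit.bse)
```

Under long memory, the lagged-response covariate inherits the LM structure; the dissertation's simulations probe how the estimators above behave in exactly that setting.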
85

Estimação do índice de memória em processos estocásticos com memória longa: uma abordagem via ABC / Estimation of the memory index of stochastic processes with long memory: an ABC approach

Plinio Lucas Dias Andrade 28 March 2016 (has links)
In this work we propose the use of a Bayesian method for estimating the memory parameter of a stochastic process with long memory when its likelihood function is intractable or unavailable. The approach provides an approximation to the posterior distribution of the memory and other parameters and is based on a simple application of the method known as approximate Bayesian computation (ABC). Some popular existing estimators for the memory parameter are reviewed and compared with this method. Our proposal makes it feasible to solve complex problems from a Bayesian point of view and, although approximate, performs very satisfactorily when compared with classical methods.
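A minimal version of the proposed ABC scheme might look as follows — a sketch under simplifying assumptions (an ARFIMA(0,d,0) data model, a uniform prior on d, sample autocorrelations as the summary statistic, and an arbitrary tolerance); the thesis's actual choices may differ.

```python
import numpy as np

rng = np.random.default_rng(4)

def arfima0d0(n, d, trunc=500):
    """Simulate ARFIMA(0,d,0) by truncating the MA(inf) form of (1-L)^(-d):
    psi_0 = 1, psi_k = psi_{k-1} * (k-1+d)/k."""
    k = np.arange(1, trunc)
    psi = np.concatenate(([1.0], np.cumprod((k - 1 + d) / k)))
    eps = rng.standard_normal(n + trunc)
    return np.convolve(eps, psi, mode='full')[trunc:trunc + n]

def summary(x, nlags=20):
    """Summary statistic: the first sample autocorrelations."""
    x = x - x.mean()
    c = np.correlate(x, x, mode='full')[len(x) - 1:]
    return c[1:nlags + 1] / c[0]

# Observed series (here simulated with a known d so the result can be checked).
x_obs = arfima0d0(1000, 0.3)
s_obs = summary(x_obs)

# ABC rejection: draw d from the prior, simulate, keep draws whose summary is
# close to the observed one; accepted draws approximate the posterior of d.
draws, tol = [], 0.15
for _ in range(5000):
    d = rng.uniform(0.0, 0.5)
    if np.linalg.norm(summary(arfima0d0(1000, d)) - s_obs) < tol:
        draws.append(d)
print(len(draws), np.mean(draws))
```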
86

Tři eseje o trhu s elektřinou / Three Essays on Electricity Markets

Luňáčková, Petra January 2018 (has links)
This thesis consists of three papers that share a common theme: energy. The articles characterise the behavior of electricity, focusing on its unique properties, with an emphasis on the Czech electricity market; the much-discussed solar power plants are also analysed. The first article studies the long-term memory properties of electricity spot prices through detrended fluctuation analysis, as electricity prices are dominated by cycles. We conclude that Czech electricity prices are strongly mean reverting yet non-stationary. The second part of the dissertation investigates possible asymmetry in gas-oil price adjustment. Oil prices determine the price of electricity during times of peak demand, as oil-fuelled power plants react quickly but have high marginal costs. We chose the gasoline-crude oil relationship known as the "rockets and feathers" effect and offer two new tests for analysing this type of relationship, as we believe the error correction model is not the most suitable tool. Analysing an international dataset, we do not find statistically significant asymmetry. The third study assesses the impact of renewable energy sources, solar plants in...
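The first article's tool, detrended fluctuation analysis, is compact enough to sketch directly — a standard DFA-1 implementation shown for illustration, not the article's own code: integrate the series, detrend it window by window, and read the scaling exponent off a log-log regression of the fluctuation function.

```python
import numpy as np

def dfa(x, scales=(16, 32, 64, 128, 256), order=1):
    """Detrended fluctuation analysis: build the integrated profile, split it
    into windows of each scale, remove a polynomial trend per window, and
    regress log F(s) on log s. Slope ~ 0.5 means no memory; slope > 0.5
    signals long-range dependence."""
    y = np.cumsum(x - np.mean(x))                 # profile
    F = []
    for s in scales:
        nwin = len(y) // s
        segs = y[:nwin * s].reshape(nwin, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, order)      # local trend
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    slope = np.polyfit(np.log(scales), np.log(F), 1)[0]
    return slope

rng = np.random.default_rng(5)
print(dfa(rng.standard_normal(4096)))             # ~0.5 for white noise
```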
87

Empirical analysis of inflation dynamics : evidence from Ghana and South Africa

Boateng, Alexander January 2017 (has links)
Thesis (Ph.D. (Statistics)) -- University of Limpopo, 2022 / Using the ARFIMA (autoregressive fractionally integrated moving average) model extended with sGARCH (standard generalised autoregressive conditional heteroscedasticity) and gjrGARCH (Glosten-Jagannathan-Runkle generalised autoregressive conditional heteroscedasticity) innovations, a fractional integration approach and a state space model, this study empirically examines the persistence of the inflation dynamics of Ghana and South Africa, the only two countries in Sub-Saharan Africa with an Inflation Targeting (IT) monetary policy. The first part of the analysis employs monthly CPI (Consumer Price Index) inflation series for the period January 1971 to October 2014 obtained from the Bank of Ghana (BoG), and for the period January 1995 to December 2014 obtained from Statistics South Africa. The second part involves the estimation of the threshold effect of inflation on economic growth using annual data obtained from the IMF (International Monetary Fund) database for the period 1981 to 2014, for both countries. Results from the study show that structural breaks, long memory and non-linearities (or regime shifts) are largely responsible for inflation persistence, and hence for the ever-changing nature of the inflation rates of Ghana and South Africa. ARFIMA(3,0.35,1)-gjrGARCH(1,1) under a Generalised Error Distribution (GED) and ARFIMA(3,0.50,1)-gjrGARCH(1,1) under a Student-t Distribution (STD) provided the best fit for persistence in the conditional mean (or level) of CPI for Ghana and South Africa, respectively. The results from these models provide evidence of a time-varying conditional mean and volatility in the CPI inflation rates of both countries. The two models also reveal an asymmetric effect of inflationary shocks, where negative shocks appear to have a greater impact than positive shocks in terms of persistence in the conditional mean with time-varying volatility. This thesis proposes a model that combines fractional integration with non-linear deterministic terms based on Chebyshev polynomials in time for the analysis of the CPI inflation rates of Ghana and South Africa. We tested for non-linear deterministic terms in the context of fractional integration and estimated the fractional differencing parameters d to be 1.11 and 1.32, respectively, for the Ghanaian and South African inflation rates, but the non-linear trends were found to be statistically insignificant in the two series. New evidence from this thesis shows that the inflation rate of Ghana is highly persistent and non-mean-reverting, with an estimated fractional differencing parameter d > 1.0, and will therefore require policy action to steer inflation back to stability. The South African inflation series, however, was found to be a cyclical process with an order of integration estimated at d = 0.7, indicating mean reversion, with cycles lasting approximately 80 months. Finally, the thesis incorporates structural breaks, long memory, non-linearity and some explanatory variables into a state space model and estimates the threshold effect of inflation on economic growth. The empirical results suggest that inflation below the estimated levels of 9% and 6% for Ghana and South Africa, respectively, will be conducive to economic growth. The policy implications of these results for both countries are as follows.
First, both series have similar properties responsible for inducing inflation persistence, such as structural breaks, non-linearities, long memory and an asymmetric response to negative shocks, though with varying degrees of magnitude. For both countries, the conditional mean and unobserved components such as volatility were found to be time-varying. The thesis therefore recommends that the BoG and the South African Reserve Bank (SARB), responsible for monetary policy, and the finance ministers of both governments, responsible for fiscal policy, take the above-mentioned properties into account when formulating their policies. Second, the thesis recommends that the BoG and the SARB consolidate the IT policy, since keeping inflation below the targets of 9% and 6%, respectively, for Ghana and South Africa will boost economic growth. Third, policymakers could design measures (monetary and fiscal) such as interest rate increases, credit control and the reduction of unnecessary expenditure, among others, to control inflation because of its adverse effects on market volatility. Even though an increase in interest rates could assist in curtailing the recent and anticipated increases in inflation in both countries, where targets have been missed, it would also be prudent to frame monetary policy around the demand-supply side, since the problem in both countries appears to be more structuralist than monetarist. It is therefore recommended that both countries tighten the IT monetary policy in order to reduce inflation persistence; this will eventually affect poverty and income distribution, with ramifications for economic growth and development. The fourth implication of these results is that governments and central banks should be mindful of the actions and decisions they take, in the sense that unguarded decisions and unnecessary alarms could raise uncertainties in the economy, which could in turn affect the future trajectory of inflation. Finally, the thesis recommends that the governments of both countries strengthen the private sector, the engine of growth; for small, open economies such as Ghana and South Africa, this will grow the economy through job creation and restore investor confidence. / National Research Foundation (NRF), Department of Science and Technology (DST), Telkom's Tertiary Education Support Programme (TESP) and the NRF-DST Centre of Excellence for Mathematical and Statistical Sciences (CoE-MaSS)
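For readers who want to reproduce the flavour of the conditional-variance fits, a gjrGARCH(1,1) with GED errors can be estimated with the third-party Python `arch` package. This is an assumption of the sketch — the thesis does not name its software, and `arch` offers no ARFIMA mean, so a plain AR(3) mean stands in for the ARFIMA(3,d,1) level equation.

```python
# pip install arch   (third-party package; its use here is an assumption)
import numpy as np
from arch import arch_model

rng = np.random.default_rng(6)
infl = rng.standard_normal(500)        # placeholder for a CPI inflation series

# gjrGARCH(1,1) with GED errors; o=1 adds the Glosten-Jagannathan-Runkle
# leverage term, so negative shocks carry the extra asymmetry coefficient
# the abstract refers to. AR(3) mean approximates the ARFIMA(3,d,1) level.
am = arch_model(infl, mean='AR', lags=3, vol='GARCH', p=1, o=1, q=1,
                dist='ged')
res = am.fit(disp='off')
print(res.params)                      # gamma[1] is the asymmetry term
```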
88

Application of Wavelets to Filtering and Analysis of Self-Similar Signals

Wirsing, Karlton 30 June 2014 (has links)
Digital signal processing has been dominated by the Fourier transform since the Fast Fourier Transform (FFT) was developed by Cooley and Tukey in 1965. In the 1980s a new transform, the wavelet transform, was developed, even though the first wavelet goes back to 1910. With the Fourier transform, all information about localized changes in signal features is spread out across the entire signal space, making local features global in scope. Wavelets retain localized information about the signal by applying a function of limited duration, also called a wavelet, to the signal. As with the Fourier transform, the discrete wavelet transform has an inverse, which allows us to make changes to a signal in the wavelet domain and then transform it back to the time domain. In this thesis, we investigate the filtering properties of this technique and analyze its performance under various settings. Another popular application of the wavelet transform is data compression, as in the JPEG 2000 standard and the compressed digital storage of fingerprints developed by the FBI. Previous work on filtering has focused on the discrete wavelet transform. Here, we extend that method to the stationary wavelet transform and find that it gives a performance boost of as much as 9 dB over the discrete wavelet transform. We also find that the SNR of noise filtering decreases as the frequency of the base signal increases up to the Nyquist limit, for both the discrete and stationary wavelet transforms. Besides filtering the signal, the discrete wavelet transform can also be used to estimate the standard deviation of the white noise present in the signal. We extend the estimator developed for the discrete wavelet transform to the stationary wavelet transform. As with filtering, the quality of the estimate decreases as the frequency of the base signal increases. Many interesting signals are self-similar, meaning that one of their properties is invariant across many different scales. One example is strict self-similarity, where an exact copy of the signal is replicated on many scales, but the most common property is statistical self-similarity, where a random segment of a signal is replicated on many different scales. In this work, we investigate wavelet-based methods to detect statistical self-similarity in a signal and their performance on various types of self-similar signals. Specifically, we find that the quality of the estimate depends on the units of the signal under investigation for low Hurst exponents, and on the type of edge padding used for high Hurst exponents. / Master of Science
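A compact example of the two ideas discussed above — wavelet-domain filtering and noise standard-deviation estimation — using the third-party PyWavelets package (an assumption of this sketch; the thesis does not specify its toolchain):

```python
# pip install PyWavelets   (third-party; assumed here, not named in the thesis)
import numpy as np
import pywt

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 8 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

# Donoho-style noise estimate from the finest-scale detail coefficients:
# white noise dominates that scale, so sigma ~ median(|cD1|) / 0.6745.
coeffs = pywt.wavedec(noisy, 'db4', level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745

# Soft-threshold the detail coefficients (universal threshold) and invert;
# the stationary wavelet transform variant would use pywt.swt/iswt instead.
thr = sigma * np.sqrt(2 * np.log(noisy.size))
den = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
rec = pywt.waverec(den, 'db4')
print(sigma, np.mean((rec - clean) ** 2))
```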
89

Essays in long memory : evidence from African stock markets

Thupayagale, Pako January 2010 (has links)
This thesis explores various aspects of long memory behaviour in African stock markets (ASMs). First, we examine long memory in both equity returns and volatility, using the weak-form version of the efficient market hypothesis (EMH) as a criterion. The results show that these markets largely display a predictable component in returns, while evidence of long memory in volatility is mixed. In general, these findings contradict the precepts of the EMH, and a variety of remedial policies are suggested. Next, we re-examine the evidence of volatility persistence and long memory in light of potential neglected breaks in the stock return volatility data. Our results indicate that a failure to account for time variation in the unconditional mean variance can lead to spurious conclusions. Furthermore, a modification of the GARCH model to allow for mean variation is introduced, which generates improved volatility forecasts for a selection of ASMs. To further evaluate the quality of volatility forecasts, we compare the performance of a number of long memory models against a variety of alternatives. The results generally suggest that over short horizons simple statistical models and short memory GARCH models provide superior forecasts of volatility, while at longer horizons we find some evidence in favour of long memory models. However, the various model rankings are shown to be sensitive to the choice of error statistic used to assess forecast accuracy. Finally, a wide range of volatility forecasting models is evaluated in order to ascertain which method delivers the most accurate value-at-risk (VaR) estimates in the context of the Basel risk framework. The results show that both asymmetric and long memory attributes are important considerations in delivering accurate VaR measures.
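VaR accuracy in the Basel framework is commonly judged by the failure rate, for instance with Kupiec's proportion-of-failures test; the sketch below is a standard implementation of that test, shown for illustration rather than taken from the thesis.

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(violations, T, p):
    """Kupiec proportion-of-failures test: N VaR violations in T days
    against nominal coverage p; LR_uc is asymptotically chi-square(1)."""
    N = int(violations)
    if N in (0, T):                        # degenerate likelihood cases
        lr = -2 * (T * np.log(1 - p) if N == 0 else T * np.log(p))
    else:
        pi = N / T
        lr = -2 * ((T - N) * np.log((1 - p) / (1 - pi))
                   + N * np.log(p / pi))
    return lr, chi2.sf(lr, df=1)

print(kupiec_pof(violations=14, T=1000, p=0.01))   # too many misses -> reject
```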
90

以FIGARCH模型估計長期利率期貨風險值 / Modeling Daily Value-at-Risk for Long-term Interest Rate Futures Using FIGARCH Models

吳秉宗, Wu, Pinh-Tsung Unknown Date (has links)
Value-at-Risk (VaR) has recently become the standard measure used to quantify market risk; it is defined as the maximum expected loss in the value of an asset or portfolio, for a given probability α, over a determined time period. Following the 1996 amendment to the Basel Accord, which requires banks to account for market risk and allows them to develop internal VaR models, many VaR estimation methods have been proposed. This thesis uses FIGARCH (fractionally integrated generalized autoregressive conditional heteroskedasticity) models, specifically FIGARCH(1,d,1), to calculate daily VaR for long-term interest rate futures returns, for both long and short trading positions, based on the normal, Student-t, and skewed Student-t error distributions. The 30-year U.S. Treasury bond futures (TB), 10-year Treasury note futures (TN), and 10-year municipal note index futures (MNI) listed on the Chicago Board of Trade are studied at daily frequency. The empirical results show that the return series of all three interest rate futures have long memory in volatility and should be modeled using fractionally integrated models. Moreover, the in-sample and out-of-sample VaR values generated by FIGARCH(1,d,1) models are more accurate than those generated by traditional GARCH(1,1) models. Among the FIGARCH(1,d,1) error distributions, the normal models are preferred for in-sample VaR when α = 0.05, while the Student-t and skewed Student-t models perform better for in-sample VaR when α ranges from 0.025 to 0.0025, with the skewed Student-t slightly outperforming the Student-t. The out-of-sample results differ: the Student-t and skewed Student-t FIGARCH(1,d,1) models perform better when α = 0.05, whereas the normal FIGARCH(1,d,1) models perform better when α = 0.01. The VaR values obtained from the Student-t and skewed Student-t FIGARCH(1,d,1) models are too conservative when α = 0.01, exhibiting a failure rate of zero and overestimating the risk.
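To make the FIGARCH(1,d,1) machinery concrete, the sketch below (with illustrative parameter values, not the estimates reported in the thesis) builds the ARCH(∞) weights implied by the fractional filter in σ_t² = ω + βσ_{t-1}² + [1 − βL − (1 − φL)(1 − L)^d]ε_t², then converts the resulting conditional variances into Student-t VaR for a long position.

```python
import numpy as np
from scipy.stats import t as student_t

def figarch_weights(d, phi, beta, trunc=1000):
    """ARCH(inf) weights lambda_k of FIGARCH(1,d,1), from expanding
    1 - beta*L - (1 - phi*L)(1-L)^d; lambda_1 = d + phi - beta, and the
    tail decays hyperbolically (the long-memory signature)."""
    delta = np.empty(trunc + 1)
    delta[0] = 1.0
    for k in range(1, trunc + 1):
        delta[k] = delta[k - 1] * (k - 1 - d) / k
    c = delta.copy()
    c[1:] -= phi * delta[:-1]          # coefficients of (1 - phi*L)(1-L)^d
    lam = -c[1:]
    lam[0] -= beta
    return lam

def figarch_variance(eps, omega, beta, lam):
    """sigma_t^2 = omega + beta*sigma_{t-1}^2 + sum_k lambda_k*eps_{t-k}^2,
    with the infinite sum truncated at len(lam) lags."""
    e2 = eps ** 2
    s2 = np.full(len(eps), e2.mean())
    for t_ in range(1, len(eps)):
        k = min(t_, len(lam))
        s2[t_] = omega + beta * s2[t_ - 1] + lam[:k] @ e2[t_ - 1::-1][:k]
    return s2

# Illustrative parameters only -- the thesis estimates them from the CBOT
# futures series; those estimates are not reproduced here.
omega, d, phi, beta, nu, alpha = 0.01, 0.4, 0.2, 0.5, 8.0, 0.05
rng = np.random.default_rng(8)
r = 0.1 * rng.standard_normal(2000)    # placeholder futures returns
s2 = figarch_variance(r, omega, beta, figarch_weights(d, phi, beta))

# Student-t VaR for a long position: scale the t quantile to unit variance.
q = student_t.ppf(alpha, nu) * np.sqrt((nu - 2) / nu)
VaR_long = -q * np.sqrt(s2)
print(VaR_long[-1])
```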
