  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Apport des ondelettes dans l'analyse univariée et multivariée des processus à mémoire longue : application à des données financières / Contribution of wavelets to the univariate and multivariate analysis of long memory processes: application to financial data

Boubaker, Heni 09 December 2010 (has links)
Cette thèse fait appel à la théorie des ondelettes pour estimer le paramètre de mémoire longue dans le cadre stationnaire et non stationnaire lors de la modélisation des séries financières, et pour l'estimation non paramétrique de la copule lors de l'examen de leurs interdépendances. L'avantage de la méthode des ondelettes comparée à l'analyse de Fourier est d'être parfaitement localisée dans le domaine temporel et celui des fréquences. Dans une première étape, nous nous sommes intéressés à la modélisation de la dynamique des séries de variations. À cette fin, nous proposons un modèle économétrique qui permet de tenir compte, en plus de la composante mémoire longue dans la moyenne, d'une dépendance de long terme dans la variance conditionnelle. D'une part, nous étudions les liens de causalité entre les séries obtenues par décomposition en ondelettes à chaque niveau de résolution. D'autre part, nous nous orientons vers la théorie de la cointégration fractionnaire. À cet égard, nous estimons un modèle vectoriel à correction d'erreur dans lequel la vitesse d'ajustement à l'équilibre de long terme est plus lente que la vitesse associée à la cointégration linéaire. L'atout de cette approche est de détecter la présence d'une relation de long terme en plus des fluctuations de court terme et de mettre en évidence des liens de causalité durables. Dans une deuxième étape, nous proposons une analyse des dépendances multivariées entre les risques financiers et leurs impacts sur les mesures de risques communément rencontrées en gestion des risques. Nous exposons une application de la théorie des copules à l'analyse de la structure des dépendances entre différentes séries financières. Les résultats empiriques obtenus montrent que la structure de dépendance devient accentuée lorsque les séries sont filtrées de l'effet mémoire.
Ensuite, nous étudions l'effet du changement de la structure de dépendance sur la frontière d'efficience et sur les mesures du risque de l'ensemble des portefeuilles optimaux en considérant le modèle moyenne-variance-copules et en élaborant une mesure du risque basée sur l'approche CVaR-copules. Les résultats empiriques prouvent que nous sommes loin des portefeuilles optimaux de Markowitz. Enfin, nous proposons un nouvel estimateur dans le cadre des ondelettes qui constitue une extension de celui de Shimotsu et Phillips (2005, 2010). / This thesis uses wavelet theory to estimate the long memory parameter in the stationary and non-stationary framework when modeling financial time series, and for the non-parametric estimation of the copula when examining their interdependencies. The advantage of the wavelet method over Fourier analysis is that it is perfectly localized in both the time and frequency domains. In a first step, we are interested in modeling the dynamics of the series of variations. To this end, we propose an econometric model that takes into account, in addition to the long memory component in the mean, a long-term dependence in the conditional variance. On the one hand, we study the causal links between the series obtained by wavelet decomposition at each level of resolution. On the other hand, we turn to the theory of fractional cointegration. In this regard, we consider a vector error correction model in which the speed of adjustment to the long-run equilibrium is slower than the speed associated with linear cointegration. The advantage of this approach is that it detects the presence of a long-term relationship in addition to short-term fluctuations and uncovers durable causal links. In a second step, we present a multivariate analysis of the dependencies between financial risks and their impact on the risk measures commonly used in risk management. We present an application of copula theory to the analysis of the dependence structure between different financial series. The empirical results show that the dependence structure becomes more pronounced when the series are filtered of the memory effect. Next, we investigate the effect of a change in the dependence structure on the efficient frontier and on the risk measures of the set of optimal portfolios, considering the mean-variance-copulas model and developing a risk measure based on the CVaR-copula approach. The empirical results show that we are far from Markowitz's optimal portfolios. Finally, we propose a new estimator in the wavelet framework which extends that of Shimotsu and Phillips (2005, 2010).
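The wavelet route to the long-memory parameter sketched in this abstract can be illustrated numerically: for a stationary long-memory series with parameter d, the variance of the detail coefficients at octave j scales roughly as 2^(2dj), so a regression of log2 detail variance on the level gives a slope near 2d. The sketch below (an editorial illustration under that scaling assumption, not the thesis's estimator; all function names are ours) uses a hand-rolled Haar pyramid:

```python
import numpy as np

def haar_details(x, max_level):
    """Return Haar wavelet detail coefficients for levels 1..max_level."""
    approx = np.asarray(x, dtype=float)
    details = []
    for _ in range(max_level):
        n = len(approx) // 2
        pair = approx[:2 * n].reshape(n, 2)
        details.append((pair[:, 0] - pair[:, 1]) / np.sqrt(2))
        approx = (pair[:, 0] + pair[:, 1]) / np.sqrt(2)
    return details

def wavelet_memory_estimate(x, max_level=6):
    """Estimate d from the slope of log2 detail variance vs. level.
    Under var(d_j) ~ 2^(2*d*j), the regression slope is 2d."""
    details = haar_details(x, max_level)
    levels = np.arange(1, max_level + 1)
    logvar = np.log2([np.mean(dj ** 2) for dj in details])
    slope = np.polyfit(levels, logvar, 1)[0]
    return slope / 2

rng = np.random.default_rng(0)
d_hat = wavelet_memory_estimate(rng.standard_normal(4096))  # white noise: d near 0
```

For white noise the detail variance is flat across levels, so the estimate is close to zero; a long-memory input would tilt the regression line upward.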
52

Multifraktální povaha finančních trhů a její vztah k tržní efektivnosti / Multifractal nature of financial markets and its relationship to market efficiency

Jeřábek, Jakub January 2009 (has links)
The thesis shows the relationship between persistence in financial market returns and market efficiency. It interprets the efficient markets hypothesis and presents various time series models for the analysis of financial markets. The concept of long memory is presented in detail, and the two main families of long memory estimators are analysed: time-domain and frequency-domain methods. A Monte Carlo study is used to compare these methods, and selected estimators are then applied to real-world data: exchange-rate and stock-market series. There is no evidence of long memory in the returns, but the stock market volatilities show clear signs of persistence.
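A canonical frequency-domain estimator of the kind compared in such Monte Carlo studies is the Geweke–Porter-Hudak (GPH) log-periodogram regression: regress the log periodogram at the first m Fourier frequencies on -2*log(2*sin(lambda/2)), and the slope estimates d. A minimal numpy sketch (our own illustration, with a common but arbitrary bandwidth choice m = sqrt(n)):

```python
import numpy as np

def gph_estimate(x, m=None):
    """GPH log-periodogram regression estimate of the long-memory parameter d."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(n ** 0.5)                      # common, but tunable, bandwidth
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    fft = np.fft.fft(x - x.mean())
    periodogram = np.abs(fft[1:m + 1]) ** 2 / (2 * np.pi * n)
    regressor = -2 * np.log(2 * np.sin(freqs / 2))
    slope = np.polyfit(regressor, np.log(periodogram), 1)[0]
    return slope

rng = np.random.default_rng(1)
d_hat = gph_estimate(rng.standard_normal(4096))  # white noise: d near 0
```

A Monte Carlo comparison like the one in the thesis would wrap this in a loop over simulated series and tabulate bias and variance against time-domain estimators such as R/S.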
53

Robust parameter estimation and pivotal inference under heterogeneous and nonstationary processes

Hou, Jie 22 January 2016 (has links)
Robust parameter estimation and pivotal inference are crucial for credible statistical conclusions. This thesis addresses these issues in three contexts: long-memory parameter estimation robust to low-frequency nonstationary contamination, long-memory properties of financial time series, and inference on structural changes in a joint segmented trend with heterogeneous noise. Chapter 1 considers robust estimation of the long-memory parameter allowing for a wide collection of contamination processes, in particular low-frequency nonstationary processes such as random level shifts. We propose a robust modified local-Whittle estimator and show that it has the usual asymptotic distribution. We also provide modifications to further account for short-memory dynamics and additive noise. The proposed estimator provides substantial efficiency gains over existing methods in the presence of contamination, without sacrificing efficiency when it is absent. Chapter 2 applies the modified local-Whittle estimator to various volatility series for stock indices and exchange rates to robustly estimate the long-memory parameter. Our findings suggest that all series are a combination of long- and short-memory processes and random level shifts, with the magnitude of each component varying across series. Our results contrast with the view that long memory is the dominant feature. Chapter 3 is concerned with pivotal inference about structural changes in a joint segmented trend with heterogeneous noise. We provide tests for changes in the slope and in the variance of the noise that are valid when both may be present, each allowed to occur at different dates. We suggest procedures for four testing problems.
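For context, the unmodified local-Whittle estimator that Chapter 1 builds on minimizes the objective R(d) = log(mean_j lambda_j^(2d) I_j) - 2d * mean_j log(lambda_j) over the first m Fourier frequencies. The sketch below implements this standard version only (the thesis's modification for level-shift contamination is not reproduced here; bandwidth m = n^0.65 is our illustrative choice):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle(x, m=None):
    """Standard local-Whittle estimate of d over the first m frequencies."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(n ** 0.65)
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)

    def objective(d):
        # Concentrated Whittle likelihood in the spectral neighborhood of zero.
        return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))

    return minimize_scalar(objective, bounds=(-0.49, 0.99), method='bounded').x

rng = np.random.default_rng(2)
d_hat = local_whittle(rng.standard_normal(4096))  # white noise: d near 0
```

Random level shifts added to the input would bias this standard estimator upward, which is precisely the failure mode the modified estimator in Chapter 1 is designed to avoid.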
54

Econometric methods related to parameter instability, long memory and forecasting

Xu, Jiawen 22 January 2016 (has links)
The dissertation consists of three chapters on econometric methods related to parameter instability, forecasting and long memory. The first chapter introduces a new frequentist-based approach to forecasting time series in the presence of in- and out-of-sample breaks in the parameters. We model the parameters as random level shift (RLS) processes and introduce two features to make the changes in parameters forecastable. The first models the probability of shifts as a function of covariates. The second incorporates a built-in mean-reversion mechanism in the time path of the parameters. Our model can be cast into a non-linear, non-Gaussian state-space framework. We use particle filtering and Monte Carlo expectation-maximization algorithms to construct the estimates. We compare the forecasting performance with several alternative methods for different series. In all cases, our method yields substantial gains in forecasting accuracy. The second chapter extends the RLS model of Lu and Perron (2010) for the volatility of asset prices. The extensions are in two directions: a) we specify a time-varying probability of shifts as a function of large negative lagged returns; b) we incorporate a mean-reverting mechanism so that the sign and magnitude of the jump component change according to the deviations of past jumps from their long-run mean. We estimate the model using daily data on four major stock market indices. Compared to competing models, the modified RLS model yields the smallest mean squared forecast errors overall. The third chapter proposes a method of inference about the mean or slope of a time trend that is robust to the unknown order of fractional integration of the errors. Our tests have the standard asymptotic normal distribution irrespective of the value of the long-memory parameter. Our procedure is based on using quasi-differences of the data and regressors, based on a consistent estimate of the long-memory parameter obtained from the residuals of a least-squares regression. We use the exact local-Whittle estimator proposed by Shimotsu (2010). Simulation results show that our procedure delivers tests with good finite sample size and power, including cases with strong short-term correlations.
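The quasi-differencing step mentioned above amounts to applying the fractional filter (1-L)^d, whose AR(infinity) weights follow the recursion pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k. A small self-contained sketch (our illustration of the filter itself, not the thesis's full testing procedure):

```python
import numpy as np

def frac_diff_weights(d, n):
    """Weights of (1-L)^d: pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def quasi_difference(x, d):
    """Apply the truncated fractional filter (1-L)^d to a series."""
    x = np.asarray(x, dtype=float)
    w = frac_diff_weights(d, len(x))
    # Value at time t is sum_k w_k * x_{t-k} over the available history.
    return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])

# Sanity check: d = 1 reduces to an ordinary first difference.
x = np.arange(10.0)
dx = quasi_difference(x, 1.0)   # dx[0] = x[0]; dx[t] = x[t] - x[t-1] = 1 for t >= 1
```

With a consistent estimate of d plugged in, the filtered data and regressors behave like a short-memory regression, which is what restores the standard normal limit for the trend tests.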
55

Long memory in bond market returns: a test of weak-form efficiency in Botswana's bond market

Muzhoba, Gorata 06 March 2022 (has links)
Using the ARFIMA-FIGARCH model, this dissertation examines the efficiency of Botswana's bond market. It focuses on the return and volatility properties of the Fleming Asset Bond Index (the main aggregate fixed-income benchmark index in Botswana) over the period September 2009 to May 2019. The weak form of the efficient market hypothesis (EMH) is used as the criterion for investigating the existence of long memory in both bond returns and volatility. The results of our study indicate that the Botswana bond market data exhibit, to a great extent, long-range dependence, which negates the precepts of the efficient market hypothesis. Furthermore, policy reforms intended to stimulate bond market development and related efficiency gains appear not to have produced the desired outcomes, as long memory is found across all sample periods. Further remedial policies are suggested to enhance market dynamism.
56

FITTING MODELS OF NONSTATIONARY TIME SERIES: AN APPLICATION TO EEG DATA

Konda, Sreenivas 02 June 2006 (has links)
No description available.
57

Statistical tests for long memory and unit root of high frequency financial data

Chang, Yen-Hsiang 24 July 2008 (has links)
In this thesis, we study unit root tests, including the ADF, PP and KPSS tests; long memory tests, such as the R/S and GPH tests; and the application of these methods to high-frequency financial data analysis. The software S-PLUS was adopted to analyze the data, and corrections to the S-PLUS program for unit root tests are also proposed. To apply these two families of tests to high-frequency data, we adopt the HFlibrary designed by Yan and Zivot (2003) for preliminary data analysis and propose a new library, HFanalysis, which can be used to clean high-frequency data (excluding N.A. values, sorting transactions and retrieving transactions for a given time), obtain equally spaced time intervals, and test for unit root and long memory properties. In addition, we apply the proposed library to simulate the power of traditional unit root tests such as the ADF test and long memory tests such as the R/S, and to perform an empirical study. Finally, we explore the power of the ADF test on data simulated from a threshold unit root model and simulate the percentiles of the null distributions of the following threshold unit root tests: WALD, LM, LR and W£f.
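The R/S (rescaled range) test referred to above is based on the range of the cumulative demeaned series divided by its standard deviation; across window sizes n, E[R/S] grows like n^H, so a log-log regression estimates the Hurst exponent H (H = 0.5 for short memory). A minimal sketch of the classical statistic (our illustration, not the thesis's S-PLUS code; window sizes are arbitrary choices):

```python
import numpy as np

def rescaled_range(x):
    """Classical R/S statistic of one window."""
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())     # cumulative deviations from the mean
    return (z.max() - z.min()) / x.std()

def hurst_rs(x, window_sizes):
    """Estimate H from the slope of log mean R/S vs. log window size."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean([rescaled_range(c) for c in chunks])))
    return np.polyfit(log_n, log_rs, 1)[0]

rng = np.random.default_rng(3)
h = hurst_rs(rng.standard_normal(8192), [16, 32, 64, 128, 256])
```

Note that the classical R/S statistic is biased upward in small windows, which is one reason modified versions (e.g. Lo's) are preferred in practice.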
58

運用長期記憶模型於估計股票指數期貨之風險值 / Estimating Value-at-Risk for stock index futures using Double Long-memory Models

唐大倫, Tang,Ta-lun Tang Unknown Date (has links)
在本篇文章中,我們採用長期記憶模型來估計S&P500、Nasdaq100和Dow Jones Industrial Index三個股票指數期貨的日收盤價的風險值。為了更準確地計算風險值,本文採用常態分配、t分配以及偏斜t分配來做模型估計以及風險值之計算。有鑒於大多數探討風險值的文獻只考慮買入部位的風險,本研究除了估計買入部位的風險值,也估計放空部位的風險值,以期更能全面性地估算風險。實證結果顯示,ARFIMA-FIGARCH模型配合偏斜t分配較其他兩種分配更能精確地估算樣本內的風險值。基於ARFIMA-FIGARCH模型配合偏斜t分配在樣本內風險值計算的優異表現,我們利用此模型搭配來實際求算樣本外風險值。結果如同樣本內風險值一般,ARFIMA-FIGARCH模型配合偏斜t分配在樣本外也有相當好的風險預測能力。 / In this thesis, we estimate the Value-at-Risk (VaR) of the daily closing prices of three stock index futures contracts (S&P500, Nasdaq100 and Dow Jones) using double long-memory models. Because of the long-term persistence in our data, ARFIMA-FIGARCH models are used to compute the VaR. To investigate further, three density distributions (normal, Student-t and skewed Student-t) are used for estimating the models and computing the VaR. In addition to the VaR for long trading positions, which most research to date has focused on, the VaR for short trading positions is calculated as well in this study. The empirical results show that, for the three stock index futures, the ARFIMA-FIGARCH models with the skewed Student-t distribution perform better than the symmetric alternatives in computing in-sample VaR for both long and short trading positions, and also forecast out-of-sample VaR well.
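The long/short distinction in the abstract comes down to which tail of the return distribution is read off: the long-position VaR is the left-tail quantile, the short-position VaR the right-tail one. The sketch below is a deliberately simplified unconditional version using a fitted symmetric Student-t (the thesis uses conditional ARFIMA-FIGARCH quantiles with a skewed Student-t; the fake return data here are illustrative):

```python
import numpy as np
from scipy import stats

def var_long_short(returns, alpha=0.05):
    """Parametric VaR for long and short positions from a fitted Student-t.
    Long positions lose in the left tail, short positions in the right."""
    df, loc, scale = stats.t.fit(returns)
    q_low = stats.t.ppf(alpha, df, loc, scale)
    q_high = stats.t.ppf(1 - alpha, df, loc, scale)
    return -q_low, q_high   # both losses reported as positive numbers

rng = np.random.default_rng(4)
fake_returns = stats.t.rvs(5, scale=0.01, size=2000, random_state=rng)
var_long, var_short = var_long_short(fake_returns)
```

In the conditional setting of the thesis, `scale` would be replaced by the one-step-ahead FIGARCH volatility forecast, and a skewed density would let the two tails differ.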
59

風險值與波動性共整合: 長期記憶模型 / Value at Risk and Volatility Comovement with Long Memory Models

劉尚銘, Liu, Shang Ming Unknown Date (has links)
金融自由化後,金融商品交易的多樣性在活絡金融市場方面佔有很重的份量,也使得投資者有更多樣化的投資管道及標的。投資者購買金融商品除了追求較高的報酬外,對於投資風險的管理也是不容忽視。2007年,美國的次級房貸(subprime mortgage)風暴使得雷曼兄弟和AIG集團爆發財務危機,正是投資者追求高報酬之後,在風險管理上並未妥善管理所造成。衡量風險時,通常會使用變異數或標準差當做衡量指標,即在衡量其波動性,因此波動性裏含有許多訊息。在本論文中,我們將探討波動性所透露出來的兩個訊息,一個是風險值(VaR),文中將分別使用二種衡量可解釋長期記憶的GARCH模型探討台股指數期貨及新加坡的摩台股指數期貨這兩個期貨市場的VaR。另外則是試圖尋找出這兩個期貨市場殘差值的波動性之間的長期共整合關係。本論文主要由三篇文章組成,第一篇是利用Baillie, Bollerslev, and Mikkelsen (1996) 所提出的長期記憶模型FIGARCH來計算台指期貨的風險值(VaR);第二篇也是利用長期記憶模型來計算新加坡的摩台指期貨的風險值,但這次的長期記憶模型增加一個由Tse (1998) 提出的可以考慮不對稱性波動的FIAPARCH模型。這兩個模型都搭配三種不同的分配來計算VaR,分別是Normal, Student-t和skewed Student-t分配;實證結果顯示,這兩個期貨市場報酬的波動皆具有長期記憶,表示之前影響指數期貨報酬率的因素對未來指數期貨報酬率會有較長時間的影響力。而在傳統認為殘差值服從常態分配的假定下所計算出的VaR的配適情況較以Student-t分配計算出的VaR的配適情況不具效率,這除了說明傳統的常態分配假說在計算此兩個指數期貨報酬率是不適用之外,亦得出他們具有肥尾(厚尾)的現象。第三篇則是結合前兩篇的結果來探討此兩個指數期貨報酬率之間的波動性是否具有長期關係。因為台指期貨報酬率與摩台指期貨報酬率的波動性皆具有長期記憶,故在此部分,利用Engle-Granger (1987) 的兩階段共整合模型來求此兩個指數期貨報酬率之間的波動性是否存在長期關係。實證結果顯示,他們確實存在長期共整合關係,且摩台指期貨報酬率的波動性較台指期貨報酬率的波動性強,因此我們可以在台指期貨市場買入期指,而在新加坡的摩台指期貨市場避險。 / After financial liberalization, the diversity of traded financial instruments has played a major role in invigorating financial markets and has given investors a wider range of investment channels and products. Besides pursuing higher returns, investors cannot afford to neglect the management of investment risk. In 2007, the US subprime mortgage storm triggered financial crises at Lehman Brothers and AIG, precisely because investors pursued high returns without managing the associated risks properly. When we measure risk, we usually adopt the variance or the standard deviation; that is, we measure volatility, which therefore carries a great deal of information. In this thesis, we examine two pieces of information disclosed by volatility. One is the value at risk (VaR hereafter). We use long-memory GARCH models to estimate the VaR of Taiwan stock index futures returns and of Singapore's MSCI Taiwan index futures returns. Moreover, we attempt to determine whether there is a long-run relationship between the residual volatilities of these two futures markets. The thesis comprises three essays. The first essay employs the FIGARCH model of Baillie, Bollerslev, and Mikkelsen (1996) to calculate the VaR of Taiwan stock index futures returns. The second essay employs the FIGARCH model and the FIAPARCH model of Tse (1998) to calculate the VaR of Singapore's MSCI Taiwan index futures returns. We calculate the VaRs of the two futures markets using the FIGARCH and FIAPARCH models with three different distributions: normal, Student-t and skewed Student-t. The empirical results show that both futures markets exhibit long memory, and that computing the VaR under the traditional normal distribution is inefficient; the Student-t distribution fits better than the normal, reflecting the fat tails of both return series. In the third essay, we employ the Engle-Granger (1987) two-step cointegration model to test whether there is a long-run relationship between the residual volatilities of the Taiwan stock index futures returns and Singapore's MSCI Taiwan index futures returns. The empirical results show that there is fractional cointegration between the two futures markets and that the volatility in the Taiwan stock index futures market is about 83% of that in the MSCI Taiwan index futures market.
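The Engle-Granger two-step procedure mentioned above is straightforward to sketch: step 1 runs the cointegrating OLS regression, step 2 runs a Dickey-Fuller regression on the step-1 residuals and compares the t-statistic with Engle-Granger critical values. A minimal numpy illustration on simulated cointegrated series (our own example data, not the thesis's volatility series):

```python
import numpy as np

def engle_granger_step1(y, x):
    """Step 1: cointegrating regression y_t = a + b*x_t + u_t by OLS."""
    X = np.column_stack([np.ones_like(x), x])
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, b, y - a - b * x

def df_t_stat(u):
    """Step 2: Dickey-Fuller regression du_t = rho * u_{t-1} + e_t on the
    step-1 residuals; the t-statistic on rho is compared with the
    Engle-Granger critical values (not the standard normal ones)."""
    du, lag = np.diff(u), u[:-1]
    rho = (lag @ du) / (lag @ lag)
    e = du - rho * lag
    se = np.sqrt((e @ e) / (len(e) - 1) / (lag @ lag))
    return rho / se

rng = np.random.default_rng(5)
x = np.cumsum(rng.standard_normal(1000))          # a random walk
y = 2.0 + 0.8 * x + rng.standard_normal(1000)     # cointegrated with x
_, b_hat, resid = engle_granger_step1(y, x)
t_stat = df_t_stat(resid)                         # strongly negative here
```

For the fractional cointegration of the thesis, the residuals need not revert as an I(0) process; their order of integration is instead estimated with a long-memory estimator.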
60

ARFIMA modely časových řad / ARFIMA time series models

Vdovičenko, Martin January 2014 (has links)
The thesis deals with long-memory processes, which can be defined in several ways. The main focus is the ARFIMA model, its basic properties and its applications. Next, graphical, semiparametric and parametric methods for estimating the ARFIMA parameters are described in detail. Five selected R packages suitable for modeling long-memory processes are introduced; we discuss their basic functions with a description of input arguments and outputs. Finally, the application of the packages to real data is discussed according to the results of each function. The data sample comes from the Nile River and represents its yearly minimal water levels.
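The ARFIMA(0, d, 0) process at the heart of such packages can also be simulated directly from its MA(infinity) representation, whose weights follow psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k. A short Python illustration (the thesis works in R; this sketch only demonstrates the process itself, with the truncation length as our assumption):

```python
import numpy as np

def arfima_0d0(n, d, rng):
    """Simulate ARFIMA(0, d, 0) via the truncated MA(infinity) filter
    psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k applied to white noise."""
    psi = np.empty(n)
    psi[0] = 1.0
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.standard_normal(2 * n)      # extra burn-in so every point gets full weight
    return np.convolve(eps, psi, mode='valid')[:n]

rng = np.random.default_rng(6)
x = arfima_0d0(5000, 0.3, rng)
acf_1 = np.corrcoef(x[:-1], x[1:])[0, 1]   # theoretical rho(1) = d/(1-d) ~ 0.43
```

The sample autocorrelations of such a series decay hyperbolically rather than geometrically, which is the signature the estimation functions in the R packages are designed to pick up.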
