11 |
隨機利率模型下台灣公債市場殖利率曲線之估計 / Yield Curve Estimation Under Stochastic Interest Rate Models: Taiwan Government Bond Market Empirical Study羅家俊, Lo, Chia-Chun Unknown Date (has links)
隨著金融市場的開放,越來越多的金融商品被開發出來以迎合市場參與者的需求。利率衍生性金融商品是以利率為標的的新金融商品,其交易量相當可觀。設計金融商品的第一步就是定價;在現實世界中,利率是隨機波動的,而非如B-S選擇權公式中所假設的固定不變。隨機利率模型的用途就在於描述利率隨機波動的行為,進而對利率衍生性金融商品定價。本文嘗試以隨機利率模型估計台灣公債市場的殖利率曲線;殖利率曲線的建立對固定收益證券及其衍生性金融商品的定價十分重要。台灣大部分的利率模型研究都是以模擬的方式做比較,這也許是因為資料取得上的問題。本文採用CKLS(1992)提出的GMM(Generalized Method of Moments)估計方法,以隨機利率模型估計台灣公債市場的殖利率曲線。文中比較三種隨機利率模型:Vasicek模型(Vasicek 1977)、隨機均數的Vasicek模型(BDFS 1998),以及隨機均數與隨機波動度的Vasicek模型(Chen, Lin 1996);後兩個模型是首次出現在台灣的研究文獻中。附錄中將說明如何利用偏微分方程式(PDE)求解這三個模型零息債券價格的封閉解(closed-form solution)。文中以台灣商業本票的價格作為零息債券價格的近似值,再以RMSE(root mean squared price prediction error)作為利率模型配適公債市場價格能力的指標。本文的主要貢獻在於以隨機利率模型估計台灣公債市場的殖利率曲線、介紹兩種首次出現在台灣研究文獻的利率模型,並詳細推導其債券價格的封閉解;對於想要建構新隨機利率模型的研究人員而言,這是相當好的練習。 / With the growth in the area of financial engineering, more and more financial products are designed to meet the demands of market participants. Interest rate derivatives are instruments whose values depend on interest rate changes. These derivatives form a huge market worth several trillions of dollars.
The first step in designing or developing a new financial product is pricing. In the real world, interest rates are not constant as assumed in the Black-Scholes option model; they change over time. Stochastic interest rate models are used to capture the volatile behavior of interest rates and to value interest rate derivatives, so appropriate models are necessary to value these instruments. Here we use stochastic interest rate models to construct the yield curve of the Taiwan Government Bond (TGB) market. Constructing the yield curve is important for pricing financial instruments such as interest rate derivatives and fixed income securities.
In Taiwan, most of the research on interest rate models studies their usefulness in valuing and hedging complex interest rate derivatives by simulation; only a few papers focus on empirical work, perhaps because of data collection problems. In this paper we use stochastic interest rate models to construct the yield curve of Taiwan's Government Bond market. The estimation method we use is GMM (Generalized Method of Moments), following CKLS (1992).
I introduce three different interest rate models: the Vasicek model (Vasicek 1977), the Vasicek model with stochastic mean (BDFS 1998), and the Vasicek model with stochastic mean and stochastic volatility (Chen, Lin 1996). The last two models appear in Taiwan's research literature for the first time. In Chapter 3 I introduce these models in detail, and in the appendix of my thesis I show how to use the PDE approach to derive each model's closed-form solution for the zero coupon bond price. In this paper we regard Taiwan CP (Commercial Paper) rates as a proxy for the short rate to estimate the parameters of each model. Finally, we use these models to construct the yield curve of the Taiwan Government Bond market and to tell which model best fits bond prices. Our metric of performance for these models is the RMSE (root mean squared price prediction error). The main contributions of this study are constructing the yield curve of the TGB market, which is useful for pricing derivatives and fixed income securities, and introducing two stochastic interest rate models that appear in Taiwan's research literature for the first time. I also show how to solve the PDE for a bond price, which is a useful exercise for someone who wants to construct his/her own model.
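As a point of reference for the closed-form bond prices the appendix derives, the simplest of the three models, the one-factor Vasicek (1977) model dr = a(b − r)dt + σdW, admits the well-known zero-coupon price P(0, τ) = A(τ)e^{−B(τ)r₀}. The sketch below computes it and the implied spot yield; the parameter values are purely illustrative, not estimates from the thesis.

```python
import numpy as np

def vasicek_bond_price(r0, tau, a, b, sigma):
    """Zero-coupon bond price P(0, tau) under the Vasicek (1977) model
    dr = a*(b - r)*dt + sigma*dW."""
    B = (1.0 - np.exp(-a * tau)) / a
    A = np.exp((B - tau) * (a**2 * b - sigma**2 / 2.0) / a**2
               - sigma**2 * B**2 / (4.0 * a))
    return A * np.exp(-B * r0)

def vasicek_yield(r0, tau, a, b, sigma):
    """Continuously compounded spot yield implied by the bond price."""
    return -np.log(vasicek_bond_price(r0, tau, a, b, sigma)) / tau

# A yield curve is traced out by evaluating the yield over a maturity grid.
maturities = np.linspace(0.25, 10.0, 40)
curve = vasicek_yield(0.02, maturities, a=0.5, b=0.04, sigma=0.01)
```

Evaluating the yield over a grid of maturities, as above, is exactly how a model-implied yield curve is compared against observed bond prices (e.g. via RMSE).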
|
12 |
一般化動差估計分析方法資產訂價模型之應用李沃牆, LI, WO-QIANG Unknown Date (has links)
Lucas(1976)批評當時總體時間序列的計量分析方法,且主張傳統計量模型參數會隨體制及政策而改變。基於這些評論,許多對「嗜好」(Taste)及「技術」(Technology)結構參數估計的推論方法便開始使用動態模型中的尤拉最適化條件(Euler Optimality Conditions)來進行估計。
然而,其中以Hansen(1982)所提出來的一般化動差估計法(Generalized Method of Moments)(簡稱GMM)最受矚目。此法乃源於一般化工具變數(GIVE),在不需強烈假設下進行估計。其估計過程大致可分為下列三個階段:
1. 建立正交化條件;2. 建立目標函數並將其最小化;3. 過度確認限制(overidentifying restriction)之檢定。此法本身即涵蓋許多估計式,如GIVE、MLE、2SLS,且能滿足有限樣本性質、快速收斂,目前已用於總體計量、非線性理性預期實證及財務金融計量上。本文應用台灣總體時間序列資料於資產訂價模型的GMM參數估計過程,證明了資料的適用性。另外,本文亦應用蒙地卡羅(Monte Carlo)實驗設計模擬,探討有限樣本下統計量之行為,並獲致適當的推論。 / Lucas (1976) criticized the existing strategies for econometric analysis of macroeconomic time series and argued that parameters of traditional econometric models are not invariant with respect to shifts in policy regimes. In response to that criticism, several inference strategies for "taste and technology" structural parameter models using Euler optimality conditions in dynamic models were suggested.
Hansen's (1982) Generalized Method of Moments (henceforth GMM) instrumental variables procedure is among the most notable inference strategies for structural parameters.
The GMM procedure consists of three steps: (1) set up the orthogonality conditions; (2) minimize the objective function; (3) test the overidentifying restrictions.
In this paper we study the statistical properties of the GMM estimator of consumption-based structural parameters obtained from the Capital Asset Pricing Model by the use of Monte Carlo simulation.
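To make the three-step recipe concrete, here is a minimal two-step GMM sketch. The data-generating process, the normality-based third-moment condition, and all numbers are illustrative assumptions for exposition only, not part of the thesis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=5000)  # illustrative data, true (mu, sig2) = (1, 4)

def moments(theta, x):
    mu, sig2 = theta
    # Orthogonality conditions: E[x - mu] = 0, E[(x - mu)^2 - sig2] = 0,
    # plus an overidentifying condition E[x^3] = mu^3 + 3*mu*sig2 (normality).
    return np.column_stack([x - mu,
                            (x - mu)**2 - sig2,
                            x**3 - (mu**3 + 3.0 * mu * sig2)])

def gmm_objective(theta, x, W):
    gbar = moments(theta, x).mean(axis=0)   # sample moment vector
    return gbar @ W @ gbar                  # quadratic form to minimize

# Step 1: identity weighting matrix.
step1 = minimize(gmm_objective, x0=[0.0, 1.0], args=(x, np.eye(3)),
                 method="Nelder-Mead")
# Step 2: optimal weighting matrix built from step-1 moments.
g1 = moments(step1.x, x)
W2 = np.linalg.inv(g1.T @ g1 / len(x))
step2 = minimize(gmm_objective, step1.x, args=(x, W2), method="Nelder-Mead")
mu_hat, sig2_hat = step2.x
# Overidentification: J = n * objective(step2) is asymptotically chi^2(1) here.
```

With three moment conditions and two parameters, the J statistic in the final comment is the overidentifying-restrictions test that forms step three of the procedure.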
|
13 |
關於多變量中球狀和主成份特徵值假設檢定之研究郭信霖, Guo, Xin-Lin Unknown Date (has links)
第一章為緒論。
第二章為多變量常態分配、球狀常態分配、威夏特(WISHART)分配以及最大數化的基本概念。
第三章為球狀檢定,討論其檢定的不變性、不偏性以及在H下的動差和正合(EXACT)分配,最後以U─1檢定步驟來驗證球狀檢定。
第四章為一些均勻性變異數方法和球狀聯立檢定,將分別介紹L□檢定、M檢定以及八種球狀聯立檢定之方法。
第五章為主成份檢定,將介紹一些最後K個特徵值相等的檢定方法以及實例分析。
第六章為結論,並說明其進一步研究的方向。
|
14 |
Consumption Euler Equation: The Theoretical and Practical Roles of Higher-Order Moments / 消費尤拉方程式:高階動差的理論與實證重要性藍青玉, Lan, Ching-Yu Unknown Date (has links)
本論文共分三章,全數圍繞在消費尤拉方程式中,消費成長的高階動差在理論與實證上的重要性。分別說明如下:
本論文第一章討論消費高階動差在實證估計消費結構性參數之重要性。消費尤拉方程式是消費者極大化問題的一階條件,而自Hall (1978)起,估計消費結構參數如跨期替代彈性時,也多是利用這個尤拉方程式所隱涵的消費動態關係,進行估計。但是由於消費資料存在嚴重的衡量誤差問題,實證上多將尤拉方程式進行對數線性化,或是二階線性化後進行估計。
然而前述一、二階線性化,固然處理了資料的衡量誤差問題,卻也造成了參數估計上的近似誤差(approximation bias)。其原因來自於線性化過程中所忽略的高階動差實為內生,而與迴歸式中的二階動差相關。這使得即便用工具變數進行估計,仍然無法產生具有一致性的估計結果。這當中的原因在於足以解釋二階動差,卻又與殘差項中的高階動差直交的良好(valid)的工具變數無法取得。
我們認為在資料普遍存在衡量誤差的情況下,線性化估計尤拉方程式不失為一可行又易於操作的方法。於是我們嘗試在線性化的尤拉方程式中,將高階動差引入,並檢視這種高階近似是否能有效降低近似誤差。我們的模擬結果首先證實,過去二階近似尤拉方程式的估計,確實存在嚴重近似誤差。利用工具變數雖然可以少部份降低該誤差,但由於高階動差的內生性質,誤差仍然顯著。我們也發現,將高階動差引入模型,確實可以大幅降低近似誤差,但是在偏誤降低的同時,參數估計效率卻也隨之降低。
高階動差的引入,除了降低近似偏誤外,卻也必須付出估計效率降低的代價。我們因此並不建議無限制地放入高階動差。則近似階次選取,乃為攸關估計績效的重要因素。本章的第二部份,即著眼於該最適近似階次選取。我們首先定義使參數估計均方誤(mean squared error, MSE)為最小的近似階次,為最適近似階次。我們發現,該最適階次與樣本大小、效用函數的彎曲程度都有直接的關係。
然而在實際進行估計時,由於參數真值無法得知,MSE準則自然無法作為階次選取之依據。我們於是利用目前在模型與階次選取上經常被使用的一些準則進行階次選取,並比較這些不同準則下參數估計的MSE。我們發現利用這些準則,確實可以使高階近似尤拉方程式得到MSE遠低於目前被普遍採用的二階近似的估計結果,而為估計消費結構參數時更佳的選擇。
本論文第二章延續前一章的模擬結果,嘗試利用消費高階動差間的非線性關係,進一步改善高階近似消費尤拉方程式的估計表現。由第一章的研究結果,我們發現高階近似估計確有助大幅降低近似誤差,但這其中可能產生的估計效率喪失,卻是輕忽不得的。這個效率喪失,很大一部份來自於我們所使用的工具變數,雖然可以有效掌握消費成長二階動差的變動,但是當這同一組工具變數被用來解釋如偏態與峰態等這些更高階動差時,預測力卻大幅滑落。這使得當我們將這些配適度偏低的配適後高階動差,放到迴歸式中進行估計時,所能提供的額外情報也就相當有限。而所造成的共線性問題,也自然使得估計效率大幅惡化。
於是在其他合格的工具變數相對有限的情況下,我們利用高階動差間所存在的均衡關係,將原來的工具變數進行非線性轉換,以求得對高階動差的較佳配適。由於消費動差間之關係,尚未見諸相關文獻。於是我們首先透過數值分析,進一步釐清消費高階動差間之關係。這其中尤為重要的是由消費二階動差所衡量的消費風險,與更高階動差間之關係。因為這些關係將為我們轉換工具變數之依據。
我們發現與二階動差相一致地,消費者對這些高階動差之預期,都隨其財富水準的提高而減少。這隱涵消費風險與更高階動差間之正向關係。更進一步檢視消費風險與高階動差間之關係也發現,二者間確實存在非線性之正向關係。而這也解釋了何以前一章線性的工具變數,雖可適切捕捉消費風險,但對高階動差的解釋力卻異常薄弱。
利用這些非線性關係,我們將原始的工具變數進行非線性轉換後,用以配適更高階動差。透過模擬分析,我們證實了這些非線性工具變數,確實大幅改善高階近似尤拉方程式的估計表現。除了仍保有與線性工具變數般的一些特性,諸如隨樣本的增加,最適近似階次也隨之增加之外,相較於線性工具變數,非線性工具變數可以在較低的近似階次下,就使得估計偏誤大幅下降。在近似階次愈高估計效率愈低的情況下,這自然大幅度地提高了估計效率。比較兩種工具變數估計結構數參數所產生的MSE也證實,非線性工具變數確實有遠低於原始線性工具變數的MSE表現。
然而我們同時也發現,利用非線性工具變數估計,若未適當選擇近似階次,效率喪失的速度,可能更甚於線性工具變數時。這凸顯了選擇近似階次的重要性。於是我們同樣檢視了前述階次選擇準則在目前非線性工具變數環境下的適用性。而總結第一、二章的研究結果,我們凸顯了高階動差的重要性,確實助益重要消費結構參數估計。而利用過去尚未被討論過的高階動差間非線性關係,更可大幅度改善估計績效。
本論文的最後一章,則旨在理論上建立高階動差的重要性。我們在二次式的效用函數(quadratic utility function)設定下,推導借貸限制下的最適消費決策。二次式的效用函數,由於其邊際價值函數(marginal value function)為一線性函數,因此所隱涵的消費決策,具有確定相等(certainty equivalence)的特性。這表示消費者只關心未來的期望消費水準,二階以上的更高階動差,都不影響其消費決策。然而這種確定相等的特性,將因為借貸限制的存在而不復存在,而高階動差的重要性也就因此凸顯。
我們證明,確定相等特性的喪失,其背後的理論原因在於,借貸限制的存在,使得二次式效用函數的邊際價值函數,產生凸性。消費者因而因應未來的不確定性,進行預防性儲蓄。透過分析解的求得,我們也得以進一步分析更高階動差對消費決策的理論性質。同時我們也引申理論推導的實證意涵,其中較重要者諸如未受限消費者因預防性儲蓄行為所引發的消費過度敏感性現象,實證上樣本分割法的選取,以及高階動差的引入模型。 / The theme of this thesis seeks to explore the importance of higher-order moments in the consumption Euler equation, both theoretically and empirically. Applying log-linearized versions of Euler equations has been a dominant approach to obtaining sensible analytical solutions, and a popular choice of model specifications for estimation. The literature however by now has been no lack of conflicting empirical results that are attributed to the use of the specific version of Euler equations. Important yet natural questions whether the higher-order moments can be safely ignored, or whether higher-order approximations offer explanations to the stylized facts remain unanswered. Such inquiries thus improve our understanding of consumer behavior beyond prior studies based on the linear approximation.
1. What Do We Gain from Estimating Euler Equations with Higher-Order Approximations?
Despite the importance of estimating structural parameters governing consumption dynamics, such as the elasticity of intertemporal substitution, empirical attempts to unveil these parameters using a log-linearized version of the Euler equation have produced many puzzling results. Some studies show that the approximation bias may well constitute a compelling explanation. Even so, the approximation technique continues to be useful and convenient in estimation of the parameters, because noisy consumption data renders a full-fledged GMM estimation unreliable. Motivated by its potential success in reducing the bias, we investigate the economic significance and empirical relevance of higher-order approximations to the Euler equation with simulation methodology. The higher-order approximations suggest a linear relationship between expected consumption growth and its higher-order moments. Our simulation results clearly reveal that the approximation bias can be significantly reduced when the higher-order moments are introduced into estimation, but at the cost of efficiency loss. It therefore documents a clear tradeoff between approximation bias reduction and efficiency loss in the consumption growth regression when higher-order approximations to the Euler equation are considered. A question of immediate practical interest arises: "How many higher-order terms are needed?" The second part of our Monte Carlo studies then deals with this issue. We judge whether a particular consumption moment should be included in the regression by the criterion of mean squared error (MSE) that accounts for a trade-off between estimation bias and efficiency loss. The included moments leading to smaller MSE are regarded as ones to be needed. We also investigate the usefulness of the model and/or moment selection criteria in providing guidance in selecting the approximation order.
We find that improvements over the second-order approximated Euler equation can always be achieved simply by allowing for the higher-order moments in the consumption regression, with the approximation order selected by these criteria.
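The MSE criterion above weighs squared bias against variance. As a generic, hedged illustration of that logic (a textbook example unrelated to the consumption data themselves), the classic pair of variance estimators shows how a biased but less variable estimator can win on MSE, which is the same reasoning used to choose the approximation order.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, sigma2 = 20, 20000, 4.0

biased = np.empty(reps)    # divides by n: downward-biased but less variable
unbiased = np.empty(reps)  # divides by n-1: unbiased but more variable
for r in range(reps):
    x = rng.normal(0.0, np.sqrt(sigma2), size=n)
    biased[r] = x.var(ddof=0)
    unbiased[r] = x.var(ddof=1)

def mse(estimates, truth):
    """Monte Carlo MSE = squared bias + variance."""
    bias = estimates.mean() - truth
    return bias**2 + estimates.var()

mse_biased, mse_unbiased = mse(biased, sigma2), mse(unbiased, sigma2)
# The biased estimator wins on MSE: the variance it saves outweighs the
# squared bias it introduces -- the same tradeoff that governs how many
# higher-order moment terms to include in the approximated Euler equation.
```

Selection criteria that proxy this MSE comparison are what make the choice operational when, unlike in a simulation, the true parameter value is unknown.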
2. Uncovering Preference Parameters with the Utilization of Relations between Higher-Order Consumption Moments
Our previous attempt to deliver more desirable estimation performance with higher-order approximations to the consumption Euler equation reveals that the approximation bias can be significantly reduced when the higher-order moments are introduced into estimation, but at the cost of efficiency loss. The latter results from the difficulty in identifying independent variation in the higher-order moments by sets of linear instruments used to identify that in variability in consumption growth, mainly consisting of individual-specific characteristics. Thus, one major challenge in the study is how to obtain quality instruments that are capable of doing so. With the numerical analysis technique, we first establish the nonlinear equilibrium relation between consumption risk and higher-order consumption moments. This nonlinear relation is then utilized to form quality instruments that can better capture variations in higher-order moments. A novelty of this chapter lies in adopting a set of nonlinear instruments that copes with this issue. They are very simple moment transformations of the characteristic-related instruments, thereby easy to obtain in practice. As expected, our simulations demonstrate that for a comparable amount of the bias corrected, applying the nonlinear instruments does entail an inclusion of fewer higher-order moments in estimation. A smaller simulated MSE that reveals the improvement over our previous estimation results can thus be achieved.
3. Precautionary Saving and Consumption with Borrowing Constraint
This last chapter offers a theoretical underpinning for the importance of the higher-order moments in a simple environment where economic agents have a quadratic-utility preference. The resulting Euler equation gives rise to a linear policy function in essence, or a random-walk consumption rule. The twist in our theory comes from a presence of borrowing constraint facing consumers. The analysis shows that the presence of the constraint induces precautionary motives for saving as responses from consumers to income uncertainties, even though no such motives are inherent in consumers' preferences. The corresponding value function now displays a convexity property that is virtually only associated with more general preferences than a quadratic utility. The analytical framework allows us to characterize saving behaviors that are of precautionary motives, and their responses to changes in different moments of income process. As empirical implications, our analysis sheds new light on the causes of excess sensitivity, the consequences of sample splitting between the rich and the poor, as well as the relevance of the higher-order moments to consumption dynamics, specifically skewness and kurtosis.
|
15 |
有限理性與彈性迷思 / Bounded Rationality and the Elasticity Puzzle王仁甫, Wang,Jen Fu Unknown Date (has links)
在總體經濟學中,跨期替代分析方法佔有相當重要的地位。其中跨期替代彈性(the elasticity of intertemporal substitution, EIS)的大小,間接或直接影響總體經濟中的許多層面。直覺上,例如跨期替代彈性越大,對個人而言,當期消費的機會成本提升,使延後消費的意願上升,同時增加個人儲蓄;在正常金融市場情況之下,個人儲蓄金額的增加,將使市場資金的供給量增多,使得企業或個人的投資機會成本降低,經由總體經濟中間接或直接的影響,總體經濟成長率應會上升。當消費者效用函數為固定相對風險趨避係數(constant coefficient of relative risk aversion, CRRA)且具有跨期分割與可加性的特性,加上傳統經濟學假設每個人皆為完全理性的前提下,經由跨期替代分析方法推導後,可以得到相對風險趨避係數(the coefficient of relative risk aversion, RRA)與跨期替代彈性(EIS)恰好是倒數關係。 / 在過去相關研究中,Hansen and Singleton (1983)推估出的跨期替代彈性值較大且顯著,但Hall (1988)強調,若考慮資料的時間加總問題(time aggregation problem),則前者估計出的跨期替代彈性在統計上不再顯著;Hall亦於結論提出跨期替代彈性小於或等於0.1,甚至比0小。在經濟意義上,這代表股票市場中投資人的相對風險趨避程度(RRA)極大,直覺上是不合理的現象,這也是著名的彈性迷思(elasticity puzzle)。於是Epstein and Zin (1991)嘗試修正效用函數為不具時間分割性(non-time separable utility)的形式,並得到跨期替代彈性(EIS)與相對風險趨避係數(RRA)互為倒數的關係不復存在的結論。這也說明影響彈性迷思的原因有許多,其中之一可能為設定不同形式效用函數所造成。 / 在傳統經濟模型中,假設完全理性的個人決策行為之下,利用跨期替代方法,可以得到跨期替代彈性(EIS)與相對風險趨避程度(RRA)互為倒數關係,且隱含風險趨避程度為無窮大的推估結論。這也是本研究想要探究的問題:彈性迷思究竟是假設所造成,或者是由個體資料加總成總體資料所產生的謬誤。 / 因此,本研究與其他研究不同之處,在於以時間可分離形式的效用函數(time-separable utility)為模型基礎,利用遺傳演算法(Genetic Algorithms)建構有限理性的人工股票市場進行模擬;模擬方式為設定不同代理人(agent)具有不同程度的預測能力,代表其理性程度的差異。 / 本研究發現,在有限理性異質性個人的人工股票市場下,相對風險趨避程度係數(RRA)與跨期替代彈性(EIS)不為倒數關係,且設定不同代理人不同的預測能力,亦會影響跨期替代彈性(EIS)推估數值的大小。
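承上,CRRA效用函數下RRA與EIS互為倒數的關係,可由以下標準推導簡要呈現(此為一般教科書之推導,符號為通用記法,非本文原文):

```latex
u(c)=\frac{c^{1-\gamma}}{1-\gamma},\qquad
u'(c)=c^{-\gamma},\qquad u''(c)=-\gamma\,c^{-\gamma-1}
\\[4pt]
\mathrm{RRA}(c)=-\frac{c\,u''(c)}{u'(c)}
  =-\frac{c\,(-\gamma)\,c^{-\gamma-1}}{c^{-\gamma}}=\gamma
\\[4pt]
\text{確定情況下之尤拉方程式 } c_t^{-\gamma}=\beta(1+r)\,c_{t+1}^{-\gamma}
\;\Longrightarrow\;
\ln\frac{c_{t+1}}{c_t}=\frac{1}{\gamma}\,\ln\bigl[\beta(1+r)\bigr]
\\[4pt]
\mathrm{EIS}=\frac{d\,\ln(c_{t+1}/c_t)}{d\,\ln(1+r)}
  =\frac{1}{\gamma}=\frac{1}{\mathrm{RRA}}
```

亦即在CRRA與完全理性的設定下,EIS與RRA必然互為倒數;本研究所檢視的,正是放寬完全理性假設後此關係是否仍然成立。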
|
16 |
過濾靴帶反覆抽樣與一般動差估計式 / Sieve Bootstrap Inference Based on GMM Estimators of Time Series Data劉祝安, Liu, Chu-An Unknown Date (has links)
In this paper, we propose two types of sieve bootstrap, a univariate and a multivariate approach, for the generalized method of moments estimators of time series data. Compared with the nonparametric block bootstrap, the sieve bootstrap is in essence parametric, which helps fit the data better when researchers have prior information about the time series properties of the variables of interest. Our Monte Carlo experiments show that the performances of these two types of sieve bootstrap are comparable to the performance of the block bootstrap. Furthermore, unlike the block bootstrap, which is sensitive to the choice of block length, these two types of sieve bootstrap are less sensitive to the choice of lag length.
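A minimal univariate AR(p) sieve-bootstrap sketch follows; the lag order, the data, and the design choices here are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def sieve_bootstrap(x, p, n_boot, rng=None):
    """Univariate sieve bootstrap: fit an AR(p) by least squares, resample the
    centered residuals i.i.d., then rebuild bootstrap series recursively."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Regress x_t on a constant and x_{t-1}, ..., x_{t-p}.
    X = np.column_stack([np.ones(n - p)] + [x[p - k:n - k] for k in range(1, p + 1)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    resid -= resid.mean()              # center so bootstrap innovations have mean zero
    boots = np.empty((n_boot, n))
    for b in range(n_boot):
        e = rng.choice(resid, size=n, replace=True)
        xb = np.empty(n)
        xb[:p] = x[:p]                 # start each path from the observed initial values
        for t in range(p, n):
            xb[t] = coef[0] + coef[1:] @ xb[t - p:t][::-1] + e[t]
        boots[b] = xb
    return boots
```

The GMM statistic of interest would then be recomputed on each row of the returned array; the multivariate variant replaces the scalar AR(p) fit with a VAR(p).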
|
17 |
自我迴歸模型的動差估計與推論 / Estimation and inference in autoregressive models with method of moments陳致綱, Chen, Jhih Gang Unknown Date (has links)
本論文的研究主軸圍繞於自我迴歸模型的估計與推論上。文獻上自我迴歸模型的估計多直接採用最小平方法,但此估計方式卻有兩個缺點:(一)當序列具單根時,最小平方估計式的漸近分配為非正規型態,因此檢定時需透過電腦模擬得到臨界值;(二)最小平方估計式雖具一致性,但卻有嚴重的有限樣本偏誤問題。有鑑於此,我們提出一種「二階差分轉換估計式」,並證明該估計式的偏誤遠低於前述最小平方估計式,且在序列為恒定與具單根的環境下具有相同的漸近常態分配。此外,二階差分轉換估計式相當適合應用於固定效果追蹤資料模型,而據以形成的追蹤資料單根檢定在序列較短的情況下仍有不錯的檢定力。
本論文共分四章,茲分別簡單說明如下:
第1章為緒論,回顧文獻上估計與推論自我回歸模型時的問題,並說明本論文的研究目標。估計自我迴歸模型的傳統方式是直接採取最小平方法,但在序列具單根的情況下由於訊息不隨時間消逝而快速累積,使估計式的收斂速度高於序列為恒定的情況。不過,這也導致最小平方估計式的漸近分配為非標準型態,並使得進行假設檢定前必須先透過電腦模擬來獲得臨界值。其次,最小平方估計式雖具一致性,但在有限樣本下卻是偏誤的。實證上, 樣本點不多是研究者時常面臨的窘境,並使得小樣本偏誤程度格外嚴重。本章中透過對前述問題形成因素的瞭解,說明解決與改善的方法,亦即我們提出的「二階差分轉換估計式」。
第2章主要目的在於推導二階差分轉換估計式之有限樣本偏誤。我們亦推導了多階差分自我迴歸模型下二階段最小平方估計式(two stage least squares, 2SLS)與 Phillips and Han (2008)採用的一階差分轉換估計式之偏誤,以同時進行比較。本章理論與模擬結果皆顯示,一階與二階差分轉換估計式與2SLS之 $T^{-1}$ 階偏誤程度皆低於以最小平方法估計原始水準模型(level model)的偏誤,其中 T 為時間序列長度。另外,一階差分轉換估計式與二階差分轉換估計式在 $T^{-1}$ 階偏誤上,分別與一階和二階差分模型下2SLS相同,但兩估計式的相對偏誤程度則因自我相關係數的大小而互有優劣。同時,我們發現估計高於二階的差分模型對小樣本偏誤並無法有更進一步的改善。最後,即使在樣本點不多的情況下,本章所推導的偏誤理論對於實際偏誤仍有良好的近似能力。
第3章主要目的在於發展二階差分轉換估計式之漸近理論。與 Phillips and Han (2008) 採用之一階差分轉換估計式相似的是,該估計式在序列為恒定與具單根的情況下收斂速度相同,並有漸近常態分配的優點。值得注意的是, 二階差分轉換估計式的漸近分配為 N(0,2),不受任何未知參數的影響。另外,當序列呈現正自我相關時,二階差分轉換估計式相較於一階差分轉換估計式具有較小的漸近變異數,進而使得據以形成的檢定統計量有較佳的對立假設偵測能力。最後, 誠如 Phillips and Han (2008) 所述,由於差分過程消除了模型中的截距項,使得此類估計方法在固定效果的動態追蹤資料模型(dynamic panel data model with fixed effect) 具相當的發展與應用價值。
本論文第4 章進一步將二階差分轉換估計式推展至固定效果的動態追蹤資料模型。文獻上估計此種模型通常利用差分來消除固定效果後,再以一般動差法 (generalized method of moments, GMM) 進行估計。然而,這樣的估計方式在序列為近單根或具單根時卻面臨了弱工具變數(weak instrument)的問題,並導致嚴重的估計偏誤。相反的,差分轉換估計式所利用的動差條件在近單根與單根的情況下仍然穩固,因此在小樣本下的估計偏誤相當輕微(甚至無偏誤)。另外,我們證明了不論序列長度(T )或橫斷面規模(n)趨近無窮大,差分轉換估計式皆有漸近常態分配的性質。與單一序列時相同的是,我們提出的二階差分轉換估計式在序列具正自我相關性時的漸近變異數較一階差分轉換估計式小;受惠於此,利用二階差分轉換估計式所建構的檢定具有較佳的檢力。值得注意的是,由於二階差分轉換估計式在單根的情況下仍有漸近常態分配的性質,我們得以直接利用該漸近理論建構追蹤資料單根檢定。電腦模擬結果發現,在小 T 大 n 的情況下,其檢力優於文獻上常用的 IPS 檢定(Im et al., 1997, 2003)。 / This thesis deals with estimation and inference in autoregressive models. Conventionally, the autoregressive models estimated by the least squares (LS) procedure may be subject to two shortcomings. First, the asymptotic distribution of the LS estimates for autoregressive coefficient is discontinuous at unity. Test statistics based on the LS estimates thus follow nonstandard distributions, and the critical values obtained need to rely on Monte Carlo techniques. Secondly, as is well known, the LS estimates of autoregressive models are biased in finite samples. This bias could be substantial and leads to serious size distortion for the test statistics built on the estimates and forecast errors. In this thesis,we consider a simple newmethod ofmoments estimator, termed the “transformed second-difference” (hereafter TSD) estimator, that is without the aforementioned problems, and has many useful applications. Notably, when applied to dynamic panel models, the associated panel unit root tests shares a great power advantage over the existing ones, for the cases with very short time span.
The thesis consists of 4 chapters, which are briefly described as follows.
1. Introduction: Overview and Purpose
This chapter first reviews the literature and states the purpose of this dissertation. We discuss the sources of problems in estimating autoregressive models with the conventional method. The motivation to estimate the autoregressive series with multiple-difference models,
instead of the conventional level model, is provided. We then propose a new estimator, the TSD estimator, which can avoid (fully or partly) the drawbacks of the LS method, and highlight its finite-sample and asymptotic properties.
2. The Bias of 2SLSs and transformed difference estimators in Multiple-Difference AR(1) Models
In this chapter, we derive the approximate bias of the TSD estimator. For comparison, the corresponding biases of the two stage least squares (2SLS) estimators in multiple-difference AR(1) models and of the transformed first-difference (TFD) estimator proposed by Chowdhury (1987) are also given as by-products. We find that: (i) all the estimators considered are much less biased than the LS ones from the level regression; (ii) the difference method can be exploited to reduce the bias only up to the order of difference 2; and (iii) the bias of the TFD and TSD estimators shares the same order, $O(T^{-1})$, as that of the 2SLSs. However, in the extent of bias reduction, neither of the two transformed difference estimators dominates over the entire parameter space. Our simulation evidence lends credible support to our bias approximation theory.
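For context on the level-model bias these estimators are benchmarked against, a small Monte Carlo sketch (with purely illustrative parameter values) reproduces the well-known Kendall approximation E[ρ̂] − ρ ≈ −(1 + 3ρ)/T for the LS estimator with a fitted intercept:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, T, reps = 0.5, 50, 20000

# Simulate `reps` stationary AR(1) paths of length T at once.
eps = rng.normal(size=(reps, T))
x = np.empty((reps, T))
x[:, 0] = eps[:, 0] / np.sqrt(1.0 - rho**2)   # draw x_0 from the stationary law
for t in range(1, T):
    x[:, t] = rho * x[:, t - 1] + eps[:, t]

# LS slope (with fitted intercept) of x_t on x_{t-1}, path by path.
xl, xc = x[:, :-1], x[:, 1:]
xl_d = xl - xl.mean(axis=1, keepdims=True)
est = (xl_d * xc).sum(axis=1) / (xl_d**2).sum(axis=1)

bias_mc = est.mean() - rho                    # Monte Carlo bias of the LS estimate
bias_kendall = -(1.0 + 3.0 * rho) / T         # Kendall's O(1/T) approximation
```

The simulated bias closely tracks the −(1 + 3ρ)/T approximation, illustrating the finite-sample problem that motivates the difference-transformed estimators.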
3. Gaussian Inference in AR(1) Time Series with or without a Unit Root
The goal of the chapter is to develop an asymptotic theory of the TSD estimator. Similar to the TFD estimator shown by Phillips and Han (2008), the TSD estimator is found to have Gaussian asymptotics for all values of ρ ∈ (−1, 1] with $\sqrt{T}$ rate of convergence, where ρ
is the autoregressive coefficient of interest and T is the time span. Specifically, the limit distribution of the TSD estimator is N(0,2) for all possible values of ρ. In addition, the asymptotic variance of the TSD estimator is smaller than that of the TFD estimator for the cases with ρ > 0, and the corresponding t -test thus exhibits superior power to the TFD-based one.
4. Estimation and Inference with Moment Methods for Dynamic Panels with Fixed Effects
This chapter demonstrates the usefulness of the TSD estimator when applied to dynamic panel data models. We find again that the TSD estimator displays a standard Gaussian limit, with a convergence rate of $\sqrt{nT}$ for all values of ρ, including unity, irrespective of how n or T approaches infinity. In particular, the TSD estimator makes use of moment conditions that are strong for all values of ρ, and therefore completely avoids the weak instrument problem for ρ in the vicinity of unity, and has virtually no finite sample bias. As in the time series case, the asymptotic variance of the TSD estimator is smaller than that of the TFD estimator of Han and Phillips (2009) when ρ > 0 and T > 3, and the corresponding t-ratio test is thus more capable of unveiling the true data generating process. Furthermore, the asymptotic theory can be applied directly to panel unit root testing. Our simulation results reveal that the TSD-based unit root test is more powerful than the widely used IPS test (Im et al., 1997, 2003) when n is large and T is small.
|
18 |
一般均衡利率期限結構理論─台灣公債市場之實證研究廖志峰 Unknown Date (has links)
利率是影響金融市場中金融工具的主要因素,對經濟體系而言,是貨幣面與實質面的橋樑,代表使用負債資金所需支付的成本;對法人機構、投資個人而言,利率是進行任何融資、投資活動的重要參考指標。近年來,利用一般均衡、無套利評價理論來研究利率期限結構和利率或有請求權(Contingent Claims)訂價的文獻有如雨後春筍一般;另一方面,由於時間序列(Time Series)於1980年代的快速發展,諸如:ARCH家族、GARCH家族、隨機變異性(Stochastic Volatility),兩套方法互相結合運用,有愈來愈多文獻顯示,其對現實的利率期限結構具有一定水準的解釋能力。
隨著國際金融市場的多元化、自由化與無國界化,金融創新與金融商品的大量問世,如何合理估計利率期限結構,以運用於投資決策、或預測未來利率走勢,及對利率風險的管理,這都隱含利率期限結構的重要性。本文擬針對著一般均衡利率期限結構模型加以分析,並驗證在我國公債市場應用的可行性。
一般均衡利率期限結構模型由Cox、Ingersoll and Ross(1985a、b)正式提出,其為單因子一般均衡利率期限結構模型;Longstaff and Schwartz(1992)則提出二因子一般均衡利率期限結構模型。因其利率期限結構隱含一個限制式,LS兩因子實證模型以差分形式進行,因而將損失兩個參數(gamma、eta)。基於此點,本文試圖採用Gibbons and Ramaswamy(1993)的實質報酬率觀念,希望經由調整物價因素後的殖利率樣本資料,消除時間趨勢不穩定的因子,藉以判斷包含(gamma、eta)的完整兩因子一般均衡模型是否能更充分解釋利率期限結構;另一方面,亦可透過同一實質報酬率觀念,觀察二因子一般均衡利率期限結構模型所獲得的名目利率期限結構與實質利率期限結構的差異。
本文實證結果並不令人滿意,調整物價因素後的殖利率樣本資料,仍存在不穩定的情況;本文以差分與模擬的方式,建構出台灣公債市場利率期限結構。另一方面,亦發現本文調整物價因素的方法,在較長的樣本期間下並不適宜。
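作為補充,CIR(1985)單因子一般均衡模型 dr = κ(θ − r)dt + σ√r dW 下,零息債券價格存在封閉解 P(0, τ) = A(τ)e^{−B(τ)r₀};以下為示意程式,參數值僅為舉例,並非本文之估計結果:

```python
import numpy as np

def cir_bond_price(r0, tau, kappa, theta, sigma):
    """Zero-coupon bond price under the CIR (1985) square-root model
    dr = kappa*(theta - r)*dt + sigma*sqrt(r)*dW."""
    h = np.sqrt(kappa**2 + 2.0 * sigma**2)
    denom = (kappa + h) * (np.exp(h * tau) - 1.0) + 2.0 * h
    B = 2.0 * (np.exp(h * tau) - 1.0) / denom
    A = (2.0 * h * np.exp((kappa + h) * tau / 2.0) / denom) ** (2.0 * kappa * theta / sigma**2)
    return A * np.exp(-B * r0)

# Model-implied spot yields over a maturity grid trace out the term structure.
maturities = np.linspace(0.25, 10.0, 40)
yields = -np.log(cir_bond_price(0.03, maturities, 0.3, 0.05, 0.1)) / maturities
```

由債券價格取對數再除以年期即得即期利率,據此可在給定參數下繪出模型隱含的利率期限結構,並與公債市場觀察值比較。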
|
19 |
預測S&P500指數實現波動度與VIX- 探討VIX、VIX選擇權與VVIX之資訊內涵 / The S&P 500 Index Realized Volatility and VIX Forecasting - The Information Content of VIX, VIX Options and VVIX黃之澔 Unknown Date (has links)
波動度對於金融市場影響甚多,同時為金融資產定價的重要參數以及市場穩定度的衡量指標;尤其在金融危機發生時,波動度指數的驟升反映資產價格震盪。本篇論文嘗試捕捉S&P500指數實現波動度與VIX變動率未來之動態,並將VIX、VIX選擇權與VVIX納入預測模型中,探討其資訊內涵。透過研究S&P500指數實現波動度,能夠預測S&P500指數未來之波動度與報酬,除了能夠觀察市場變動,亦能使未來選擇權定價更為準確;而藉由模型預測VIX,則能透過VIX選擇權或VIX期貨,提供避險或投資之依據。文章採用2006年至2011年之S&P500指數、VIX、VIX選擇權與VVIX資料。
在S&P500指數之實現波動度預測當中,本篇論文的模型改良自先前文獻,為結合實現波動度、隱含波動度與S&P500指數選擇權風險中立偏態所構成之異質自我回歸模型(HAR-RV-IV-SK model),並額外加入VIX變動率以及VIX指數選擇權之風險中立偏態作為模型因子,預測未來S&P500指數實現波動度。研究結果顯示,加入VIX變動率作為預測模型變數後,可增加S&P500指數實現波動度預測模型之準確性。
在VIX變動率預測模型之中,論文採用動態轉換模型,作為高低波動度之下區分預測模型的方法,並以VIX過去的變動率、VIX選擇權之風險中立動差以及VIX之波動度指數(VVIX)作為變數,預測未來VIX變動率。結果顯示動態轉換模型能夠提升VIX預測模型的解釋能力,並且在動態轉換模型下,VVIX與VIX選擇權之風險中立動差對於VIX預測具有相當之資訊隱涵於其中。 / This paper tries to capture the future dynamics of S&P 500 index realized volatility and the VIX. We add the VIX change rate and the risk-neutral skewness of VIX options to the Heterogeneous Autoregressive model of Realized Volatility, Implied Volatility and Skewness (HAR-RV-IV-SK) to forecast S&P 500 realized volatility. This paper also uses a regime-switching model and adds the VIX, the risk-neutral moments of VIX options, and the VVIX as variables to raise the explanatory power of the VIX forecast. The results show that the VIX change rate carries additional information about S&P 500 realized volatility, and that under the regime-switching model the VVIX and the risk-neutral moments of VIX options have information content for VIX forecasting. These models can be used for hedging or investment purposes.
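For reference, the HAR skeleton underlying the HAR-RV-IV-SK specification regresses next-day realized volatility on daily, weekly, and monthly RV averages (Corsi-style). The sketch below uses synthetic data and is only a schematic of the regression design; it is not the thesis's actual estimation, and extra predictors such as VIX changes or option-implied skewness would enter as further columns.

```python
import numpy as np

def har_design(rv):
    """Heterogeneous Autoregressive (HAR) design: regress RV_{t+1} on a constant
    and the daily, weekly (5-day), and monthly (22-day) averages of past RV."""
    rv = np.asarray(rv, dtype=float)
    rows, y = [], []
    for t in range(21, len(rv) - 1):
        rows.append([1.0,
                     rv[t],                     # daily RV
                     rv[t - 4:t + 1].mean(),    # weekly average
                     rv[t - 21:t + 1].mean()])  # monthly average
        y.append(rv[t + 1])
    return np.array(rows), np.array(y)

# Synthetic stand-in for a realized-volatility series (illustration only).
rv = np.abs(np.random.default_rng(0).normal(size=500))
X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS fit of the HAR regression
```

Forecast-accuracy comparisons between the baseline and augmented models would then be made out of sample, e.g. by RMSE of the one-step-ahead predictions.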
|