
Time series analysis and forecasting evaluation with interval data

徐惠莉, Hsu, Hui-Li
Point forecasts provide important information for decision making, especially in economic development, population policy, management planning, and financial control. A forecasting model built only from single-valued observations, however, may fail to capture the whole trend of a daily or monthly process: the process being predicted is subject to many unpredictable, continuous fluctuations, so the observed values are discrete snapshots that cannot fully represent it, and the information collected is too vague and incomplete for the real number system alone to express the forecasting model. Given this uncertainty in point forecasting, this dissertation uses interval data to build forecasting models and generate predictions. It investigates the dynamic behaviour of interval time series and the evaluation of interval forecasting performance in three parts: the analysis and forecasting of interval time series, the evaluation of forecasting accuracy for interval data, and the calculation of a correlation coefficient for interval data.

First, treating an interval as a fuzzy number, each interval is decomposed into its midpoint and its length, and interval forecasting methods are constructed, including the interval moving average, the weighted interval moving average, and ARIMA interval forecasting. Several stationary and non-stationary interval time series are generated by simulation and forecast with the proposed methods; judged by measures of forecasting performance, and in two practical case studies, ARIMA interval forecasting proves more accurate and more flexible than the traditional methods.

Second, performance criteria for interval forecasts are developed to assess the validity of the forecast results: the mean squared interval error, the mean relative interval error, and the mean exclusive-or (XOR) ratio. The empirical analyses show that the mean XOR ratio gives policymakers more accurate information and supports more objective judgments.

Third, the dissertation considers how interval data can be used to compute correlation coefficients. The traditional correlation coefficient r, computed from single-valued data, is a convenient and easily understood way to describe whether two variables are related, but such data may not represent the population well, and the resulting coefficient may be too subjective. Treating an interval as a fuzzy number, a fuzzy correlation coefficient is constructed; a practical example then compares it with the traditional correlation coefficient. In describing the strength of the relationship between two variables, the fuzzy correlation coefficient offers a more flexible statistical method, and it can indicate a more conservative correlation than the traditional coefficient.
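
As a rough illustration of the midpoint/length decomposition described above, the sketch below forecasts an interval series by applying a simple moving average to the interval midpoints and half-lengths separately, then scores the forecasts with one plausible form of the mean squared interval error. The data and the exact form of the error criterion are our assumptions, not taken from the dissertation.

```python
import numpy as np

def interval_moving_average(lower, upper, window=3):
    """Forecast an interval series by decomposing each interval into its
    midpoint and half-length and applying a moving average to each.
    Returns one-step-ahead forecasts as (lower, upper) arrays."""
    mid = (np.asarray(lower) + np.asarray(upper)) / 2.0
    half = (np.asarray(upper) - np.asarray(lower)) / 2.0
    # One-step-ahead forecast: the mean of the last `window` observations.
    mid_hat = np.convolve(mid, np.ones(window) / window, mode="valid")
    half_hat = np.convolve(half, np.ones(window) / window, mode="valid")
    return mid_hat - half_hat, mid_hat + half_hat

def mean_squared_interval_error(l_true, u_true, l_pred, u_pred):
    """Average of squared endpoint errors: one plausible reading of the
    'mean squared interval error' criterion."""
    return np.mean((np.asarray(l_true) - l_pred) ** 2
                   + (np.asarray(u_true) - u_pred) ** 2)

# Daily low/high style interval series (synthetic).
lower = np.array([10.0, 10.4, 10.1, 10.8, 11.0, 11.3])
upper = np.array([11.2, 11.5, 11.4, 12.0, 12.3, 12.6])
l_hat, u_hat = interval_moving_average(lower, upper, window=3)
# Align each forecast with the observation it predicts (one step ahead).
print(mean_squared_interval_error(lower[3:], upper[3:], l_hat[:-1], u_hat[:-1]))
```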

From Financial Account to Asian Currency Crisis

郭怡婷, Kuo, Yi-Ting
The East Asian financial crises of the late 1990s caused sharp currency depreciations and widespread bank failures. Financial crises can broadly be divided into currency crises and banking crises; a currency crisis is defined as a nominal exchange rate depreciation of more than 25% in any quarter of a year that also exceeds the previous quarter's depreciation by more than 10 percentage points. Many empirical studies show that an overvalued exchange rate is a precursor of a currency crash, and the rapid opening of capital markets in emerging countries in recent years has become a trigger for crises. To analyze this phenomenon, this thesis first compiles a financial-account-weighted real effective exchange rate index for the New Taiwan dollar. It then applies cointegration tests to the exchange rates, relative prices (each country's price level relative to the United States), and financial account balances of Taiwan, Indonesia, Korea, the Philippines, and Thailand to identify the long-run equilibrium relationships among the three variables, and finally adds the error terms to the model to estimate a vector error correction model (VECM). The empirical results show that the financial account and relative prices have a significant influence on the exchange rate.
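
The crisis definition quoted above is mechanical enough to state directly in code. A minimal sketch (variable names are ours):

```python
def is_currency_crisis(depreciation, prev_depreciation):
    """Flag a quarter as a currency crisis under the definition used in the
    thesis: nominal depreciation above 25% that also exceeds the previous
    quarter's depreciation by more than 10 percentage points.
    Rates are percentages, e.g. 30.0 for 30%."""
    return depreciation > 25.0 and (depreciation - prev_depreciation) > 10.0

# A 28% depreciation following a 12% depreciation qualifies...
print(is_currency_crisis(28.0, 12.0))   # True
# ...but not one following 20%, which is only 8 points higher.
print(is_currency_crisis(28.0, 20.0))   # False
```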

Sequential Designs with Measurement Errors in Logistic Models with Applications to Educational Testing

盧宏益, Lu, Hung-Yi
This dissertation studies estimation in logistic regression models when the independent variables are subject to measurement error, motivated by the online calibration problem in computerized adaptive testing (CAT). We apply measurement error model techniques and adaptive sequential design methodology to online calibration, and prove that the item parameter estimates are strongly consistent under a variable-length CAT setup. Item response theory (IRT) is the psychometric model most commonly used in CAT: an examinee's performance on an item is related to his or her latent ability through an item characteristic curve, and logistic models are the most popular IRT models. In adaptive testing, each examinee is presented with a different set of items chosen from a pre-calibrated item pool to best match his or her ability, so items are consumed much faster than in traditional testing, and replenishing the pool with newly calibrated items is essential. Online calibration estimates the parameters of new, uncalibrated items by administering them to examinees alongside previously calibrated items during operational ability testing. The examinees' estimated latent trait levels serve as the design points for estimating the new items' parameters, and these design points are naturally subject to estimation error. The online calibration problem in CAT can therefore be formulated as a sequential estimation problem with measurement errors in the independent variables, which are themselves chosen sequentially; this is why a nonlinear design problem and nonlinear measurement error models are both involved. The sequential design procedures proposed here reduce the effect of measurement error, provide more accurate parameter estimates, and are more efficient in terms of sample size (the number of examinees used in calibration). In the traditional calibration process for paper-and-pencil tests, examinees usually must be paid to take part in pre-test calibration; in online calibration, new items are assigned to examinees during the operational test, so the proposed procedures are both cost-effective and time-effective.
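
A small simulation may make the measurement-error structure concrete. Assuming the standard two-parameter logistic (2PL) item response function, one of the "logistic type models" the abstract refers to, the design points available for calibrating a new item are estimated abilities, i.e. true abilities observed with noise. All parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, a, b):
    """2PL item response function: probability that an examinee with
    ability theta answers an item with discrimination a and difficulty b
    correctly."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# True abilities of 1000 examinees and a new item to be calibrated.
theta = rng.normal(0.0, 1.0, size=1000)
a_true, b_true = 1.2, 0.3

# In online calibration the design points are *estimated* abilities:
# the true abilities plus estimation error.
theta_hat = theta + rng.normal(0.0, 0.3, size=theta.size)

# Simulated responses to the new item.
y = rng.random(theta.size) < p_correct(theta, a_true, b_true)
print("proportion correct:", y.mean())

# Naive calibration would fit the logistic model treating theta_hat as an
# error-free covariate; the dissertation studies sequential designs that
# account for this measurement error.
```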

Logistic regression models when covariates are measured with errors: Estimation, design and sequential method

簡至毅, Chien, Chih Yi
This thesis studies parameter estimation, experimental design, and sequential methods in both prospective and retrospective logistic regression models when covariates are measured with error. Imprecise measurement of exposure occurs very often in practice, for example in retrospective epidemiological studies, because measurement may be difficult or costly. Imprecisely measured variables can bias the coefficient estimates in a regression model and thus lead to incorrect inference, which matters whenever the effects of those variables are of primary interest.

For the prospective logistic regression model, we derive asymptotic results for the estimators of the regression parameters with mismeasured covariates: under certain assumptions on the measurement error, the estimators are strongly consistent, asymptotically unbiased, and asymptotically normally distributed, with the same limiting distribution as in the error-free case. In contrast to the traditional assumptions on measurement error, which are used mainly for proving large-sample properties, we assume that the measurement error decays gradually at a certain rate as new observations are added to the model. This assumption can be fulfilled when the usual replicate-observation method is used to dilute the magnitude of the measurement errors, and it is therefore also more reasonable from a practical viewpoint; moreover, our theorems do not require the measurement error and the covariate to be independent. An experimental design whose measurement error satisfies the required decay rate is introduced. In addition, the assumption allows us to apply sequential sampling, which is popular in clinical trials, to such a measurement error logistic regression model; the sequential method cannot be applied under the assumption, common in most of the literature, that the measurement errors stay uniform as the sample size increases. A sequential estimation procedure based on maximum likelihood estimators (MLEs) and these moment conditions is proposed and shown to be asymptotically consistent and efficient.

Case-control studies are widely used in clinical trials and epidemiological studies, and the odds ratio can be consistently estimated from exposure variables under logistic models (see Prentice and Pyke (1979)). A two-stage case-control sampling scheme is employed to construct a confidence region for the slope coefficient beta, with the necessary sample size calculated for a given pre-determined level. We further consider measurement error in the covariates of a retrospective case-control logistic regression model and derive asymptotic results for the MLEs of the regression coefficients under moment conditions on the measurement errors: the MLEs are strongly consistent, asymptotically unbiased, and asymptotically normally distributed. Simulation results for the proposed two-stage procedures, together with numerical studies and real data, verify the theoretical results under different measurement error scenarios.
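
The replicate-observation method mentioned above can be sketched in a few lines: averaging k independent measurements of a covariate shrinks the measurement error variance by a factor of 1/k, one concrete way to realize the assumed decay of the errors. This is a toy illustration, not the thesis's actual design:

```python
import numpy as np

rng = np.random.default_rng(1)

def observe_with_replicates(x_true, error_sd, n_replicates):
    """Average n_replicates noisy measurements of a covariate; the error
    variance of the average is error_sd**2 / n_replicates."""
    noise = rng.normal(0.0, error_sd, size=(n_replicates, x_true.size))
    return (x_true + noise).mean(axis=0)

x = rng.normal(0.0, 1.0, size=5)
for k in (1, 4, 16):
    x_obs = observe_with_replicates(x, error_sd=0.5, n_replicates=k)
    # Error standard deviation is roughly 0.5 / sqrt(k).
    print(k, np.abs(x_obs - x).mean())
```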

Study on error detection methods for digital elevation models

林永錞, Lin, Yung Chun
This study applies error detection methods to find possible elevation errors in digital elevation models (DEMs) and thereby improve DEM quality. Three detection methods are employed: a parametric statistical method, the flow direction matrix, and constrained slope and change. These methods were previously applied to grid DEMs produced by photogrammetry; here they are extended to high-resolution DEMs produced by airborne light detection and ranging (LIDAR). Simulated DEMs are used to verify the detection capability of the methods: terrain surfaces are first fitted with polynomial functions and assumed error-free, and artificial errors are then added at random. The methods are also applied to real LIDAR DEM data, with elevation check-point surveys used to validate the results. The parametric statistical method and constrained slope and change give similar results, and both over-detect errors; this can be improved by raising the threshold or applying a high-pass filter. The flow direction matrix is less suitable for error detection, but can be used to fill sinks and optimize the terrain for watershed analysis. Keywords: digital elevation model, error detection, parametric statistical method, constrained slope and change, flow direction matrix.
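
As a rough sketch of neighbourhood-based detection, the code below flags cells whose deviation from the mean of their eight neighbours is unusually large. This is only one plausible reading of the parametric statistical method; the statistic actually used in the study may differ.

```python
import numpy as np

def flag_elevation_errors(dem, k=3.0):
    """Flag a cell whose elevation deviates from the mean of its 8
    neighbours by more than k times the standard deviation of all such
    deviations (a minimal parametric-style detector)."""
    padded = np.pad(dem, 1, mode="edge")
    # Sum over the 8 neighbours of every interior cell.
    neigh_sum = sum(padded[1 + di:padded.shape[0] - 1 + di,
                           1 + dj:padded.shape[1] - 1 + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))
    resid = dem - neigh_sum / 8.0
    return np.abs(resid) > k * resid.std()

dem = np.add.outer(np.arange(6.0), np.arange(6.0))  # smooth sloping terrain
dem[3, 3] += 10.0                                   # injected gross error
print(np.argwhere(flag_elevation_errors(dem)))      # e.g. [[3 3]]
```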

A study of password-based authenticated key exchange from lattices for client/server model

鄭逸修
Password-based authenticated key exchange (PAKE) is a technique that allows two parties to authenticate each other and generate a shared session key. The two parties use a shared password as the basis for authentication and, once verification succeeds, derive a secret session key known only to them, through which a secure channel can then be established for transmitting confidential messages. This thesis proposes a password-based authenticated key exchange protocol from lattices for the client/server model. The client needs to keep only the password shared with the server, while the server holds, in addition to the password, its own public/private key pair. The two parties authenticate each other via the shared password and complete both authentication and key exchange within two steps. The security of the protocol is based on the learning with errors (LWE) problem over lattices, so it remains secure even against the powerful computing capability of a future quantum computer, yielding a secure and efficient password-based authenticated key agreement.
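
The LWE problem underlying the protocol can be illustrated with a toy instance: given (A, b = As + e mod q) with a small error vector e, recovering s is conjectured to be hard even for quantum computers. The parameters below are illustrative only and far too small for real security:

```python
import numpy as np

rng = np.random.default_rng(2)

def lwe_sample(n=8, m=16, q=3329, noise_sd=1.0):
    """Generate a toy LWE instance (A, b = A s + e mod q)."""
    A = rng.integers(0, q, size=(m, n))
    s = rng.integers(0, q, size=n)                               # secret
    e = np.rint(rng.normal(0.0, noise_sd, size=m)).astype(int)   # small error
    b = (A @ s + e) % q
    return A, b, s

A, b, s = lwe_sample()
# Without the error e, s could be recovered by plain linear algebra; the
# noise is what makes the problem (conjecturally) hard.
print(A.shape, b.shape)
```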

Cointegration and Causality Test among Mergers, Stock Price and Index of Industrial Production in the United States of America

張秀雲, Hsiu-Yun Chang
This thesis uses cointegration and causality tests to examine the predictability among three variables, the number of mergers, stock prices, and the index of industrial production in the United States, in the periods before and after the third merger wave. Unlike earlier work, the study splits the merger wave into sub-periods, tests the long-run equilibrium relationships among the variables with the cointegration procedure of Hoornik and Hendry (1997), which builds on Johansen (1988), and then tests predictability with both the Toda and Phillips (1994) causality testing procedure and the SSW causality test.

The empirical results show: (1) ADF unit root tests indicate that all three series are non-stationary I(1) series; cointegration tests then show that, in both the three-variable model and the two-variable (mergers and stock prices) model, one cointegrating relationship exists before the fourth quarter of 1967 but none after 1968. (2) In the three-variable system, stock prices and industrial production are likely to be highly collinear, and in the sub-periods the lag length could not be made long enough for the influence of industrial production on the other variables to show clearly, so the three-variable model cannot correctly test the causal relationships before and after the merger wave. (3) In the causality tests between mergers and stock prices, stock prices had predictive power for mergers during 1948-1967 but none at all during 1968-1979; thus the predictability of mergers from stock prices changed across the merger wave, from stock prices reasonably predicting merger activity before 1967 to no predictive power afterwards. (4) Further consideration of the many factors affecting mergers shows that the Williams Act had a considerable impact on merger cases at the time. The cointegration and causality tests thus repeatedly confirm a structural change, across the merger wave, in the ability of stock prices to predict merger activity.

Contents: Chapter 1, Introduction (background and motivation; purpose; definition and basic concepts of mergers; framework and procedure). Chapter 2, Literature review (theoretical literature; empirical literature; summary). Chapter 3, Empirical methodology (unit root tests; cointegration tests; causality tests; testing procedure). Chapter 4, Empirical results (data sources; augmented Dickey-Fuller unit root tests; cointegration tests; causality tests; causality test results). Chapter 5, Legal considerations (historical background; the influence of the statutes; case analysis; the force of the Williams Act). Chapter 6, Conclusion. Appendices: plots of the variables; plots of cointegration residuals. References.
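
The ADF unit root test used in finding (1) is available in standard libraries. A minimal sketch with synthetic data (a random walk, which is I(1) by construction):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)

# A random walk is I(1): its level is non-stationary, its first
# difference is stationary, which is what the ADF test should report.
walk = np.cumsum(rng.normal(size=300))

for name, series in [("level", walk), ("first difference", np.diff(walk))]:
    stat, pvalue, *_ = adfuller(series)
    print(f"{name}: ADF stat = {stat:.2f}, p-value = {pvalue:.3f}")
# Typical output: large p-value for the level (unit root not rejected),
# tiny p-value for the difference, i.e. the series behaves as I(1).
```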

The prediction of new Taiwan dollars: nonstationary method

賴恬忻, Lai, Teng-Shing
Since 1997, the Asian financial crisis has caused large swings in Asian exchange rates, making the accuracy of exchange rate forecasting an important research topic. After the collapse of the Bretton Woods system in 1973 and the industrial countries' shift to floating rates, large exchange rate fluctuations meant that balance-of-payments theory could no longer explain how exchange rates are determined, and in the 1970s scholars proposed various exchange rate determination theories, among which the monetary model and the portfolio balance model received the most attention. From 1978 onward, however, the explanatory power of these structural models came into question; in 1983 Meese and Rogoff even showed that the out-of-sample forecasting performance of structural models was inferior to that of the random walk model, prompting a debate over which forecasts better out of sample. As econometric methods evolved from stationary to nonstationary techniques, studies such as MacDonald and Taylor (1993, 1994) and 吳宜璋 (1996) used error correction models for forecasting.

This study also forecasts with an error correction model, but improves on earlier work in three ways: (1) adding dummy variables for structural change; (2) forecasting with a vector error correction model rather than a single error correction equation, so that the whole system is used for prediction; and (3) incorporating prior information through Bayesian methods to improve the forecasts. The conclusion is that during the financial crisis, when the exchange rate was driven largely by non-fundamental factors, the Bayesian VAR model forecast better, while before the crisis the Bayesian VECM was the better forecasting model; the structural model's out-of-sample forecasts also outperformed the random walk model.
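
A bare-bones VECM forecast, without the structural-change dummies and Bayesian priors the thesis adds, might look like the following sketch using statsmodels and synthetic cointegrated data:

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(4)

# Two cointegrated series: a common random-walk trend plus stationary
# noise, standing in for the exchange rate and a fundamentals variable.
trend = np.cumsum(rng.normal(size=200))
data = np.column_stack([trend + rng.normal(scale=0.3, size=200),
                        0.8 * trend + rng.normal(scale=0.3, size=200)])

# One cointegrating relation, one lagged difference, constant inside the
# cointegration relation.
model = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci")
res = model.fit()
print(res.predict(steps=4))   # out-of-sample forecasts, 4 steps ahead
```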

The cointegration relationship between money demand and stock trading volume: the case of Taiwan

李博遠, Li, Po-Yuan
Traditionally, the money demand function is estimated with the price level, income, and interest rates as explanatory variables, but the booming stock market of recent years has had a definite influence on money demand. Friedman identified four channels through which the stock market affects money demand: a transaction effect, a portfolio adjustment effect, a wealth effect, and a substitution effect; the substitution effect is negative and the other three are positive. The influence also runs the other way: money demand affects the stock market as well. This thesis applies the Johansen procedure. First, a standard money demand model is estimated with money demand, prices, income, and interest rates; the empirical results confirm two cointegrating relationships among these variables, a money demand cointegration equation and a price cointegration equation. Stock trading volume is then added to the model, and again two cointegrating relationships are confirmed. The Johansen procedure offers five model specifications suited to different situations, and it is not easy to judge from the data in advance which to use, so several criteria are applied to select the most suitable model: the signs and magnitudes of the cointegrating coefficients, normality and serial correlation tests on the residuals of the vector error correction model, and key statistics (RSS, AIC, SC). The empirical results show that, whether or not stock trading volume is included, model 3 is the most appropriate: the data have non-zero means and a linear trend, but the cointegration equation contains only an intercept. Judging from the influence of the money demand cointegration residuals on the variables, continued growth in M1A and M1B enlarges stock trading volume, and continued growth in M1B also creates upward pressure on prices. The influence of the price cointegration residuals on the variables is harder to interpret, perhaps because prices in Taiwan have long been stable and Taiwan's stock market is strongly driven by sentiment and news, so a full explanation in terms of macroeconomic variables is not easy. Even so, the interaction between the money market and the stock market remains well worth studying.
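
The Johansen trace test behind these rank determinations can be sketched with synthetic data. Three series built from one common stochastic trend have cointegration rank two, mirroring the two cointegrating equations found above (the deterministic-term choice below is illustrative, not the thesis's model 3):

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(5)

# Three series sharing one stochastic trend, so the trace test should
# indicate two cointegrating relationships.
trend = np.cumsum(rng.normal(size=300))
data = np.column_stack([trend + rng.normal(scale=0.2, size=300),
                        2.0 * trend + rng.normal(scale=0.2, size=300),
                        -trend + rng.normal(scale=0.2, size=300)])

# det_order=0: constant term; one lagged difference.
result = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:  ", result.lr1)
print("95% critical values:", result.cvt[:, 1])
# The cointegration rank r is the number of trace statistics that exceed
# their critical values.
```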

Regression analysis for cluster samples with nested-error structure

賴昭如
When analyzing a regression model with a nested-error structure, ignoring the correlations among the random errors and testing model adequacy with the standard F statistic (F^S) derived from the ordinary least squares (OLS) estimator inflates the type I error rate; taking the correlations into account and using the F statistic (F^GLS) derived from the generalized least squares (GLS) estimator makes the computation more complicated. The model can instead be transformed into a new model with independent random errors and then tested with F^S; the result is the same as testing directly with F^GLS, and the calculation is more convenient. Since the transformation matrix is a function of the variance components, when the population variances are unknown we estimate them by Henderson's fitting-of-constants method. Simulation shows that when the number of observations in each stage is equal, the GLS estimator is more stable than the OLS estimator for both two-stage and three-stage nested-error structures, and F^GLS also outperforms F^S in both power and actual significance level.
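
A minimal numeric sketch of the GLS estimator for a two-stage nested-error structure, with the variance components treated as known (in the thesis they are estimated by Henderson's fitting-of-constants):

```python
import numpy as np

rng = np.random.default_rng(6)

def gls_estimate(X, y, V):
    """Generalized least squares: beta_hat = (X' V^-1 X)^-1 X' V^-1 y."""
    Vinv = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)

# Two-stage nested errors: 10 clusters of 5 observations; observations in
# a cluster share a random cluster effect, so their errors are correlated.
n_clusters, m, sigma_u2, sigma_e2 = 10, 5, 1.0, 0.5
n = n_clusters * m
X = np.column_stack([np.ones(n), rng.normal(size=n)])
u = np.repeat(rng.normal(0, np.sqrt(sigma_u2), n_clusters), m)  # cluster effect
y = X @ np.array([1.0, 2.0]) + u + rng.normal(0, np.sqrt(sigma_e2), n)

# Error covariance: block-diagonal with exchangeable blocks
# sigma_e2 * I + sigma_u2 * J within each cluster.
block = sigma_e2 * np.eye(m) + sigma_u2 * np.ones((m, m))
V = np.kron(np.eye(n_clusters), block)

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_gls = gls_estimate(X, y, V)
print("OLS:", beta_ols, " GLS:", beta_gls)
```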
