61 |
過濾靴帶反覆抽樣與一般動差估計式 / Sieve Bootstrap Inference Based on GMM Estimators of Time Series Data. 劉祝安, Liu, Chu-An
In this paper, we propose two types of sieve bootstrap, a univariate and a multivariate approach, for generalized method of moments (GMM) estimators of time series data. Compared with the nonparametric block bootstrap, the sieve bootstrap is in essence parametric, which helps it fit the data better when researchers have prior information about the time series properties of the variables of interest. Our Monte Carlo experiments show that the performance of these two types of sieve bootstrap is comparable to that of the block bootstrap. Furthermore, unlike the block bootstrap, which is sensitive to the choice of block length, the two sieve bootstraps are less sensitive to the choice of lag length.
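As a rough illustration of the univariate sieve idea (not the thesis's implementation; the AR order, the toy data, and the function names are invented for this sketch), an AR(p) sieve bootstrap fits an autoregressive approximation, resamples the centered residuals, and rebuilds bootstrap series:

```python
# Hypothetical sketch of a univariate sieve bootstrap: fit an AR(p)
# approximation by least squares, resample centered residuals, and
# regenerate series from the fitted recursion.
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit; returns coefficients and centered residuals."""
    # column k holds the lag-(k+1) values of x aligned with y = x[p:]
    X = np.column_stack([x[p - k - 1: len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, resid - resid.mean()

def sieve_bootstrap(x, p, rng):
    coef, resid = fit_ar(x, p)
    boot = list(x[:p])                      # warm start with observed values
    for _ in range(len(x) - p):
        past = boot[-p:][::-1]              # most recent value first
        boot.append(float(np.dot(coef, past) + rng.choice(resid)))
    return np.array(boot)

rng = np.random.default_rng(0)
# toy data: a simulated AR(1) series standing in for the GMM setting
x = np.zeros(200)
for t in range(1, 200):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()
boot = sieve_bootstrap(x, p=2, rng=rng)
```

In a full GMM application one would recompute the estimator on each bootstrap series; the sketch stops at series generation.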
62 |
外匯選擇權的定價-馬可夫鏈蒙地卡羅法(MCMC)之績效探討 / Pricing Foreign Exchange Options: The Performance of the Markov Chain Monte Carlo (MCMC) Method. 任紀為
In the real world, many financial and economic variables (stock prices, exchange rates, interest rates, and so on) sometimes fluctuate only slightly, remaining in a relatively stable regime, while at other times political events or changes in the economic environment trigger periods of violent swings. Regime Switching Volatility (RSV) models have been proposed to capture this phenomenon.
This thesis focuses on the foreign exchange options market, whose annual trading volume is enormous. Based on the RSV model, we use Gibbs sampling, a Markov Chain Monte Carlo (MCMC) method, to estimate the model's parameters and then price foreign exchange options under the RSV model. We compare these prices with Black-Scholes (BS) prices and with actual market transaction data, and report the resulting volatility smiles and implied volatility surfaces. The results show that option prices computed from the RSV model with the MCMC algorithm indeed outperform the traditional BS approach, and that they account for the volatility term structure and the volatility smile, capturing the features observed in market option prices.
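A hedged sketch of the pricing step only: Monte Carlo valuation of a currency call under a two-state regime-switching volatility process. All numbers here (volatilities, transition matrix, rates, strike) are made-up illustrations, not the thesis's Gibbs-sampled estimates:

```python
# Illustrative Monte Carlo pricing of an FX call under a simple two-state
# regime-switching volatility model: each path carries a Markov state that
# selects between a calm and a turbulent volatility.
import numpy as np

def rsv_call_price(S0, K, r, T, n_steps, n_paths, vols, P, rng):
    dt = T / n_steps
    prices = np.full(n_paths, S0)
    state = np.zeros(n_paths, dtype=int)        # start in the calm regime
    for _ in range(n_steps):
        # Markov regime transition for each path
        u = rng.random(n_paths)
        state = np.where(u < P[state, 0], 0, 1)
        sig = vols[state]
        z = rng.standard_normal(n_paths)
        # log-normal step with the regime's volatility
        prices *= np.exp((r - 0.5 * sig**2) * dt + sig * np.sqrt(dt) * z)
    payoff = np.maximum(prices - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

rng = np.random.default_rng(1)
vols = np.array([0.05, 0.25])                   # calm vs turbulent volatility
P = np.array([[0.98, 0.02], [0.05, 0.95]])      # regime transition matrix
price = rsv_call_price(S0=30.0, K=30.0, r=0.01, T=0.5,
                       n_steps=126, n_paths=20000, vols=vols, P=P, rng=rng)
```

The regime persistence in `P` is what produces the fat tails and smile effects the abstract describes; a constant-volatility run of the same code reduces to plain Black-Scholes Monte Carlo.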
63 |
檢驗以比較為基礎的決策理論-Decision by sampling theory之適切性 / Examination of the Comparison-Based Decision Making Theory: The Boundary of the Decision by Sampling Theory. 李孟潔
Vlaev et al. (2011) classified most decision theories into three views: "value first" (Type 1), "value computation with comparison" (Type 2), and "pure comparison" (Type 3). Within the pure-comparison view, the decision by sampling theory of Stewart et al. (2006) holds that a decision maker merely forms a decision sample and evaluates a target through pairwise ordinal comparisons between the target and the other items in that sample, without ever computing the values of the options. However, few studies have examined whether decisions indeed involve no computation of stimulus values, or whether Type 3 theories outperform Type 1 and Type 2 theories.
This study examines the adequacy of the decision by sampling theory through a database analysis and four experiments. In the database analysis, salary, relative position of salary, and relative rank of salary, representing the three views, served as predictors in a hierarchical regression with job satisfaction as the dependent variable; the results support relative rank as the best predictor of job satisfaction.
The experiments, built on the design of Brown et al. (2008), tested whether the effect of rank on satisfaction ratings exists and dominates. The results show that although the rank effect is robust, its strength varies with whether the experimental procedure prompts participants to compare, and relative position also affects the ratings; the findings therefore support the range-frequency theory of the Type 2 view.
This does not mean the decision by sampling theory is wrong. Comparing the four experiments, this study suggests that incorporating possible biases in memory or in the sampling process would improve the theory's fit to the empirical data. Moreover, because neither range-frequency theory nor decision by sampling theory considers individual sensitivity to physical stimuli or the mapping between physical and psychological spaces, some participants' responses may not be well described by either model, which is a direction for future research.
Although the experimental design could not assess participants' memory of the stimuli, and the experiments are far simpler than real decision contexts, which limits generalizability, this study of relatively simple settings that contrast subtle procedural differences still reveals individuals' flexibility in adapting their behavior to the task environment, posing a challenge to the rational-agent assumption of traditional economics.
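The evaluation rule of decision by sampling can be stated compactly: a target's subjective value is its relative rank in pairwise comparisons against a sampled comparison set, with no arithmetic on the option values themselves. A minimal sketch with invented salary numbers:

```python
# Minimal sketch of the decision-by-sampling (DbS) evaluation rule.
# The comparison sample below is invented for illustration.
def dbs_value(target, sample):
    """Fraction of pairwise ordinal comparisons the target wins."""
    wins = sum(1 for s in sample if target > s)
    return wins / len(sample)

# salaries recalled from memory / context form the decision sample
context = [22000, 28000, 31000, 35000, 40000, 52000]
satisfaction = dbs_value(34000, context)   # rank-based value of a 34000 salary
```

Note that only order matters: doubling every salary in `context` leaves `satisfaction` unchanged, which is exactly the property the experiments above probe against range (relative position) effects.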
64 |
自變數有誤差的邏輯式迴歸模型:估計、實驗設計及序貫分析 / Logistic Regression Models When Covariates Are Measured with Errors: Estimation, Design and Sequential Method. 簡至毅, Chien, Chih Yi
In this thesis, we focus on the estimation of unknown parameters, experimental designs, and sequential methods in both prospective and retrospective logistic regression models with covariates measured with error. Imprecise measurement of exposure happens very often in practice, for example in retrospective epidemiological studies, owing either to the difficulty or to the cost of measurement. Imprecisely measured variables can bias coefficient estimation in a regression model and may therefore lead to incorrect inference, so this is an important issue whenever the effects of those variables are of primary interest.
When considering a prospective logistic regression model, we derive asymptotic results for the estimators of the regression parameters when covariates are mismeasured. If the measurement error satisfies certain assumptions, we show that the estimators are strongly consistent, asymptotically unbiased, and asymptotically normally distributed. Contrary to the traditional assumption on measurement error, which is used mainly for proving large-sample properties, we assume that the measurement error decays gradually at a certain rate as new observations are added to the model. This assumption can be fulfilled when the usual replicate-observation method is used to dilute the magnitude of measurement errors, and it is therefore also more useful from a practical viewpoint. Moreover, independence of the measurement error and the covariate is not required in our theorems. An experimental design with measurement error satisfying the required decay rate is introduced. In addition, this assumption allows us to employ sequential sampling, which is popular in clinical trials, in such a measurement-error logistic regression model; the sequential method cannot be applied under the assumption, common in the literature, that measurement errors decay uniformly as the sample size increases. A sequential estimation procedure based on MLEs and these moment conditions is therefore proposed and shown to be asymptotically consistent and efficient.
Case-control studies are widely used in clinical trials and epidemiological studies. It can be shown that the odds ratio is consistently estimable with some exposure variables based on logistic models (see Prentice and Pyke (1979)). A two-stage case-control sampling scheme is employed to construct a confidence region for the slope coefficient beta, and the necessary sample size is calculated for a given predetermined level. Furthermore, we consider measurement error in the covariates of a retrospective case-control logistic regression model and derive asymptotic results for the maximum likelihood estimators (MLEs) of the regression coefficients under moment conditions on the measurement errors. Under such moment conditions, the MLEs are shown to be strongly consistent, asymptotically unbiased, and asymptotically normally distributed. Simulation results for the proposed two-stage procedures are reported, and numerical studies and real data are used to verify the theoretical results under different measurement error scenarios.
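A small simulation sketch (not the thesis's estimator or design) of two ingredients discussed above: attenuation of the logistic slope under covariate measurement error, and how averaging replicate observations, which shrinks the error variance, pulls the estimate back toward the truth. The fitting routine is a plain Newton-Raphson logistic MLE; all parameter values are invented:

```python
# Toy illustration of attenuation bias in logistic regression with a
# mismeasured covariate, and of the replicate-observation remedy.
import numpy as np

def logistic_mle(x, y, iters=25):
    """Newton-Raphson MLE for (intercept, slope) in a logistic model."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)                      # Fisher weights
        grad = X.T @ (y - p)
        hess = X.T @ (X * W[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(2)
n, true_slope = 5000, 1.0
x = rng.standard_normal(n)                   # true covariate (unobserved)
y = (rng.random(n) < 1 / (1 + np.exp(-true_slope * x))).astype(float)

noisy = x + rng.normal(0, 1.0, n)                          # one noisy reading
replicated = x + rng.normal(0, 1.0, (5, n)).mean(axis=0)   # 5 replicates averaged

b_noisy = logistic_mle(noisy, y)[1]          # attenuated toward zero
b_rep = logistic_mle(replicated, y)[1]       # closer to the true slope 1.0
```

Averaging 5 replicates cuts the error variance by a factor of 5, which is the simplest concrete instance of the "decaying measurement error" idea the abstract builds on.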
65 |
隨機波動模型(stochastic volatility model)--台幣匯率短期波動之研究 / Stochastic Volatility Model: The Study of the Volatility of the NT Exchange Rate in the Short Run. 王偉濤, Wang, Wei-Tao
No description available.
66 |
工商及服務業普查資料品質之研究 / Data Quality Research of the Industry and Commerce Census. 邱詠翔
Data quality affects the quality of decisions and the outcomes of the actions based on them, so it has received increasing attention in recent years. This study involves two databases: an industrial innovation survey database and the 2006 (ROC year 95) Industry and Commerce Census database. Data quality is likewise a crucial issue for any single database: databases often contain erroneous records, and erroneous records bias analysis results, so data cleaning and consolidation are necessary preprocessing steps before analysis.
From the population and sample data distributions we find that, before cleaning and consolidation, the average number of employees is 92.08 in the innovation survey and 135.54 in the census sample. After cleaning and consolidation, we compare the two databases' employee counts in terms of correlation, similarity, and distance. The results show that the two databases are highly consistent: the average numbers of employees are 39.01 and 42.12 respectively, much closer to the population average of 7.05, which underscores the importance of data cleaning.
The method used in this study is post-stratified sampling, and the main objective is to use the industrial innovation survey sample to assess the accuracy of the 2006 census population data. Estimates of both the number of employees and operating revenue from the innovation survey sample overestimate the census figures. We attribute this to the sampling frames: the innovation survey's frame is the directory of the five thousand largest enterprises published by the China Credit Information Service, whereas the census frame covers general enterprises. Validating against the census subsample corresponding to the innovation survey sample, we find that the 2006 census sample and the innovation survey sample are highly consistent.
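The post-stratified estimator underlying such a study has a simple form: reweight the stratum sample means by known population stratum shares. A sketch with invented strata and numbers (not the census figures above):

```python
# Sketch of a post-stratified mean estimate: combine stratum sample means
# using known population stratum shares as weights.
def post_stratified_mean(samples_by_stratum, pop_shares):
    """samples_by_stratum: {stratum: list of sampled values};
    pop_shares: {stratum: known population share, summing to 1}."""
    assert abs(sum(pop_shares.values()) - 1.0) < 1e-9
    return sum(pop_shares[h] * (sum(v) / len(v))
               for h, v in samples_by_stratum.items())

# invented employee counts by firm-size stratum
samples = {"small": [3, 5, 4], "medium": [20, 30], "large": [120, 80, 100]}
shares = {"small": 0.8, "medium": 0.15, "large": 0.05}  # population shares
est = post_stratified_mean(samples, shares)
```

Because large firms dominate the unweighted sample mean here, reweighting by the true stratum shares corrects exactly the kind of frame-induced overestimation the abstract describes.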
67 |
以高效率狄氏演算法產生其他機率分配 / Generation of Distributions Based on an Efficient Dirichlet Algorithm. 陳韋成, Chen, Wei Cheng
Dirichlet distributions can be taken as a high-dimensional version of beta distributions, with many applications such as conjugate prior distributions in Bayesian inference and the modeling of multivariate data. When the parameters are α_1=⋯=α_(n+1)=1, the Dirichlet distribution is the uniform distribution on the n-dimensional simplex. Uniform distributions over irregular high-dimensional domains have various applications, such as quadrat sampling in species surveys and Monte Carlo simulation, which often require uniform random vectors over polyhedra; with Dirichlet distributions, such uniform random vectors can be generated more efficiently. This thesis evaluates the R package "rBeta2009" [8], originally designed by Cheng et al. (2012), and discusses how to use the Dirichlet algorithm in the package to generate other multivariate distributions, including (i) inverted Dirichlet random vectors, (ii) Liouville random vectors, and (iii) uniform random vectors over polyhedra defined by linear constraints. Computer simulations verify that the method is more efficient (in CPU time) than existing R packages.
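The standard gamma-ratio construction of Dirichlet variates (a textbook method, not the rBeta2009 algorithm itself) shows how uniform draws on a simplex arise when all parameters equal 1:

```python
# Gamma-ratio construction: independent Gamma(alpha_i) draws, normalized to
# sum to 1, are Dirichlet(alpha). With all alphas = 1 this is the uniform
# distribution on the simplex, which can then be mapped into polyhedra.
import numpy as np

def dirichlet(alphas, size, rng):
    g = rng.gamma(shape=alphas, size=(size, len(alphas)))
    return g / g.sum(axis=1, keepdims=True)

rng = np.random.default_rng(3)
u = dirichlet(np.ones(3), size=1000, rng=rng)   # uniform on the 2-simplex
```

An affine map of these simplex points onto a triangle's (or, in higher dimension, a simplex decomposition of a polyhedron's) vertices yields the uniform polyhedron sampling the abstract mentions.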
68 |
用馬可夫鏈蒙地卡羅法估計隨機波動模型:台灣匯率市場的實證研究 / Estimating Stochastic Volatility Models by Markov Chain Monte Carlo: An Empirical Study of the Taiwan Exchange Rate Market. 賴耀君, Lai, Simon
For the heteroskedasticity of financial time series, stochastic volatility (SV) models offer an alternative to the ARCH family. Because the latent volatility is itself specified as a random process, with a time-varying and autocorrelated conditional variance, SV models are more flexible and realistic than ARCH-type models. Traditional parameter estimation for SV models requires complicated high-dimensional integrals, a problem that can be solved with the Markov Chain Monte Carlo (MCMC) methods of Bayesian analysis. The main subject of this thesis is estimating the parameters of an SV model for the USD/NTD exchange rate by MCMC. Beyond the basic model, three extensions are considered: first, a second-order autoregressive model for the latent volatility; second, a modification of the basic model to test for leverage effects in the exchange rate market; and finally, the addition of a scale mixture to accommodate the heavy-tailed distributions common in financial time series.
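For concreteness, the basic SV model referred to above can be simulated directly; the parameter values here are invented, whereas the thesis estimates such parameters from USD/NTD data by MCMC:

```python
# Simulated path from a basic stochastic volatility model:
#   y_t = exp(h_t / 2) * e_t,
#   h_t = mu + phi * (h_{t-1} - mu) + sigma_eta * v_t,
# with e_t, v_t iid standard normal. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(4)
T, mu, phi, sig_eta = 500, -1.0, 0.95, 0.2
h = np.empty(T)
h[0] = mu                                    # start log-volatility at its mean
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sig_eta * rng.standard_normal()
y = np.exp(h / 2) * rng.standard_normal(T)   # returns with stochastic volatility
```

The persistence parameter `phi` close to 1 produces the volatility clustering that motivates SV (and ARCH-type) models in the first place.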
69 |
應用存活分析在微陣列資料的基因表面定型之探討 / Gene Expression Profiling with Survival Analysis on Microarray Data. 張仲凱, Chang, Chunf-Kai
Analyzing censored survival data with high-dimensional covariates arising from microarray data is an important issue. The main goal is to find, among a large number of genes, the few that have a pivotal influence on patients' survival times or other important clinical outcomes. The Threshold Gradient Directed Regularization (TGDR) method has been used for simultaneous variable selection and model building in high-dimensional regression problems. However, TGDR adopts a gradient-projection type of algorithm and has a slow convergence rate. In this thesis, we propose modified TGDR algorithms that incorporate a Newton-Raphson type of search. The proposed approaches have characteristics similar to TGDR but faster convergence rates. A real cancer microarray data set with censored survival times is used for demonstration.
The second part of this thesis proposes a resampling-based Peto-Peto test for survival functions on interval-censored data. The test can evaluate the power of survival function estimation methods, such as Turnbull's procedure and the Kaplan-Meier estimate; it shows that the power based on the Kaplan-Meier estimate is lower than that based on Turnbull's estimate on interval-censored data. The proposed test is demonstrated on simulated data and on real interval-censored data from a breast cancer study.
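A minimal Kaplan-Meier estimator for right-censored data (illustrative only; the thesis's comparison also involves Turnbull's estimator for interval censoring, which is more involved). The small data set is invented:

```python
# Minimal Kaplan-Meier estimator: at each death time with n subjects at
# risk, the survival curve is multiplied by (1 - 1/n); censored subjects
# only leave the risk set. Ties sort deaths before censorings.
def kaplan_meier(times, events):
    """times: observed times; events: 1 = death, 0 = right-censored.
    Returns a list of (death_time, survival_probability) steps."""
    pairs = sorted(zip(times, events), key=lambda p: (p[0], -p[1]))
    n_at_risk, surv, curve = len(pairs), 1.0, []
    for t, e in pairs:
        if e == 1:
            surv *= 1 - 1 / n_at_risk
            curve.append((t, surv))
        n_at_risk -= 1
    return curve

# invented sample: deaths at 2, 3, 5; censorings at 3 and 8
curve = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
```

The resampling-based Peto-Peto test in the abstract compares curves like this one against Turnbull's estimate across bootstrap resamples; the sketch covers only the curve construction.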
70 |
離散條件機率分配之相容性研究 / On Compatibility of Discrete Conditional Distributions. 陳世傑, Chen, Shih Chieh
For two discrete random variables X1 and X2 taking values in {1,…,I} and {1,…,J}, respectively, a putative conditional model for their joint distribution consists of two I × J matrices representing the conditional distributions of X1 given X2 and of X2 given X1. We say that two conditional distributions (matrices) A and B are compatible if there exists a joint distribution of X1 and X2 whose two conditional distributions are exactly A and B. We present new versions of necessary and sufficient conditions for compatibility of discrete conditional distributions via a graphical representation. Moreover, we show that two given compatible conditional distributions determine a unique joint distribution if and only if the corresponding graph is connected. Markov chain characterizations are also presented.
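For strictly positive matrices, compatibility admits a simple numeric check that complements the graphical condition: writing A_ij = tau_i B_ij / c_j for a candidate marginal tau, the ratio matrix C = A/B must have the rank-one form tau_i / c_j, i.e. every 2×2 cross-ratio of C equals 1. A sketch under that positivity assumption (the graph-based condition in the abstract also covers matrices with zeros, which this check does not):

```python
# Numeric compatibility check for strictly positive conditional matrices:
# A has columns summing to 1 (P(X1|X2)), B has rows summing to 1 (P(X2|X1)).
# Compatible iff C = A/B factors as tau_i / c_j, i.e. all 2x2 cross-ratios
# C[i,j]*C[k,l] - C[i,l]*C[k,j] vanish.
import numpy as np

def compatible(A, B, tol=1e-9):
    C = A / B
    I, J = C.shape
    for i in range(I):
        for k in range(I):
            for j in range(J):
                for l in range(J):
                    if abs(C[i, j] * C[k, l] - C[i, l] * C[k, j]) > tol:
                        return False
    return True

P = np.array([[0.1, 0.2], [0.3, 0.4]])          # a genuine joint distribution
A = P / P.sum(axis=0, keepdims=True)            # its P(X1 | X2)
B = P / P.sum(axis=1, keepdims=True)            # its P(X2 | X1)
ok = compatible(A, B)                           # conditionals from one joint
bad = compatible(A, np.array([[0.5, 0.5], [0.5, 0.5]]))  # mismatched pair
```

When the check passes, the marginal tau (and hence the joint) can be recovered by normalizing any column of C, which is the positive-matrix special case of the uniqueness result above.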