  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Testing for Cointegration in Multivariate Time Series: An evaluation of the Johansen trace test and three different bootstrap tests when testing for cointegration

Englund, Jonas January 2013 (has links)
In this paper we examine, by Monte Carlo simulation, the size and power of the Johansen trace test when the error covariance matrix is nonstationary, and we also investigate the properties of three different bootstrap cointegration tests. Earlier studies indicate that the Johansen trace test is not robust in the presence of heteroscedasticity, and tests based on resampling methods have been proposed to solve the problem. The tests evaluated are the Johansen trace test, a nonparametric bootstrap test, and two different types of wild bootstrap tests. The wild bootstrap is a resampling method that attempts to mimic the GARCH model by multiplying each residual by a stochastic variable with an expected value of zero and unit variance. The wild bootstrap tests proved superior to the other tests, but not as good as earlier studies indicated. The more the error terms differ from white noise, the worse these tests perform. Although the wild bootstrap tests did not perform badly, further work should focus on deriving tests that do an even better job than the wild bootstrap tests examined here.
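The wild bootstrap step described in the abstract — multiplying each residual by a random variable with mean zero and unit variance — can be sketched as below. This is an illustrative sketch, not code from the thesis; it uses Rademacher weights, one common choice of mean-zero, unit-variance multiplier, and the function name is hypothetical:

```python
import numpy as np

def wild_bootstrap_residuals(residuals, rng=None):
    """One wild-bootstrap draw: multiply each residual by an i.i.d.
    Rademacher variable (+1 or -1 with probability 1/2), which has
    mean zero and unit variance, so each observation's conditional
    variance is preserved in the resample."""
    rng = np.random.default_rng(rng)
    w = rng.choice([-1.0, 1.0], size=np.shape(residuals))
    return np.asarray(residuals) * w
```

Because the weights only flip signs, the resampled residuals keep the original magnitudes, which is what lets the scheme mimic conditional heteroscedasticity such as GARCH effects.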
2

The Growth Curve Model for High Dimensional Data and its Application in Genomics

Jana, Sayantee 04 1900 (has links)
Recent advances in technology have allowed researchers to collect high-dimensional biological data simultaneously. In genomic studies, for instance, measurements from tens of thousands of genes are taken from individuals across several experimental groups. In time course microarray experiments, gene expression is measured at several time points for each individual across the whole genome, resulting in a massive amount of data. In such experiments, researchers are faced with two types of high-dimensionality. The first is global high-dimensionality, which is common to all genomic experiments and arises because inference is done on tens of thousands of genes, resulting in multiplicity. This challenge is often dealt with using statistical methods for multiple comparison, such as the Bonferroni correction or the false discovery rate (FDR). We refer to the second type as gene-specific high-dimensionality, which arises in time course microarray experiments because the sample size is often smaller than the number of time points.

In this thesis, we use the growth curve model (GCM), a generalized multivariate analysis of variance (GMANOVA) model, and propose a moderated test statistic for testing a special case of the general linear hypothesis, which is especially useful for identifying genes that are expressed. We take the trace test for the GCM and modify it so that it can be used in high-dimensional situations. We consider two types of moderation: the Moore-Penrose generalized inverse and Stein's shrinkage estimator of $S$. We performed extensive simulations to show the performance of the moderated test and compared the results with the original trace test, calculating the empirical level and power of the test under many scenarios.
Although the focus is on hypothesis testing, we also provide a moderated maximum likelihood estimator for the parameter matrix and assess its performance by investigating the bias and mean squared error of the estimator, comparing the results with those of the ordinary maximum likelihood estimator. Since the parameters are matrices, we use distance measures in both the power and level comparisons as well as when investigating bias and mean squared error. We also illustrate our approach using time course microarray data taken from a study on lung cancer. We were able to filter out 1053 genes as non-noise genes from a pool of 22,277 genes, approximately 5% of the total number of genes. This is consistent with results from most biological experiments, in which around 5% of genes are found to be differentially expressed. / Master of Science (MSc)
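The Moore-Penrose moderation mentioned in the abstract can be illustrated as below. This is a minimal sketch under the assumption that the trace-type statistic has the generic form tr(E S⁻¹), with the ordinary inverse replaced by the pseudoinverse so the statistic remains defined when $S$ is singular (the $n < p$ case); the function name and exact form are hypothetical, not the thesis's actual statistic:

```python
import numpy as np

def moderated_trace_stat(E_hat, S):
    """Generic trace-type statistic tr(E_hat @ S^+), where S^+ is
    the Moore-Penrose pseudoinverse of the (possibly singular)
    residual covariance matrix S.  When S is nonsingular this
    reduces to the usual tr(E_hat @ inv(S))."""
    return float(np.trace(E_hat @ np.linalg.pinv(S)))
```

The appeal of this moderation is that `np.linalg.pinv` is well defined for any matrix, so the same formula covers both the classical low-dimensional setting and the high-dimensional one.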
3

A Study of Unit Root Tests for Panel Data: Construction and Simulation of a Mean Likelihood-Ratio Statistic / Panel Unit Root Test

邱惠玉, Chiu, Huei-Yu Unknown Date (has links)
Since Nelson and Plosser (1982), testing whether economic data contain a unit root has been a prominent and important topic for the past two decades, because the nature of the data (stationary or nonstationary) has far-reaching implications for the specification of empirical econometric models, for statistical inference, and for the development of the underlying theory. Unlike the traditional literature, which examines the unit root of a single time series, this thesis enlarges the cross-sectional dimension and studies the panel unit root problem. Two testing approaches already exist in the literature: the LLC test of Levin, Lin and Chu (1997) and the IPS test of Im, Pesaran and Shin (1995). Our study differs from both: starting from the likelihood-ratio perspective and applying the trace test of Johansen (1988) for cointegration, we construct a new unit root test statistic. We first show that the trace test can be used to detect a unit root in a single time series. We then extend to the cross-sectional dimension and, using the mean group approach, average the trace statistics of the individual time series to form a panel unit root test statistic. By the central limit theorem, the standardized test statistic converges in the limit to the standard normal distribution. In addition, we derive the limiting relationships between our test statistic and the conventional ADF, LLC, and IPS statistics. Finally, we use Monte Carlo simulation to analyze the small-sample Type I error and power of the test. The new panel unit root statistic performs well and is closely approximated by the standard normal distribution, so using it in panel unit root analysis yields more accurate inference.
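The mean group construction described in the abstract — average the per-series trace statistics, then standardize so the central limit theorem gives a standard normal limit — can be sketched as below. Here `mu` and `sigma2` stand for the (assumed known) null mean and variance of a single series' statistic; all names are hypothetical, not the thesis's notation:

```python
import numpy as np

def mean_group_statistic(unit_stats, mu, sigma2):
    """Mean-group panel statistic: average the N per-series trace
    statistics and standardize, sqrt(N) * (mean - mu) / sigma,
    which converges to N(0, 1) as N grows under the null."""
    unit_stats = np.asarray(unit_stats, dtype=float)
    n_units = unit_stats.size
    return np.sqrt(n_units) * (unit_stats.mean() - mu) / np.sqrt(sigma2)
```

The design mirrors the IPS idea: independence across cross-sectional units is what lets the ordinary central limit theorem deliver the standard normal limit for the averaged statistic.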
