1

兩種正則化方法用於假設檢定與判別分析時之比較 / A comparison between two regularization methods for discriminant analysis and hypothesis testing

李登曜, Li, Deng-Yao
High dimensionality causes many problems in statistical analysis. Consider, for instance, testing hypotheses about a multivariate regression model. When the dimension of the multivariate response exceeds the number of observations, the sample covariance matrix is not invertible. Since the inverse of the sample covariance matrix is needed to compute the usual likelihood ratio test statistic (under normality), this singularity makes the test difficult to implement. The singularity of the sample covariance matrix is also a problem in classification when linear discriminant analysis (LDA) or quadratic discriminant analysis (QDA) is used. Different regularization methods have been proposed to deal with the singularity for different purposes: Warton (2008) proposed a regularization procedure for testing, and Friedman (1989) proposed one for classification. Is it true that Warton's regularization works better for testing and Friedman's works better for classification? To answer this question, simulation studies are conducted and the results are presented in this thesis. Neither regularization method is found to be uniformly superior to the other; which performs better depends on the assumptions made about the populations.
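The singularity problem and its ridge-type fix can be sketched in a few lines of numpy. This is an illustrative toy, not the thesis's code: the shrinkage target (the diagonal of the sample covariance) and the fixed intensity `gamma` are assumptions for the demo, whereas Warton (2008) and Friedman (1989) choose their regularization parameters data-adaptively.

```python
import numpy as np

def shrunk_covariance(X, gamma=0.5):
    """Ridge-type shrinkage of the sample covariance toward a diagonal
    target: S(gamma) = (1 - gamma) * S + gamma * diag(S).
    For 0 < gamma <= 1 this is positive definite even when p > n,
    so its inverse exists and test/discriminant statistics can be formed."""
    S = np.cov(X, rowvar=False)           # p x p sample covariance
    target = np.diag(np.diag(S))          # diagonal shrinkage target
    return (1.0 - gamma) * S + gamma * target

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 30))             # n = 10 observations, p = 30 > n
S = np.cov(X, rowvar=False)
print(np.linalg.matrix_rank(S))           # rank at most n - 1 = 9: singular
S_reg = shrunk_covariance(X, gamma=0.5)
print(np.linalg.matrix_rank(S_reg))       # full rank 30: invertible
```

Any positive-definite target (e.g. a scaled identity) restores invertibility the same way; the thesis's question is which choice of regularization then behaves best for testing versus classification.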
2

複迴歸係數排列檢定方法探討 / Methods for testing significance of partial regression coefficients in regression model

闕靖元, Chueh, Ching Yuan
In traditional linear models, the error terms are usually assumed to be independent and identically normally distributed with mean zero and constant variance. When these assumptions cannot be met, permutation tests offer a nonparametric alternative. Several permutation tests have been proposed for testing the significance of a partial regression coefficient in a multiple regression model; the asymptotically pivotal quantity t = b/se(b) is usually preferred and suggested as the test statistic. In this study, we take not only the t statistic but also the non-pivotal coefficient estimate b as test statistics, and compare their performance in terms of type I error probability and power through Monte Carlo simulation. The simulations show that when the explanatory variables are uncorrelated and the error distribution is not markedly skewed, the permutation schemes of Freedman and Lane (1983), Levin and Robbins (1983), and Kennedy (1995) can use the b² statistic in large samples, with power slightly higher than that of the t² statistic. When the explanatory variables are highly correlated, then regardless of the skewness of the errors, the schemes of Freedman and Lane (1983) and Kennedy (1995) can use the b² statistic in large samples, and its power advantage over the t² statistic becomes more pronounced. Overall, the t² statistic is applicable in a wider range of situations, whereas the suitability of b² depends on the sample size and the correlation among the explanatory variables.
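One of the permutation schemes compared above, Freedman and Lane (1983), can be sketched as follows: fit the reduced model without the tested variable, permute its residuals, rebuild pseudo-responses, and recompute the statistic on the full model. This is a minimal illustrative implementation, not the thesis's code; the two-predictor design and the data below are invented for the demo.

```python
import numpy as np

def t_stat(y, X, j):
    """OLS t statistic for coefficient j (X includes an intercept column)."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[j] / np.sqrt(cov[j, j])

def freedman_lane(y, x1, x2, n_perm=999, seed=0):
    """Permutation p-value for H0: coefficient of x2 is zero, obtained by
    permuting the residuals of the reduced model y ~ 1 + x1."""
    rng = np.random.default_rng(seed)
    n = len(y)
    Xr = np.column_stack([np.ones(n), x1])        # reduced design
    Xf = np.column_stack([np.ones(n), x1, x2])    # full design
    br, *_ = np.linalg.lstsq(Xr, y, rcond=None)
    fitted, resid = Xr @ br, y - Xr @ br
    t_obs = t_stat(y, Xf, 2)
    hits = 0
    for _ in range(n_perm):
        y_star = fitted + rng.permutation(resid)  # permute reduced-model residuals
        hits += abs(t_stat(y_star, Xf, 2)) >= abs(t_obs)
    return (hits + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
x1 = rng.normal(size=50)
x2 = rng.normal(size=50)
y = 1.0 + 0.5 * x1 + 2.0 * x2 + rng.normal(size=50)   # strong x2 effect
p = freedman_lane(y, x1, x2)
print(p)   # small p-value expected for this strong effect
```

Using b (or b²) instead of t as the statistic amounts to replacing `t_stat` with the raw coefficient estimate inside the loop; the thesis compares exactly that choice across permutation schemes.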
3

排列檢定法應用於空間資料之比較 / Permutation test on spatial comparison

王信忠, Wang, Hsin-Chung
This thesis investigates whether two population distributions on a two-dimensional lattice are identical, and proposes the relabel (Fisher's) permutation test, inspired by Fisher's exact test, to compare two (fishery) data sets. Using exchangeability, we show that the permutation test suggested by Syrjala (1996) is not exact, whereas our relabel permutation test is exact and, moreover, more powerful. The thesis also studies two spatial models, the spatial multinomial-relative-log-normal model and the spatial Poisson-relative-log-normal model, to fit the long-right-tailed frequency distributions with a high proportion of zero catches that are common in fishery data. Because species often aggregate, whether by nature or in response to environmental factors such as food and temperature, both models can also describe various types of aggregative behavior in spatial data and support the corresponding inference.
4

再發事件資料之無母數分析 / Nonparametric analysis of recurrent event data

黃惠芬
Recurrent event data arise in many fields, such as medicine, industry, economics, and the social sciences. When studying recurrent event data, we usually do not know the exact joint or marginal distributions of the occurrence times or of the number of events over time. This thesis therefore discusses nonparametric methods, including the mean cumulative function (MCF) estimator discussed by Nelson and the kernel estimator of the occurrence rate function introduced by Wang, Chiang, and Huang. For the MCF estimator, confidence intervals can be computed from either Nelson's variance or the naive variance. We use the bootstrap to evaluate the variance of the estimated MCF at different time points and compare it with both; the results show that Nelson's variance is closer to the bootstrap variance, so approximate confidence limits for the MCF should be constructed from Nelson's variance except when only grouped data are available. We also introduce methods for comparing the MCFs of two or more populations, both at fixed time points and over the entire curve. Pointwise comparisons include approximate and bootstrap confidence limits for the difference between two MCFs, an analysis-of-variance comparison, and a permutation test; whole-curve comparisons include a Hotelling's T²-type statistic and the Lawless-Nadeau test. Finally, all approaches are applied to a real data set, and their conclusions agree with one another.
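In the simplest setting, with every unit observed over the whole window (no censoring), Nelson's MCF estimate at each event time is just the running pooled event count divided by the number of units. A minimal sketch under that assumption (the machine-repair data below are hypothetical; the full estimator adjusts the denominator as units leave the risk set):

```python
import numpy as np

def mcf(event_times, n_units):
    """Nonparametric mean cumulative function estimate for recurrent
    events in the uncensored case: every unit is observed over the whole
    window, so the MCF at each event time is the running pooled event
    count divided by the number of units.
    event_times: flat list of event times pooled over all units."""
    t = np.sort(np.asarray(event_times, float))
    return t, np.arange(1, t.size + 1) / n_units

# Hypothetical repair times (months) pooled over 3 machines, one year each.
times = [2.0, 5.5, 1.0, 7.0, 9.5, 4.0, 8.0]
t, m = mcf(times, n_units=3)
print(t)   # event times in increasing order
print(m)   # estimated mean number of recurrences per unit by each time
```

The step function `m` plotted against `t` is the MCF curve; Nelson's variance formula then gives pointwise confidence limits around it, which is where the naive-versus-Nelson comparison in the thesis comes in.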
