1 |
Bayesian Model Averaging Sufficient Dimension Reduction
Power, Michael Declan, January 2020
In sufficient dimension reduction (Li, 1991; Cook, 1998b), the original predictors are replaced by a few of their linear combinations while preserving all of the conditional information of the response given the predictors. Sliced inverse regression (SIR; Li, 1991) and principal Hessian directions (PHD; Li, 1992) are two popular sufficient dimension reduction methods, and both SIR and PHD estimators involve all of the original predictor variables. To deal with cases in which the linear combinations involve only a subset of the original predictors, we propose a Bayesian model averaging (Raftery et al., 1997) approach to achieve sparse sufficient dimension reduction. We extend both SIR and PHD under the Bayesian framework. The superior performance of the proposed methods is demonstrated through extensive numerical studies as well as a real data analysis. / Statistics
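For readers unfamiliar with SIR, the basic estimator this thesis builds on can be sketched in a few lines of NumPy: standardize the predictors, slice the ordered response, and take the leading eigenvectors of the weighted covariance of the slice means. This is a minimal illustration of Li (1991), not the Bayesian model averaging estimator proposed in the thesis; the function name and default arguments are our own.

```python
import numpy as np

def sir_directions(X, y, n_slices=5, n_dirs=1):
    """Minimal sketch of sliced inverse regression (Li, 1991)."""
    n, p = X.shape
    # Standardize the predictors: Z = (X - mean) @ Sigma^{-1/2}
    Xc = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    Sih = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ Sih
    # Partition the observations into slices by the order of y
    order = np.argsort(y)
    # Weighted covariance of the within-slice means of Z
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors, mapped back to the original predictor scale
    vals, vecs = np.linalg.eigh(M)
    B = Sih @ vecs[:, ::-1][:, :n_dirs]
    return B / np.linalg.norm(B, axis=0)
```

On simulated data with a single monotone link, the recovered column should align closely with the true direction; note that, as the abstract says, every coordinate of the estimated direction is generically nonzero, which is what motivates the sparse Bayesian extension.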
|
2 |
Sufficient Dimension Reduction with Missing Data
Xia, Qi, January 2017
Existing sufficient dimension reduction (SDR) methods typically assume fully observed data. This dissertation proposes methods that extend SDR to settings where the response can be missing. The first part focuses on the seminal sliced inverse regression (SIR) approach proposed by Li (1991). We show that missing responses generally invalidate the inverse regressions under the missing-at-random mechanism. We then propose a simple and effective inverse probability weighting adjustment that restores the validity of SIR, and we introduce a marginal coordinate test for the adjusted estimator. The proposed method shares the simplicity of SIR and requires only the linear conditional mean assumption.

The second part proposes two new estimating equation procedures: a complete-case estimating equation approach and an inverse probability weighted estimating equation approach. Both are applied to a family of dimension reduction methods that includes ordinary least squares, principal Hessian directions, and SIR. By solving the estimating equations, the two approaches avoid two assumptions common in the SDR literature: the linear conditional mean assumption and the constant conditional variance assumption. For all of the aforementioned methods, asymptotic properties are established, and strong finite-sample performance is demonstrated through extensive numerical studies as well as a real data analysis.

In addition, existing estimators of the central mean space perform unevenly across different types of link functions. To address this limitation, we propose a new hybrid SDR estimator that recovers the central mean space for a wide range of link functions, and we further study the corresponding order determination procedure and marginal coordinate test. The superior performance of the hybrid estimator over existing methods is demonstrated in simulation studies. Note that the proposed procedures for responses missing at random can be readily adapted to this hybrid method. / Statistics
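The inverse probability weighting idea in the first part can be illustrated with a short sketch: restrict to complete cases and weight each one by the inverse of its estimated response probability, so that the weighted slice means remain consistent under missing at random. This is an illustrative sketch, not the dissertation's estimator; `ipw_sir` and its arguments are our own naming, and `pi_hat` is assumed to be estimated separately (e.g., by a logistic regression of the missingness indicator on X).

```python
import numpy as np

def ipw_sir(X, y, observed, pi_hat, n_slices=5, n_dirs=1):
    """Illustrative inverse-probability-weighted SIR for a response
    missing at random.

    X        : (n, p) fully observed predictors
    y        : (n,) response; entries where observed is False are ignored
    observed : (n,) boolean mask of complete cases
    pi_hat   : (n,) estimated P(y observed | X)
    """
    n, p = X.shape
    # Standardize with the full predictor sample (X is never missing)
    Xc = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    Sih = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ Sih
    # Complete cases, each reweighted by 1 / pi_hat
    Zo, yo, w = Z[observed], y[observed], 1.0 / pi_hat[observed]
    order = np.argsort(yo)
    M = np.zeros((p, p))
    # Equal-count slices of the observed y; weights enter the slice means
    # (a simple approximation to weighted-quantile slicing)
    for idx in np.array_split(order, n_slices):
        wi = w[idx]
        m = (wi[:, None] * Zo[idx]).sum(axis=0) / wi.sum()
        M += (wi.sum() / w.sum()) * np.outer(m, m)
    vals, vecs = np.linalg.eigh(M)
    B = Sih @ vecs[:, ::-1][:, :n_dirs]
    return B / np.linalg.norm(B, axis=0)
```

With missingness driven by the predictors only (the MAR setting the abstract describes), the weighted estimator should still recover the true direction from the complete cases.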
|
3 |
A Comparative Study of Four Dimension Reduction Methods: SIR, SAVE, SIR-II, and pHd
Fang, Wu-Yuan, Unknown Date
The focus of the study is on dimension reduction and an overview of the four methods most frequently discussed in the literature: SIR, SAVE, SIR-II, and pHd. The definitions of dimension reduction proposed by Li (1991) (y = g(x, ε) = g1(βx, ε)) and by Cook (1994) (the conditional density satisfies f(y | x) = f(y | βx)) are briefly reviewed, along with Cook's (1994) discussion of the minimum dimension reduction subspace. In addition, we propose a possible alternative definition that seems more appropriate when pHd is concerned (E(y | x) = E(y | βx), i.e., the conditional expectation of y is unchanged by the reduction), and we find that the subspace induced by this definition is contained in the subspace defined in Cook (1994).
We then take a closer look at the basic ideas behind the four methods, supplementing further explanations and proofs where necessary. Equivalent conditions that each method uses to locate the "right" directions are presented, and two models (y = bx + ε and y = |z| + ε) are used to test whether each method can recover the correct directions. To further understand the relationships among the four methods, comparisons based on these equivalent conditions are made. We find that when x is multivariate normal, none of the four methods will retain directions that are redundant, but directions that should be retained may nevertheless be missed. Overall, SAVE retains at least as many of the "right" directions as any of the other three methods used alone, and applying SIR together with SIR-II is exactly equivalent to using SAVE. We also find that the prerequisite that E(y | x) be twice differentiable does not seem to be necessary when pHd is applied.
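The advantage of SAVE over SIR for a symmetric link such as y = |z| + ε can be seen in a small sketch: SAVE compares the within-slice covariances of the standardized predictors to the identity, so it detects directions along which the conditional variance of the predictors changes even when every slice mean vanishes and SIR is blind. This is an illustration under our own naming and defaults, not the thesis's implementation.

```python
import numpy as np

def save_directions(X, y, n_slices=5, n_dirs=1):
    """Minimal sketch of sliced average variance estimation (SAVE)."""
    n, p = X.shape
    # Standardize the predictors, as in SIR
    Xc = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    Sih = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ Sih
    order = np.argsort(y)
    # SAVE kernel: weighted average of (I - Cov(Z | slice))^2,
    # which is nonzero wherever a slice's covariance deviates from I
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        V = np.eye(p) - np.cov(Z[idx], rowvar=False)
        M += (len(idx) / n) * (V @ V)
    vals, vecs = np.linalg.eigh(M)
    B = Sih @ vecs[:, ::-1][:, :n_dirs]
    return B / np.linalg.norm(B, axis=0)
```

Running this on data generated from y = |x1| + ε recovers the first coordinate direction, the case discussed above where SAVE succeeds and SIR alone fails because all slice means of z1 are approximately zero.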
|