71. Regularization for High-dimensional Time Series Models
Sun, Yan, 20 September 2011 (has links)
No description available.
72. Topics on Sufficient Dimension Reduction
Nguyen, Son, 19 July 2016 (has links)
No description available.
73. Dimension Reduced Modeling of Spatio-Temporal Processes with Applications to Statistical Downscaling
Brynjarsdóttir, Jenný, 26 September 2011 (has links)
No description available.
74. Analysis of Sparse Sufficient Dimension Reduction Models
Withanage, Yeshan, 16 September 2022 (has links)
No description available.
75. Variable Selection and Supervised Dimension Reduction for Large-Scale Genomic Data with Censored Survival Outcomes
Spirko, Lauren Nicole, January 2017 (has links)
One of the major goals in large-scale genomic studies is to identify genes with a prognostic impact on time-to-event outcomes, providing insight into the disease's process. With the rapid developments in high-throughput genomic technologies in the past two decades, the scientific community is able to monitor the expression levels of thousands of genes and proteins resulting in enormous data sets where the number of genomic variables (covariates) is far greater than the number of subjects. It is also typical for such data sets to have a high proportion of censored observations. Methods based on univariate Cox regression are often used to select genes related to survival outcome. However, the Cox model assumes proportional hazards (PH), which is unlikely to hold for each gene. When applied to genes exhibiting some form of non-proportional hazards (NPH), these methods could lead to an under- or over-estimation of the effects. In this thesis, we develop methods that will directly address t / Statistics
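To make the screening practice described above concrete, here is a minimal sketch of univariate Cox regression applied gene by gene, using the lifelines package. The simulated data, the column names, and the 0.05 cutoff are illustrative assumptions, not part of the thesis, and the sketch ignores the proportional hazards diagnostics the thesis is concerned with.

# Minimal sketch of univariate Cox screening for gene expression data.
# Column names, simulated data, and the p-value cutoff are illustrative
# assumptions; each gene is fit in its own Cox proportional hazards model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n, p = 200, 50                                    # subjects, candidate genes
expr = pd.DataFrame(rng.normal(size=(n, p)),
                    columns=[f"gene_{j}" for j in range(p)])
time = rng.exponential(scale=np.exp(-0.5 * expr["gene_0"].values))  # gene_0 is prognostic
event = (rng.uniform(size=n) < 0.7).astype(int)   # roughly 30% censoring

selected = []
for gene in expr.columns:
    df = pd.DataFrame({"T": time, "E": event, gene: expr[gene]})
    cph = CoxPHFitter()
    cph.fit(df, duration_col="T", event_col="E")
    if cph.summary.loc[gene, "p"] < 0.05:         # naive per-gene screening
        selected.append(gene)

print(selected)  # genes flagged by univariate Cox regression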
76. Bayesian Model Averaging Sufficient Dimension Reduction
Power, Michael Declan, January 2020 (has links)
In sufficient dimension reduction (Li, 1991; Cook, 1998b), original predictors are replaced by their low-dimensional linear combinations while preserving all of the conditional information of the response given the predictors. Sliced inverse regression [SIR; Li, 1991] and principal Hessian directions [PHD; Li, 1992] are two popular sufficient dimension reduction methods, and both SIR and PHD estimators involve all of the original predictor variables. To deal with the cases when the linear combinations involve only a subset of the original predictors, we propose a Bayesian model averaging (Raftery et al., 1997) approach to achieve sparse sufficient dimension reduction. We extend both SIR and PHD under the Bayesian framework. The superior performance of the proposed methods is demonstrated through extensive numerical studies as well as a real data analysis. / Statistics
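For readers unfamiliar with the base estimator, the sketch below implements plain sliced inverse regression, not the Bayesian model averaging extension proposed in the thesis. The slice count, the single-direction model, and the simulated data are illustrative assumptions.

# Minimal sketch of sliced inverse regression (SIR): whiten the predictors,
# average them within slices of the response, and take leading eigenvectors
# of the between-slice covariance as the dimension-reduction directions.
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    whiten = evecs @ np.diag(evals ** -0.5) @ evecs.T   # inverse square root of cov
    Z = Xc @ whiten
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)            # weighted slice-mean covariance
    _, v = np.linalg.eigh(M)
    dirs = whiten @ v[:, ::-1][:, :n_dirs]              # map back to the original scale
    return dirs / np.linalg.norm(dirs, axis=0)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 2 * X[:, 1]) ** 3 + 0.1 * rng.normal(size=500)  # true direction (1, 2, 0, ...)
print(sir_directions(X, y).ravel())   # approximately proportional to (1, 2, 0, 0, 0, 0)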
77. Sufficient Dimension Reduction with Missing Data
Xia, Qi, January 2017 (has links)
Existing sufficient dimension reduction (SDR) methods typically assume that no data are missing. This dissertation proposes methods that extend SDR to settings where the response may be missing. The first part focuses on the seminal sliced inverse regression (SIR) approach proposed by Li (1991). We show that missing responses generally invalidate the inverse regressions under a missing-at-random mechanism, and we propose a simple and effective inverse probability weighting adjustment that restores the validity of SIR. A marginal coordinate test is also introduced for the adjusted estimator. The proposed method shares the simplicity of SIR and requires the linear conditional mean assumption. The second part proposes two new estimating equation procedures: a complete-case estimating equation approach and an inverse probability weighted estimating equation approach. Both are applied to a family of dimension reduction methods that includes ordinary least squares, principal Hessian directions, and SIR. By solving the estimating equations, the two approaches avoid the assumptions common in the SDR literature: the linear conditional mean assumption and the constant conditional variance assumption. For all of these methods, asymptotic properties are established, and strong finite-sample performance is demonstrated through extensive numerical studies and a real data analysis. In addition, existing estimators of the central mean space perform unevenly across different types of link functions. To address this limitation, a new hybrid SDR estimator is proposed that recovers the central mean space for a wide range of link functions. Based on the hybrid estimator, we further study order determination and the marginal coordinate test, and simulation studies demonstrate its superior performance over existing methods. The proposed procedures for responses missing at random can also be adapted directly to the hybrid method. / Statistics
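A simplified sketch of the inverse probability weighting idea follows: the missingness probability is estimated from the covariates and complete cases are reweighted by its inverse before slicing. The logistic missingness model, the simulated data, and the specific weighting scheme are assumptions made for illustration; this is not the dissertation's exact estimator.

# Simplified sketch of inverse-probability-weighted SIR for responses missing
# at random. The logistic model for P(observed | X) and all parameters are
# illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_sir(X, y, observed, n_slices=5, n_dirs=1):
    n, p = X.shape
    prob = LogisticRegression().fit(X, observed).predict_proba(X)[:, 1]
    w = observed / np.clip(prob, 1e-3, None)        # zero weight for missing cases
    Xc = X - np.average(X, axis=0, weights=w)
    cov = (Xc * w[:, None]).T @ Xc / w.sum()
    evals, evecs = np.linalg.eigh(cov)
    whiten = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ whiten
    obs = np.flatnonzero(observed)                  # slice only observed responses
    slices = np.array_split(obs[np.argsort(y[obs])], n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        wk = w[idx]
        m = (Z[idx] * wk[:, None]).sum(axis=0) / wk.sum()   # weighted slice mean
        M += (wk.sum() / w.sum()) * np.outer(m, m)
    _, v = np.linalg.eigh(M)
    beta = whiten @ v[:, ::-1][:, :n_dirs]
    return beta / np.linalg.norm(beta, axis=0)

rng = np.random.default_rng(2)
X = rng.normal(size=(800, 5))
y = np.exp(X[:, 0] - X[:, 1]) + 0.1 * rng.normal(size=800)
observed = (rng.uniform(size=800) < 1 / (1 + np.exp(-1 - X[:, 2]))).astype(int)  # MAR: depends on X only
print(ipw_sir(X, y, observed).ravel())  # roughly proportional to (1, -1, 0, 0, 0), up to sign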
78. Behaviour recognition and monitoring of the elderly using wearable wireless sensors: dynamic behaviour modelling and nonlinear classification methods and implementation
Winkley, Jonathan James, January 2013 (has links)
In partnership with iMonSys, an emerging company in the passive care field, a new system, 'Verity', is being developed to serve as a passive behaviour monitoring and alert detection device, providing an unobtrusive level of care and assessing an individual's changing behaviour and health status whilst still allowing its elderly user to remain independent. In this research, a Hidden Markov Model incorporating Fuzzy Logic-based sensor fusion is created for behaviour detection within Verity, with a method of Fuzzy-Rule induction designed to adapt the system to a user during operation. A dimension reduction and classification scheme utilising Curvilinear Distance Analysis is further developed to deal with the recognition task presented by increasingly nonlinear and high-dimensional sensor readings, and anomaly detection methods situated within the Hidden Markov Model offer possible solutions to the identification of health concerns arising from independent living. Real-time implementation is proposed through an Instance Based Learning approach combined with a Bloom Filter, speeding up the classification operation and reducing the storage requirements for the considerable amount of observation data obtained during operation. Finally, all algorithms are evaluated using a simulation of the Verity system with which the behaviour monitoring task is to be achieved.
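The storage-saving role of the Bloom filter mentioned above can be illustrated with a minimal sketch: previously seen, quantized sensor observations are recorded in a fixed-size bit array rather than stored verbatim. The sizes, hash scheme, and example observation strings are illustrative assumptions, not the Verity implementation.

# Minimal Bloom filter: set k hashed bit positions on add, test them all on
# lookup. False positives are possible; false negatives are not.
import hashlib

class BloomFilter:
    def __init__(self, n_bits=8192, n_hashes=4):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item):
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# Quantized observations are added as they arrive; repeats are recognized
# without keeping the raw instances around.
bf = BloomFilter()
bf.add("accel=low,hr=62,room=kitchen")
print("accel=low,hr=62,room=kitchen" in bf)   # True
print("accel=high,hr=95,room=hall" in bf)     # False with high probability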
79. Entropy-based Feature Selection
許立農, Unknown Date (has links)
Feature selection is a common preprocessing step in machine learning. Although a large pool of feature selection techniques exists, no single method dominates across all datasets. Given the variety of data encountered in practice, developing new methods that bring additional insight into the data, and choosing a selection algorithm according to the characteristics of each dataset, is the better strategy.
In this study, we use the concept of entropy from information theory to build a similarity matrix between features, defining the correlation between variables from the clustering results of a data cloud geometry tree (DCG-tree) constructed over them, and select features accordingly. Each core cluster consists of rather uniform variables that share similar covariate information, so keeping representatives of the core clusters reduces the dimension of a high-dimensional dataset. We compare the approach with FCBF (which also builds on entropy), Lasso, F-score, random forests, and a genetic algorithm. Predictive performance is assessed on real-world datasets using hierarchical clustering with a majority-voting classifier. The results show that the proposed entropy method yields more stable improvements in prediction accuracy across the different datasets, while the number of retained dimensions also remains relatively stable.
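A sketch of the entropy-based similarity matrix is given below: each feature is discretized and pairwise normalized mutual information is computed. The DCG-tree construction used in the study is not reproduced here; an ordinary hierarchical clustering stands in for it, and all thresholds and data are illustrative assumptions.

# Entropy-based feature similarity via normalized mutual information between
# discretized features, followed by clustering and one representative per cluster.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.metrics import normalized_mutual_info_score

def feature_similarity(X, n_bins=10):
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)   # equal-frequency binning
    binned = (ranks * n_bins) // X.shape[0]
    p = X.shape[1]
    S = np.eye(p)
    for i in range(p):
        for j in range(i + 1, p):
            S[i, j] = S[j, i] = normalized_mutual_info_score(binned[:, i], binned[:, j])
    return S

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 6))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=300)          # feature 1 duplicates feature 0
S = feature_similarity(X)
# Cluster features on the dissimilarity 1 - S; keep one representative per cluster.
Z = linkage(1 - S[np.triu_indices(S.shape[0], k=1)], method="average")
labels = fcluster(Z, t=0.7, criterion="distance")
selected = [int(np.flatnonzero(labels == c)[0]) for c in np.unique(labels)]
print(labels, selected)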
80. Proposition of a new spectral method combining LDA and LLE for non-linear dimension reduction: application to color image segmentation
Hijazi, Hala, 19 December 2013 (has links)
Data analysis and learning methods have developed considerably in recent years. After neural networks and then kernel methods in the 1990s, the 2000s saw the emergence of spectral methods, which provide a unified mathematical framework for developing original classification methods. Among these, LLE performs non-linear dimension reduction and LDA performs class discrimination. This thesis proposes a new classification method obtained by combining the LLE and LDA methods. The method gave promising results on synthetic datasets, performing an efficient non-linear dimension reduction followed by effective discrimination. We then show that the method can be extended to semi-supervised learning. Its dimension reduction and discrimination properties, together with the sparsity inherent in LLE, allowed us to apply it successfully to color image segmentation, and the semi-supervised version segments noisy color images with good performance. These results remain to be consolidated and compared with other state-of-the-art methods, but promising directions for future work are already apparent and are outlined in the conclusion.
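The two ingredients the abstract combines can be illustrated with a simple sequential pipeline: LLE for non-linear dimension reduction followed by LDA for discrimination. This is not the combined spectral method proposed in the thesis, and the dataset and parameters are illustrative assumptions.

# LLE unrolls a non-linear manifold; LDA then discriminates linearly in the
# embedded space. Swiss-roll data and all settings are illustrative.
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.model_selection import train_test_split

X, t = make_swiss_roll(n_samples=1500, noise=0.2, random_state=0)
y = (t > np.median(t)).astype(int)          # two classes along the roll

Z = LocallyLinearEmbedding(n_neighbors=12, n_components=2,
                           random_state=0).fit_transform(X)
Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, test_size=0.3, random_state=0)
clf = LinearDiscriminantAnalysis().fit(Z_tr, y_tr)
print("accuracy:", clf.score(Z_te, y_te))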