  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Topics on Sufficient Dimension Reduction

Nguyen, Son 19 July 2016
No description available.
72

Dimension Reduced Modeling of Spatio-Temporal Processes with Applications to Statistical Downscaling

Brynjarsdóttir, Jenný 26 September 2011
No description available.
73

Analysis of Sparse Sufficient Dimension Reduction Models

Withanage, Yeshan 16 September 2022
No description available.
74

Variable Selection and Supervised Dimension Reduction for Large-Scale Genomic Data with Censored Survival Outcomes

Spirko, Lauren Nicole January 2017
One of the major goals in large-scale genomic studies is to identify genes with a prognostic impact on time-to-event outcomes, providing insight into the disease process. With the rapid development of high-throughput genomic technologies over the past two decades, the scientific community is able to monitor the expression levels of thousands of genes and proteins, resulting in enormous data sets where the number of genomic variables (covariates) far exceeds the number of subjects. It is also typical for such data sets to have a high proportion of censored observations. Methods based on univariate Cox regression are often used to select genes related to survival outcomes. However, the Cox model assumes proportional hazards (PH), which is unlikely to hold for every gene. When applied to genes exhibiting some form of non-proportional hazards (NPH), these methods can lead to under- or over-estimation of the effects. In this thesis, we develop methods that will directly address t / Statistics
75

Bayesian Model Averaging Sufficient Dimension Reduction

Power, Michael Declan January 2020
In sufficient dimension reduction (Li, 1991; Cook, 1998b), the original predictors are replaced by their low-dimensional linear combinations while preserving all of the conditional information of the response given the predictors. Sliced inverse regression (SIR; Li, 1991) and principal Hessian directions (PHD; Li, 1992) are two popular sufficient dimension reduction methods, and both SIR and PHD estimators involve all of the original predictor variables. To deal with cases where the linear combinations involve only a subset of the original predictors, we propose a Bayesian model averaging (Raftery et al., 1997) approach to achieve sparse sufficient dimension reduction. We extend both SIR and PHD under the Bayesian framework. The superior performance of the proposed methods is demonstrated through extensive numerical studies as well as a real data analysis. / Statistics
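The SIR procedure named in this abstract (Li, 1991) can be sketched in a few lines: standardize the predictors, slice the response into quantile bins, average the standardized predictors within each slice, and take the leading eigenvectors of the between-slice covariance. A toy illustration with our own variable names, not the thesis's code:

```python
import numpy as np

def sir_directions(X, y, n_slices=5, n_dirs=1):
    """Sliced inverse regression: estimate central-subspace directions."""
    n, p = X.shape
    # Standardize predictors: Z = (X - mean) @ Sigma^{-1/2}
    Xc = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ inv_sqrt
    # Slice y into quantile bins and average Z within each slice
    slices = np.array_split(np.argsort(y), n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)  # weighted between-slice covariance
    # Leading eigenvectors of M, mapped back to the original X scale
    _, v = np.linalg.eigh(M)
    dirs = inv_sqrt @ v[:, ::-1][:, :n_dirs]
    return dirs / np.linalg.norm(dirs, axis=0)

# Toy model: y depends on X only through the direction (1, 1, 0, 0, 0)
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + X[:, 1]) ** 3 + 0.1 * rng.normal(size=2000)
b = sir_directions(X, y)[:, 0]
print(np.round(b, 2))
```

With a monotone link like the cube above, the estimated direction should align closely with (1, 1, 0, 0, 0)/√2.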
76

Sufficient Dimension Reduction with Missing Data

XIA, QI January 2017
Existing sufficient dimension reduction (SDR) methods typically consider cases with no missing data. This dissertation proposes methods that extend SDR to settings where the response can be missing. The first part focuses on the seminal sliced inverse regression (SIR) approach proposed by Li (1991). We show that missing responses generally affect the validity of the inverse regressions under the missing-at-random mechanism. We then propose a simple and effective adjustment with inverse probability weighting that guarantees the validity of SIR. Furthermore, a marginal coordinate test is introduced for this adjusted estimator. The proposed method shares the simplicity of SIR and requires the linear conditional mean assumption. The second part proposes two new estimating equation procedures: the complete-case estimating equation approach and the inverse-probability-weighted estimating equation approach. The two approaches are applied to a family of dimension reduction methods, including ordinary least squares, principal Hessian directions, and SIR. By solving the estimating equations, the two approaches avoid two assumptions common in the SDR literature: the linear conditional mean assumption and the constant conditional variance assumption. For all the aforementioned methods, the asymptotic properties are established, and their superb finite-sample performances are demonstrated through extensive numerical studies as well as a real data analysis. In addition, existing estimators of the central mean space have uneven performance across different types of link functions. To address this limitation, a new hybrid SDR estimator is proposed that successfully recovers the central mean space for a wide range of link functions. Based on the new hybrid estimator, we further study the order-determination procedure and the marginal coordinate test. The superior performance of the hybrid estimator over existing methods is demonstrated in simulation studies. Note that the proposed procedures for responses missing at random can be readily adapted to this hybrid method. / Statistics
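The inverse probability weighting idea behind the adjustment described in this abstract can be illustrated on a simpler estimator: reweighting the observed cases by 1/π(x), the probability of observing the response given the covariates, removes the bias a complete-case analysis incurs under missingness at random. A toy sketch of an IPW mean, not the thesis's SIR adjustment itself (all names and the missingness model are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)          # true E[y] = 0

# Response missing at random: observation probability depends on x only
pi = 1 / (1 + np.exp(-(0.5 + 1.5 * x)))   # P(observe y | x)
observed = rng.random(n) < pi

# Complete-case mean is biased: cases with large x are over-represented
cc_mean = y[observed].mean()

# Inverse-probability-weighted (normalized) mean restores consistency
ipw_mean = np.sum(y[observed] / pi[observed]) / np.sum(1 / pi[observed])

print(round(cc_mean, 2), round(ipw_mean, 2))
```

The complete-case mean is pulled well above zero because large-x (hence large-y) cases are observed more often, while the IPW mean stays near the true value of zero.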
77

Behaviour recognition and monitoring of the elderly using wearable wireless sensors: dynamic behaviour modelling and nonlinear classification methods and implementation

Winkley, Jonathan James January 2013
In partnership with iMonSys, an emerging company in the passive care field, a new system, 'Verity', is being developed to fulfil the role of a passive behaviour monitoring and alert detection device, providing an unobtrusive level of care and assessing an individual's changing behaviour and health status while still allowing the elderly user's independence. In this research, a Hidden Markov Model incorporating Fuzzy Logic-based sensor fusion is created for behaviour detection within Verity, with a method of Fuzzy-Rule induction designed to adapt the system to a user during operation. A dimension reduction and classification scheme utilising Curvilinear Distance Analysis is further developed to deal with the recognition task presented by increasingly nonlinear and high-dimensional sensor readings. Anomaly detection methods situated within the Hidden Markov Model offer possible solutions to the identification of health concerns arising from independent living. Real-time implementation is proposed through the development of an Instance-Based Learning approach in combination with a Bloom filter, speeding up the classification operation and reducing the storage requirements for the considerable amount of observation data obtained during operation. Finally, all algorithms are evaluated using a simulation of the Verity system with which the behaviour monitoring task is to be achieved.
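The Bloom filter mentioned in this abstract trades a small false-positive probability for a large reduction in storage: k hash functions set k bits per inserted item, and a query reports "present" only if all k bits are set, so false negatives are impossible. A minimal sketch (class name, parameters, and the observation strings are illustrative, not from the thesis):

```python
import hashlib

class BloomFilter:
    def __init__(self, n_bits=1024, n_hashes=4):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits)     # one byte per bit, for simplicity

    def _positions(self, item):
        # Derive n_hashes independent positions by salting one hash function
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        # All k bits set -> "probably present"; any bit clear -> definitely absent
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
for obs in ["walking", "sitting", "sleeping"]:
    bf.add(obs)
print("walking" in bf, "falling" in bf)
```

With 1024 bits, 4 hashes, and only 3 items, the false-positive probability is negligible, which is why such a filter can stand in for storing the observation instances themselves.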
78

Entropy-based feature selection

許立農 Unknown Date
Feature selection is a common preprocessing technique in machine learning. Although a large pool of feature selection techniques exists, no single method dominates across all datasets. Given the variety of modern data, developing new methods brings more insight into the data, and applying techniques suited to a dataset's characteristics is the better practice. In this study, we used the concept of entropy from information theory to define correlations between features according to the clustering results of a DCG-tree, and selected features accordingly. We constructed a DCG-tree to separate variables into clusters; each core cluster consists of rather uniform variables that share similar covariate information, and with the core clusters we reduced the dimension of high-dimensional datasets. We assessed our method by comparing it with FCBF (which likewise uses entropy), Lasso, F-score, random forest, and a genetic algorithm. Prediction performance was evaluated on real-world datasets using hierarchical clustering with a voting algorithm as the classifier. The results show that our entropy method yields more stable improvements in prediction accuracy across datasets, while the reduced dimension remains comparably stable.
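Symmetric uncertainty, the entropy-based score used by the FCBF baseline mentioned above, illustrates how entropy can quantify feature-class association. A toy sketch on discrete features (our own illustration, not the thesis's DCG-tree method):

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy H(X) in bits of a sequence of discrete labels."""
    n = len(labels)
    return -sum((c / n) * np.log2(c / n) for c in Counter(labels).values())

def symmetric_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X;Y) / (H(X) + H(Y)), normalized to [0, 1]."""
    h_x, h_y = entropy(x), entropy(y)
    h_xy = entropy(list(zip(x, y)))
    mi = h_x + h_y - h_xy                 # I(X;Y) = H(X) + H(Y) - H(X,Y)
    return 2 * mi / (h_x + h_y) if h_x + h_y > 0 else 0.0

# Toy data: f1 nearly determines the class, f2 is pure noise
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)
f1 = y ^ (rng.random(1000) < 0.05)        # class label with 5% flip noise
f2 = rng.integers(0, 2, size=1000)        # independent of y
su1 = symmetric_uncertainty(list(f1), list(y))
su2 = symmetric_uncertainty(list(f2), list(y))
print(round(su1, 3), round(su2, 3))
```

A relevance filter would keep f1 (high SU) and discard f2 (SU near zero); FCBF additionally removes features that are redundant with already-selected ones.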
79

Proposition of a new spectral method combining LDA and LLE for non-linear dimension reduction: Application to color image segmentation

Hijazi, Hala 19 December 2013
Data analysis and learning methods have developed considerably in recent years. After neural networks and kernel methods in the 1990s, the 2000s saw the emergence of spectral methods, which provide a unified mathematical framework for developing original classification methods. Among these, two can be highlighted: LLE for non-linear dimension reduction and LDA for class discrimination. This thesis proposes a new classification method obtained by combining LLE and LDA. The method gave interesting results on synthetic data sets: it provides efficient non-linear dimension reduction followed by effective discrimination. We then show that the method extends to semi-supervised learning. Its dimension-reduction and discrimination properties, together with the sparsity inherent in LLE, allowed us to apply it successfully to color image segmentation, and the semi-supervised version further enabled segmentation of noisy images with good performance. These results need to be consolidated and compared with other state-of-the-art methods, but interesting perspectives for future work are already apparent.
80

Geometric algorithms for component analysis with a view to gene expression data analysis

Journée, Michel 04 June 2009
The research reported in this thesis addresses the problem of component analysis, which aims at reducing large data to lower dimensions to reveal the essential structure of the data. This problem is encountered in almost all areas of science, from physics and biology to finance, economics and psychometrics, where large data sets need to be analyzed. Several paradigms for component analysis are considered, e.g., principal component analysis, independent component analysis and sparse principal component analysis, which are naturally formulated as optimization problems subject to constraints that endow the problem with a well-characterized matrix manifold structure. Component analysis is thus cast in the realm of optimization on matrix manifolds. Algorithms for component analysis are subsequently derived that take advantage of the geometrical structure of the problem. When formalizing component analysis in an optimization framework, three main classes of problems are encountered, for which methods are proposed. We first consider the problem of optimizing a smooth function on the set of n-by-p real matrices with orthonormal columns. Then, a method is proposed to maximize a convex function on a compact manifold, which generalizes to this context the well-known power method for computing the dominant eigenvector of a matrix. Finally, we address the issue of solving problems defined in terms of large positive semidefinite matrices in a numerically efficient manner by using low-rank approximations of such matrices. The efficiency of the proposed algorithms for component analysis is evaluated on the analysis of gene expression data related to breast cancer, which encode the expression levels of thousands of genes measured in experiments on hundreds of cancerous cells. Such data provide a snapshot of the biological processes that occur in tumor cells and offer huge opportunities for an improved understanding of cancer. 
Thanks to an original framework for evaluating the biological significance of a set of components, well-known but also novel knowledge is inferred about the biological processes that underlie breast cancer. Hence, to summarize the thesis in one sentence: we adopt a geometric point of view to propose optimization algorithms performing component analysis which, applied to large gene expression data, enable novel biological knowledge to be revealed.
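The classical power method that this thesis generalizes to compact manifolds alternates a gradient-like step x ← Ax with a retraction onto the unit sphere, thereby increasing the convex function x ↦ xᵀAx. A minimal sketch for a symmetric positive semidefinite matrix (names and parameters are ours, not the thesis's):

```python
import numpy as np

def power_method(A, n_iter=500, seed=0):
    """Dominant eigenvector of a symmetric PSD matrix by power iteration."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        x = A @ x                 # ascent step for f(x) = x' A x ...
        x /= np.linalg.norm(x)    # ... followed by retraction onto the sphere
    return x, x @ A @ x           # eigenvector and its Rayleigh quotient

# Check against numpy's eigensolver on a random PSD matrix
rng = np.random.default_rng(1)
B = rng.normal(size=(6, 6))
A = B @ B.T                       # symmetric positive semidefinite
v, lam = power_method(A)
w, V = np.linalg.eigh(A)          # eigenvalues in ascending order
print(round(lam, 4), round(w[-1], 4))
```

The iterate converges to the top eigenvector at a rate governed by the gap between the two largest eigenvalues; the manifold view in the thesis replaces A @ x with a general gradient and the normalization with a retraction.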
