
Similarity-principle-based machine learning method for clinical trials and beyond

Hwang, Susan, 01 February 2021
The control of type-I error is a focal point for clinical trials. On the other hand, it is also critical to be able to detect a truly efficacious treatment in a clinical trial. With recent successes in supervised learning (classification and regression problems), artificial intelligence (AI) and machine learning (ML) can play a vital role in identifying efficacious new treatments. However, the high performance of AI methods, particularly deep learning neural networks, requires much larger datasets than those commonly seen in clinical trials. It is therefore desirable to develop a new ML method that performs well with small sample sizes (ranging from 20 to 200) and has advantages over classic statistical models and the most relevant ML methods.

In this dissertation, we propose a Similarity-Principle-Based Machine Learning (SBML) method built on the similarity principle, which assumes that identical or similar subjects should behave in a similar manner. SBML introduces attribute-scaling factors at the training stage so that the relative importance of different attributes can be objectively determined in the similarity measures. A gradient method is used during training to update the attribute-scaling factors. To the best of our knowledge, the method is novel.

We first evaluate SBML for continuous outcomes, especially when the sample size is small, and investigate the effects of various tuning parameters on its performance. Simulations show that SBML achieves better predictions, in terms of mean squared error or misclassification error rate, than conventional statistical methods such as full linear models, optimal or ridge regressions, and mixed-effect models, as well as ML methods including kernel and decision-tree methods, across the various situations under consideration. We also extend SBML to binary outcomes and show how it can be applied flexibly in that setting. Through numerical and simulation studies, we confirm that SBML performs well compared to classical statistical methods, even when the sample size is small and in the presence of unmeasured predictors and/or noise variables.

Although SBML performs well with small sample sizes, it may not be computationally efficient for large ones. We therefore propose Recursive SBML (RSBML), which saves computing time at some cost in accuracy. In this sense, RSBML can also be viewed as a combination of unsupervised learning (dimension reduction) and supervised learning (prediction). Recursive learning resembles the natural human way of learning and is an efficient way to learn from complicated, large data. Based on the simulation results, RSBML runs much faster than SBML while retaining reasonable accuracy for large sample sizes.
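As a rough illustration of the similarity principle described in this abstract, the sketch below predicts a continuous outcome as a similarity-weighted average of training outcomes and learns per-attribute scaling factors by gradient descent on a leave-one-out training error. The Gaussian-type similarity measure, the numerical gradient, and all function names are assumptions made for illustration, not the dissertation's exact formulation.

```python
import numpy as np

def similarity(x, X_train, scales):
    # Gaussian-type similarity with per-attribute scaling factors
    # (an assumed form, not necessarily the measure used in the dissertation).
    d2 = ((scales * (X_train - x)) ** 2).sum(axis=1)
    return np.exp(-d2)

def predict(x, X_train, y_train, scales):
    # Similarity principle: similar subjects behave similarly, so the prediction
    # is a similarity-weighted average of the training outcomes.
    w = similarity(x, X_train, scales)
    return (w @ y_train) / (w.sum() + 1e-12)

def train_scales(X_train, y_train, lr=0.01, epochs=50):
    # Learn attribute-scaling factors by gradient descent on the
    # leave-one-out training MSE (numerical gradient, for brevity).
    n, p = X_train.shape
    scales = np.ones(p)

    def loo_mse(s):
        preds = []
        for i in range(n):
            mask = np.arange(n) != i
            preds.append(predict(X_train[i], X_train[mask], y_train[mask], s))
        return np.mean((np.array(preds) - y_train) ** 2)

    eps = 1e-4
    for _ in range(epochs):
        base = loo_mse(scales)
        grad = np.zeros(p)
        for j in range(p):
            s2 = scales.copy()
            s2[j] += eps
            grad[j] = (loo_mse(s2) - base) / eps
        scales -= lr * grad
    return scales
```

With small samples (tens to a couple of hundred subjects), the leave-one-out loop above stays cheap; for large samples it would not, which is the computational bottleneck that motivates the recursive (RSBML) variant described in the abstract.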

Non-linear dimensionality reduction and sparse representation models for facial analysis

Zhang, Yuyao, 20 February 2014
Face analysis techniques commonly require a proper representation of images by means of dimensionality reduction leading to embedded manifolds, which aim at capturing relevant characteristics of the signals. In this thesis, we first provide a comprehensive survey of the state of the art of embedded manifold models. Then, we introduce a novel non-linear embedding method, the Kernel Similarity Principal Component Analysis (KS-PCA), into Active Appearance Models, in order to model face appearances under variable illumination.
The proposed algorithm outperforms the traditional linear PCA transform in capturing the salient features generated by different illuminations and in reconstructing the illuminated faces with high accuracy. We also consider the problem of automatically classifying human face poses from face views with varying illumination, as well as occlusion and noise. Building on sparse representation methods, we propose two dictionary-learning frameworks for this pose classification problem. The first framework is the Adaptive Sparse Representation pose Classification (ASRC). It trains the dictionary via a linear model called Incremental Principal Component Analysis (Incremental PCA), which tends to decrease the intra-class redundancy that may affect classification performance while keeping the inter-class redundancy that is critical for sparse representation. The second is the Dictionary-Learning Sparse Representation model (DLSR), which incorporates the classification criterion directly into the dictionary-learning process; this training goal is achieved with the K-SVD algorithm. In a series of experiments, we show the performance of the two dictionary-learning methods, based respectively on a linear transform and a sparse representation model. In addition, we propose a novel Dictionary Learning framework for Illumination Normalization (DL-IN), based on sparse representation over coupled dictionaries. The dictionary pairs are jointly optimized from normally illuminated and irregularly illuminated face image pairs. We further use a Gaussian Mixture Model (GMM) to enhance the framework's capability of modeling data with complex distributions: each mixture component is adapted to a part of the samples, and the components are then fused together. Experimental results demonstrate the effectiveness of sparsity as a prior for patch-based illumination normalization of face images.
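The pose-classification frameworks in this abstract build on the generic sparse-representation classification idea: code a test sample over a dictionary whose atoms carry class labels, then assign the class whose atoms reconstruct the sample best. The sketch below illustrates only that generic step, not the thesis's ASRC, DLSR, or DL-IN algorithms; the minimal orthogonal matching pursuit solver and all names are illustrative assumptions.

```python
import numpy as np

def omp(D, y, k):
    # Minimal orthogonal matching pursuit: greedily select up to k atoms of the
    # dictionary D (columns) and solve a least-squares fit on the selected support.
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def src_classify(D, labels, y, k=10):
    # Sparse-representation classification: code the test sample y over the
    # whole dictionary, then assign the class whose atoms reconstruct y best.
    x = omp(D, y, k)
    best_class, best_err = None, np.inf
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)   # keep only atoms of class c
        err = np.linalg.norm(y - D @ xc)
        if err < best_err:
            best_class, best_err = c, err
    return best_class
```

In the thesis's frameworks, the dictionary D is not simply the raw training data: ASRC shapes it with Incremental PCA to control intra-class redundancy, and DLSR learns it with K-SVD so that the coding supports the classification criterion.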
