51

Gaussian graphical model selection for gene regulatory network reverse engineering and function prediction

Kontos, Kevin 02 July 2009 (has links)
One of the most important and challenging "knowledge extraction" tasks in bioinformatics is the reverse engineering of gene regulatory networks (GRNs) from DNA microarray gene expression data. Indeed, as a result of the development of high-throughput data-collection techniques, biology is experiencing a data flood phenomenon that pushes biologists toward a new view of biology--systems biology--that aims at a system-level understanding of biological systems.

Unfortunately, even for small model organisms such as the yeast Saccharomyces cerevisiae, the number p of genes is much larger than the number n of expression data samples. The dimensionality issue induced by this "small n, large p" data setting renders standard statistical learning methods inadequate. Restricting the complexity of the models makes it possible to deal with this serious impediment: by introducing (a priori undesirable) bias in the model selection procedure, one reduces the variance of the selected model, thereby increasing its accuracy.

Gaussian graphical models (GGMs) have proven to be a very powerful formalism for inferring GRNs from expression data. Unfortunately, standard GGM selection techniques cannot be used in the "small n, large p" data setting. One way to overcome this issue is to resort to regularization. In particular, shrinkage estimators of the covariance matrix--required to infer GGMs--have proven to be very effective. Our first contribution is a new shrinkage estimator that improves upon existing ones through the use of a Monte Carlo (parametric bootstrap) procedure.

Another approach to GGM selection in the "small n, large p" data setting consists in reverse engineering limited-order partial correlation graphs (q-partial correlation graphs) to approximate GGMs. Our second contribution is an inference algorithm, the q-nested procedure, that builds a sequence of nested q-partial correlation graphs, exploiting the topology of the lower-order graphs to infer the higher-order ones. This significantly speeds up the inference of such graphs and avoids problems related to multiple testing. Consequently, we are able to consider higher-order graphs, thereby increasing the accuracy of the inferred graphs.

Another important challenge in bioinformatics is the prediction of gene function. An example of such a prediction task is the identification of genes that are targets of the nitrogen catabolite repression (NCR) selection mechanism in the yeast Saccharomyces cerevisiae. The study of model organisms such as Saccharomyces cerevisiae is indispensable for the understanding of more complex organisms. Our third contribution consists in extending the standard two-class classification approach by enriching the set of variables and comparing several feature selection techniques and classification algorithms.

Finally, our fourth contribution formulates the prediction of NCR target genes as a network inference task. We use GGM selection to infer multivariate dependencies between genes and, starting from a set of genes known to be sensitive to NCR, we classify the remaining genes. We thereby avoid problems related to the choice of a negative training set and take advantage of the robustness of GGM selection techniques in the "small n, large p" data setting. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
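[Editor's illustration] A minimal sketch of the shrinkage idea in the "small n, large p" setting, not the thesis's own estimator: the sample covariance is shrunk toward a diagonal target with a fixed intensity (the thesis instead tunes it with a Monte Carlo / parametric-bootstrap procedure), and conditional dependencies are read off the inverse of the regularized matrix. The value of `alpha` and the edge threshold are illustrative assumptions.

```python
import numpy as np

def shrinkage_covariance(X, alpha):
    """Shrink the sample covariance toward a diagonal target.

    X     : (n, p) expression matrix, rows = samples, columns = genes
    alpha : shrinkage intensity in [0, 1] (fixed here for illustration)
    """
    S = np.cov(X, rowvar=False)          # sample covariance (p x p), singular when n < p
    target = np.diag(np.diag(S))         # diagonal shrinkage target
    return (1.0 - alpha) * S + alpha * target

def partial_correlations(sigma):
    """Partial correlations from a (regularized) covariance matrix.

    A zero partial correlation between genes i and j corresponds to a
    missing edge in the Gaussian graphical model.
    """
    omega = np.linalg.inv(sigma)         # precision matrix
    d = np.sqrt(np.diag(omega))
    pcor = -omega / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

# toy example: 20 samples, 50 "genes" (n << p)
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))
sigma = shrinkage_covariance(X, alpha=0.3)
edges = np.abs(partial_correlations(sigma)) > 0.2   # crude thresholding of edges
```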
52

Analyse temporelle et sémantique des réseaux sociaux typés à partir du contenu de sites généré par des utilisateurs sur le Web / Temporal and semantic analysis of richly typed social networks from user-generated content sites on the web

Meng, Zide 07 November 2016 (has links)
Nous proposons une approche pour détecter les sujets, les communautés d'intérêt non disjointes, l'expertise, les tendances et les activités dans des sites où le contenu est généré par les utilisateurs et en particulier dans des forums de questions-réponses tels que StackOverFlow. Nous décrivons d'abord QASM (Questions & Réponses dans des médias sociaux), un système basé sur l'analyse de réseaux sociaux pour gérer les deux principales ressources d'un site de questions-réponses : les utilisateurs et le contenu. Nous présentons également le vocabulaire QASM utilisé pour formaliser à la fois le niveau d'intérêt et l'expertise des utilisateurs. Nous proposons ensuite une approche efficace pour détecter les communautés d'intérêts. Elle repose sur une autre méthode pour enrichir les questions avec un tag plus général en cas de besoin. Nous comparons trois méthodes de détection sur un jeu de données extrait du site populaire StackOverflow. Notre méthode, basée sur la modélisation de sujets et l'affectation des utilisateurs, se révèle être beaucoup plus simple et plus rapide, tout en préservant la qualité de la détection. Nous proposons en complément une méthode pour générer automatiquement un label pour un sujet détecté en analysant le sens et les liens de ses mots-clefs. Nous menons alors une étude pour comparer différents algorithmes pour générer ce label. Enfin, nous étendons notre modèle de graphes probabilistes pour modéliser conjointement les sujets, l'expertise, les activités et les tendances. Nous le validons sur des données du monde réel pour confirmer l'efficacité de notre modèle intégrant les comportements des utilisateurs et la dynamique des sujets / We propose an approach to detect topics, overlapping communities of interest, expertise, trends and activities in user-generated content sites, and in particular in question-answering forums such as StackOverFlow. We first describe QASM (Question & Answer Social Media), a system based on social network analysis to manage the two main resources in question-answering sites: users and contents. We also introduce the QASM vocabulary used to formalize both the level of interest and the expertise of users on topics. We then propose an efficient approach to detect communities of interest. It relies on another method to enrich questions with a more general tag when needed. We compared three detection methods on a dataset extracted from the popular Q&A site StackOverflow. Our method, based on topic modeling and user membership assignment, is shown to be much simpler and faster while preserving the quality of the detection. We then propose an additional method to automatically generate a label for a detected topic by analyzing the meaning and links of its bag of words. We conduct a user study to compare different algorithms for choosing the label. Finally, we extend our probabilistic graphical model to jointly model topics, expertise, activities and trends. We performed experiments with real-world data to confirm the effectiveness of our joint model, studying the users' behaviors and topic dynamics
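[Editor's illustration] A minimal sketch of the general "topic modeling plus user membership assignment" idea described in the abstract, not the QASM system itself; the toy data, model size and membership threshold are assumptions made for illustration.

```python
from collections import defaultdict
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# toy Q&A data: (user, question text built from its tags) pairs -- illustrative only
posts = [
    ("alice", "python pandas dataframe"),
    ("alice", "python numpy array"),
    ("bob",   "java spring dependency-injection"),
    ("carol", "python matplotlib plot"),
    ("bob",   "java maven build"),
]

texts = [text for _, text in posts]
vectorizer = CountVectorizer(token_pattern=r"[^\s]+")   # keep hyphenated tags intact
X = vectorizer.fit_transform(texts)

# fit a small topic model over the question/tag corpus
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)            # one topic distribution per question

# membership assignment: average the topic distributions of each user's posts,
# then read off overlapping communities of interest above a threshold
user_topics = defaultdict(list)
for (user, _), dist in zip(posts, doc_topics):
    user_topics[user].append(dist)

for user, dists in user_topics.items():
    profile = np.mean(dists, axis=0)
    communities = np.where(profile > 0.3)[0]  # a user may belong to several communities
    print(user, profile.round(2), communities)
```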
53

Apprentissage statistique avec le processus ponctuel déterminantal

Vicente, Sergio 02 1900 (has links)
Cette thèse aborde le processus ponctuel déterminantal, un modèle probabiliste qui capture la répulsion entre les points d'un certain espace. Celle-ci est déterminée par une matrice de similarité, la matrice noyau du processus, qui spécifie quels points sont les plus similaires et donc moins susceptibles de figurer dans un même sous-ensemble. Contrairement à la sélection aléatoire uniforme, ce processus ponctuel privilégie les sous-ensembles qui contiennent des points diversifiés et hétérogènes. La notion de diversité acquiert une importance grandissante au sein de sciences comme la médecine, la sociologie, les sciences forensiques et les sciences comportementales. Le processus ponctuel déterminantal offre donc une alternative aux traditionnelles méthodes d'échantillonnage en tenant compte de la diversité des éléments choisis. Actuellement, il est déjà très utilisé en apprentissage automatique comme modèle de sélection de sous-ensembles. Son application en statistique est illustrée par trois articles. Le premier article aborde le partitionnement de données effectué par un algorithme répété un grand nombre de fois sur les mêmes données, le partitionnement par consensus. On montre qu'en utilisant le processus ponctuel déterminantal pour sélectionner les points initiaux de l'algorithme, la partition de données finale a une qualité supérieure à celle que l'on obtient en sélectionnant les points de façon uniforme. Le deuxième article étend la méthodologie du premier article aux données ayant un grand nombre d'observations. Ce cas impose un effort computationnel additionnel, étant donné que la sélection de points par le processus ponctuel déterminantal passe par la décomposition spectrale de la matrice de similarité qui, dans ce cas-ci, est de grande taille. On présente deux approches différentes pour résoudre ce problème. On montre que les résultats obtenus par ces deux approches sont meilleurs que ceux obtenus avec un partitionnement de données basé sur une sélection uniforme de points. Le troisième article présente le problème de sélection de variables en régression linéaire et logistique face à un nombre élevé de covariables par une approche bayésienne. La sélection de variables est faite en recourant aux méthodes de Monte Carlo par chaînes de Markov, en utilisant l'algorithme de Metropolis-Hastings. On montre qu'en choisissant le processus ponctuel déterminantal comme loi a priori de l'espace des modèles, le sous-ensemble final de variables est meilleur que celui que l'on obtient avec une loi a priori uniforme. / This thesis presents the determinantal point process, a probabilistic model that captures repulsion between the points of a certain space. This repulsion is encoded by a similarity matrix, the kernel matrix, which specifies which points are most similar and therefore less likely to appear in the same subset. This point process gives more weight to subsets characterized by a larger diversity of their elements, which is not the case with traditional uniform random sampling. Diversity has become a key concept in domains such as medicine, sociology, forensic sciences and behavioral sciences. The determinantal point process is considered a promising alternative to traditional sampling methods, since it takes into account the diversity of the selected elements. It is already actively used in machine learning as a subset selection method. Its application in statistics is illustrated with three papers.
The first paper addresses consensus clustering, which consists in running a clustering algorithm on the same data a large number of times. To sample the initial points of the algorithm, we propose the determinantal point process as a sampling method instead of uniform random sampling, and show that the former produces better clustering results. The second paper extends the methodology developed in the first paper to large datasets. Such datasets impose a computational burden, since sampling with the determinantal point process relies on the spectral decomposition of the large kernel matrix. We introduce two methods to deal with this issue. These methods also produce better clustering results than consensus clustering based on a uniform sampling of initial points. The third paper addresses the problem of variable selection for linear and logistic regression when the number of predictors is large. A Bayesian approach is adopted, using Markov chain Monte Carlo methods with the Metropolis-Hastings algorithm. We show that setting the determinantal point process as the prior distribution on the model space selects a better final model than the one obtained under a uniform prior on the model space.
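[Editor's illustration] A minimal sketch of the underlying idea, under stated assumptions: a determinantal point process assigns probability proportional to det(K_S) for a subset S, so diverse subsets are favored. The greedy routine below is a simple stand-in for exact (k-)DPP sampling, which, as the abstract notes, relies on the spectral decomposition of the kernel matrix; the kernel, its bandwidth and the subset size are illustrative choices, not those of the papers.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Similarity (kernel) matrix: points close together are highly similar."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def dpp_unnormalized_prob(K, subset):
    """A DPP assigns probability proportional to det(K_S): diverse subsets
    (small off-diagonal similarities) have larger determinants."""
    idx = np.array(subset)
    return np.linalg.det(K[np.ix_(idx, idx)])

def greedy_diverse_init(K, k):
    """Greedy approximation: repeatedly add the point that most increases
    det(K_S). A stand-in for exact k-DPP sampling."""
    selected = [int(np.argmax(np.diag(K)))]
    while len(selected) < k:
        best, best_val = None, -np.inf
        for i in range(K.shape[0]):
            if i in selected:
                continue
            val = dpp_unnormalized_prob(K, selected + [i])
            if val > best_val:
                best, best_val = i, val
        selected.append(best)
    return selected

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
K = rbf_kernel(X, gamma=0.5)
init_idx = greedy_diverse_init(K, k=5)   # diverse seeds for (consensus) clustering
```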
54

Some Advanced Model Selection Topics for Nonparametric/Semiparametric Models with High-Dimensional Data

Fang, Zaili 13 November 2012 (has links)
Model and variable selection have attracted considerable attention in application areas where datasets usually contain thousands of variables. Variable selection is a critical step for reducing the dimension of high-dimensional data by eliminating irrelevant variables. The general objective of variable selection is not only to obtain a cost-effective set of selected predictors but also to improve prediction accuracy and reduce prediction variance. We have made several contributions to this issue through a range of advanced topics: providing a graphical view of Bayesian Variable Selection (BVS), recovering sparsity in multivariate nonparametric models, and proposing a testing procedure for evaluating nonlinear interaction effects in a semiparametric model. To address the first topic, we propose a new Bayesian variable selection approach via the graphical model and the Ising model, which we refer to as the "Bayesian Ising Graphical Model" (BIGM). Our BIGM has several advantages: it is easy to (1) employ the single-site updating and cluster updating algorithms, both of which are suitable for problems with small sample sizes and a large number of variables, (2) extend the approach to nonparametric regression models, and (3) incorporate graphical prior information. In the second topic, we propose a Nonnegative Garrote on a Kernel machine (NGK) to recover the sparsity of input variables in smoothing functions. We model the smoothing function by a least squares kernel machine and construct a nonnegative garrote on the kernel model as a function of the similarity matrix. An efficient coordinate descent/backfitting algorithm is developed. The third topic involves a specific genetic pathway dataset in which the pathways interact with environmental variables. We propose a semiparametric method to model the pathway-environment interaction. We then employ a restricted likelihood ratio test and a score test to evaluate the main pathway effect and the pathway-environment interaction. / Ph. D.
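[Editor's illustration] For intuition about the nonnegative garrote, a sketch of the classical linear-model version is given below; the thesis builds the garrote on a least squares kernel machine instead, so this is only the underlying shrinkage idea. The coordinate-descent updates, the penalty value and the toy data are illustrative assumptions.

```python
import numpy as np

def nonnegative_garrote(X, y, lam, n_iter=200):
    """Classical (linear) nonnegative garrote: start from OLS coefficients,
    then learn nonnegative shrinkage factors c_j; factors driven to zero
    remove the corresponding variables."""
    n, p = X.shape
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    Z = X * beta_ols                              # column j is beta_j * x_j
    c = np.ones(p)
    for _ in range(n_iter):                       # coordinate descent with c >= 0
        for j in range(p):
            r = y - Z @ c + Z[:, j] * c[j]        # partial residual excluding variable j
            num = Z[:, j] @ r - lam
            denom = Z[:, j] @ Z[:, j]
            c[j] = max(0.0, num / denom) if denom > 0 else 0.0
    return c, beta_ols * c                        # garrote-shrunk coefficients

# toy example: only the first 3 of 10 predictors matter
rng = np.random.default_rng(2)
X = rng.normal(size=(80, 10))
y = X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=80)
c, beta = nonnegative_garrote(X, y, lam=5.0)
selected = np.where(c > 0)[0]                     # indices of retained variables
```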
55

ANALÝZA A FORMULACE ROZHODOVACÍCH PROBLÉMŮ ZNALCE PŘI OCEŇOVÁNÍ NEMOVITOSTÍ / ANALYSIS AND DEFINITION OF DECISION PROBLEMS OF EXPERT IN REAL ESTATE VALUATION

Krejza, Zdeněk Unknown Date (has links)
The thesis deals with the decision-making of the expert in real estate valuation. Given the complexity of the process and the difficulties of valuation, it can be assumed that this decision-making will be an arduous process, and it is clear that the expert's decisions are crucial to the result of the valuation. This topic is currently relatively little explored, and the thesis therefore addresses the analysis and formulation of the decision problems of the expert in real estate valuation. The thesis analyses the current state of forensic engineering and of decision-making in real estate valuation. The general decision-making process, divided into seven steps, is adapted to the requirements of expert decision-making in real estate valuation. As in the managerial decision-making process, property valuation is also divided into three levels. For these three levels, the fundamental decision problems are described, leading to the formulation of the principles of expert decision-making in real estate valuation. To convey the extent of the decision-making process in real estate valuation, the author created a decision tree and corresponding schemes, whose functionality is verified at the end of the thesis with the help of a specific case study of price determination in real estate valuation.
