1

Unsupervised Learning of Spatiotemporal Features by Video Completion

Nallabolu, Adithya Reddy 18 October 2017 (has links)
In this work, we present an unsupervised representation learning approach for learning rich spatiotemporal features from videos without supervision from semantic labels. We propose to learn the spatiotemporal features by training a 3D convolutional neural network (CNN) using video completion as a surrogate task. Using a large collection of unlabeled videos, we train the CNN to predict the missing pixels of a spatiotemporal hole given the remaining parts of the video by minimizing a per-pixel reconstruction loss. To achieve good reconstruction results on color videos, the CNN needs a certain level of understanding of the scene dynamics and must predict plausible, temporally coherent content. We further explore jointly reconstructing both color frames and flow fields. By exploiting the statistical temporal structure of images, we show that the learned representations capture meaningful spatiotemporal structures from raw videos. We validate the effectiveness of our approach for CNN pre-training on action recognition and action similarity labeling problems. Our quantitative results demonstrate that our method compares favorably against learning without external data and against existing unsupervised learning approaches. / Master of Science
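The surrogate task above lends itself to a compact illustration. The following sketch, in PyTorch (an assumed framework; the abstract does not fix one), trains a toy 3D CNN to fill a masked spatiotemporal hole under a per-pixel reconstruction loss; the architecture, hole location, and random clip are placeholders rather than the author's actual configuration.

```python
import torch
import torch.nn as nn

# Toy 3D CNN encoder-decoder; the real model in the thesis is deeper.
completion_net = nn.Sequential(
    nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 3, kernel_size=3, padding=1),
)

def train_step(video, optimizer):
    """video: (batch, 3, T, H, W) clip of unlabeled frames."""
    # Cut a spatiotemporal hole: zero out a block of pixels across frames.
    mask = torch.ones_like(video)
    mask[:, :, 4:12, 16:48, 16:48] = 0.0          # hypothetical hole location
    prediction = completion_net(video * mask)
    # Per-pixel reconstruction loss, measured only inside the hole.
    loss = ((prediction - video) ** 2 * (1 - mask)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

optimizer = torch.optim.Adam(completion_net.parameters(), lr=1e-3)
clip = torch.rand(2, 3, 16, 64, 64)               # stand-in for real video data
print(train_step(clip, optimizer))
```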
2

N-ary Cross-sentence Relation Extraction: From Supervised to Unsupervised Learning

Yuan, Chenhan 19 May 2021 (has links)
Relation extraction is the problem of extracting relations between entities described in text. Relations identify a common "fact" described by distinct entities. Conventional relation extraction approaches focus on supervised binary intra-sentence relations, under the assumption that relations exist only between two entities within the same sentence. These approaches have three key limitations. First, binary intra-sentence relation extraction methods cannot extract a relation in a fact that is described by more than two entities. Second, these methods cannot extract relations that span more than one sentence, which commonly occurs as the number of entities increases. Third, these methods assume a supervised setting and are therefore unable to extract relations in the absence of sufficient labeled training data. This work aims to overcome these limitations by developing n-ary cross-sentence relation extraction methods for both supervised and unsupervised settings. Our work has three main goals: (1) two unsupervised binary intra-sentence relation extraction methods, (2) a supervised n-ary cross-sentence relation extraction method, and (3) an unsupervised n-ary cross-sentence relation extraction method. To achieve these goals, our work includes the following contributions: (1) an automatic labeling method for n-ary cross-sentence data, which is essential for model training, (2) a reinforcement learning-based sentence distribution estimator to minimize the impact of noise on model training, (3) a generative clustering-based technique for intra-sentence unsupervised relation extraction, (4) a variational autoencoder-based technique for unsupervised n-ary cross-sentence relation extraction, and (5) a sentence group selector that identifies groups of sentences that form relations. / Master of Science / In this work, we designed multiple models to automatically extract relations from text. These relations represent the semantic connection between two or more proper nouns. Previous work includes models that can only extract relations between two proper nouns in a single sentence, while the methods proposed in this thesis can extract relations between two or more proper nouns in multiple sentences. We propose three models. The first model can automatically remove erroneous annotations in training data, thereby making the models more credible. We also propose a more effective model that can automatically extract relations between two proper nouns in a single sentence without the need for data annotation. We later extend this model so that it can extract relations between two or more proper nouns in multiple sentences.
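As a concrete illustration of the automatic labeling contribution, the following minimal sketch applies a distant-supervision-style rule: a window of consecutive sentences is labeled with a relation whenever all entities of a known fact appear inside it. The fact list, window size, and substring matching are hypothetical simplifications, not the thesis's actual procedure.

```python
from typing import List, Tuple

# Hypothetical knowledge-base facts: (relation, entities), possibly n-ary.
FACTS: List[Tuple[str, Tuple[str, ...]]] = [
    ("founded_by", ("SpaceX", "Elon Musk")),
    ("drug_gene_mutation", ("gefitinib", "EGFR", "L858R")),
]

def label_windows(sentences: List[str], window: int = 3):
    """Label each window of `window` consecutive sentences with any fact
    whose entities all occur somewhere in that window (naive matching)."""
    labeled = []
    for i in range(len(sentences) - window + 1):
        text = " ".join(sentences[i:i + window])
        for relation, entities in FACTS:
            if all(e in text for e in entities):
                labeled.append((sentences[i:i + window], entities, relation))
    return labeled

doc = [
    "Gefitinib was administered to the patient cohort.",
    "Most responders carried the L858R mutation.",
    "The mutation occurs in the EGFR gene.",
]
for window_sents, ents, rel in label_windows(doc):
    print(rel, ents)
```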
3

Development of physics-based reduced-order models for reacting flow applications

Aversano, Gianmarco 15 November 2019 (has links)
With the final objective of developing reduced-order models for combustion applications, unsupervised and supervised machine learning techniques were tested and combined in the work of the present thesis for feature extraction and the construction of reduced-order models. The application of data-driven techniques for the detection of features from turbulent combustion data sets (direct numerical simulation) was investigated on two H2/CO flames: a spatially-evolving jet (DNS1) and a temporally-evolving jet (DNS2). Methods such as Principal Component Analysis (PCA), Local Principal Component Analysis (LPCA), Non-negative Matrix Factorization (NMF) and Autoencoders were explored for this purpose. It was shown that various factors could affect the performance of these methods, such as the criteria employed for the centering and scaling of the original data or the choice of the number of dimensions in the low-rank approximations. A set of guidelines was presented to aid the process of identifying meaningful physical features from turbulent reacting flow data. Data compression methods such as PCA and its variations were combined with interpolation methods such as Kriging for the construction of computationally affordable reduced-order models that predict the state of a combustion system at unseen operating conditions, i.e., combinations of model input parameter values. The methodology was first tested on the prediction of 1D flames with an increasing number of input parameters (equivalence ratio, fuel composition and inlet temperature), with variations of the classic PCA approach, namely constrained PCA and local PCA, applied to combustion cases for the first time in combination with an interpolation technique. The positive outcome of the study led to the application of the proposed methodology to 2D flames with two input parameters, namely fuel composition and inlet velocity, which produced satisfactory results. Alternatives to the chosen unsupervised and supervised methods were also tested on the same 2D data. The use of non-negative matrix factorization (NMF) for low-rank approximation was investigated because of the method's ability to represent positive-valued data, which helps avoid violating important physical laws such as the positivity of chemical species mass fractions, and was compared to PCA. As alternative supervised methods, the combination of polynomial chaos expansion (PCE) and Kriging and the use of artificial neural networks (ANNs) were tested. Results from this work paved the way for the development of a digital twin of a combustion furnace from a set of 3D simulations. The combination of PCA and Kriging was also employed in the context of uncertainty quantification (UQ), specifically in the bound-to-bound data collaboration framework (B2B-DC), leading to the introduction of the reduced-order B2B-DC procedure: for the first time, the B2B-DC was formulated in terms of latent variables rather than the original physical variables.
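The central PCA-plus-Kriging construction can be sketched in a few lines. The example below is a minimal sketch assuming scikit-learn, with GaussianProcessRegressor standing in for Kriging: it compresses precomputed snapshots with PCA and interpolates each retained score over the input-parameter space. The synthetic data and kernel settings are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Stand-in training data: each row is a flattened flame state (temperature,
# species mass fractions, ...) computed at one operating condition.
params = rng.uniform(0.5, 1.5, size=(30, 2))         # e.g. equivalence ratio, inlet T
states = np.sin(params @ rng.normal(size=(2, 500)))  # synthetic snapshots

# 1) Compress the snapshots: each state is summarized by a few PCA scores.
pca = PCA(n_components=5)
scores = pca.fit_transform(states)

# 2) Kriging: one Gaussian-process regressor per retained PCA score,
#    mapping operating conditions to that score.
gps = []
for k in range(scores.shape[1]):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))
    gp.fit(params, scores[:, k])
    gps.append(gp)

def predict_state(new_params):
    """Predict the full flame state at unseen operating conditions."""
    new_scores = np.column_stack([gp.predict(new_params) for gp in gps])
    return pca.inverse_transform(new_scores)

print(predict_state(np.array([[1.0, 1.1]])).shape)  # (1, 500)
```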
4

Unsupervised learning of disease subtypes from continuous time Hidden Markov Models of disease progression

Gupta, Amrita 07 January 2016 (has links)
The detection of subtypes of complex diseases has important implications for diagnosis and treatment. Numerous prior studies have used data-driven approaches to identify clusters of similar patients, but it is not yet clear how to best specify what constitutes a clinically meaningful phenotype. This study explored disease subtyping on the basis of temporal development patterns. In particular, we attempted to differentiate infants with autism spectrum disorder into more fine-grained classes with distinctive patterns of early skill development. We modeled the progression of autism explicitly using a continuous-time hidden Markov model. Subsequently, we compared subjects on the basis of their trajectories through the model state space. Two approaches to subtyping were utilized, one based on time-series clustering with a custom distance function and one based on tensor factorization. A web application was also developed to facilitate the visual exploration of our results. Results suggested the presence of 3 developmental subgroups in the ASD outcome group. The two subtyping approaches are contrasted and possible future directions for research are discussed.
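The trajectory-comparison step admits a small illustration. Assuming each subject's path through the hidden-state space is summarized as an aligned, fixed-length sequence of state-occupancy vectors (a simplification), subjects can be clustered hierarchically under a custom distance; the plain mean state-disagreement below stands in for the study's actual distance function.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)

# Stand-in data: 20 subjects, each a sequence of 10 visits with occupancy
# probabilities over 4 hidden disease states (rows sum to 1).
trajectories = rng.dirichlet(np.ones(4), size=(20, 10))

def trajectory_distance(a, b):
    """Custom distance: mean L1 gap between state-occupancy vectors,
    visit by visit (assumes aligned, equal-length trajectories)."""
    return np.abs(a - b).sum(axis=1).mean()

n = len(trajectories)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = trajectory_distance(trajectories[i], trajectories[j])

# Hierarchical clustering on the condensed distance matrix; cut into 3 subtypes.
labels = fcluster(linkage(squareform(dist), method="average"), t=3, criterion="maxclust")
print(labels)
```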
5

Bayesian Unsupervised Labeling of Web Document Clusters

Liu, Ting 22 August 2011 (has links)
Information technologies have recently led to a surge of electronic documents in the form of emails, webpages, blogs, news articles, etc. To help users decide which documents may be interesting to read, it is common practice to organize documents by categories/topics. A wide range of supervised and unsupervised learning techniques already exist for automated text classification and text clustering. However, supervised learning requires a training set of documents already labeled with topics/categories, which is not always readily available. In contrast, unsupervised learning techniques do not require labeled documents, but assigning a suitable category to each resulting cluster remains a difficult problem. The state of the art consists of extracting keywords based on word frequency (or related heuristics). In this thesis, we improve the extraction of keywords for unsupervised labeling of document clusters by designing a Bayesian approach based on topic modeling. More precisely, we describe an approach that uses a large side corpus to infer a language model that implicitly encodes the semantic relatedness of different words. This language model is then used to build a generative model of the cluster in such a way that the probability of generating each word depends on its frequency in the cluster as well as the frequency of its semantically related words. The words with the highest probability of generation are then extracted to label the cluster. In this approach, the side corpus can be thought of as a source of domain knowledge or context. However, there are two potential problems: processing a large side corpus can be time-consuming, and if the content of this corpus is not similar enough to the cluster, the resulting language model may be biased. We deal with those issues by designing a Bayesian transfer learning framework that allows us to process the side corpus just once offline and to weigh its importance based on its degree of similarity with the cluster.
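A toy sketch of the labeling idea follows: each candidate word is scored by its own frequency in the cluster plus the smoothed frequencies of semantically related words, with the hand-written relatedness table standing in for the language model inferred from the side corpus. This linear smoothing is an illustrative simplification of the thesis's Bayesian generative model.

```python
from collections import Counter

cluster_docs = [
    "stock market rally lifts shares",
    "shares climb as market recovers",
    "investors cheer stock gains",
]

# Stand-in for the side-corpus language model: pairwise word relatedness.
# In the thesis this is inferred from a large side corpus via topic modeling.
related = {
    ("stock", "shares"): 0.8,
    ("stock", "market"): 0.7,
    ("market", "shares"): 0.6,
}

def sim(w, v):
    if w == v:
        return 1.0
    return related.get((w, v)) or related.get((v, w)) or 0.0

freq = Counter(w for doc in cluster_docs for w in doc.split())
total = sum(freq.values())

def label_score(w, smoothing=0.5):
    """Own frequency, plus smoothed frequency of semantically related words."""
    own = freq[w] / total
    neighbors = sum(sim(w, v) * c for v, c in freq.items() if v != w) / total
    return own + smoothing * neighbors

labels = sorted(freq, key=label_score, reverse=True)[:3]
print(labels)
```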
6

Bayesian cluster validation

Koepke, Hoyt Adam 11 1900 (has links)
We propose a novel framework based on Bayesian principles for validating clusterings and present efficient algorithms for use with centroid- or exemplar-based clustering solutions. Our framework treats the data as fixed and introduces perturbations into the clustering procedure. In our algorithms, we scale the distances between points by a random variable whose distribution is tuned against a baseline null dataset. The random variable is integrated out, yielding a soft assignment matrix that gives the behavior under perturbation of the points relative to each of the clusters. From this soft assignment matrix, we are able to visualize inter-cluster behavior, rank clusters, and give a scalar index of the clustering stability. In a large test on synthetic data, our method matches or outperforms other leading methods at predicting the correct number of clusters. We also present a theoretical analysis of our approach, which suggests that it is useful for high dimensional data.
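The perturbation scheme can be approximated with a short Monte Carlo sketch: rather than integrating the random scaling variable out analytically as the thesis does, the code below samples distance scalings, re-assigns points, and averages the results into a soft assignment matrix; the lognormal perturbation and the stability index shown are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in clustering solution: points and the centroids assigned to them.
points = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])

dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)

# Perturb: scale each point-to-centroid distance by a random factor and
# re-assign; averaging over draws approximates integrating the factor out.
n_draws = 1000
soft = np.zeros((len(points), len(centroids)))
for _ in range(n_draws):
    scaled = dists * rng.lognormal(mean=0.0, sigma=0.3, size=dists.shape)
    winners = scaled.argmin(axis=1)
    soft[np.arange(len(points)), winners] += 1.0
soft /= n_draws

# Scalar stability index: how decisively points stay with their cluster.
print("stability:", soft.max(axis=1).mean())
```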
7

Learning From Snapshot Examples

Beal, Jacob 13 April 2005 (has links)
Examples are a powerful tool for teaching both humans and computers. In order to learn from examples, however, a student must first extract the examples from its stream of perception. Snapshot learning is a general approach to this problem, in which relevant samples of perception are used as examples. Learning from these examples can in turn improve the judgement of the snapshot mechanism, improving the quality of future examples. One way to implement snapshot learning is the Top-Cliff heuristic, which identifies relevant samples using a generalized notion of peaks. I apply snapshot learning with the Top-Cliff heuristic to solve a distributed learning problem and show that the resulting system learns rapidly and robustly, and can hallucinate useful examples in a perceptual stream from a teacherless system.
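The Top-Cliff heuristic itself is not specified in this abstract, but the general shape of snapshot learning, selecting peaks of a relevance signal from a perceptual stream as training examples, can be sketched as follows; the local-maximum test is a crude placeholder, not Beal's heuristic.

```python
def snapshot_examples(stream, relevance, window=2):
    """Pick samples whose relevance is a local maximum over `window`
    neighbors on each side; a crude stand-in for the Top-Cliff heuristic."""
    examples = []
    for i in range(window, len(stream) - window):
        neighborhood = relevance[i - window:i + window + 1]
        if relevance[i] == max(neighborhood) and relevance[i] > 0:
            examples.append(stream[i])
    return examples

# Toy perceptual stream paired with a scalar relevance judgement per sample.
stream = ["s0", "s1", "s2", "s3", "s4", "s5", "s6", "s7"]
relevance = [0.1, 0.2, 0.9, 0.3, 0.1, 0.7, 0.2, 0.1]
print(snapshot_examples(stream, relevance))  # -> ['s2', 's5']
```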
