1

N-ary Cross-sentence Relation Extraction: From Supervised to Unsupervised Learning

Yuan, Chenhan 19 May 2021 (has links)
Relation extraction is the problem of extracting relations between entities described in text. Relations identify a common "fact" described by distinct entities. Conventional relation extraction approaches focus on supervised binary intra-sentence relations, under the assumption that relations only exist between two entities within the same sentence. These approaches have three key limitations. First, binary intra-sentence relation extraction methods cannot extract a relation in a fact that is described by more than two entities. Second, these methods cannot extract relations that span more than one sentence, which commonly occurs as the number of entities increases. Third, these methods assume a supervised setting and are therefore unable to extract relations in the absence of sufficient labeled data for training. This work aims to overcome these limitations by developing n-ary cross-sentence relation extraction methods for both supervised and unsupervised settings. Our work has three main goals: (1) two unsupervised binary intra-sentence relation extraction methods, (2) a supervised n-ary cross-sentence relation extraction method, and (3) an unsupervised n-ary cross-sentence relation extraction method. To achieve these goals, our work makes the following contributions: (1) an automatic labeling method for n-ary cross-sentence data, which is essential for model training, (2) a reinforcement learning-based sentence distribution estimator that minimizes the impact of noise on model training, (3) a generative clustering-based technique for intra-sentence unsupervised relation extraction, (4) a variational autoencoder-based technique for unsupervised n-ary cross-sentence relation extraction, and (5) a sentence group selector that identifies groups of sentences that form relations. / Master of Science / In this work, we designed multiple models to automatically extract relations from text. These relations represent the semantic connection between two or more proper nouns. Previous work includes models that can only extract relations between two proper nouns in a single sentence, whereas the methods proposed in this thesis can extract relations between two or more proper nouns in multiple sentences. We propose three models. The first can automatically remove erroneous annotations in training data, thereby making the trained models more credible. We also propose a more effective model that can automatically extract relations between two proper nouns in a single sentence without the need for data annotation. We later extend this model so that it can extract relations between two or more proper nouns in multiple sentences.
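The automatic labeling of n-ary cross-sentence data follows the general spirit of distant supervision: a group of sentences is labeled with a known fact when it mentions all of that fact's entities. Below is a minimal Python sketch of this general idea only; the function names, the string-matching test, and the fixed sentence window are illustrative assumptions, not the thesis's actual procedure.

```python
# Hypothetical sketch of distant-supervision-style auto-labeling for
# n-ary cross-sentence relation data: a window of consecutive sentences
# is labeled with a known fact when it mentions all of the fact's entities.
from typing import List, Tuple

def auto_label(sentences: List[str],
               facts: List[Tuple[str, Tuple[str, ...]]],
               window: int = 3):
    """Return (sentence_group, relation) pairs for each window of
    consecutive sentences that mentions every entity of some fact."""
    examples = []
    for i in range(len(sentences) - window + 1):
        group = sentences[i:i + window]
        text = " ".join(group).lower()
        for relation, entities in facts:
            if all(e.lower() in text for e in entities):
                examples.append((group, relation))
    return examples

sents = ["Aspirin was given to the patient.",
         "The patient carries a BRCA1 mutation.",
         "A strong response was observed."]
facts = [("drug-gene-response", ("aspirin", "BRCA1"))]
print(auto_label(sents, facts))
```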
2

Development of physics-based reduced-order models for reacting flow applications

Aversano, Gianmarco 15 November 2019 (has links)
With the final objective being to develop reduced-order models for combustion applications, unsupervised and supervised machine learning techniques were tested and combined in the work of the present thesis for feature extraction and the construction of reduced-order models. Thus, the application of data-driven techniques for the detection of features from turbulent combustion data sets (direct numerical simulation) was investigated on two H2/CO flames: a spatially-evolving jet (DNS1) and a temporally-evolving jet (DNS2). Methods such as Principal Component Analysis (PCA), Local Principal Component Analysis (LPCA), Non-negative Matrix Factorization (NMF) and Autoencoders were explored for this purpose. It was shown that various factors could affect the performance of these methods, such as the criteria employed for the centering and the scaling of the original data or the choice of the number of dimensions in the low-rank approximations. A set of guidelines was presented that can aid the process of identifying meaningful physical features from turbulent reactive flow data. Data compression methods such as Principal Component Analysis (PCA) and its variations were combined with interpolation methods such as Kriging for the construction of computationally affordable reduced-order models for the prediction of the state of a combustion system at unseen operating conditions or combinations of model input parameter values. The methodology was first tested on the prediction of 1D flames with an increasing number of input parameters (equivalence ratio, fuel composition and inlet temperature), with variations of the classic PCA approach, namely constrained PCA and local PCA, being applied to combustion cases for the first time in combination with an interpolation technique. The positive outcome of the study led to the application of the proposed methodology to 2D flames with two input parameters, namely fuel composition and inlet velocity, which produced satisfactory results. Alternatives to the chosen unsupervised and supervised methods were also tested on the same 2D data. The use of non-negative matrix factorization (NMF) for low-rank approximation was investigated because of the ability of the method to represent positive-valued data, which helps avoid the violation of important physical laws such as the positivity of chemical species mass fractions, and was compared to PCA. As alternative supervised methods, the combination of polynomial chaos expansion (PCE) and Kriging and the use of artificial neural networks (ANNs) were tested. Results from the mentioned work paved the way for the development of a digital twin of a combustion furnace from a set of 3D simulations. The combination of PCA and Kriging was also employed in the context of uncertainty quantification (UQ), specifically in the bound-to-bound data collaboration framework (B2B-DC), which led to the introduction of the reduced-order B2B-DC procedure, as for the first time the B2B-DC was developed in terms of latent variables and not in terms of the original physical variables.
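The core compression-plus-interpolation idea, PCA for the low-rank representation and Kriging for prediction at unseen parameter values, can be sketched briefly. The following is a minimal illustration on synthetic data, with scikit-learn's GaussianProcessRegressor standing in for Kriging; it is a sketch of the general technique, not the thesis's implementation.

```python
# Minimal PCA + Kriging reduced-order-model sketch: compress training
# snapshots with PCA, fit a Gaussian process from input parameters to
# PCA scores, then predict and reconstruct the state at new parameters.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
params = rng.uniform(0.5, 1.5, size=(30, 2))             # e.g. equivalence ratio, inlet T
snapshots = np.sin(params @ rng.normal(size=(2, 200)))   # synthetic stand-in for flame states

pca = PCA(n_components=5)
scores = pca.fit_transform(snapshots)                    # low-rank representation

gp = GaussianProcessRegressor().fit(params, scores)      # Kriging in latent space

p_new = np.array([[1.0, 1.2]])
state_pred = pca.inverse_transform(gp.predict(p_new))    # reconstructed full state
print(state_pred.shape)
```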
3

Bayesian cluster validation

Koepke, Hoyt Adam 11 1900 (has links)
We propose a novel framework based on Bayesian principles for validating clusterings and present efficient algorithms for use with centroid- or exemplar-based clustering solutions. Our framework treats the data as fixed and introduces perturbations into the clustering procedure. In our algorithms, we scale the distances between points by a random variable whose distribution is tuned against a baseline null dataset. The random variable is integrated out, yielding a soft assignment matrix that gives the behavior under perturbation of the points relative to each of the clusters. From this soft assignment matrix, we are able to visualize inter-cluster behavior, rank clusters, and give a scalar index of the clustering stability. In a large test on synthetic data, our method matches or outperforms other leading methods at predicting the correct number of clusters. We also present a theoretical analysis of our approach, which suggests that it is useful for high-dimensional data. / Science, Faculty of / Computer Science, Department of / Graduate
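The perturbation scheme can be approximated with a short Monte Carlo sketch: distances are scaled by random draws and the resulting hard assignments are averaged, in place of the analytic integration (and the null-baseline tuning of the scaling distribution) that the thesis performs. The fixed lognormal scale below is an assumption for illustration.

```python
# Monte Carlo approximation of a perturbation-based soft assignment matrix:
# point-to-centroid distances are scaled by random draws, and averaging the
# resulting hard assignments gives the behavior of points under perturbation.
import numpy as np

def soft_assignment(X, centroids, n_samples=500, scale=0.2, seed=0):
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    counts = np.zeros((len(X), len(centroids)))
    for _ in range(n_samples):
        perturbed = d * rng.lognormal(0.0, scale, size=d.shape)  # random scaling
        counts[np.arange(len(X)), perturbed.argmin(axis=1)] += 1
    return counts / n_samples            # soft assignment under perturbation

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 4])
C = np.array([[0.0, 0.0], [4.0, 4.0]])
A = soft_assignment(X, C)
print("stability index:", A.max(axis=1).mean())  # scalar stability summary
```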
4

Learning techniques for expert systems : an investigation, using simulation techniques, into the possibilities and requirements for reliable un-supervised learning for industrial expert systems

Olley, Peter January 1992 (has links)
No description available.
5

Autonomous Terrain Classification Through Unsupervised Learning

Zeltner, Felix January 2016 (has links)
A key component of autonomous outdoor navigation in unstructured environments is the classification of terrain. Recent developments in the area of machine learning show promising results in the task of scene segmentation but are limited by the labels used during their supervised training. In this work, we present and evaluate a flexible strategy for terrain classification based on three components: a deep convolutional neural network trained on colour, depth and infrared data which provides feature vectors for image segmentation, a set of exchangeable segmentation engines that operate in this feature space, and a novel, air-pressure-based actuator responsible for distinguishing rigid obstacles from those that only appear as such. Through the use of unsupervised learning we eliminate the need for labeled training data and allow our system to adapt to previously unseen terrain classes. We evaluate the performance of this classification scheme on a mobile robot platform in an environment containing vegetation and trees, with a Kinect v2 sensor as a low-cost depth camera. Our experiments show that the features generated by our neural network are currently not competitive with state-of-the-art implementations and that our system is not yet ready for real-world applications.
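The unsupervised segmentation stage can be illustrated with a minimal sketch: per-pixel feature vectors are clustered so that terrain classes emerge without labels. The random features below are stand-ins for CNN activations over colour, depth and infrared input, and the choice of k-means as the segmentation engine is an assumption for illustration.

```python
# Minimal unsupervised segmentation sketch: cluster per-pixel feature
# vectors with k-means so terrain classes emerge without labeled data.
import numpy as np
from sklearn.cluster import KMeans

H, W, F = 60, 80, 16                       # image size, feature dimension
features = np.random.rand(H, W, F)         # stand-in for CNN feature maps

flat = features.reshape(-1, F)
segments = KMeans(n_clusters=4, n_init=5).fit_predict(flat).reshape(H, W)
print(np.unique(segments, return_counts=True))
```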
6

A voting-merging clustering algorithm

Dimitriadou, Evgenia, Weingessel, Andreas, Hornik, Kurt January 1999 (has links) (PDF)
In this paper we propose an unsupervised voting-merging scheme that is capable of clustering data sets and of finding the number of clusters existing in them. The voting part of the algorithm allows us to combine several runs of clustering algorithms into a common partition. This helps us to overcome instabilities of the clustering algorithms and to improve the ability to find structures in a data set. Moreover, we develop a strategy to understand, analyze and interpret these results. In the second part of the scheme, a merging procedure starts from the clusters resulting from voting, in order to find the number of clusters in the data set. / Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
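A minimal sketch of the voting idea follows, assuming k-means as the base clusterer and label alignment via the Hungarian algorithm (the paper's exact voting rule may differ); the merging step is omitted.

```python
# Voting sketch: several k-means runs are aligned to a reference run with
# the Hungarian algorithm, and each point gets the majority label.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans

def vote(X, k=3, runs=10):
    labelings = [KMeans(n_clusters=k, n_init=1, random_state=r).fit_predict(X)
                 for r in range(runs)]
    ref = labelings[0]
    votes = np.zeros((len(X), k))
    for lab in labelings:
        conf = np.array([[np.sum((lab == i) & (ref == j)) for j in range(k)]
                         for i in range(k)])
        _, mapping = linear_sum_assignment(-conf)  # align labels to reference
        aligned = mapping[lab]
        votes[np.arange(len(X)), aligned] += 1
    return votes.argmax(axis=1), votes / runs      # common partition + vote shares

X = np.vstack([np.random.randn(40, 2) + c for c in ([0, 0], [5, 0], [0, 5])])
labels, shares = vote(X)
print(labels[:10], shares[0])
```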
7

Voting in clustering and finding the number of clusters

Dimitriadou, Evgenia, Weingessel, Andreas, Hornik, Kurt January 1999 (has links) (PDF)
In this paper we present an unsupervised algorithm which performs clustering on a given data set and which can also find the number of clusters existing in it. This algorithm consists of two techniques. The first, the voting technique, allows us to combine several runs of clustering algorithms, with the number of clusters predefined, into a common partition. We introduce the idea that an input point may belong to a structure with a certain degree of confidence, and may belong to more than one cluster with a certain degree of "belongingness". The second part consists of an index measure which receives the results of every voting process for different numbers of clusters and decides in favor of one. This algorithm is a complete clustering scheme which can be applied to any clustering method and to any type of data set. Moreover, it helps us to overcome instabilities of the clustering algorithms and to improve the ability of a clustering algorithm to find structures in a data set. / Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
8

Semantic interpretation with distributional analysis

Glass, Michael Robert 05 July 2012 (has links)
Unstructured text contains a wealth of knowledge; however, it is in a form unsuitable for reasoning. Semantic interpretation is the task of processing natural language text to create or extend a coherent, formal knowledgebase able to reason and support question answering. This task involves entity, event and relation extraction, co-reference resolution, and inference. Many domains, from intelligence data to bioinformatics, would benefit from semantic interpretation. But traditional approaches to the subtasks typically require a large annotated corpus specific to a single domain and ontology. This dissertation describes an approach to rapidly train a semantic interpreter using a set of seed annotations and a large, unlabeled corpus. Our approach adapts methods from paraphrase acquisition and automatic thesaurus construction to extend seed syntactic-to-semantic mappings using an automatically gathered, domain-specific, parallel corpus. During interpretation, the system uses joint probabilistic inference to select the most probable interpretation consistent with the background knowledge. We evaluate both the quality of the extended mappings and the performance of the semantic interpreter. / text
