1

Semi-supervised and active training of conditional random fields for activity recognition

Mahdaviani, Maryam 05 1900 (has links)
Automated human activity recognition has attracted increasing attention in the past decade. However, the application of machine learning and probabilistic methods for activity recognition problems has been studied only in the past couple of years. For the first time, this thesis explores the application of semi-supervised and active learning in activity recognition. We present a new and efficient semi-supervised training method for parameter estimation and feature selection in conditional random fields (CRFs), a probabilistic graphical model. In real-world applications such as activity recognition, unlabeled sensor traces are relatively easy to obtain whereas labeled examples are expensive and tedious to collect. Furthermore, the ability to automatically select a small subset of discriminatory features from a large pool can be advantageous in terms of computational speed as well as accuracy. We introduce the semi-supervised virtual evidence boosting (sVEB) algorithm for training CRFs — a semi-supervised extension to the recently developed virtual evidence boosting (VEB) method for feature selection and parameter learning. sVEB takes advantage of the unlabeled data via minimum entropy regularization. The objective function combines the unlabeled conditional entropy with the labeled conditional pseudo-likelihood. The sVEB algorithm reduces the overall system cost as well as the human labeling cost required during training, which are both important considerations in building real-world inference systems. Moreover, we propose an active learning algorithm for training CRFs that is based on virtual evidence boosting and uses entropy measures. Active virtual evidence boosting (aVEB) queries the user for the most informative examples, efficiently builds up labeled training examples, and incorporates unlabeled data as in sVEB. aVEB not only reduces the computational complexity of training CRFs as in sVEB, but also outputs more accurate classification results for the same fraction of labeled data. In a set of experiments we illustrate that our algorithms, sVEB and aVEB, benefit from both the use of unlabeled data and automatic feature selection, and outperform other semi-supervised and active training approaches. The proposed methods could also be extended and employed for other classification problems in relational data. / Science, Faculty of / Computer Science, Department of / Graduate
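The abstract's description of the training criterion suggests the following shape for the objective; this is an illustrative sketch only, and the trade-off weight α is an assumed hyperparameter rather than notation taken from the thesis:

```latex
% Sketch of a minimum-entropy-regularized training objective (maximized):
% labeled conditional pseudo-likelihood minus an entropy penalty that
% pushes the model toward confident predictions on unlabeled data.
\mathcal{L}(\theta) =
  \sum_{i \in \mathcal{D}_L} \log \tilde{p}\bigl(y_i \mid \mathbf{x}_i; \theta\bigr)
  \;-\; \alpha \sum_{j \in \mathcal{D}_U} H\bigl(p(y \mid \mathbf{x}_j; \theta)\bigr)
```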
2

Methods and applications of text-driven toponym resolution with indirect supervision

Speriosu, Michael Adrian 24 September 2013 (has links)
This thesis addresses the problem of toponym resolution. Given an ambiguous placename like Springfield in some natural language context, the task is to automatically predict the location on the Earth's surface the author is referring to. Many previous efforts use hand-built heuristics to attempt to solve this problem, looking for specific words in close proximity such as Springfield, Illinois, and disambiguating any remaining toponyms to possible locations close to those already resolved. Such approaches require the data to take a fairly specific form in order to perform well, and thus they often have low coverage. Some have applied machine learning to this task in an attempt to build more general resolvers, but acquiring large amounts of high-quality hand-labeled training material is difficult. I discuss these and other approaches found in previous work before presenting several new toponym resolvers that rely neither on hand-labeled training material prepared explicitly for this task nor on particular co-occurrences of toponyms in close proximity in the data to be disambiguated. Some of the resolvers I develop reflect the intuition of many heuristic resolvers that toponyms nearby in text tend to (but do not always) refer to locations nearby on Earth, but do not require toponyms to occur in direct sequence with one another. I also introduce several resolvers that use the predictions of a document geolocation system (i.e., one that predicts a location for a piece of text of arbitrary length) to inform toponym disambiguation. Another resolver probabilistically combines these document-level location predictions, knowledge of different administrative levels (country, state, city, etc.), and predictions from a logistic regression classifier trained on automatically extracted training instances from Wikipedia. It takes advantage of all content words in each toponym's context (both the local window and the whole document) rather than only toponyms. One resolver I build, which extracts training material for a machine-learned classifier from Wikipedia by taking advantage of link structure and geographic coordinates on articles, correctly resolves 83% of toponyms in a previously introduced corpus of news articles, beating the strong but simplistic population baseline. I introduce a corpus of Civil War-related writings not previously used for this task on which the population baseline does poorly; combining a Wikipedia-informed resolver with an algorithm that seeks to minimize the geographic scope of all predicted locations in a document achieves 86% blind test set accuracy on this dataset. After providing these high-performing resolvers, I lay the groundwork for more flexible and complex approaches by transforming the problem of toponym resolution into the traveling purchaser problem, modeling the probability of a location given its toponym's textual context and the geographic distribution of all locations mentioned in a document as two components of an objective function to be minimized. As one solution to this incarnation of the traveling purchaser problem, I simulate properties of ants traveling the globe and disambiguating toponyms. The ants' preferences for various kinds of behavior evolve over time, revealing underlying patterns in the corpora that other disambiguation methods do not account for. I also introduce several automated visualizations of texts that have had their toponyms resolved.
Given a resolved corpus, these visualizations summarize the areas of the globe mentioned and allow the user to refer back to specific passages in the text that mention a location of interest. One visualization presented automatically generates a dynamic tour of the corpus, showing changes in the area referred to by the text as it progresses. Such visualizations are an example of a practical application of work in toponym resolution, and could be used by scholars interested in the geographic connections in any collection of text on both broad and fine-grained levels. / text
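To make the joint objective described above concrete, the two components (per-toponym context evidence and geographic coherence, jointly minimized) can be sketched as follows; the dispersion term and the weight λ are illustrative assumptions, not the thesis's exact formulation:

```latex
% Joint disambiguation: assign a location l_t to each toponym t, trading
% per-toponym context evidence against the geographic spread of the
% document's predicted locations.
\hat{\mathbf{l}} = \arg\min_{l_1, \dots, l_n}
  \sum_{t=1}^{n} -\log P\bigl(l_t \mid \mathrm{context}_t\bigr)
  \;+\; \lambda \, \mathrm{dispersion}(l_1, \dots, l_n)
```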
3

Clustering of Questionnaire Based on Feature Extracted by Geometric Algebra

Tachibana, Kanta, Furuhashi, Takeshi, Yoshikawa, Tomohiro, Hitzer, Eckhard, MINH TUAN PHAM January 2008 (has links)
Session ID: FR-G2-2 / Joint 4th International Conference on Soft Computing and Intelligent Systems and 9th International Symposium on Advanced Intelligent Systems, September 17-21, 2008, Nagoya University, Nagoya, Japan
4

Semi-Supervised Training for Positioning of Welding Seams

Zhang, Wenbin 07 June 2021 (has links)
Supervised deep neural networks have been successfully applied to many real-world measurement applications. However, their success relies on labeled data, which is expensive and time-consuming to obtain, especially when domain expertise is required. For this reason, researchers have turned to semi-supervised learning for image classification tasks. Semi-supervised learning uses structural assumptions to automatically leverage unlabeled data, dramatically reducing manual labeling efforts. We conduct our research based on images from Enclosures Direct Inc. (EDI), a manufacturer of enclosures used to house and protect electronic devices. Their industrial robotics utilizes a computer vision system to guide a robot in a welding application employing a laser and a camera. The laser is combined with an optical line generator to cast a line of structured light across a joint to be welded. An image of the structured light is captured by the camera and must be located in the image in order to find the desired coordinate for the weld seam. The existing system failed because the traditional machine vision algorithm cannot analyze the image correctly under unexpected imaging conditions or during variations in the manufacturing process. In this thesis, we propose a novel algorithm for semi-supervised key-point detection for seam placement by a welding robot. Our deep-learning-based algorithm overcomes unfavorable imaging conditions, providing faster and more precise predictions. Moreover, we demonstrate that our approach can work with as few as ten labeled images, accepting a reduction in detection accuracy. In addition, we also propose a method that can utilize full image resolution to enhance the accuracy of the key-point detection.
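For context, the classical structured-light step the abstract refers to (locating the laser line in the image before finding the seam coordinate) is often a per-column centroid computation. The sketch below is a minimal illustration of that traditional baseline, not the thesis's deep-learning detector; the function and parameter names are hypothetical:

```python
import numpy as np

def laser_line_centers(img: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    """Estimate the laser line's row position in each image column via an
    intensity-weighted centroid. Minimal baseline sketch; real systems add
    smoothing and outlier rejection."""
    img = img.astype(float)
    img = np.where(img >= thresh * img.max(), img, 0.0)  # keep bright laser pixels
    rows = np.arange(img.shape[0])[:, None]              # column vector of row indices
    weights = img.sum(axis=0)                            # total brightness per column
    centers = (rows * img).sum(axis=0) / np.maximum(weights, 1e-9)
    centers[weights == 0] = np.nan                       # columns with no laser signal
    return centers
```

The weld-seam key point would then be sought where this row profile bends or breaks, which is precisely the step that fails under unfavorable imaging conditions and that the thesis's learned detector replaces.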
5

Réduction de la dimension multi-vue pour la biométrie multimodale / Multi-view dimensionality reduction for multi-modal biometrics

Zhao, Xuran 24 October 2013 (has links)
In most state-of-the-art biometric systems, biometric data are represented by high-dimensional feature vectors, and this high dimensionality gives rise to the curse of dimensionality. In multi-modal biometrics, different biometric modalities can form different inputs to the classification algorithms. Fusing the modalities remains a difficult problem and is generally treated in isolation from that of high dimensionality. This thesis addresses the high-dimensionality problem and the multi-modal fusion problem in a unified framework. Given a multi-modal biometric setting and abundant unlabeled data, we seek to extract discriminative features from multiple modalities in an unsupervised manner. The contributions of this thesis are: a survey of state-of-the-art MVDR algorithms; a new concept for MVDR, agreement of the data structure in the subspace; three new MVDR algorithms based on different definitions of subspace structure agreement; application of the proposed algorithms to semi-supervised classification, unsupervised clustering, and biometric data retrieval problems, in particular audio-visual person recognition; and application of the proposed algorithms to broader pattern recognition problems with non-biometric data, such as image and text clustering and retrieval. / Biometric data is often represented by high-dimensional feature vectors which contain significant inter-session variation. Discriminative dimensionality reduction techniques generally follow a supervised learning scheme. However, labelled training data is generally limited in quantity and often does not reliably represent the inter-session variation encountered in test data. This thesis proposes to use multi-view dimensionality reduction (MVDR), which aims to extract discriminative features in multi-modal biometric systems, where different modalities are regarded as different views of the same data. MVDR projections are trained on feature-feature pairs where label information is not required. Since unlabelled data is easier to acquire in large quantities, and because of the natural co-existence of multiple views in multi-modal biometric problems, discriminant, low-dimensional subspaces can be learnt using the proposed MVDR approaches in a largely unsupervised manner. According to the different functionalities of biometric systems, namely clustering and retrieval, we propose three MVDR frameworks which meet the requirements of each functionality. The proposed approaches, however, share the same spirit: all methods aim to learn a projection for each view such that a certain form of agreement is attained in the subspaces across different views. The proposed MVDR frameworks can thus be unified into one general framework for multi-view dimensionality reduction through subspace agreement. We regard this novel concept of subspace agreement to be the primary contribution of this thesis.
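One common way to formalize subspace agreement between two views, in the spirit of canonical correlation analysis, is sketched below; the thesis defines three distinct agreement criteria, so this is only one illustrative instance:

```latex
% Two-view agreement: learn per-view projections W_1, W_2 so that the
% projected views coincide, with scaling constraints ruling out the
% trivial all-zero solution.
\min_{W_1, W_2} \left\| X_1 W_1 - X_2 W_2 \right\|_F^2
\quad \text{s.t.} \quad W_v^{\top} X_v^{\top} X_v W_v = I, \; v \in \{1, 2\}
```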
6

Semi-Supervised Anomaly Detection and Heterogeneous Covariance Estimation for Gaussian Processes

Crandell, Ian C. 12 December 2017 (has links)
In this thesis, we propose a statistical framework for estimating correlation between sensor systems measuring diverse physical phenomena. We consider systems that measure at different temporal frequencies and measure responses with different dimensionalities. Our goal is to provide estimates of correlation between all pairs of sensors and use this information to flag potentially anomalous readings. Our anomaly detection method consists of two primary components: dimensionality reduction through projection and Gaussian process (GP) regression. We use non-metric multidimensional scaling to project a partially observed and potentially non-definite covariance matrix into a low dimensional manifold. The projection is estimated in such a way that positively correlated sensors are close to each other and negatively correlated sensors are distant. We then fit a Gaussian process given these positions and use it to make predictions at our observed locations. Because of the large amount of data we wish to consider, we develop methods to scale GP estimation by taking advantage of the replication structure in the data. Finally, we introduce a semi-supervised method to incorporate expert input into a GP model. We are able to learn a probability surface defined over locations and responses based on sets of points labeled by an analyst as either anomalous or nominal. This allows us to discount the influence of points resembling anomalies without removing them based on a threshold. / Ph. D.
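A loose sketch of the pipeline described above (embed a correlation matrix with non-metric MDS, then fit a GP over the embedded positions and flag large standardized residuals) might look as follows; the toy data, kernel choice, noise level, and 3-sigma threshold are all assumptions, and this ignores the thesis's scalability and expert-labeling machinery:

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy stand-in for a sensor-sensor correlation matrix: symmetric, unit
# diagonal. (The thesis handles partially observed, non-definite matrices.)
A = rng.uniform(-1, 1, (8, 8))
corr = (A + A.T) / 2
np.fill_diagonal(corr, 1.0)

# Non-metric MDS embedding: positively correlated sensors end up close,
# negatively correlated ones far apart.
dissim = 1.0 - corr  # 0 for perfect correlation, 2 for perfect anti-correlation
coords = MDS(n_components=2, metric=False,
             dissimilarity="precomputed", random_state=0).fit_transform(dissim)

# Fit a GP over the embedded positions; alpha > 0 adds observation noise
# so the fit does not interpolate exactly and residuals stay meaningful.
y = rng.standard_normal(8)  # placeholder sensor readings
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.1).fit(coords, y)
mu, sd = gp.predict(coords, return_std=True)

# Flag readings far outside the GP's predictive band as candidate anomalies.
anomalous = np.abs(y - mu) > 3 * np.maximum(sd, 1e-9)
print(np.where(anomalous)[0])
```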
7

Active learning via Transduction in Regression Forests

Hansson, Kim, Hörlin, Erik January 2015 (has links)
Context. The amount of training data required to build accurate models is a common problem in machine learning. Active learning is a technique that tries to reduce the amount of required training data by making active choices of which training data holds the greatest value.
Objectives. This thesis aims to design, implement and evaluate the Random Forests algorithm combined with active learning, suited to predictive tasks with real-valued outcomes where the amount of training data is small. Machine learning algorithms traditionally require large amounts of training data to create a general model, and training data is in many cases sparse and expensive or difficult to create.
Methods. The research methods used for this thesis are implementation and scientific experiment. An approach to active learning was implemented based on previous work for classification-type problems. The approach uses the Mahalanobis distance to perform active learning via transduction. Evaluation was done using several data sets where the decrease in prediction error was measured over several iterations. The results of the evaluation were then analyzed using nonparametric statistical testing.
Results. The statistical analysis of the evaluation results failed to detect a difference between our approach and a non-active-learning approach, even though the proposed algorithm showed irregular performance. The evaluation of our tree-based traversal method and the evaluation of the Mahalanobis distance for transduction both showed that these methods performed better than Euclidean distance and complete graph traversal.
Conclusions. We conclude that the proposed solution did not decrease the amount of required training data to a significant degree. However, the approach has potential, and future work could lead to a working active learning solution. Further work is needed on key areas of the implementation, such as the choice of instances for active learning through transduction uncertainty, as well as the choice of method for going from a transduction model to an induction model.
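As an illustration of using the Mahalanobis distance to pick the next instance to label, the sketch below selects the unlabeled point farthest from the labeled data's distribution; this is a generic uncertainty proxy, not the thesis's forest-based transduction procedure, and all names are hypothetical:

```python
import numpy as np

def next_query(X_lab: np.ndarray, X_unlab: np.ndarray) -> int:
    """Return the index of the unlabeled instance with the largest
    Mahalanobis distance to the labeled set, as a simple proxy for
    transductive uncertainty."""
    mu = X_lab.mean(axis=0)
    cov = np.cov(X_lab, rowvar=False) + 1e-6 * np.eye(X_lab.shape[1])  # regularize
    inv = np.linalg.inv(cov)
    diffs = X_unlab - mu
    d2 = np.einsum("ij,jk,ik->i", diffs, inv, diffs)  # squared distances
    return int(np.argmax(d2))
```

In an active learning loop, the selected instance would be labeled by the oracle, moved into X_lab, and the model retrained before the next query.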
8

Scalable semi-supervised grammar induction using cross-linguistically parameterized syntactic prototypes

Boonkwan, Prachya January 2014 (has links)
This thesis is about the task of unsupervised parser induction: automatically learning grammars and parsing models from raw text. We endeavor to induce such parsers by observing sequences of terminal symbols. We focus on overcoming the problem of frequent collocation, which is a major source of error in grammar induction. For example, since a verb and a determiner tend to co-occur in a verb phrase, the probability of attaching the determiner to the verb is sometimes higher than that of attaching the core noun to the verb, resulting in the erroneous attachment *((Verb Det) Noun) instead of (Verb (Det Noun)). Although frequent collocation is at the heart of grammar induction, it can also distort the grammar distribution. Natural language grammars follow a Zipfian (power-law) distribution, where the frequency of any grammar rule is inversely proportional to its rank in the frequency table. We believe that covering the most frequent grammar rules in grammar induction will have a strong impact on accuracy. We propose an efficient approach to grammar induction guided by cross-linguistic language parameters. Our language parameters consist of 33 parameters of frequent basic word orders, which are easy to elicit from grammar compendiums or short interviews with naïve language informants. These parameters are designed to capture frequent word orders in the Zipfian distribution of natural language grammars, while the rest of the grammar, including exceptions, can be automatically induced from unlabeled data. The language parameters shrink the search space of the grammar induction problem by exploiting both word order information and predefined attachment directions. The contribution of this thesis is three-fold. (1) We show that the language parameters are adequately generalizable cross-linguistically, as our grammar induction experiments are carried out on 14 languages on top of a simple unsupervised grammar induction system. (2) Our specification of language parameters improves the accuracy of unsupervised parsing even when the parser is exposed to much less frequent linguistic phenomena in longer sentences, where accuracy decreases by less than 10%. (3) We investigate the prevalent factors behind errors in grammar induction, which provides room for accuracy improvement. The proposed language parameters efficiently cope with the most frequent grammar rules in natural languages. With only 10 man-hours for preparing syntactic prototypes, they improve the accuracy of directed dependency recovery over Gillenwater et al.'s (2010) state-of-the-art completely unsupervised parser in: (1) Chinese by 30.32% (2) Swedish by 28.96% (3) Portuguese by 37.64% (4) Dutch by 15.17% (5) German by 14.21% (6) Spanish by 13.53% (7) Japanese by 13.13% (8) English by 12.41% (9) Czech by 9.16% (10) Slovene by 7.24% (11) Turkish by 6.72% and (12) Bulgarian by 5.96%. It is noted that although the directed dependency accuracies of some languages are below 60%, their TEDEVAL scores are still satisfactory (approximately 80%). This suggests that our parsed trees are, in fact, closely related to the gold-standard trees despite the discrepancy of annotation schemes. We perform an error analysis covering both over- and under-generation. We found three prevalent problems that cause errors in the experiments: (1) PP attachment (2) discrepancies between dependency annotation schemes and (3) rich morphology. The methods presented in this thesis were originally presented in Boonkwan and Steedman (2011).
The thesis presents a great deal more detail in the design of cross-linguistic language parameters, the lexicon inventory construction algorithm, experimental results, and error analysis.
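For reference, the Zipfian relationship the abstract appeals to is the standard power law below; the exponent s (close to 1 in the classic formulation) is generic, not a value estimated in the thesis:

```latex
% Zipf's law: a rule's frequency is inversely proportional to its rank r.
f(r) \propto \frac{1}{r^{s}}, \qquad s \approx 1
```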
