21

Efficient training and feature induction in sequential supervised learning /

Hao, Guohua. January 1900 (has links)
Thesis (Ph. D.)--Oregon State University, 2010. / Printout. Includes bibliographical references (leaves 82-87). Also available on the World Wide Web.
22

Knowledge transfer techniques for dynamic environments

Rajan, Suju, January 1900 (has links) (PDF)
Thesis (Ph. D.)--University of Texas at Austin, 2006. / Vita. Includes bibliographical references.
23

Semi-supervised and active training of conditional random fields for activity recognition

Mahdaviani, Maryam 05 1900 (has links)
Automated human activity recognition has attracted increasing attention in the past decade. However, the application of machine learning and probabilistic methods to activity recognition problems has been studied only in the past couple of years. For the first time, this thesis explores the application of semi-supervised and active learning in activity recognition. We present a new and efficient semi-supervised training method for parameter estimation and feature selection in conditional random fields (CRFs), a probabilistic graphical model. In real-world applications such as activity recognition, unlabeled sensor traces are relatively easy to obtain, whereas labeled examples are expensive and tedious to collect. Furthermore, the ability to automatically select a small subset of discriminatory features from a large pool can be advantageous in terms of computational speed as well as accuracy. We introduce the semi-supervised virtual evidence boosting (sVEB) algorithm for training CRFs — a semi-supervised extension to the recently developed virtual evidence boosting (VEB) method for feature selection and parameter learning. sVEB takes advantage of the unlabeled data via minimum entropy regularization. The objective function combines the unlabeled conditional entropy with the labeled conditional pseudo-likelihood. The sVEB algorithm reduces the overall system cost as well as the human labeling cost required during training, which are both important considerations in building real-world inference systems. Moreover, we propose an active learning algorithm for training CRFs that is based on virtual evidence boosting and uses entropy measures. Active virtual evidence boosting (aVEB) queries the user for the most informative examples, efficiently builds up labeled training examples, and incorporates unlabeled data as in sVEB. aVEB not only reduces the computational complexity of training CRFs as in sVEB, but also outputs more accurate classification results for the same fraction of labeled data. In a set of experiments we illustrate that our algorithms, sVEB and aVEB, benefit from both the use of unlabeled data and automatic feature selection, and outperform other semi-supervised and active training approaches. The proposed methods could also be extended and employed for other classification problems in relational data. / Science, Faculty of / Computer Science, Department of / Graduate
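As a rough sketch of the kind of objective described above (not the exact sVEB boosting formulation, and with assumed trade-off weights), a minimum-entropy-regularized semi-supervised training criterion can be written as:

```latex
\mathcal{L}(\theta) = \sum_{i \in \mathcal{D}_L} \log p(y_i \mid \mathbf{x}_i; \theta)
\;-\; \alpha \sum_{j \in \mathcal{D}_U} H\big(p(y \mid \mathbf{x}_j; \theta)\big)
\;-\; \beta \lVert \theta \rVert^2
```

where $\mathcal{D}_L$ and $\mathcal{D}_U$ are the labeled and unlabeled sets, $H$ is the conditional entropy, and $\alpha$, $\beta$ are assumed regularization weights. Maximizing this objective rewards confident predictions on unlabeled sequences while still fitting the labeled ones.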
24

Semi-Supervised Training for Positioning of Welding Seams

Zhang, Wenbin 07 June 2021 (has links)
Supervised deep neural networks have been successfully applied to many real-world measurement applications. However, their success relies on labeled data, which is expensive and time-consuming to obtain, especially when domain expertise is required. For this reason, researchers have turned to semi-supervised learning for image classification tasks. Semi-supervised learning uses structural assumptions to automatically leverage unlabeled data, dramatically reducing manual labeling effort. We conduct our research on images from Enclosures Direct Inc. (EDI), a manufacturer of enclosures used to house and protect electronic devices. Their industrial robotic system uses a computer vision setup, consisting of a laser and a camera, to guide a robot in a welding application. The laser is combined with an optical line generator to cast a line of structured light across the joint to be welded. The camera captures an image of the structured light, which must then be located in the image in order to find the desired coordinate for the weld seam. The existing system fails because the traditional machine vision algorithm cannot analyze the image correctly under unexpected imaging conditions or during variations in the manufacturing process. In this thesis, we propose a novel algorithm for semi-supervised key-point detection for seam placement by a welding robot. Our deep learning based algorithm overcomes unfavorable imaging conditions, providing faster and more precise predictions. Moreover, we demonstrate that our approach can work with as few as ten labeled images, at the cost of some detection accuracy. In addition, we also propose a method that utilizes the full image resolution to enhance the accuracy of the key-point detection.
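The abstract does not spell out the training mechanics, so the following is only a hedged illustration of one common semi-supervised structural assumption for key-point detection (consistency of predicted heat maps under small perturbations of unlabeled images), not the thesis's actual algorithm; the tiny network, the noise perturbation, and the weight `lam` are placeholders.

```python
import torch
import torch.nn as nn

# Placeholder heat-map regressor: grayscale image in, 1-channel key-point heat map out.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

def train_step(x_lab, y_lab, x_unlab, jitter=0.05, lam=0.5):
    """One hypothetical step: supervised heat-map loss on the few labeled images,
    plus a consistency loss asking perturbed unlabeled images to yield the same
    prediction as their clean versions."""
    sup = mse(model(x_lab), y_lab)
    perturbed = x_unlab + jitter * torch.randn_like(x_unlab)  # cheap photometric perturbation
    cons = mse(model(perturbed), model(x_unlab).detach())
    loss = sup + lam * cons
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```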
25

Multi-view Dimensionality Reduction for Multi-modal Biometrics

Zhao, Xuran 24 October 2013 (has links)
In most state-of-the-art biometric systems, biometric data are represented by high-dimensional feature vectors, which gives rise to the curse of dimensionality. In multi-modal biometrics, different biometric modalities form different inputs to the classification algorithms; fusing these modalities remains a difficult problem and is usually treated separately from the problem of high dimensionality. This thesis addresses high dimensionality and multi-modal fusion in a unified framework. Given a multi-modal biometric setting and abundant unlabeled data, we seek to extract discriminative features from multiple modalities in an unsupervised manner. The contributions of this thesis are: a review of state-of-the-art MVDR algorithms; a new concept for MVDR, namely agreement of the data structure across subspaces; three new MVDR algorithms based on different definitions of subspace structure agreement; the application of the proposed algorithms to semi-supervised classification, clustering, and biometric retrieval problems, in particular audio-visual person recognition; and the application of the proposed algorithms to broader pattern recognition problems on non-biometric data, such as image and text clustering and retrieval. / Biometric data is often represented by high-dimensional feature vectors which contain significant inter-session variation. Discriminative dimensionality reduction techniques generally follow a supervised learning scheme. However, labelled training data is generally limited in quantity and often does not reliably represent the inter-session variation encountered in test data. This thesis proposes to use multi-view dimensionality reduction (MVDR), which aims to extract discriminative features in multi-modal biometric systems, where different modalities are regarded as different views of the same data. MVDR projections are trained on feature-feature pairs where label information is not required. Since unlabelled data is easier to acquire in large quantities, and because of the natural co-existence of multiple views in multi-modal biometric problems, discriminant, low-dimensional subspaces can be learnt using the proposed MVDR approaches in a largely unsupervised manner. According to different functionalities of biometric systems, namely classification, clustering, and retrieval, we propose three MVDR frameworks which meet the requirements for each functionality. The proposed approaches, however, share the same spirit: all methods aim to learn a projection for each view such that a certain form of agreement is attained in the subspaces across different views. The proposed MVDR frameworks can thus be unified into one general framework for multi-view dimensionality reduction through subspace agreement. We regard this novel concept of subspace agreement to be the primary contribution of this thesis.
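As a concrete and intentionally simple instance of the subspace-agreement idea, classical canonical correlation analysis learns one projection per view so that the projected views are maximally correlated; the sketch below is only a baseline illustration under that reading, not one of the thesis's proposed algorithms.

```python
import numpy as np

def cca_first_directions(Xa, Xb, reg=1e-3):
    """Classical CCA: find one projection per view so the projected, zero-mean
    views agree (are maximally correlated). Xa, Xb are (n_samples x d_a / d_b)
    feature matrices for two modalities, e.g. audio and video descriptors."""
    Xa = Xa - Xa.mean(axis=0)
    Xb = Xb - Xb.mean(axis=0)
    n = Xa.shape[0]
    Caa = Xa.T @ Xa / n + reg * np.eye(Xa.shape[1])   # regularized within-view covariances
    Cbb = Xb.T @ Xb / n + reg * np.eye(Xb.shape[1])
    Cab = Xa.T @ Xb / n                               # cross-view covariance
    # Whiten each view, then take the top singular pair of the cross-covariance.
    ua, sa, vat = np.linalg.svd(Caa)
    ub, sb, vbt = np.linalg.svd(Cbb)
    Caa_isqrt = ua @ np.diag(sa ** -0.5) @ vat
    Cbb_isqrt = ub @ np.diag(sb ** -0.5) @ vbt
    U, S, Vt = np.linalg.svd(Caa_isqrt @ Cab @ Cbb_isqrt)
    wa = Caa_isqrt @ U[:, 0]      # projection direction for view A
    wb = Cbb_isqrt @ Vt[0, :]     # projection direction for view B
    return wa, wb, S[0]           # S[0] is the top canonical correlation
```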
26

Multilingual Word Sense Disambiguation Using Wikipedia

Dandala, Bharath 08 1900 (has links)
Ambiguity is inherent to human language. In particular, word sense ambiguity is prevalent in all natural languages, with a large number of the words in any given language carrying more than one meaning. Word sense disambiguation is the task of automatically assigning the most appropriate meaning to a polysemous word within a given context. Generally, the problem of resolving ambiguity in the literature has revolved around the famous quote “you shall know the meaning of the word by the company it keeps.” In this thesis, we investigate the role of context in resolving ambiguity through three different approaches. Instead of using a predefined monolingual sense inventory such as WordNet, we use a language-independent framework where the word senses and sense-tagged data are derived automatically from Wikipedia. Using Wikipedia as a source of sense annotations provides a much-needed solution to the knowledge acquisition bottleneck. In order to evaluate the viability of Wikipedia-based sense annotations, we cast the task of disambiguating polysemous nouns as a monolingual classification task and experimented on lexical samples from four different languages (viz. English, German, Italian and Spanish). The experiments confirm that the Wikipedia-based sense annotations are reliable and can be used to construct accurate monolingual sense classifiers. It has long been believed that exploiting multiple languages helps in building accurate word sense disambiguation systems. Subsequently, we developed two approaches that recast the task of disambiguating polysemous nouns as a multilingual classification task. The first approach to multilingual word sense disambiguation attempts to effectively use a machine translation system to leverage two relevant multilingual aspects of the semantics of text. First, the various senses of a target word may be translated into different words, which constitute a unique, yet highly salient, signal that effectively expands the target word’s feature space. Second, the translated context words themselves embed co-occurrence information that a translation engine gathers from very large parallel corpora. The second approach to multilingual word sense disambiguation attempts to reduce the reliance on the machine translation system during training by using the multilingual knowledge available in Wikipedia through its interlingual links. Finally, the experiments on a lexical sample from four different languages confirm that the multilingual systems perform better than the monolingual system and significantly improve the disambiguation accuracy.
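To make the translation-based idea concrete, here is a small, hedged sketch (not the thesis's pipeline): translated context words are appended to the original context with a language prefix, so a plain bag-of-words classifier can exploit co-occurrence signals from both languages. The senses, contexts, and German translations below are invented for illustration, and the translations are assumed to come from some external machine translation step.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def multilingual_features(context_en, translations):
    """Concatenate the English context with its translations, prefixing each
    translated token with a language tag so signals from different languages
    stay distinguishable in the bag-of-words space."""
    parts = [context_en]
    for lang, text in translations.items():            # e.g. {"de": "Ufer Erosion Wasser"}
        parts.append(" ".join(f"{lang}_{tok}" for tok in text.split()))
    return " ".join(parts)

# Invented sense-tagged examples for the ambiguous noun "bank".
train_docs = [
    multilingual_features("river bank erosion water",   {"de": "Ufer Erosion Wasser"}),
    multilingual_features("bank account interest loan", {"de": "Bank Konto Zinsen Kredit"}),
]
train_senses = ["bank (geography)", "bank (finance)"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_docs, train_senses)
test = multilingual_features("deposit money at the bank", {"de": "Geld bei der Bank einzahlen"})
print(clf.predict([test]))
```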
27

Robust Approaches for Learning with Noisy Labels

Lu, Yangdi January 2022 (has links)
Deep neural networks (DNNs) have achieved remarkable success in data-intensive applications, but this success relies heavily on massive and carefully labeled data. In practice, obtaining large-scale datasets with correct labels is often expensive, time-consuming, and sometimes even impossible. Common approaches to constructing datasets involve some degree of error-prone processing, such as automatic labeling or crowdsourcing, which inherently introduces noisy labels. It has been observed that noisy labels severely degrade the generalization performance of classifiers, especially overparameterized (deep) neural networks. Therefore, studying noisy labels and developing techniques for training accurate classifiers in their presence is of great practical significance. In this thesis, we conduct a thorough study to fully understand learning with noisy labels (LNL) and provide a comprehensive error decomposition to reveal its core issue. We then point out that the core issue in LNL is that the empirical risk minimizer is unreliable, i.e., DNNs are prone to overfitting noisy labels during training. To reduce the learning errors, we propose five different methods: 1) Co-matching: a framework consisting of two networks that prevents the model from memorizing noisy labels; 2) SELC: a simple method that progressively corrects noisy labels and refines the model; 3) NAL: a regularization method that automatically distinguishes mislabeled samples and prevents the model from memorizing them; 4) EM-enhanced loss: a family of robust loss functions that not only mitigates the influence of noisy labels but also avoids the underfitting problem; 5) MixNN: a framework that trains the model with new synthetic samples to mitigate the impact of noisy labels. Our experimental results demonstrate that the proposed approaches achieve comparable or better performance than the state-of-the-art approaches on benchmark datasets with simulated label noise and large-scale datasets with real-world label noise. / Dissertation / Doctor of Philosophy (PhD) / Machine learning has been highly successful in data-intensive applications but is often hampered when datasets contain noisy labels. Recently, Learning with Noisy Labels (LNL) has been proposed to tackle this problem. By using techniques from LNL, models can still generalize well even when trained on data containing noisy supervised information. In this thesis, we study this crucial problem and provide a comprehensive analysis to reveal the core issue of LNL. We then propose five different methods to effectively reduce the learning errors in LNL. We show that our approaches achieve comparable or better performance than the state-of-the-art approaches on benchmark datasets with simulated label noise and real-world noisy datasets.
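As an illustration of what progressive label correction can look like in practice, the following is a hedged sketch in the spirit of the self-correction idea mentioned above; the abstract does not give the exact SELC update, so the blending rule, the weight alpha, and the warm-up assumption are all placeholders rather than the thesis's method.

```python
import torch
import torch.nn.functional as F

def corrected_targets(soft_targets, logits, alpha=0.9):
    """After an assumed warm-up phase, blend the (possibly noisy) running soft
    targets with the model's current predictions, so samples the model is
    confident were mislabeled gradually drift toward the model's own belief."""
    with torch.no_grad():
        preds = F.softmax(logits, dim=1)
    return alpha * soft_targets + (1.0 - alpha) * preds

def soft_cross_entropy(logits, soft_targets):
    """Cross-entropy against the (corrected) soft targets."""
    return -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```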
28

Benchmarking Methods For Predicting Phenotype Gene Associations

Tyagi, Tanya 16 September 2020 (has links)
Assigning human genes to diseases and related phenotypes is an important topic in modern genomics. The Human Phenotype Ontology (HPO) is a standardized vocabulary of phenotypic abnormalities that occur in human diseases. Computational methods such as label propagation and supervised learning address challenges posed by traditional approaches, such as manual curation, for linking genes to phenotypes in the HPO. It is only in recent years that computational methods have been applied in a network-based approach for assigning genes to disease-related phenotypes. In this thesis, we present an extensive benchmarking of various computational methods for the task of network-based gene classification. These methods are evaluated on multiple protein interaction networks and feature representations. We empirically evaluate the performance of multiple prediction tasks using two evaluation experiments: cross-validation and the more stringent temporal holdout. We demonstrate that all of the prediction methods considered in our benchmarking analysis have similar performance, with each of the methods outperforming a random predictor. / Master of Science / For many years biologists have been working towards studying diseases, characterizing disease history, and identifying which factors and genetic variants lead to disease. Such studies are critical to advancing the prognosis of diseases and identifying targeted treatment plans to cure them. An important characteristic of diseases is that they can be expressed by a set of phenotypes. Phenotypes are defined as observable characteristics or traits of an organism, such as height and the color of the eyes and hair. In the context of diseases, the phenotypes that describe diseases are referred to as clinical phenotypes, with some examples being short stature, abnormal hair pattern, etc. Biologists have identified the importance of deep phenotyping, which is defined as a concise analysis that gathers information about diseases and their observed traits in humans, in finding genetic variants underlying human diseases. We make use of the Human Phenotype Ontology (HPO), a standardized vocabulary of phenotypic abnormalities that occur in human diseases. The HPO provides relationships between phenotypes as well as associations between phenotypes and genes. In our study, we perform a systematic benchmarking to evaluate different types of computational approaches for the task of phenotype-gene prediction, across multiple molecular networks, using various feature representations and multiple evaluation strategies.
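For orientation, one generic network-based method that such a benchmark would typically include is plain label propagation over a protein interaction network; the sketch below is a standard formulation under that assumption, not code from the thesis.

```python
import numpy as np

def propagate_labels(A, Y, alpha=0.85, iters=50):
    """Diffuse known gene-phenotype annotations over an interaction network.
    A: (n x n) adjacency matrix of the protein interaction network.
    Y: (n x k) binary matrix of known gene-to-phenotype annotations.
    Returns an (n x k) matrix of propagated association scores."""
    d = A.sum(axis=1).astype(float)
    d[d == 0] = 1.0                                   # guard isolated genes
    S = A / np.sqrt(np.outer(d, d))                   # symmetric normalization D^-1/2 A D^-1/2
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y         # diffuse, then re-anchor known labels
    return F
```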
29

Semi-Supervised Anomaly Detection and Heterogeneous Covariance Estimation for Gaussian Processes

Crandell, Ian C. 12 December 2017 (has links)
In this thesis, we propose a statistical framework for estimating correlation between sensor systems measuring diverse physical phenomena. We consider systems that measure at different temporal frequencies and measure responses with different dimensionalities. Our goal is to provide estimates of correlation between all pairs of sensors and use this information to flag potentially anomalous readings. Our anomaly detection method consists of two primary components: dimensionality reduction through projection and Gaussian process (GP) regression. We use non-metric multidimensional scaling to project a partially observed and potentially non-definite covariance matrix into a low-dimensional manifold. The projection is estimated in such a way that positively correlated sensors are close to each other and negatively correlated sensors are distant. We then fit a Gaussian process given these positions and use it to make predictions at our observed locations. Because of the large amount of data we wish to consider, we develop methods to scale GP estimation by taking advantage of the replication structure in the data. Finally, we introduce a semi-supervised method to incorporate expert input into a GP model. We are able to learn a probability surface defined over locations and responses based on sets of points labeled by an analyst as either anomalous or nominal. This allows us to discount the influence of points resembling anomalies without removing them based on a threshold. / Ph. D.
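A minimal sketch of the projection-then-regression pipeline described above, under assumed details (toy data, an RBF kernel, a simple 1 − correlation dissimilarity, and an added noise term), illustrating how correlated sensors end up close in the embedding and how GP residuals could be used for flagging:

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
corr = np.corrcoef(rng.normal(size=(6, 200)))   # toy 6x6 sensor correlation matrix
dissim = 1.0 - corr                             # correlated sensors -> small dissimilarity

# Non-metric MDS embeds the sensors so that correlated ones sit close together.
coords = MDS(n_components=2, metric=False,
             dissimilarity="precomputed", random_state=0).fit_transform(dissim)

# Fit a GP over the embedded sensor positions (alpha adds observation noise)
# and score each reading against the GP's prediction.
readings = rng.normal(size=6)                   # one toy reading per sensor
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2).fit(coords, readings)
mean, std = gp.predict(coords, return_std=True)
z_scores = np.abs(readings - mean) / std        # large values suggest anomalous readings
```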
30

Supervision Beyond Manual Annotations for Learning Visual Representations

Doersch, Carl 01 April 2016 (has links)
For both humans and machines, understanding the visual world requires relating new percepts with past experience. We argue that a good visual representation for an image should encode what makes it similar to other images, enabling the recall of associated experiences. Current machine implementations of visual representations can capture some aspects of similarity, but fall far short of human ability overall. Even if one explicitly labels objects in millions of images to tell the computer what should be considered similar—a very expensive procedure—the labels still do not capture everything that might be relevant. This thesis shows that one can often train a representation which captures similarity beyond what is labeled in a given dataset. That means we can begin with a dataset that has uninteresting labels, or no labels at all, and still build a useful representation. To do this, we propose using pretext tasks: tasks that are not useful in and of themselves, but serve as an excuse to learn a more general-purpose representation. The labels for a pretext task can be inexpensive or even free. Furthermore, since this approach assumes training labels differ from the desired outputs, it can handle output spaces where the correct answer is ambiguous, and therefore impossible to annotate by hand. The thesis explores two broad classes of supervision. The first is weak image-level supervision, which is exploited to train mid-level discriminative patch classifiers. For example, given a dataset of street-level imagery labeled only with GPS coordinates, patch classifiers are trained to differentiate one specific geographical region (e.g. the city of Paris) from others. The resulting classifiers each automatically collect and associate a set of patches which all depict the same distinctive architectural element. In this way, we can learn to detect elements like balconies, signs, and lamps without annotations. The second type of supervision requires no information about images other than the pixels themselves. Instead, the algorithm is trained to predict the context around image patches. The context serves as a sort of weak label: to predict well, the algorithm must associate similar-looking patches which also have similar contexts. After training, the feature representation learned using this within-image context indeed captures visual similarity across images, which ultimately makes it useful for real tasks like object detection and geometry estimation.
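A heavily simplified sketch of a context-prediction pretext task of the kind described above (the relative-position flavor): sample a central patch and one of its eight neighbors, and train a small network to predict which neighbor it was. The tiny encoder, patch size, and optimizer settings are placeholders rather than the thesis architecture; the input is assumed to be a 3-channel tensor of at least 96 x 96 pixels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder patch encoder and pair classifier.
encoder = nn.Sequential(nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(32, 8)          # two 16-d patch embeddings, 8 relative positions
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)

def pretext_step(image, patch=32):
    """One training step: the relative position of the second patch is the free label."""
    _, H, W = image.shape
    y, x = H // 2 - patch // 2, W // 2 - patch // 2          # corner of the centre patch
    offsets = [(-patch, -patch), (-patch, 0), (-patch, patch), (0, -patch),
               (0, patch), (patch, -patch), (patch, 0), (patch, patch)]
    label = torch.randint(0, 8, (1,))
    dy, dx = offsets[label.item()]
    p1 = image[:, y:y + patch, x:x + patch].unsqueeze(0)
    p2 = image[:, y + dy:y + dy + patch, x + dx:x + dx + patch].unsqueeze(0)
    logits = head(torch.cat([encoder(p1), encoder(p2)], dim=1))
    loss = F.cross_entropy(logits, label)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```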
