21

Deep Domain Fusion for Adaptive Image Classification

January 2019 (has links)
Endowing machines with the ability to understand digital images is a critical task for a host of high-impact applications, including pathology detection in radiographic imaging, autonomous vehicles, and assistive technology for the visually impaired. Computer vision systems rely on large corpora of annotated data in order to train task-specific visual recognition models. Despite significant advances made over the past decade, the fact remains that collecting and annotating the data needed to successfully train a model is a prohibitively expensive endeavor. Moreover, these models are prone to rapid performance degradation when applied to data sampled from a different domain. Recent works in the development of deep adaptation networks seek to overcome these challenges by facilitating transfer learning between source and target domains. In parallel, the unification of dominant semi-supervised learning techniques has illustrated unprecedented potential for utilizing unlabeled data to train classification models in defiance of discouragingly meager sets of annotated data. In this thesis, a novel domain adaptation algorithm -- Domain Adaptive Fusion (DAF) -- is proposed, which encourages a domain-invariant linear relationship between the pixel-space of different domains and the prediction-space while being trained under a domain adversarial signal. The thoughtful combination of key components from unsupervised domain adaptation and semi-supervised learning enables DAF to effectively bridge the gap between source and target domains. Experiments performed on computer vision benchmark datasets for domain adaptation endorse the efficacy of this hybrid approach, outperforming all of the baseline architectures on most of the transfer tasks. / Dissertation/Thesis / Masters Thesis Computer Science 2019
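The abstract describes training under a domain adversarial signal. As a rough illustration of what such a signal looks like in practice, the following sketch implements a DANN-style gradient-reversal setup in PyTorch; the network sizes, loss weighting, and toy batches are assumptions for illustration and are not the DAF architecture itself.

# Hedged sketch: a DANN-style domain-adversarial setup of the kind DAF builds on.
# Module names, sizes, and the loss weighting are illustrative assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

features = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 256), nn.ReLU())
label_head = nn.Linear(256, 10)      # task classifier (source labels)
domain_head = nn.Linear(256, 2)      # domain discriminator (source vs. target)
opt = torch.optim.Adam(list(features.parameters())
                       + list(label_head.parameters())
                       + list(domain_head.parameters()), lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(xs, ys, xt, lam=0.1):
    """One adversarial step: classify source images, confuse the domain head."""
    fs, ft = features(xs), features(xt)
    cls_loss = ce(label_head(fs), ys)
    dom_in = torch.cat([fs, ft])
    dom_lbl = torch.cat([torch.zeros(len(fs), dtype=torch.long),
                         torch.ones(len(ft), dtype=torch.long)])
    dom_loss = ce(domain_head(GradReverse.apply(dom_in, lam)), dom_lbl)
    loss = cls_loss + dom_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy usage with random tensors standing in for source/target batches.
xs, ys = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
xt = torch.randn(8, 3, 32, 32)
print(train_step(xs, ys, xt))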
22

Exploring Design Discussions With Semi-Supervised Topic Modelling

Lasrado, Roshan N. 11 August 2022 (has links)
Stack Overflow is a rich source of questions and answers—discussions—about software development. One topic of discussion is software design, such as the correct use of design patterns or best practices in data access. Since design is a more abstract topic in software engineering, researchers have long sought to characterize and model design knowledge. However, these approaches typically require significant expert input to contextualize the abstract design information. In this study, we explore how combining expert input with Stack Overflow might serve as an effective way to identify design topics. Being able to identify and classify this design knowledge would enable it to be discovered and shared, helping developers better leverage Stack Overflow for crowd-sourcing their design decisions. We first perform inductive coding of design-tagged Stack Overflow questions and answers to identify the design concepts that developers discuss. We report on areas where inter-rater agreement was a challenge, including abstraction levels. Since inductive coding is expensive, we apply a semi-supervised (Anchored CorEx) approach. We find that it outperforms LDA and offers superior interpretability and the ability to incorporate expert domain knowledge. We then use Anchored CorEx to identify how design is discussed on Stack Overflow and applied in GitHub projects. We conclude by describing how our experience with the semi-supervised CorEx approach leads us to believe that approaches like Anchored CorEx, which combine domain knowledge and scalability, are key for analyzing large SE text repositories. / Graduate
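As an illustration of how expert input enters an Anchored CorEx model, the sketch below seeds topics with assumed design-related anchor words using the open-source corextopic package; the corpus, anchor words, and hyperparameters are illustrative assumptions, not the study's actual configuration.

# Hedged sketch: seeding Anchored CorEx with expert "design" vocabulary.
# The tiny corpus, anchors, and hyperparameters are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from corextopic import corextopic as ct

docs = [
    "when should I use the singleton design pattern",
    "best practice for the repository pattern and data access layer",
    "how to cache database queries efficiently",
    "observer pattern versus event bus for decoupling modules",
]

vectorizer = CountVectorizer(stop_words="english", binary=True)
doc_word = vectorizer.fit_transform(docs)
words = list(vectorizer.get_feature_names_out())

# Expert input enters through the anchors: each inner list seeds one topic.
anchors = [
    ["pattern", "singleton", "observer"],   # design-pattern topic (assumed)
    ["database", "cache", "queries"],       # data-access topic (assumed)
]

topic_model = ct.Corex(n_hidden=3, seed=0)
topic_model.fit(doc_word, words=words, anchors=anchors, anchor_strength=3)

for i, topic in enumerate(topic_model.get_topics()):
    print(f"topic {i}:", [w for w, *_ in topic])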
23

Semi-Supervised Domain Adaptation for Semantic Segmentation with Consistency Regularization : A learning framework under scarce dense labels

Morales Brotons, Daniel January 2023 (has links)
Learning from unlabeled data is a topic of critical significance in machine learning, as the large datasets required to train ever-growing models are costly and impractical to annotate. Semi-Supervised Learning (SSL) methods aim to learn from a few labels and a large unlabeled dataset. In another approach, Domain Adaptation (DA) leverages data from a similar source domain to train a model for a target domain. This thesis focuses on Semi-Supervised Domain Adaptation (SSDA) for the dense task of semantic segmentation, where labels are particularly costly to obtain. SSDA has not received much attention yet, even though it has great potential and represents a realistic scenario. The few existing SSDA methods for semantic segmentation reuse ideas from Unsupervised DA, despite the differences between the two settings. This thesis proposes a new semantic segmentation framework designed particularly for the SSDA setting. The approach followed was to forego domain alignment and focus instead on enhancing the clusterability of target-domain features, an idea from SSL. The method is based on consistency regularization, combined with pixel contrastive learning and self-training. The proposed framework is found to be effective not only in SSDA, but also in SSL. Ultimately, a unified solution for SSL and SSDA semantic segmentation is presented. Experiments were conducted on the target dataset of Cityscapes and the source dataset of GTA5. The proposed method is competitive in both SSL and SSDA, and sets a new state of the art for SSDA, achieving 65.6% mIoU (+4.4) on Cityscapes with 100 labeled samples. This thesis has an immediate impact on practical applications by proposing a new best-performing framework for the under-explored setting of SSDA. Furthermore, it also contributes towards the more ambitious goal of designing a unified solution for learning from unlabeled data.
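As an illustration of the consistency-regularization component the thesis builds on, the sketch below pseudo-labels a weakly augmented unlabeled image and trains a strongly augmented view against it; the tiny stand-in network, the augmentations, and the confidence threshold are assumptions, not the thesis framework.

# Hedged sketch of consistency regularization for segmentation: pseudo-label a
# weakly augmented view, train the strongly augmented view against it above a
# confidence threshold. Network, augmentations, and threshold are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, TAU = 19, 0.95          # Cityscapes-style class count; assumed threshold
net = nn.Conv2d(3, NUM_CLASSES, kernel_size=3, padding=1)   # stand-in segmenter
opt = torch.optim.SGD(net.parameters(), lr=0.01)

def weak_aug(x):   # e.g. horizontal flip (assumed weak augmentation)
    return torch.flip(x, dims=[-1])

def strong_aug(x): # additive noise stands in for color jitter / CutMix
    return x + 0.1 * torch.randn_like(x)

def consistency_step(x_unlabeled):
    with torch.no_grad():
        probs = F.softmax(net(weak_aug(x_unlabeled)), dim=1)
        probs = torch.flip(probs, dims=[-1])        # undo the flip to align pixels
        conf, pseudo = probs.max(dim=1)
    logits = net(strong_aug(x_unlabeled))
    loss_map = F.cross_entropy(logits, pseudo, reduction="none")
    loss = (loss_map * (conf > TAU).float()).mean() # only confident pixels contribute
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

x_u = torch.randn(2, 3, 64, 64)       # toy unlabeled target-domain batch
print(consistency_step(x_u))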
24

New Directions in Gaussian Mixture Learning and Semi-supervised Learning

Sinha, Kaushik 01 November 2010 (has links)
No description available.
25

Semi-supervised Information Fusion for Clustering, Classification and Detection Applications

Li, Huaying January 2017 (has links)
Information fusion techniques have been widely applied in many applications, including clustering, classification, and detection. The major objective is to improve performance by using information derived from multiple sources, as compared to using information obtained from any of the sources individually. In our previous work, we demonstrated the performance improvement of Electroencephalography (EEG)-based seizure detection using information fusion. In the detection problem, the optimal fusion rule is usually derived under the assumption that local decisions are conditionally independent given the hypotheses. However, because the local detectors observe the same phenomenon, it is highly possible that their decisions are correlated. To address this correlation, we implement the fusion rule sub-optimally by first estimating the unknown parameters under one of the hypotheses and then using them as known parameters to estimate the remaining unknown parameters. In the aforementioned scenario, the hypotheses are uniquely defined, i.e., all local detectors follow the same labeling convention. However, in certain applications the regions of interest (decisions, hypotheses, clusters, etc.) are not unique, i.e., they may vary from source to source. In this case, information fusion becomes more complicated. Historically, this problem was first observed in classification and clustering. In classification applications, the category information is pre-defined and training data is required; a classification problem can therefore be viewed as a detection problem by treating the pre-defined classes as the hypotheses. However, information fusion in clustering applications is more difficult due to the lack of prior information and the correspondence problem caused by symbolic cluster labels. In the literature, information fusion for clustering is usually referred to as the clustering ensemble problem, and most existing clustering ensemble methods are unsupervised. In this thesis, we propose two semi-supervised clustering ensemble algorithms (SEA). Similar to existing ensemble methods, SEA consists of two major steps: the generation and fusion of base clusterings. Analogous to distributed detection, we propose a distributed clustering system consisting of a base clustering generator and a decision fusion center. The role of the base clustering generator is to generate multiple base clusterings for the given data set; the role of the decision fusion center is to combine all base clusterings into a single consensus clustering. Although training data is not required by conventional (usually unsupervised) clustering algorithms, in many applications expert opinions are available to label a small portion of the data observations, and these labels can be utilized as guidance in the fusion process. Therefore, we design two operational modes for the fusion center according to the absence or presence of training data. In the unsupervised mode, any existing unsupervised clustering ensemble method can be implemented as the fusion rule; in the semi-supervised mode, the proposed semi-supervised clustering ensemble methods can be implemented. In addition, a parallel distributed clustering system is proposed to reduce the computational time of clustering high-volume data sets. Moreover, we propose a new cluster detection algorithm based on SEA, implemented in the system to provide feedback: when data observations from a new class (other than the existing training classes) are detected, a signal is sent out to request new training data or to switch from the semi-supervised mode to the unsupervised mode. / Thesis / Doctor of Philosophy (PhD)
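To illustrate the fusion center's unsupervised mode, the sketch below combines several base clusterings through a co-association matrix and a final consensus clustering; the base algorithms, toy data, and parameters are illustrative assumptions rather than the SEA algorithms proposed in the thesis (the semi-supervised mode would additionally use the expert-provided labels during fusion).

# Hedged sketch of a fusion center's unsupervised mode: base clusterings are
# combined via a co-association matrix and a final clustering over it.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=0)

# Base clustering generator: K-means with different seeds / cluster counts.
base_labels = [KMeans(n_clusters=k, n_init=5, random_state=s).fit_predict(X)
               for s, k in [(0, 3), (1, 4), (2, 5)]]

# Decision fusion center: co-association = fraction of base clusterings that
# place a pair of points in the same cluster.
n = len(X)
coassoc = np.zeros((n, n))
for labels in base_labels:
    coassoc += (labels[:, None] == labels[None, :]).astype(float)
coassoc /= len(base_labels)

# Consensus clustering on the co-association similarity (1 - similarity = distance).
consensus = AgglomerativeClustering(n_clusters=3, metric="precomputed",
                                    linkage="average").fit_predict(1.0 - coassoc)
print(np.bincount(consensus))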
26

Interactively Guiding Semi-Supervised Clustering via Attribute-based Explanations

Lad, Shrenik 01 July 2015 (has links)
Unsupervised image clustering is a challenging and often ill-posed problem. Existing image descriptors fail to capture the clustering criterion well, and more importantly, the criterion itself may depend on (unknown) user preferences. Semi-supervised approaches such as distance metric learning and constrained clustering thus leverage user-provided annotations indicating which pairs of images belong to the same cluster (must-link) and which ones do not (cannot-link). These approaches require many such constraints before achieving good clustering performance, because each constraint only provides weak cues about the desired clustering. In this work, we propose to use image attributes as a modality through which the user can provide more informative cues. In particular, the clustering algorithm iteratively and actively queries a user with an image pair. Instead of simply providing a must-link/cannot-link constraint for the pair, the user also provides an attribute-based explanation, e.g., "these two images are similar because both are natural and have still water" or "these two people are dissimilar because one is way older than the other". Under the guidance of this explanation, and equipped with attribute predictors, many additional constraints are automatically generated. We demonstrate the effectiveness of our approach by incorporating the proposed attribute-based explanations into three standard semi-supervised clustering algorithms: Constrained K-Means, MPCK-Means, and Spectral Clustering, on three domains: scenes, shoes, and faces, using both binary and relative attributes. / Master of Science
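The following sketch illustrates how a single attribute-based explanation could be expanded into many must-link constraints via attribute predictors; the attribute scores, names, and threshold are illustrative stand-ins, not the system described in the thesis.

# Hedged sketch: turning one attribute-based explanation into many pairwise
# constraints. Attribute scores are random stand-ins for real attribute
# predictors; the threshold and attribute names are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_images = 200
# Predicted attribute scores in [0, 1] for two binary attributes.
attrs = {"natural": rng.random(n_images), "still_water": rng.random(n_images)}

def constraints_from_explanation(reason_attrs, threshold=0.8):
    """The user said a queried pair is similar *because* of these attributes.
    Generate must-link constraints among other images that confidently share
    the same attributes according to the predictors."""
    confident = np.ones(n_images, dtype=bool)
    for a in reason_attrs:
        confident &= attrs[a] > threshold
    idx = np.flatnonzero(confident)
    return [(i, j) for k, i in enumerate(idx) for j in idx[k + 1:]]

must_links = constraints_from_explanation(["natural", "still_water"])
print(f"{len(must_links)} must-link constraints generated from one explanation")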
27

Semi-Supervised Gait Recognition

Mitra, Sirshapan 01 January 2024 (has links) (PDF)
In this work, we examine semi-supervised learning for gait recognition with a limited number of labeled samples. Our research focuses on two distinct aspects of limited labels: 1) closed-set, with limited labeled samples per individual, and 2) open-set, with limited labeled individuals. We find that the open-set setting poses a greater challenge than the closed-set setting; thus, having more labeled IDs matters more for performance than having more labeled samples per ID. Moreover, obtaining labeled samples for a large number of individuals is usually more challenging, so the limited-ID setup, where most of the training samples belong to unknown IDs, is the more important one to study. We further show that existing semi-supervised learning approaches are not well suited to the scenario where unlabeled samples belong to novel IDs. We propose a simple prototypical self-training approach to solve this problem, in which we integrate semi-supervised learning for the closed-set setting with self-training that can effectively utilize unlabeled samples from unknown IDs. To further alleviate the challenge of limited labeled samples, we explore the role of synthetic data, utilizing a diffusion model to generate samples from both known and unknown IDs. We perform our experiments on two gait recognition benchmarks, CASIA-B and OUMVLP, and provide a comprehensive evaluation of the proposed method. The proposed approach is effective and generalizable to both closed- and open-set settings. With merely 20% of labeled samples, we achieve performance competitive with supervised methods that use 100% of the labels, while outperforming existing semi-supervised methods.
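As a rough illustration of prototypical self-training, the sketch below builds class prototypes from labeled gait embeddings, pseudo-labels unlabeled embeddings by nearest prototype, and keeps only confident assignments; the random embeddings and the threshold are assumptions, not components of the proposed method.

# Hedged sketch of prototypical self-training on gait embeddings: prototypes
# are means of labeled embeddings, unlabeled embeddings are pseudo-labeled by
# nearest prototype, and only confident pseudo-labels are kept.
import numpy as np

rng = np.random.default_rng(0)
dim, n_ids = 64, 5
emb_labeled = rng.normal(size=(50, dim))          # embeddings of labeled sequences
ids_labeled = rng.integers(0, n_ids, size=50)     # their identity labels
emb_unlabeled = rng.normal(size=(200, dim))       # unlabeled sequences (possibly new IDs)

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def self_training_round(threshold=0.7):
    protos = np.stack([l2_normalize(emb_labeled[ids_labeled == c]).mean(axis=0)
                       for c in range(n_ids)])
    sims = l2_normalize(emb_unlabeled) @ l2_normalize(protos).T   # cosine similarity
    pseudo, conf = sims.argmax(axis=1), sims.max(axis=1)
    keep = conf > threshold                       # only confident pseudo-labels survive
    return pseudo[keep], np.flatnonzero(keep)

pseudo_labels, kept_idx = self_training_round()
print(f"kept {len(kept_idx)} of {len(emb_unlabeled)} unlabeled sequences")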
28

Classificação semi-supervisionada baseada em desacordo por similaridade / Semi-supervised learning based in disagreement by similarity

Gutiérrez, Victor Antonio Laguna 03 May 2010 (has links)
Semi-supervised learning is a machine learning paradigm in which the induced hypothesis is improved by taking advantage of unlabeled data. Semi-supervised learning is particularly useful when labeled data is scarce and difficult to obtain. In this context, the Cotraining algorithm was proposed. Cotraining is a widely used semi-supervised approach that assumes the availability of two independent views of the data. In most real-world scenarios, the multi-view assumption is highly restrictive, impairing its usability for classification purposes. In this work, we propose the Co2KNN algorithm, a one-view Cotraining approach that combines two different k-Nearest Neighbors (KNN) strategies, referred to as global and local KNN. In the global KNN, the neighbors used to classify a new instance are the training examples that contain this instance within their own k nearest neighbors. In the local KNN, on the other hand, the neighborhood considered to classify a new instance is the set of training examples computed by the traditional KNN approach. The Co2KNN algorithm is based on the theoretical background given by Semi-supervised Learning by Disagreement, which claims that the success of combining two classifiers in the Cotraining framework is due to the disagreement between the classifiers. We carried out experiments suggesting that Co2KNN outperforms several state-of-the-art algorithms, especially when only one view of the training data is available. Moreover, we present an optimized algorithm to cope with the time complexity of computing the global KNN, allowing Co2KNN to tackle real classification problems.
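The contrast between the two neighborhood definitions is concrete enough to sketch. The code below implements the local (standard) and global (reverse) KNN retrieval described in the abstract on toy data; it illustrates only the neighbor retrieval and a majority vote, not the full Co2KNN co-training loop, and the data and k are assumptions.

# Hedged sketch of the two neighborhood definitions combined in Co2KNN.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.neighbors import NearestNeighbors

X_train, y_train = make_blobs(n_samples=100, centers=2, random_state=0)
x_new = np.array([[0.0, 0.0]])
k = 5

def local_knn_neighbors(x):
    """Standard kNN: the k training points closest to x."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    return nn.kneighbors(x, return_distance=False)[0]

def global_knn_neighbors(x):
    """Reverse kNN: training points that would include x among their own
    k nearest neighbors (searching over the training data plus x itself)."""
    augmented = np.vstack([X_train, x])
    nn = NearestNeighbors(n_neighbors=k + 1).fit(augmented)  # +1: each point is its own nearest neighbor
    neigh = nn.kneighbors(X_train, return_distance=False)
    x_index = len(augmented) - 1
    return np.flatnonzero((neigh == x_index).any(axis=1))

def majority_vote(idx):
    return np.bincount(y_train[idx]).argmax() if len(idx) else None

print("local prediction :", majority_vote(local_knn_neighbors(x_new)))
print("global prediction:", majority_vote(global_knn_neighbors(x_new)))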
30

Using Semi-supervised Clustering for Neurons Classification

Fakhraee Seyedabad, Ali January 2013 (has links)
We wish to understand the brain and discover its sophisticated ways of computation in order to invent improved computational methods. To decipher any complex system, its components must first be understood. The brain comprises neurons. Neurobiologists use morphologic properties like "somatic perimeter", "axonal length", and "number of dendrites" to classify neurons. They have discerned two types of neurons, "interneurons" and "pyramidal cells", and have reached a consensus on five classes of interneurons: PV, 2/3, Martinotti, Chandelier, and NPY. They still need a more refined classification of interneurons, because they suspect the known classes may contain subclasses or that new classes may arise. This is a difficult process because of the great number and diversity of interneurons and the lack of objective indices by which to classify them. Machine learning—automatic learning from data—can overcome these difficulties, but it needs a data set to learn from. To meet this demand, neurobiologists compiled a data set by measuring 67 morphologic properties of 220 interneurons from mouse brains; they also labeled some of the samples, i.e., added their opinion about the samples' classes. This project aimed to use machine learning to determine the true number of classes within the data set, the classes of the unlabeled samples, and the accuracy of the available class labels. We used K-means, seeded K-means, constrained K-means, and clustering validity techniques to achieve our objectives. Our results indicate that the data set contains seven classes; seeded K-means outperforms K-means and constrained K-means; and Chandelier and 2/3 are the most consistent classes, whereas PV and Martinotti are the least consistent ones.
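As an illustration of the seeded K-means variant compared in the thesis, the sketch below initializes centroids from labeled "seed" samples before running ordinary K-means; the random feature matrix and seed labels are stand-ins for the real morphological data.

# Hedged sketch of seeded K-means: centroids are initialized from labeled
# ("seed") samples instead of randomly, then K-means refines them. The data
# are random stand-ins for the 67 morphological features of 220 interneurons.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_classes = 7
X = rng.normal(size=(220, 67))                  # morphological feature matrix (toy)
seed_idx = rng.choice(220, size=40, replace=False)
seed_labels = np.arange(40) % n_classes         # toy expert labels covering each class

# Seeded initialization: one centroid per class, the mean of its seed samples.
init_centroids = np.stack([X[seed_idx[seed_labels == c]].mean(axis=0)
                           for c in range(n_classes)])

seeded = KMeans(n_clusters=n_classes, init=init_centroids, n_init=1).fit(X)
print(np.bincount(seeded.labels_))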
