91 |
Inferring Aspect-Specific Opinion Structure in Product Reviews. Carter, David, January 2015
Identifying differing opinions on a given topic as expressed by multiple people (as in a set of written reviews for a given product, for example) presents challenges. Opinions about a particular subject are often nuanced: a person may have both negative and positive opinions about different aspects of the subject of interest, and these aspect-specific opinions can be independent of the overall opinion on the subject. Being able to identify, collect, and count these nuanced opinions in a large set of data offers more insight into the strengths and weaknesses of competing products and services than does aggregating the overall ratings of such products and services.
I make two useful and useable contributions in working with opinionated text.
First, I present my implementation of a semi-supervised co-training machine classification method for identifying both product aspects (features of products) and sentiments expressed about such aspects. It offers better precision than fully-supervised methods while requiring much less text to be manually tagged (a time-consuming process). This algorithm can also be run in a fully supervised manner when more data is available.
Second, I apply this co-training approach to reviews of restaurants and various electronic devices; such text contains both factual statements and opinions about features/aspects of products. The algorithm automatically identifies the product aspects and the words that indicate aspect-specific opinion polarity, while largely avoiding the problem of misclassifying the products themselves as inherently positive or negative.
This method performs well compared to other approaches. When run on a set of reviews of five technology products collected from Amazon, the system performed competently (with an average precision of 0.83) at the difficult task of simultaneously identifying aspects and sentiments, though direct comparison with contemporaries' simpler rule-based approaches was difficult. When run on a set of opinionated sentences about laptops and restaurants that formed the basis of a shared challenge in the SemEval-2014 Task 4 competition, it was able to classify the sentiments expressed about aspects of laptops better than any team that competed in the task (achieving 0.72 accuracy). It was above the mean in its ability to identify the aspects of restaurants about which people expressed opinions, even when co-training using only half of the labelled training data at the outset.
While the SemEval-2014 aspect-based sentiment extraction task considered only separately the tasks of identifying product aspects and determining their polarities, I take an extra step and evaluate sentences as a whole, inferring aspects and the aspect-specific sentiments expressed simultaneously, a more difficult task that seems more applicable to real-world tasks. I present first results of this sentence-level task.
The algorithm uses both lexical and syntactic information in a manner that is shown to handle words it has never seen before. It demonstrates some ability to adapt to new subject domains for which it has no training data. The system is characterized by very high precision and weak-to-average recall, and it estimates its own confidence in its predictions; this should make the algorithm suitable for use on its own or in a confidence-based voting ensemble. The software created for and described in this dissertation is made available online.
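As an orientation for readers unfamiliar with co-training, the sketch below shows the generic two-view loop: train one classifier per feature view, let each view pseudo-label the unlabeled examples it is most confident about, and grow the shared training set. It is a minimal illustration assuming two pre-computed feature views (e.g. lexical vs. syntactic) and scikit-learn classifiers, not the aspect/sentiment implementation described in this dissertation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(Xa, Xb, y, Ua, Ub, rounds=10, k=5, threshold=0.9):
    """Generic co-training over two feature views of the same examples.

    Xa, Xb: labeled training data in view A and view B; y: their labels.
    Ua, Ub: the same unlabeled examples in the two views (NumPy arrays).
    The base classifiers, growth size k and confidence threshold are
    assumptions of this sketch, not the dissertation's configuration.
    """
    clf_a = LogisticRegression(max_iter=1000)
    clf_b = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        if len(Ua) == 0:
            break
        clf_a.fit(Xa, y)
        clf_b.fit(Xb, y)
        # each view nominates its most confident unlabeled examples
        chosen = {}
        for clf, U in ((clf_a, Ua), (clf_b, Ub)):
            proba = clf.predict_proba(U)
            conf = proba.max(axis=1)
            for i in np.argsort(-conf)[:k]:
                if conf[i] >= threshold:
                    chosen.setdefault(int(i), clf.classes_[proba[i].argmax()])
        if not chosen:
            break
        idx = np.array(list(chosen.keys()))
        pseudo = np.array(list(chosen.values()))
        Xa = np.vstack([Xa, Ua[idx]])
        Xb = np.vstack([Xb, Ub[idx]])
        y = np.concatenate([y, pseudo])
        keep = np.setdiff1d(np.arange(len(Ua)), idx)
        Ua, Ub = Ua[keep], Ub[keep]
    return clf_a, clf_b
```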
|
92 |
Nouvelles approches itératives avec garanties théoriques pour l'adaptation de domaine non supervisée / New iterative approaches with theoretical guarantees for unsupervised domain adaptation. Peyrache, Jean-Philippe, 11 July 2014
Over the past few years, interest in machine learning has grown steadily in domains as varied as image recognition and medical data analysis. However, a limitation of the classical PAC framework has recently been highlighted, leading to the emergence of a new research axis: Domain Adaptation (DA), in which the training data are assumed to come from a distribution (the source) different from the one (the target) that generates the test data. Early theoretical work concluded that good performance on the target domain can be obtained by simultaneously minimizing the source error and a divergence term between the two distributions. Three main families of approaches derive from this idea: reweighting, reprojection, and self-labeling. This thesis makes two contributions. The first is a reprojection approach based on boosting theory and designed for numerical data; it offers interesting theoretical guarantees and also achieves good generalization performance. The second contribution first proposes a framework that fills the lack of theoretical results for self-labeling methods by giving necessary conditions for this kind of algorithm to behave well; within this framework, we then propose a new approach that uses the theory of (epsilon, gamma, tau)-good similarity functions to sidestep the limitations imposed by kernel theory in the context of structured data.
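To make the last idea concrete, here is a minimal sketch of the similarity-space construction behind (epsilon, gamma, tau)-good similarity functions: each example is represented by its similarities to a set of landmark points, and a sparse linear classifier is learned on top. The RBF similarity, the landmark choice, and the L1-regularised logistic loss are assumptions of this sketch, not the algorithm proposed in the thesis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel

def similarity_space_classifier(X_train, y_train, landmarks, C=1.0):
    """Learn a sparse linear classifier in a similarity space.

    Generic illustration of the 'good similarity function' idea the
    abstract refers to: examples are mapped to their similarities to a
    set of landmark points, and an L1-regularised linear model is fit
    on top.  The RBF similarity and the landmarks are assumptions.
    """
    phi = rbf_kernel(X_train, landmarks)          # similarity features
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    clf.fit(phi, y_train)
    return clf

def predict_in_similarity_space(clf, landmarks, X):
    # map new points to the same similarity space before predicting
    return clf.predict(rbf_kernel(X, landmarks))
```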
|
93 |
Anotação automática semissupervisionada de papéis semânticos para o português do Brasil / Automatic semi-supervised semantic role labeling for Brazilian Portuguese. Fernando Emilio Alva Manchego, 22 January 2013
Semantic role labeling (SRL) is a natural language processing (NLP) task that analyzes part of the meaning of sentences by detecting the events they describe and the participants involved, which is essential for computers to effectively use the information encoded in text. Most SRL research has been done for texts in English, taking into account the grammatical and semantic particularities of that language, which prevents those tools and results from being directly transferred to other languages such as Portuguese. Most current SRL systems use supervised machine learning methods and require a large corpus of sentences annotated with semantic roles in order to learn the task properly. For Brazilian Portuguese, a lexical resource that provides this type of information has recently become available: PropBank.Br. However, compared with corpora for other languages such as English, the corpus provided by that project is small and would not allow a supervised classifier to perform the labeling task with good performance. To deal with this problem, this dissertation uses a semi-supervised approach capable of extracting relevant information from both the annotated and the unannotated data available, making it less dependent on the training corpus. We implemented the self-training algorithm with logistic regression (maximum entropy) models as the base classifier to label the Bosque corpus (the CETENFolha section) of the Floresta Sintá(c)tica with the PropBank.Br semantic role tags. To the original algorithm we added balancing and verb-specific argument similarity measures to improve performance on the argument classification task. Using an evaluation benchmark implemented in this research project, the proposed semi-supervised approach achieved performance statistically comparable to that of a supervised classifier trained with more annotated data (F1 of 80.5 vs. 82.3, p > 0.01).
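The self-training loop at the heart of this approach is available off the shelf; the fragment below is a minimal sketch with synthetic stand-in data, using scikit-learn's SelfTrainingClassifier around a logistic-regression (maximum-entropy) base model. The balancing strategy and the verb-specific argument-similarity measures added in the thesis are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Toy stand-in for argument feature vectors; real features would come from
# parsed sentences (predicate, syntactic path, head word, ...), not shown here.
rng = np.random.default_rng(0)
X_args = rng.normal(size=(200, 10))
y_partial = rng.integers(0, 3, size=200)   # pretend semantic-role ids 0..2
y_partial[50:] = -1                        # -1 marks unlabeled instances (sklearn convention)

base = LogisticRegression(max_iter=1000, class_weight="balanced")  # maximum-entropy base classifier
self_trainer = SelfTrainingClassifier(base, threshold=0.9)
self_trainer.fit(X_args, y_partial)
print(self_trainer.predict(X_args[:5]))    # predicted role ids for a few arguments
```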
|
94 |
Flexible Structured Prediction in Natural Language Processing with Partially Annotated Corpora. Xiao Zhang (8776265), 29 April 2020
Structured prediction makes coherent decisions over structured objects in order to capture the interrelations among the predicted variables. It has been widely used in many areas, such as bioinformatics, computer vision, speech recognition, and natural language processing. Machine learning with reduced supervision aims to lessen the laborious and error-prone annotation effort and to benefit low-resource languages. In this dissertation we study structured prediction with reduced supervision for two sets of problems, sequence labeling and dependency parsing, both of which are representative structured prediction problems in NLP. We investigate three different approaches.

The first approach is learning with a modular architecture by task decomposition. By decomposing labels into a location sub-label and a type sub-label, we designed neural modules to tackle these sub-labels separately, with an additional module to fuse the information. Experiments on benchmark datasets show that the modular architecture outperforms existing models and can use partially labeled data together with fully labeled data to improve on the performance of using fully labeled data alone.

The second approach builds the neural CRF autoencoder (NCRFAE) model, which combines a discriminative component and a generative component for semi-supervised sequence labeling. The model has a unified structure with shared parameters, using different loss functions for labeled and unlabeled data. We developed a variant of the EM algorithm for optimizing the model with tractable inference. Experiments on several languages in the POS tagging task show the model outperforms existing systems in both the supervised and the semi-supervised setup.

The third approach builds two models for semi-supervised dependency parsing, the local autoencoding parser (LAP) and the global autoencoding parser (GAP). LAP assumes the chain-structured sentence has a latent representation and uses this representation to construct the dependency tree, while GAP treats the dependency tree itself as a latent variable. Both models have a unified structure for sentences with and without an annotated parse tree. Experiments on several languages show that both parsers can use unlabeled sentences to improve on the performance with labeled sentences alone; LAP is faster, while GAP outperforms existing models.
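To give a concrete feel for how such discriminative-plus-generative objectives are typically combined, the sketch below shows a single semi-supervised training step in PyTorch. The model interface (`tag_logits`, `reconstruct`) and the plain cross-entropy reconstruction are assumptions of this sketch, not the exact NCRFAE objective, which uses CRF potentials and an EM-style optimization.

```python
import torch
import torch.nn.functional as F

def semi_supervised_step(model, labeled_batch, unlabeled_batch, alpha=1.0):
    """One training step mixing a discriminative and a generative objective.

    Labeled sequences contribute a supervised tagging loss, unlabeled
    sequences contribute a reconstruction loss, and both flow through
    shared parameters.  `model.tag_logits` and `model.reconstruct` are
    hypothetical method names assumed for this sketch.
    """
    x_l, y_l = labeled_batch             # token ids and gold tag ids
    x_u = unlabeled_batch                # token ids only

    tag_logits = model.tag_logits(x_l)                   # (batch, seq, n_tags)
    sup_loss = F.cross_entropy(tag_logits.flatten(0, 1), y_l.flatten())

    recon_logits = model.reconstruct(x_u)                # (batch, seq, vocab)
    unsup_loss = F.cross_entropy(recon_logits.flatten(0, 1), x_u.flatten())

    return sup_loss + alpha * unsup_loss
```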
|
95 |
Positive unlabeled learning applications in music and healthcare. Arjannikov, Tom, 10 September 2021
The supervised and semi-supervised machine learning paradigms hinge on the idea that the training data is labeled. The label quality is often brought into question, and problems related to noisy, inaccurate, or missing labels are studied. One of these is an interesting and prevalent problem in the semi-supervised classification area where only some positive labels are known. At the same time, the remaining and often the majority of the available data is unlabeled, i.e., there are no negative examples. Known as Positive-Unlabeled (PU) learning, this problem has been identified with increasing frequency across many disciplines, including but not limited to health science, biology, bioinformatics, geoscience, physics, business, and politics. Also, there are several closely related machine learning problems, such as cost-sensitive learning and mixture proportion estimation.
This dissertation explores the PU learning problem from the perspective of density estimation and proposes a new modular method compatible with the relabeling framework that is common in PU learning literature. This approach is compared with two existing algorithms throughout the manuscript, one from a seminal work by Elkan and Noto and a current state-of-the-art algorithm by Ivanov. Furthermore, this thesis identifies two machine learning application domains that can benefit from PU learning approaches, which were not previously seen that way: predicting length of stay in hospitals and automatic music tagging. Experimental results with multiple synthetic and real-world datasets from different application domains validate the proposed approach.
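For orientation, the Elkan-Noto estimator that serves as one of the two comparison baselines can be sketched in a few lines: a "non-traditional" classifier is trained to separate labeled-positive from unlabeled examples, the labeling frequency is estimated on held-out labeled positives, and the classifier's output is rescaled. The logistic-regression base model and the 80/20 holdout split are assumptions of this sketch; the thesis's density-estimation approach is not shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def elkan_noto_pu(X, s, random_state=0):
    """Classic Elkan & Noto (2008) PU estimator, as a point of comparison.

    X : feature matrix (NumPy array); s : 1 for labeled-positive examples,
    0 for unlabeled.  A classifier g(x) ~ P(s=1|x) is trained, the labeling
    frequency c = E[g(x) | y=1] is estimated on held-out labeled positives,
    and P(y=1|x) is recovered as g(x)/c.
    """
    X_tr, X_hold, s_tr, s_hold = train_test_split(
        X, s, test_size=0.2, stratify=s, random_state=random_state)
    g = LogisticRegression(max_iter=1000).fit(X_tr, s_tr)
    # labeling frequency estimated on held-out labeled positives
    c = g.predict_proba(X_hold[s_hold == 1])[:, 1].mean()

    def predict_proba_y(X_new):
        return np.clip(g.predict_proba(X_new)[:, 1] / c, 0.0, 1.0)

    return predict_proba_y
```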
Accurately predicting the in-hospital length of stay (LOS) at the time of admission can positively impact healthcare metrics, particularly in novel response scenarios such as the Covid-19 pandemic. During the regular steady-state operation, traditional classification algorithms can be used for this purpose to inform planning and resource management. However, when there are sudden changes to the admission and patient statistics, such as during the onset of a pandemic, these approaches break down because reliable training data becomes available only gradually over time. This thesis demonstrates the effectiveness of PU learning approaches in such situations through experiments by simulating the positive-unlabeled scenario using two fully-labeled publicly available LOS datasets.
Music auto-tagging systems are typically trained using tag labels provided by human listeners. In many cases, this labeling is weak, which means that the provided tags are valid for the associated tracks, but there can be tracks for which a tag would be valid but not present. This situation is analogous to PU learning with the additional complication of being a multi-label scenario. Experimental results on publicly available music datasets with tags representing three different labeling paradigms demonstrate the effectiveness of PU learning techniques in recovering the missing labels and improving auto-tagger performance.
|
96 |
A contemporary machine learning approach to detect transportation mode - A case study of Borlänge, Sweden. Golshan, Arman, January 2020
Understanding travel behavior and identifying the mode of transportation are essential for sound urban and transportation planning. Global positioning system (GPS) tracking data is mainly used to find human mobility patterns in cities. Some travel information, such as the most visited locations, temporal changes, and trip speed, can be easily extracted from raw GPS tracking data, and GPS trajectories can serve as a basis for inferring commuters' mobility modes. Most previous studies have applied traditional machine learning algorithms to manually engineered features, making the models error-prone; thus, there is a demand for a new model that resolves these weaknesses. The primary purpose of this study is to propose a semi-supervised model that identifies transportation mode using a contemporary machine learning algorithm and GPS tracking data. The model accepts GPS trajectories of adjustable length and extracts their latent information with an LSTM autoencoder. The study adopts a deep neural network architecture with three hidden layers to map the latent information to the transportation mode. Moreover, different case studies are performed to evaluate the proposed model's efficiency. The model achieves an accuracy of 93.6%, which significantly outperforms similar studies.
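The architecture described above (an LSTM autoencoder producing a latent trajectory code, followed by a classifier with three hidden layers) could be sketched roughly as follows; feature count, latent size, layer widths, and the number of modes are illustrative assumptions, since the abstract only gives the high-level design.

```python
import torch
import torch.nn as nn

class TrajectoryModeModel(nn.Module):
    """Sketch of an LSTM autoencoder plus MLP head for mode detection.

    Dimensions and the use of the final hidden state as the latent code
    are assumptions for illustration, not the thesis's exact architecture.
    """
    def __init__(self, n_features=4, latent=64, n_modes=5):
        super().__init__()
        self.encoder = nn.LSTM(n_features, latent, batch_first=True)
        self.decoder = nn.LSTM(latent, n_features, batch_first=True)
        self.classifier = nn.Sequential(        # three hidden layers, as in the abstract
            nn.Linear(latent, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_modes))

    def forward(self, traj):                    # traj: (batch, time, n_features)
        _, (h, _) = self.encoder(traj)
        latent = h[-1]                          # final hidden state as latent code
        # repeat the latent code across time steps to drive the decoder
        recon, _ = self.decoder(latent.unsqueeze(1).repeat(1, traj.size(1), 1))
        return self.classifier(latent), recon   # mode logits + reconstruction
```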
|
97 |
A Comparison on Supervised and Semi-Supervised Machine Learning Classifiers for Diabetes Prediction. Kola, Lokesh; Muriki, Vigneshwar, January 2021
Background: Diabetes is characterized by high sugar levels in the blood and has no permanent cure, but early diagnosis makes it much easier to manage. In recent years, interest in machine learning for disease prediction has grown, especially during the COVID-19 pandemic, when it is difficult for patients to visit doctors. A possible machine learning framework is provided that can detect diabetes at early stages.

Objectives: This thesis aims to identify the critical features that influence gestational (Type-3) diabetes, and experiments are performed to identify the most efficient algorithm for Type-3 diabetes prediction. The selected algorithms are Decision Trees, Random Forest, Support Vector Machine, Gaussian Naive Bayes, Bernoulli Naive Bayes, and Laplacian Support Vector Machine, and they are compared on their performance.

Methods: The method consists of gathering the dataset and preprocessing the data. SelectKBest univariate feature selection was performed to select the important features that influence Type-3 diabetes prediction. A new dataset was created by binning some of the important features from the original dataset, leading to two datasets: non-binned and binned. The original dataset was imbalanced due to the unequal distribution of class labels, so after the train-test split an oversampling technique was applied to both training sets to overcome the imbalance. The selected machine learning algorithms were trained and predictions were made on the test data. Hyperparameter tuning was performed on all algorithms to improve performance, predictions were made again on the test data, and accuracy, precision, recall, and F1-score were measured on both the binned and non-binned datasets.

Results: Among the selected algorithms, the Laplacian Support Vector Machine attained the highest performance, with 89.61% and 86.93% on the non-binned and binned datasets respectively, making it the most efficient algorithm for Type-3 diabetes prediction. The second-best algorithm was Random Forest, with 74.5% and 72.72% on the non-binned and binned datasets. The non-binned dataset performed better for the majority of the selected algorithms.

Conclusions: The Laplacian Support Vector Machine scored the highest performance among the compared algorithms on both the binned and non-binned datasets. The non-binned dataset showed the best performance for almost all algorithms except Bernoulli Naive Bayes; therefore, the non-binned dataset is more suitable for Type-3 diabetes prediction.
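The supervised part of the pipeline described in the Methods section (univariate feature selection, a train-test split, oversampling of the training portion, then fitting several classifiers) could be sketched as below. The synthetic data, the choice of SMOTE as the oversampler, and k=6 are assumptions for illustration only; the Laplacian SVM used in the thesis is a semi-supervised method without a scikit-learn implementation and is not shown.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB, BernoulliNB
from imblearn.over_sampling import SMOTE   # the thesis only says "oversampling"; SMOTE is an assumption

# Synthetic stand-in for the diabetes data (an imbalanced two-class problem).
X, y = make_classification(n_samples=1000, n_features=8, weights=[0.8, 0.2], random_state=42)

X_sel = SelectKBest(f_classif, k=6).fit_transform(X, y)         # keep the k most informative features
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, stratify=y, test_size=0.2, random_state=42)
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_tr, y_tr)  # oversample only the training split

models = {
    "RandomForest": RandomForestClassifier(random_state=42),
    "SVM": SVC(),
    "GaussianNB": GaussianNB(),
    "BernoulliNB": BernoulliNB(),
}
for name, model in models.items():
    print(name, round(model.fit(X_bal, y_bal).score(X_te, y_te), 3))
```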
|
98 |
Semi-Supervised Learning Algorithm for Large Datasets Using Spark Environment. Kacheria, Amar, January 2021
No description available.
|
99 |
Dynamic Information Density for Image Classification in an Active Learning Framework. Morgan, Joshua Edward, 01 May 2020
No description available.
|
100 |
Semi-Supervised Semantic Segmentation for Agricultural Aerial Images. Chen-yi Lu (15383813), 01 May 2023
Unmanned Aerial Systems (UAS) have become an essential tool for field scouting, nutrient application, and farm management. However, assessing the aerial images captured by UAS is labor-intensive, and human assessment can be misleading and introduce bias. Deep-learning-based image segmentation has been proposed to assist in segmenting different areas of interest in the field, but it usually requires a significant amount of pixel-level annotated data. To address this, we propose a semi-supervised learning algorithm, AgSemSeg, to train a robust image segmentation model with less annotated data. Semi-supervised semantic segmentation aims to predict accurate pixel-level segmentation results by incorporating unlabeled images. Existing methods rely on computing a consistency loss on the output predictions between pseudo-labels and unlabeled images. In AgSemSeg, we exploit the intermediate feature representations rather than only the output predictions to improve the overall performance of the model. Specifically, we add a projection layer on the output of the backbone encoder and impose a consistency loss between intermediate feature representations using the sliced Wasserstein distance. We evaluate AgSemSeg on the Agriculture-Vision dataset and outperform the supervised baseline by up to 9.71%. We also evaluate AgSemSeg on benchmark datasets such as PASCAL VOC 2012 and Cityscapes, where it outperforms supervised baselines by up to 24.6% and 7.5% mIoU, respectively. We also perform extensive ablation studies to show that our proposed components are key to the performance improvements of our method.
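To make the feature-level consistency term concrete, here is a minimal sketch of a sliced Wasserstein distance between two batches of intermediate feature vectors (for example, projected features from two views of the same unlabeled images, with spatial positions flattened into the batch dimension). The number of projections and the squared one-dimensional form are assumptions of this sketch rather than AgSemSeg's exact loss.

```python
import torch

def sliced_wasserstein(feat_a, feat_b, n_projections=64):
    """Sliced Wasserstein distance between two equally sized batches of features.

    feat_a, feat_b: tensors of shape (N, d) -- e.g. per-pixel projected
    features with spatial positions folded into the batch dimension.
    Projection count and the squared 1-D form are sketch assumptions.
    """
    d = feat_a.size(1)
    theta = torch.randn(d, n_projections, device=feat_a.device)
    theta = theta / theta.norm(dim=0, keepdim=True)   # random unit directions
    proj_a = feat_a @ theta                           # (N, n_projections)
    proj_b = feat_b @ theta
    # the 1-D Wasserstein distance reduces to comparing sorted projections
    proj_a, _ = torch.sort(proj_a, dim=0)
    proj_b, _ = torch.sort(proj_b, dim=0)
    return ((proj_a - proj_b) ** 2).mean()
```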
|