About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Cyberbullying Detection Using Weakly Supervised and Fully Supervised Learning

Abhishek, Abhinav 22 September 2022 (has links)
No description available.
2

Weakly Supervised Machine Learning for Cyberbullying Detection

Raisi, Elaheh 23 April 2019 (has links)
The advent of social media has revolutionized human communication, significantly improving individuals' lives. It brings people closer to each other, provides access to enormous amounts of real-time information, and eases marketing and business. Despite its countless benefits, however, we must consider some of its negative implications, such as online harassment and cyberbullying. Cyberbullying is becoming a serious, large-scale problem that damages people's online lives, creating a need for automated, data-driven techniques for analyzing and detecting such behavior. In this research, we address the computational challenges associated with harassment-based cyberbullying detection in social media by developing a machine-learning framework that requires only weak supervision. We propose a general framework that trains an ensemble of two learners, each of which looks at the problem from a different perspective. One learner identifies bullying incidents by examining the language content of the message; the other considers the social structure to discover bullying. Each learner uses a different body of information, and the two co-train one another to come to an agreement about the bullying concept. The models estimate whether each social interaction is bullying by optimizing an objective function that maximizes the consistency between these detectors. We first developed a model we refer to as participant-vocabulary consistency, an ensemble of two linear language-based and user-based models. The model is trained by providing a set of seed key-phrases that are indicative of bullying language. The results were promising, demonstrating its effectiveness in recovering known bullying words, recognizing new bullying words, and discovering users involved in cyberbullying. We then extended this co-trained ensemble approach with two complementary goals: (1) using nonlinear embeddings as model families, and (2) building a fair language-based detector. For the first goal, we incorporated distributed representations of words and nodes into deep, nonlinear models: we represent words and users as low-dimensional vectors of real numbers that serve as the input to the language-based and user-based classifiers, respectively. The models are trained by optimizing an objective function that balances a co-training loss with a weak-supervision loss. Our experiments on Twitter, Ask.fm, and Instagram data show that deep ensembles outperform non-deep methods for weakly supervised harassment detection. For the second goal, we geared this research toward a topic important to any automated online harassment detection: fairness toward particular targeted groups, including race, gender, religion, and sexual orientation. Our goal is to decrease the sensitivity of the models to language describing particular social groups. We encourage the learning algorithm to avoid discrimination in its predictions by adding an unfairness penalty term to the objective function. We quantitatively and qualitatively evaluate the effectiveness of our proposed framework on synthetic data and on data from Twitter using post-hoc, crowdsourced annotation. In summary, this dissertation introduces a weakly supervised machine learning framework for harassment-based cyberbullying detection that uses both messages and user roles in social media. / Doctor of Philosophy / Social media has become an inevitable part of individuals' social and business lives. Its benefits, however, come with various negative consequences, such as online harassment, cyberbullying, hate speech, and online trolling, especially among the younger population. According to the American Academy of Child and Adolescent Psychiatry, victims of bullying can suffer interference to social and emotional development and even be drawn to extreme behavior such as attempted suicide. Any widespread bullying enabled by technology represents a serious social health threat. In this research, we develop automated, data-driven methods for harassment-based cyberbullying detection. The availability of such tools can enable technologies that reduce the harm and toxicity created by these detrimental behaviors. Our general framework is based on the consistency of two detectors that co-train one another. One learner identifies bullying incidents by examining the language content of the message; the other considers the social structure to discover bullying. In designing the general framework, we address three tasks. First, we use machine learning with weak supervision, which significantly alleviates the need for human experts to perform tedious data annotation. Second, we incorporate distributed representations of words and nodes, using deep, nonlinear models, to improve the predictive power of the framework. Finally, we decrease the sensitivity of the framework to language describing particular social groups, including race, gender, religion, and sexual orientation. This research represents important steps toward improving the technological capability for automatic cyberbullying detection.
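To make the co-trained ensemble concrete, here is a minimal sketch of the kind of objective this abstract describes — not the dissertation's exact participant-vocabulary consistency formulation. Seed key-phrase hits push the language-based detector toward "bullying", while a consistency term keeps the language-based and user-based detectors in agreement. The feature construction, tensor shapes, and trade-off weight are all illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

n_msgs, text_dim, user_dim = 256, 50, 16
X_text = torch.randn(n_msgs, text_dim)      # message language features (assumed)
X_user = torch.randn(n_msgs, user_dim)      # sender/recipient graph features (assumed)
seed_hit = torch.rand(n_msgs) < 0.1         # True if a seed bullying key-phrase fired

lang_model = nn.Sequential(nn.Linear(text_dim, 32), nn.ReLU(), nn.Linear(32, 1))
user_model = nn.Sequential(nn.Linear(user_dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(
    list(lang_model.parameters()) + list(user_model.parameters()), lr=1e-2
)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    s_lang = lang_model(X_text).squeeze(1)  # language-based bullying score
    s_user = user_model(X_user).squeeze(1)  # user-based bullying score
    # Weak supervision: messages containing seed key-phrases should score high.
    weak_loss = bce(s_lang[seed_hit], torch.ones(int(seed_hit.sum())))
    # Co-training: the two detectors should agree on every message.
    consistency = ((torch.sigmoid(s_lang) - torch.sigmoid(s_user)) ** 2).mean()
    loss = weak_loss + 1.0 * consistency    # trade-off weight is an assumption
    opt.zero_grad()
    loss.backward()
    opt.step()
```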
3

Learning with Constraint-Based Weak Supervision

Arachie, Chidubem Gibson 28 April 2022 (has links)
The recent adoption of machine learning models by many businesses has underscored the need for quality training data. Typically, training supervised machine learning systems involves using large amounts of human-annotated data. Labeling data is expensive and can be a limiting factor in using machine learning models. To enable the continued integration of machine learning systems in businesses, and easy access by users, researchers have proposed several alternatives to supervised learning. Weak supervision is one such alternative. Weak supervision, or weakly supervised learning, involves using noisy labels (weak signals of the data) from multiple sources to train machine learning systems. A weak supervision model aggregates multiple noisy label sources, called weak signals, to produce probabilistic labels for the data. The main allure of weak supervision is that it provides a cheap yet effective substitute for supervised learning without the need for labeled data. The key challenge in training weakly supervised machine learning models is that the weak supervision leaves ambiguity about the possible true labelings of the data. In this dissertation, we address this challenge by developing new weak supervision methods. Our work focuses on learning with constraint-based weak supervision algorithms. First, we propose an adversarial labeling approach for weak supervision, in which an adversary chooses the labels for the data and a model learns by minimising the error made by the adversarial model. Second, we propose a simple constraint-based approach that minimises a quadratic objective function in order to solve for the labels of the data. Next, we explain the notion of data consistency for weak supervision and propose a data-consistent method for weakly supervised learning; this approach combines weak supervision labels with features of the training data to make the learned labels consistent with the data. Lastly, we use this data-consistent approach to propose a general method for improving the performance of weak supervision models, combining weak supervision with active learning to produce a model that outperforms each individual approach using only a handful of labeled examples. For each algorithm we propose, we report extensive empirical validation, testing it on standard text and image classification datasets. We compare each approach against baseline and state-of-the-art methods and show that in most cases we match or outperform the methods we compare against, with significant gains on both binary and multi-class classification tasks. / Doctor of Philosophy / Machine learning models learn to make predictions from data. In supervised learning, a machine learning model is fed data and corresponding labels so that it can learn to predict labels for new, unseen test data. Curating large, fully supervised datasets is expensive and time-consuming, since it involves subject-matter experts providing labels for each individual data example. The cost of collecting labels has become one of the major roadblocks to training machine learning models. An alternative to supervised training is weak supervision. Weak supervision, or weakly supervised learning, trains with cheap, easy-to-define signals that noisily label the data; we refer to these signals as weak signals. A weak supervision model combines various weak signals to produce training labels for the data. The key challenge in weak supervision is how to combine the different weak signals while navigating misleading correlations in their errors. In this dissertation, we propose several algorithms for weakly supervised learning. We classify our methods as constraint-based weak supervision, since the weak supervision is provided as constraints to our algorithms. We use experiments on different text and image classification datasets to show that our methods are effective and outperform competing methods. Lastly, we propose a general framework for improving the performance of weak supervision models by incorporating a few labeled examples; with this method we are able to close the gap to supervised learning without the need to label all the data.
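As a hedged illustration of the constraint-based idea, here is a toy aggregation in the spirit described above — not the dissertation's actual algorithms. Soft labels are found by minimising a quadratic objective that keeps them near the mean weak signal, with a penalty whenever a weak signal's expected-error bound is violated. The synthetic signals, the bounds, and the penalty scheme are all assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
truth = rng.integers(0, 2, n)                  # hidden labels, for evaluation only
# Three noisy weak signals: soft votes correlated with the truth.
signals = np.clip(truth + rng.normal(0, 0.4, (3, n)), 0, 1)
bounds = np.array([0.3, 0.3, 0.3])             # assumed expected-error bounds

y = signals.mean(axis=0)                       # initialize at the mean vote
lr, lam = 0.1, 5.0
for _ in range(500):
    # Expected error of each weak signal if y were the true (soft) labeling.
    err = np.abs(signals - y).mean(axis=1)
    # Quadratic objective: stay near the mean vote...
    grad = 2 * (y - signals.mean(axis=0))
    # ...while penalizing any signal whose error bound is violated.
    for q, e, b in zip(signals, err, bounds):
        if e > b:
            grad += lam * np.sign(y - q) / n   # subgradient of the error term
    y = np.clip(y - lr * grad, 0, 1)           # projected gradient step

print("accuracy of aggregated labels:", ((y > 0.5) == truth).mean())
```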
4

Prédiction structurée pour l’analyse de données séquentielles / Structured prediction for sequential data

Lajugie, Rémi 18 September 2015 (has links)
In this manuscript, we consider structured machine learning problems, focusing on those involving sequential structure. In the first part, we consider the problem of similarity measure learning for two tasks where sequential structure is at stake: (i) multivariate change-point detection and (ii) time warping of pairs of time series. The methods generally used to solve these tasks rely on a similarity measure to compare timestamps. We propose to learn the similarity measure from fully labelled data, i.e., signals that are already segmented or pairs of signals for which the optimal time warping is known. Using standard structured prediction methods, we present algorithmically efficient ways of learning, with loss functions specifically designed for the tasks, and we validate our approach on real-world data from diverse domains. In the second part, we focus on the problem of weak supervision, in which sequential data are not totally labeled. We study the problem of aligning an audio recording with its score. We consider the score as a symbolic representation giving (i) complete information about the order of the events or notes played and (ii) an approximate idea of the expected shape of the alignment. We propose to learn a classifier for each note using this information. Our learning problem is based on the optimization of a convex function that takes advantage of the weak supervision and of the sequential structure of the data. Our approach is validated through experiments on the task of audio-to-score alignment on real musical data.
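To illustrate where a learned similarity measure enters such sequential tasks, the sketch below runs dynamic time warping with a parameterized Mahalanobis cost between timestamps. In the thesis the metric is learned from labelled alignments; here the matrix M is fixed to the identity purely for illustration, and the toy sequences are assumptions.

```python
import numpy as np

def dtw_cost(A, B, M):
    """DTW between sequences A (n x d) and B (m x d) under the metric M."""
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diff = A[i - 1] - B[j - 1]
            cost = diff @ M @ diff   # Mahalanobis similarity between timestamps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 3))                    # a multivariate time series
B = A[::2] + rng.normal(0, 0.1, size=(20, 3))   # a time-warped, noisy copy
M = np.eye(3)                                   # identity = plain Euclidean cost
print("DTW cost:", dtw_cost(A, B, M))
```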
5

Zero-shot, One Kill: BERT for Neural Information Retrieval

Efes, Stergios January 2021 (has links)
[Background]: The advent of bidirectional encoder representations from transformers (BERT) language models (Devlin et al., 2018) and of MS Marco, a large-scale human-annotated dataset for machine reading comprehension (Bajaj et al., 2016) that was made publicly available, led the field of information retrieval (IR) to experience a revolution (Lin et al., 2020). The BERT-based retrieval model of Nogueira and Cho (2019) became, at the time their paper was published, the top entry in the MS Marco passage-reranking leaderboard, surpassing the previous state of the art by 27% in MRR@10. However, training such neural IR models for domains other than MS Marco remains hard, because neural approaches often require a vast amount of training data to perform effectively, which is not always available. To address the shortage of labelled data, a new line of research emerged: training neural models with weak supervision. In weak supervision, given an unlabelled dataset, labels are generated automatically using an existing model, and a machine learning model is then trained on the artificial "weak" data. In the case of weak supervision for IR, the training dataset comes in the form of (query, passage) tuples. Dehghani et al. (2017) used the AOL query logs (Pass et al., 2006), a set of millions of real web queries, and BM25 to retrieve the relevant passages for each user query. A drawback of this approach is that it is hard to obtain query logs for every different domain. [Objective]: This thesis proposes an intuitive approach to the shortage of data in domains with limited or no data at all, through transfer learning in the context of IR. We leverage Wikipedia's structure to create a Wikipedia-based generic IR training dataset for zero-shot neural models. [Method]: We create "pseudo-queries" by concatenating the titles of Wikipedia articles with each of their section titles, and we treat the associated section's passage as the relevant passage for the pseudo-query. All of our experiments are evaluated on a standard collection: MS Marco, a large-scale web collection. For our zero-shot experiments, our proposed model, called "Wiki", is a BERT model trained on the artificial Wikipedia-based dataset, and the baseline is a default BERT model without any additional training. In our second line of experiments, we explore the benefits gained by pre-fine-tuning on the Wikipedia-based IR dataset and further fine-tuning on in-domain data. Here our proposed model, "Wiki+Ma", is a BERT model pre-fine-tuned on the Wikipedia-based dataset and further fine-tuned on MS Marco, while the baseline is a BERT model fine-tuned only on MS Marco. [Results]: Our first experiments show that the "Wiki" model achieves 0.197 in MRR@10, about 10 points higher than a BERT model with default weights; in addition, results on the development set indicate that the "Wiki" model performs better than a BERT model trained on in-domain data when that data comprises between 10k and 50k instances. Our second line of experiments shows that pre-fine-tuning on the Wikipedia-based IR dataset benefits later fine-tuning steps on in-domain data in terms of stability. [Conclusion]: Our findings suggest that transfer learning for IR tasks that leverages the generic knowledge incorporated in Wikipedia is possible, though more experimentation is needed to understand its limitations in comparison with traditional approaches such as BM25.
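The pseudo-query construction in the [Method] paragraph is straightforward to sketch. The toy `articles` dictionary below is an assumption standing in for a parsed Wikipedia dump; the pairing logic — article title plus section title as the query, the section's passage as the relevant passage — follows the description above.

```python
# Toy stand-in for a parsed Wikipedia dump: article -> {section title: passage}.
articles = {
    "Information retrieval": {
        "History": "The idea of using computers to search for relevant ...",
        "Model types": "For effectively retrieving relevant documents ...",
    },
}

def make_pseudo_queries(articles):
    """Build weakly supervised (pseudo-query, relevant passage) training pairs."""
    pairs = []
    for title, sections in articles.items():
        for section_title, passage in sections.items():
            query = f"{title} {section_title}"  # title + section title
            pairs.append((query, passage))
    return pairs

for query, passage in make_pseudo_queries(articles):
    print(query, "->", passage[:40], "...")
```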
6

Weakly Supervised Characterization of Discourses on Social and Political Movements on Online Media

Shamik Roy (16317636) 14 June 2023 (has links)
Nowadays an increasing number of people consume, share, and interact with information online, which results in posting and counter-posting on online media by different ideological groups on various polarized topics. Consequently, online media has become the primary platform for political and social influencers to interact directly with citizens and share their perspectives, views, and stances, with the goal of gaining support for their actions, bills, and legislation. Understanding the perspectives and influencing strategies in online media texts is therefore important for individuals to avoid misinformation, and for improving trust between the general public and influencers and authoritative figures such as the government.

Automatically understanding the perspectives in online media is difficult because of two major challenges. First, a proper grammar or mechanism for characterizing the perspectives is not available. Recent studies in Natural Language Processing (NLP) have leveraged resources from social science to explain perspectives; for example, Policy Framing and Moral Foundation Theory are used to understand how issues are framed and the moral appeal expressed in texts to gain support. However, these theories often fail to capture the nuances in perspectives and cannot generalize over all topics and events. The research in this dissertation is among the first to adapt social science theories in NLP for understanding perspectives to the extent that they can capture differences in ideologies or stances. Second, annotated data for building automatic perspective-detection methods that generalize over the large corpus of online media text on different topics is difficult to obtain. To tackle this problem, we use weak sources of supervision, such as the social network interactions of the users who produce and interact with the messages, light human interaction, or artificial few-shot data generated by Large Language Models.

Our insight is that various tasks, such as identifying perspectives, stances, and sentiments toward entities, are interdependent when characterizing online media messages. We therefore propose approaches that jointly model these interdependent problems and perform structured prediction to solve them jointly. Our findings show that the developed methods efficiently capture the messaging choices and perspectives expressed on online media in response to various real-life events, and their prominence and contrast across different ideological camps.
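As a hedged illustration of one weak source of supervision named above — social network interactions — the sketch below propagates a few seed ideology labels through a toy user-interaction graph, so that unlabeled users (and hence their messages) inherit weak labels from their neighbours. This is a generic label-propagation demo under assumed data, not the dissertation's actual model.

```python
import numpy as np

# Toy symmetric interaction graph: adj[i, j] = 1 if users i and j interact.
adj = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)
scores = np.array([1.0, 0.0, 0.0, 0.0, -1.0])    # seeds: user 0 = +1, user 4 = -1
seed_mask = np.array([True, False, False, False, True])

# Iterative propagation: non-seed users take the average of their neighbours.
for _ in range(50):
    neighbour_avg = adj @ scores / adj.sum(axis=1)
    scores = np.where(seed_mask, scores, neighbour_avg)

print("weak ideology scores per user:", np.round(scores, 2))
```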
