11

Evaluating the effects of data augmentations for specific latent features : Using self-supervised learning / Utvärdering av effekterna av datamodifieringar på inlärda representationer : Vid självövervakande maskininlärning

Ingemarsson, Markus, Henningsson, Jacob January 2022 (has links)
Supervised learning requires labeled data, which is cumbersome to produce, making it costly and time-consuming. SimCLR is a self-supervised learning framework that uses data augmentations to learn without labels. This thesis investigates how well cropping and color-distortion augmentations work for two datasets, MPI3D and Causal3DIdent. The learned representations are evaluated using representation similarity analysis. The augmentations were intended to make the model learn invariant representations of the object shape in the images, treating shape as content while ignoring unnecessary features, treated as style. Eight models, A-H, were created. Models A and E were trained using supervised learning as a benchmark for the remaining self-supervised models. Models B and C learned invariant features of style instead of invariant representations of shape. Model D learned invariant representations of shape, although it also treated style-related factors as content. Models F, G, and H managed to learn invariant representations of shape with varying intensity while treating the remaining features as style. The conclusion was that models can learn invariant representations of content-related features using self-supervised learning with the chosen augmentations. However, the augmentation settings must be suitable for the dataset.
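For concreteness, here is a minimal sketch of a SimCLR-style augmentation pipeline combining random cropping with color distortion; the image size, crop scale, and jitter strengths are assumed values for illustration, not the settings used in the thesis for MPI3D or Causal3DIdent.

```python
# Minimal SimCLR-style augmentation pipeline (assumed settings, not the thesis
# configuration): random cropping plus color distortion, producing two
# correlated views of the same image for contrastive training.
from torchvision import transforms

def simclr_augmentations(image_size: int = 64, color_strength: float = 0.8):
    color_jitter = transforms.ColorJitter(
        brightness=color_strength, contrast=color_strength,
        saturation=color_strength, hue=0.2 * color_strength)
    return transforms.Compose([
        transforms.RandomResizedCrop(image_size, scale=(0.2, 1.0)),  # cropping
        transforms.RandomApply([color_jitter], p=0.8),               # color distortion
        transforms.RandomGrayscale(p=0.2),
        transforms.ToTensor(),
    ])

class TwoViews:
    """Wraps a transform so each image yields the two views SimCLR contrasts."""
    def __init__(self, transform):
        self.transform = transform

    def __call__(self, image):
        return self.transform(image), self.transform(image)
```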
12

Imitation from observation using behavioral learning

Djeafea Sonwa, Medric B. 11 1900 (has links)
Imitation from observation (IfO) is a learning paradigm that consists of training autonomous agents in a Markov Decision Process (MDP) by observing an expert's demonstrations, without access to the expert's actions. These demonstrations can be sequences of environment states or raw visual observations of the environment. Although the setting using low-dimensional states has yielded convincing results with recent approaches, the use of visual observations remains an important challenge in IfO. One of the most common procedures for solving the IfO problem is to learn a reward function from the demonstrations, but the need to understand the environment and the expert's moves from video in order to reward the learning agent appropriately increases the complexity of the problem. We approach this problem with a method that represents the agent's behaviors in a latent space using demonstration videos. Our approach exploits recent contrastive learning techniques for images and videos and uses a bootstrapping algorithm to progressively train a trajectory-encoding function from the variation of the agent's policy. Simultaneously, this function rewards the imitating agent through a Reinforcement Learning (RL) algorithm. Our method uses a limited number of demonstration videos, and we do not have access to any expert policy. In experiments, our imitating agents show convincing performance on a set of control tasks and demonstrate that learning a behavior-encoding function from videos makes it possible to build an effective reward function in an MDP.
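As a rough illustration of the reward idea described above, the sketch below embeds an agent trajectory and an expert demonstration with a toy encoder and rewards their similarity; the encoder architecture, pooling, and similarity measure are placeholders, not the bootstrapped trajectory-encoding function of the thesis.

```python
# Hypothetical sketch: reward an imitating agent by the similarity between the
# embedding of its recent trajectory and the embedding of an expert
# demonstration video. The encoder is a stand-in for a learned
# trajectory-encoding function.
import torch
import torch.nn.functional as F

class TrajectoryEncoder(torch.nn.Module):
    """Toy encoder: projects per-frame features, mean-pools them, and normalizes."""
    def __init__(self, frame_dim: int, embed_dim: int = 128):
        super().__init__()
        self.proj = torch.nn.Sequential(
            torch.nn.Linear(frame_dim, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, embed_dim))

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (T, frame_dim) -> single unit-norm trajectory embedding
        return F.normalize(self.proj(frames).mean(dim=0), dim=0)

def behavioral_reward(encoder, agent_frames, expert_frames) -> float:
    """Reward = similarity of the agent's behavior to the expert demonstration."""
    with torch.no_grad():
        z_agent = encoder(agent_frames)
        z_expert = encoder(expert_frames)
    return torch.dot(z_agent, z_expert).item()

# Usage with random stand-in features for frames of the two trajectories.
enc = TrajectoryEncoder(frame_dim=32)
print(behavioral_reward(enc, torch.randn(50, 32), torch.randn(60, 32)))
```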
13

Messing With The Gap: On The Modality Gap Phenomenon In Multimodal Contrastive Representation Learning

Al-Jaff, Mohammad January 2023 (has links)
In machine learning, a sub-field of computer science, a two-tower architecture is a specialised type of neural network model that encodes paired data from different modalities (such as text and images, sound and video, or proteomics and gene-expression profiles) into a shared latent representation space. However, training these models with a specific contrastive loss function, known as the multimodal infoNCE loss, often leads to a distinctive geometric phenomenon known as the modality gap: a clear geometric separation between the embeddings of the modalities in the joint contrastive latent space. This thesis investigates the modality gap in multimodal machine learning, specifically in two-tower neural networks trained with the multimodal infoNCE loss. We examine the adequacy of the current definition of the modality gap, the conditions under which the phenomenon manifests, and its impact on representation quality and downstream task performance. Our approach to these questions is a two-phase experimental strategy. Phase I involves a series of experiments, ranging from toy synthetic simulations to true multimodal machine learning with complex datasets, to explore and characterise the modality gap under varying conditions. Phase II focuses on modifying the modality gap and analysing representation quality, evaluating different loss functions and their impact on the gap. This methodical exploration allows us to systematically dissect the emergence and implications of the modality gap phenomenon, providing insights into its impact on downstream tasks, measured with proxy metrics based on semantic clustering in the shared latent representation space and modality-specific linear-probe evaluation.
Our findings reveal that the modality gap definition proposed by W. Liang et al. (2022) is insufficient. We demonstrate that similar modality gap magnitudes can exhibit varying linear separability between modality embeddings in the contrastive latent space and varying embedding topologies, indicating the need for additional metrics to capture the true essence of the gap. Furthermore, our experiments show that the temperature hyperparameter in the multimodal infoNCE loss plays a crucial role in the emergence of the modality gap, and that this effect varies across datasets, suggesting that individual dataset characteristics significantly influence how the gap manifests. A key finding is that modality gaps consistently emerge with small temperature settings in the fixed-temperature mode of the loss function, and almost invariably under learned-temperature settings, regardless of the initial temperature value. Additionally, we observe that the magnitude of the modality gap is influenced by distribution shift, with the gap increasing progressively from the training set to the validation set, then to the test set, and finally to more distributionally shifted datasets. We find that the choice of contrastive learning method, temperature mode, and temperature value is crucial in shaping the modality gap. However, reducing the gap does not consistently improve downstream task performance, suggesting that its role is more nuanced than previously understood. This insight indicates that the modality gap might be a geometric by-product of the learning method rather than a critical determinant of representation quality.
Our results point to the need to reevaluate the significance of the modality gap in multimodal contrastive learning, emphasising the importance of dataset characteristics and contrastive learning methodology.
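For readers unfamiliar with the quantities involved, the sketch below shows a symmetric multimodal InfoNCE loss with a temperature hyperparameter and a centroid-distance measure of the modality gap in the spirit of Liang et al. (2022); the shapes, names, and default temperature are assumptions for illustration rather than the thesis implementation.

```python
# Illustrative sketch (not the thesis code): a symmetric multimodal InfoNCE
# loss with a temperature hyperparameter, and a centroid-distance notion of
# the modality gap between two sets of paired embeddings.
import torch
import torch.nn.functional as F

def multimodal_infonce(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.07):
    """Symmetric InfoNCE over a batch of paired embeddings z_a[i] <-> z_b[i]."""
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature          # pairwise similarities
    targets = torch.arange(z_a.size(0))           # matching pairs sit on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def modality_gap(z_a: torch.Tensor, z_b: torch.Tensor) -> float:
    """Distance between the centroids of the two modalities' unit embeddings."""
    c_a = F.normalize(z_a, dim=1).mean(dim=0)
    c_b = F.normalize(z_b, dim=1).mean(dim=0)
    return torch.linalg.norm(c_a - c_b).item()

# Usage with random stand-in embeddings for a batch of 256 pairs.
za, zb = torch.randn(256, 64), torch.randn(256, 64)
print(multimodal_infonce(za, zb).item(), modality_gap(za, zb))
```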
14

WEAKLY SUPERVISED CHARACTERIZATION OF DISCOURSES ON SOCIAL AND POLITICAL MOVEMENTS ON ONLINE MEDIA

Shamik Roy (16317636) 14 June 2023 (has links)
Nowadays an increasing number of people consume, share, and interact with information online. This results in posting and counter-posting on online media by different ideological groups on various polarized topics. Consequently, online media has become the primary platform for political and social influencers to interact directly with citizens and share their perspectives, views, and stances, with the goal of gaining support for their actions, bills, and legislation. Hence, understanding the perspectives and influencing strategies in online media texts is important for individuals to avoid misinformation and to improve trust between the general public and influencers and authoritative figures such as the government.
Automatically understanding the perspectives in online media is difficult because of two major challenges. First, a proper grammar or mechanism to characterize the perspectives is not available. Recent studies in Natural Language Processing (NLP) have leveraged resources from social science to explain perspectives. For example, Policy Framing and Moral Foundation Theory are used to understand how issues are framed and the moral appeal expressed in texts to gain support. However, these theories often fail to capture the nuances in perspectives and cannot generalize across all topics and events. Our research in this dissertation is one of the first studies to adapt social science theories in Natural Language Processing for understanding perspectives to the extent that they can capture differences in ideologies or stances. The second key challenge in understanding perspectives in online media texts is that annotated data is difficult to obtain for building automatic methods that detect perspectives and generalize over the large corpus of online media text on different topics. To tackle this problem, in this dissertation we used weak sources of supervision such as the social network interactions of users who produce and interact with the messages, weak human interaction, or artificial few-shot data generated with Large Language Models.
Our insight is that various tasks such as perspectives, stances, and sentiments toward entities are interdependent when characterizing online media messages. As a result, we proposed approaches that jointly model these interdependent problems and perform structured prediction to solve them jointly. Our research findings showed that the messaging choices and perspectives on online media in response to various real-life events, and their prominence and contrast in different ideological camps, can be efficiently captured using our developed methods.
15

Better representation learning for TPMS

Raza, Amir 10 1900 (has links)
With the increase in popularity of AI and machine learning, participation numbers have exploded at AI/ML conferences. The large number of submitted papers and the evolving nature of topics pose additional challenges for the peer-review systems that are crucial to our scientific communities. Some conferences have moved towards automating reviewer assignment for submissions, TPMS [1] being one such existing system. Currently, TPMS prepares content-based profiles of researchers and submitted papers to model the suitability of reviewer-submission pairs. In this work, we explore different approaches to self-supervised fine-tuning of BERT transformers on conference paper data. We demonstrate some new approaches to augmentation views for self-supervision in natural language processing, which until now has focused more on problems in computer vision. We then use these individual paper representations to build an expertise model that learns to combine the representations of a reviewer's different published works and predict their relevance for reviewing a submitted paper. Finally, we show that better individual paper representations and better expertise modeling lead to better performance on the reviewer-suitability prediction task.
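As an illustration of the expertise-modelling step, the sketch below scores a reviewer for a submission by aggregating cosine similarities between the submission embedding and the reviewer's paper embeddings; the top-k mean aggregation and the embedding dimensionality are assumptions, not necessarily how TPMS or the thesis combines them.

```python
# Hypothetical illustration of expertise modelling: score a reviewer for a
# submission by aggregating similarities between the submission embedding and
# the embeddings of the reviewer's published papers.
import numpy as np

def reviewer_affinity(submission_vec: np.ndarray,
                      reviewer_paper_vecs: np.ndarray,
                      top_k: int = 3) -> float:
    """Mean cosine similarity of a submission to a reviewer's k closest papers."""
    s = submission_vec / np.linalg.norm(submission_vec)
    p = reviewer_paper_vecs / np.linalg.norm(reviewer_paper_vecs, axis=1, keepdims=True)
    sims = p @ s                       # one similarity per published paper
    k = min(top_k, len(sims))
    return float(np.sort(sims)[-k:].mean())

# Example with random stand-in embeddings (e.g. from a fine-tuned BERT encoder).
rng = np.random.default_rng(0)
print(reviewer_affinity(rng.normal(size=768), rng.normal(size=(12, 768))))
```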
16

Finding duplicate offers in the online marketplace catalogue using transformer based methods : An exploration of transformer based methods for the task of entity resolution / Hitta dubbletter av erbjudanden i online marknadsplatskatalog med hjälp av transformer-baserade metoder : En utforskning av transformer-baserad metoder för uppgiften att deduplicera

Damian, Robert-Andrei January 2022 (has links)
The amount of data available on the web is constantly growing, and e-commerce websites are no exception. Given the abundance of available information, finding offers for the same product in the catalogues of different retailers is a challenge. This problem is an interesting one and addresses the needs of multiple actors: a customer wants to find the best deal for the product they intend to buy, and a retailer wants to keep up with the competition and adapt its pricing strategy accordingly. Various services already offer the possibility of finding duplicate products in the catalogues of e-commerce retailers, but their solutions are based on matching a Global Trade Identification Number (GTIN). This strategy is limited because a GTIN may not be made publicly available by a competitor, may differ for the same product exported by the manufacturer to different markets, or may not even exist for low-value products. The field of Entity Resolution (ER), a sub-branch of Natural Language Processing (NLP), focuses on matching duplicate database entries when a deterministic identifier is not available. We investigate various solutions from the field and present a new model called Spring R-SupCon that focuses on low-volume datasets. Our work builds upon the recently introduced R-SupCon model, introducing a new learning scheme that improves R-SupCon's performance by up to 74.47% F1 score and surpasses Ditto by up to 12% F1 score on low-volume datasets. The work also describes how a proprietary dataset can be created for training. Moreover, our experiments show that smaller language models can be used for ER with minimal loss in performance. This has the potential to extend the adoption of Transformer-based solutions to companies and markets where datasets are difficult to create, as is the case for the Swedish marketplace Fyndiq.
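For context, a generic entity-resolution baseline of the kind the thesis improves upon might encode two offer titles with a small pretrained sentence encoder and threshold their cosine similarity; the model name and threshold below are assumptions, and this is not the R-SupCon or Spring R-SupCon approach.

```python
# Generic baseline sketch for offer matching (not the thesis models): encode
# offer titles with a pretrained sentence encoder and call a pair a duplicate
# when the cosine similarity exceeds a threshold.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # small encoder, assumed choice

def is_duplicate(offer_a: str, offer_b: str, threshold: float = 0.85) -> bool:
    """True if the two offer titles likely describe the same product."""
    emb = model.encode([offer_a, offer_b], normalize_embeddings=True)
    return float(util.cos_sim(emb[0], emb[1])) >= threshold

print(is_duplicate("Apple iPhone 13 128GB Blue", "iPhone 13 128 GB blå (Apple)"))
```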
