81 |
Latent data augmentation and modular structure for improved generalization. Lamb, Alexander, 08 1900 (has links)
This thesis explores the nature of generalization in deep learning and several settings in which it fails. In particular, deep neural networks can struggle to generalize in settings with limited data, insufficient supervision, challenging long-range dependencies, or complex structure and subsystems. This thesis explores the nature of these challenges for generalization in deep learning and presents several algorithms which seek to address these challenges. In the first article, we show how training with interpolated hidden states can improve generalization and calibration in deep learning. We also introduce a theory showing how our algorithm, which we call Manifold Mixup, leads to a flattening of the per-class hidden representations, which can be seen as a compression of the information in the hidden states. The second article is related to the first and shows how interpolated examples can be used for semi-supervised learning. In addition to interpolating the input examples, the model's interpolated predictions are used as targets for these examples. This improves results on standard benchmarks as well as classic 2D toy problems for semi-supervised learning. The third article studies how a recurrent neural network can be divided into multiple modules with different parameters and well-separated hidden states, together with a competition mechanism restricting updates of the hidden states to a subset of the most relevant modules at each time step. This improves systematic generalization when the pattern distribution is changed between the training and evaluation phases. It also improves generalization in reinforcement learning. In the fourth article, we show that attention can be used to control the flow of information between successive layers in deep networks. This allows each layer to only process the subset of the previously computed layers' outputs which are most relevant. This improves generalization on relational reasoning tasks as well as standard benchmark classification tasks. / This thesis explores the nature of generalization in deep learning and several settings in which it fails. In particular, deep neural networks can struggle to generalize in settings with limited data, insufficient supervision, difficult long-range dependencies, or complex structure and subsystems. This thesis explores the nature of these challenges for generalization in deep learning and presents several algorithms that seek to address them.
In the first article, we show how training with interpolated hidden states can improve generalization and calibration in deep learning. We also introduce a theory showing how our algorithm, which we call Manifold Mixup, leads to a flattening of the per-class hidden representations, which can be seen as a compression of the information in the hidden states. The second article is related to the first and shows how interpolated examples can be used for semi-supervised learning. Besides interpolating the input examples, the model's interpolated predictions are used as targets for these examples. This improves results on standard benchmarks as well as on classic 2D toy problems for semi-supervised learning.
The third article studies how a recurrent neural network can be divided into several modules with different parameters and well-separated hidden states, together with a competition mechanism restricting the updating of the hidden states to a subset of the most relevant modules at a given time step. This improves systematic generalization when the pattern distribution is changed between the training and evaluation phases. It also improves generalization in reinforcement learning. In the fourth article, we show that attention can be used to control the flow of information between successive layers of deep networks. This allows each layer to process only the subset of the previously computed layers' outputs that are most relevant. This improves generalization on relational reasoning tasks as well as on standard benchmark classification tasks.
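The hidden-state interpolation at the core of Manifold Mixup can be illustrated with a short PyTorch sketch. The split of the network into an encoder and a classifier, the Beta(2, 2) mixing coefficient, and the toy MLP below are illustrative assumptions rather than the article's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def manifold_mixup_loss(encoder, classifier, x, y, alpha=2.0):
    """Interpolate the hidden states of a random pairing of examples and mix their targets."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))          # random pairing within the batch
    h = encoder(x)                            # hidden states at a chosen layer
    h_mix = lam * h + (1 - lam) * h[perm]     # interpolated hidden states
    logits = classifier(h_mix)
    # mix the targets with the same coefficient
    return lam * F.cross_entropy(logits, y) + (1 - lam) * F.cross_entropy(logits, y[perm])


# toy usage: a small MLP split into an "encoder" (up to the mixed layer) and a "classifier"
encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
classifier = nn.Linear(256, 10)
x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
loss = manifold_mixup_loss(encoder, classifier, x, y)
loss.backward()
```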
|
82 |
Cognitively Guided Modeling of Visual Perception in Intelligent Vehicles. Plebe, Alice, 20 April 2021 (has links)
This work proposes a strategy for visual perception in the context of autonomous driving. Despite the growing research aiming to implement self-driving cars, no artificial system can yet claim to have reached the driving performance of a human. Humans, when not distracted or drunk, are still the best drivers you can currently find. Hence, theories about the human mind and its neural organization could offer valuable insights into how to design a better autonomous driving agent. This dissertation focuses specifically on the perceptual aspect of driving, and it takes inspiration from four key theories on how the human brain achieves the cognitive capabilities required by the activity of driving. The first idea lies at the foundation of current cognitive science, and it argues that thinking nearly always involves some sort of mental simulation, which takes the form of imagery when dealing with visual perception. The second theory explains how the perceptual simulation takes place in neural circuits called convergence-divergence zones, which expand and compress information to extract abstract concepts from visual experience and code them into compact representations. The third theory highlights that perception, when specialized for a task as complex as driving, is refined by experience in a process called perceptual learning. The fourth theory, namely the free-energy principle of predictive brains, corroborates the role of visual imagination as a fundamental mechanism of inference. In order to implement these theoretical principles, it is necessary to identify the most appropriate computational tools currently available. Within the consolidated and successful field of deep learning, I select the artificial architectures and strategies that bear a sound resemblance to their cognitive counterparts. Specifically, convolutional autoencoders have a strong correspondence with the architecture of convergence-divergence zones and the process of perceptual abstraction. The free-energy principle of predictive brains is related to variational Bayesian inference and the use of recurrent neural networks. In fact, this principle can be translated into a training procedure that learns abstract representations predisposed to predicting how the current road scenario will change in the future. The main contribution of this dissertation is a method to learn conceptual representations of the driving scenario from visual information. This approach forces a semantic internal organization, in the sense that distinct parts of the representation are explicitly associated with specific concepts useful in the context of driving. Specifically, the model uses as few as 16 neurons for each of the two basic concepts considered here: vehicles and lanes. At the same time, the approach biases the internal representations towards the ability to predict the dynamics of objects in the scene. This property of temporal coherence allows the representations to be exploited to predict plausible future scenarios and to perform a simplified form of mental imagery. In addition, this work includes a proposal to tackle the problem of opaqueness affecting deep neural networks. I present a method that aims to mitigate this issue in the context of longitudinal control for automated vehicles.
A further contribution of this dissertation experiments with higher-level prediction spaces, such as occupancy grids, which could reconcile direct applicability to motor control with biological plausibility.
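The convergence-divergence idea and the semantically partitioned latent code described above can be sketched as a small convolutional autoencoder whose bottleneck is split into named 16-unit concept blocks. The layer sizes, image resolution, and the way the blocks are exposed are assumptions for illustration, not the dissertation's architecture.

```python
import torch
import torch.nn as nn

class ConceptAutoencoder(nn.Module):
    """Convolutional autoencoder with a latent code partitioned into concept blocks."""
    def __init__(self, concept_dims=None):
        super().__init__()
        self.concept_dims = concept_dims or {"vehicles": 16, "lanes": 16}
        latent = sum(self.concept_dims.values())
        self.encoder = nn.Sequential(                  # convergence: compress the image
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(64 * 16 * 16, latent),
        )
        self.decoder = nn.Sequential(                  # divergence: reconstruct the scene
            nn.Linear(latent, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        # expose each named concept block separately, e.g. for concept-specific losses
        parts, i = {}, 0
        for name, d in self.concept_dims.items():
            parts[name] = z[:, i:i + d]
            i += d
        return self.decoder(z), parts

model = ConceptAutoencoder()
x = torch.rand(4, 3, 64, 64)                           # batch of 64x64 road images
recon, concepts = model(x)
print(recon.shape, concepts["vehicles"].shape)         # (4, 3, 64, 64), (4, 16)
```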
|
83 |
Learning representations of features of fish for performing regression tasks / Lärande av representationer av särdrag från fiskar för användande i regressionsstudier. Jónsson, Kristmundur, January 2021 (has links)
In the ever-changing landscape of the fishing industry, demands for automating specific processes are increasing substantially. Predicting future events eliminates much of the existing communication latency between fishing vessels and their customers and makes real-time analysis of onboard catch possible for the fishing industry. Further, machine learning models may reduce the number of human resources necessary for the numerous processes that may be automated. In this document, we focus on weight estimation of three different species of fish. Namely, we want to estimate the fish weight given its species through data-driven techniques. Due to the high complexity of image data, the overhead expenses of collecting images at sea, and the complexities of fish features, we consider a dimensionality reduction on the inputs to mitigate the curse of dimensionality and increase interpretability. We will study the viability of modeling fish weights from lower-dimensional feature vectors and the conjunction of lower-dimensional feature vectors and algorithmically obtained features. We found that modeling the residuals of a simple power model fitted on length features with latent representations resulted in a significant difference in the weight estimates for two types of fish and a decrease in Root Mean Squared Error (rMSE) and Mean Absolute Percentage Error (MAPE) scores in favour of the estimations utilizing latent representations. / In the ever-changing landscape of the fishing industry, demands to automate specific processes are increasing substantially. Predicting future events eliminates much of the existing communication latency between fishing vessels and their customers and enables real-time analysis of onboard catch for the fishing industry. Furthermore, it can reduce the number of human resources required for the many processes that can be automated. In this document, we study two different decision problems related to sorting fish of three different species. Namely, we want to determine the fish weight and its species through data-driven techniques. Due to the high complexity of image data, the overhead costs of collecting images at sea, and the complexity of fish features, we consider that a dimensionality reduction of the features mitigates the problem of dimensionality explosion and increases interpretability. We will study the suitability of modeling fish weights and species from lower-dimensional feature vectors, as well as the combination of these with algorithmically obtained features. We found that modeling the residuals of a simple power-function model fitted to fish lengths with latent representations resulted in a significant difference in the weight estimates for two types of fish and a decrease in rMSE and MAPE scores.
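A minimal NumPy/scikit-learn sketch of the pipeline described above: fit the classic allometric power model W ≈ a·L^b in log-log space, then model its residuals from latent feature vectors. The synthetic data, the ridge regressor, and the 16-dimensional latent features are placeholders, not the thesis setup.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_percentage_error, mean_squared_error

rng = np.random.default_rng(0)

# toy data: lengths (cm), latent image features, and true weights (g)
length = rng.uniform(20, 80, size=500)
latent = rng.normal(size=(500, 16))                           # e.g. autoencoder features
weight = 0.01 * length ** 3.0 * np.exp(0.05 * latent[:, 0])   # condition factor varies

# 1) fit the power model W = a * L^b by linear regression in log-log space
b, log_a = np.polyfit(np.log(length), np.log(weight), deg=1)
w_power = np.exp(log_a) * length ** b

# 2) model the residuals of the power model from the latent representation
residual = weight - w_power
ridge = Ridge(alpha=1.0).fit(latent, residual)
w_full = w_power + ridge.predict(latent)

for name, pred in [("power only", w_power), ("power + latent residuals", w_full)]:
    rmse = np.sqrt(mean_squared_error(weight, pred))
    mape = mean_absolute_percentage_error(weight, pred)
    print(f"{name}: rMSE={rmse:.1f} g, MAPE={mape:.3f}")
```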
|
84 |
Bidirectional Encoder Representations from Transformers (BERT) for Question Answering in the Telecom Domain : Adapting a BERT-like language model to the telecom domain using the ELECTRA pre-training approach / BERT för frågebesvaring inom telekomdomänen : Anpassning till telekomdomänen av en BERT-baserad språkmodell genom ELECTRA-förträningsmetoden. Holm, Henrik, January 2021 (has links)
The Natural Language Processing (NLP) research area has seen notable advancements in recent years, one being the ELECTRA model, which improves the sample efficiency of BERT pre-training by introducing a discriminative pre-training approach. Most publicly available language models are trained on general-domain datasets. Thus, research is lacking for niche domains with domain-specific vocabulary. In this paper, the process of adapting a BERT-like model to the telecom domain is investigated. For efficiency in training the model, the ELECTRA approach is selected. For measuring target-domain performance, the Question Answering (QA) downstream task within the telecom domain is used. Three domain adaptation approaches are considered: (1) continued pre-training on telecom-domain text starting from a general-domain checkpoint, (2) pre-training on telecom-domain text from scratch, and (3) pre-training from scratch on a combination of general-domain and telecom-domain text. Findings indicate that approach 1 is both inexpensive and effective, as target-domain performance increases are seen already after small amounts of training, while generalizability is retained. Approach 2 shows the highest performance on the target-domain QA task by a wide margin, albeit at the expense of generalizability. Approach 3 combines the benefits of the former two by achieving good performance on QA both in the general domain and the telecom domain. At the same time, it allows for a tokenization vocabulary well-suited for both domains. In conclusion, the suitability of a given domain adaptation approach is shown to depend on the available data and computational budget. Results highlight the clear benefits of domain adaptation, even when the QA task is learned through behavioral fine-tuning on a general-domain QA dataset due to insufficient amounts of labeled target-domain data being available. / Bidirectional language models such as BERT have achieved great success in the natural language processing field in recent years. Several extensions of BERT have been developed, among them ELECTRA, whose novel discriminative training procedure shortens training time. The majority of research in the area is carried out on data from the general domain. In other words, there is room for knowledge-building in domains with domain-specific language. In this work, methods for adapting a bidirectional language model to the telecom domain are explored. To ensure high efficiency in the pre-training stage, the ELECTRA model is used. Performance attained in the target domain is measured with a question-answering dataset for the telecom area. Three methods of domain adaptation are examined: (1) continued pre-training on telecom text of a model pre-trained on the general domain; (2) pre-training from scratch on telecom text; and (3) pre-training from scratch on a combination of text from the telecom area and the general domain. The experiments show that method 1 is both cost-effective and advantageous from a performance perspective. Already after a short period of continued pre-training, clear improvements in question answering within the target domain can be discerned, while generalizability is retained. Approach 2 exhibits the highest performance within the target domain, albeit with markedly worse ability to generalize. Method 3 combines the advantages of the two earlier methods through high performance both within the target domain and within the general domain. At the same time, the method allows the use of a tokenizer vocabulary well suited to both domains.
In summary, the suitability of a domain adaptation method is determined by the situation at hand and the data provided, as well as the available computational resources. The results demonstrate the clear gains that domain adaptation can give rise to, even when the question-answering task is learned by training on a dataset drawn from the general domain due to insufficient amounts of question-answering data within the target domain.
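The replaced-token-detection objective that makes ELECTRA pre-training sample-efficient can be sketched in a few lines of PyTorch: a small generator fills in masked tokens, and a discriminator learns to flag which tokens were replaced. The tiny encoders, vocabulary size, masking rate, and loss weight below are illustrative assumptions, not the thesis configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, MASK_ID = 1000, 64, 0            # toy vocabulary; id 0 acts as [MASK]

class TinyEncoder(nn.Module):
    """Stand-in for a transformer encoder: embeddings plus one self-attention layer."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
    def forward(self, ids):
        return self.layer(self.emb(ids))

generator_enc, disc_enc = TinyEncoder(), TinyEncoder()
gen_head = nn.Linear(DIM, VOCAB)              # predicts the original token at masked slots
disc_head = nn.Linear(DIM, 1)                 # per-token "was this token replaced?" score

def electra_step(ids, mask_prob=0.15):
    mask = torch.rand(ids.shape) < mask_prob
    corrupted = ids.masked_fill(mask, MASK_ID)

    # generator: masked-language-modeling loss on the masked positions only
    gen_logits = gen_head(generator_enc(corrupted))
    gen_loss = F.cross_entropy(gen_logits[mask], ids[mask])

    # sample replacements from the generator and build the discriminator input
    sampled = torch.distributions.Categorical(logits=gen_logits).sample()
    replaced_input = torch.where(mask, sampled, ids)
    is_replaced = (replaced_input != ids).float()

    # discriminator: replaced-token detection over all positions
    disc_logits = disc_head(disc_enc(replaced_input)).squeeze(-1)
    disc_loss = F.binary_cross_entropy_with_logits(disc_logits, is_replaced)
    return gen_loss + 50.0 * disc_loss        # the detection loss is weighted heavily

ids = torch.randint(1, VOCAB, (8, 32))        # batch of 8 sequences, 32 tokens each
loss = electra_step(ids)
loss.backward()
```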
|
85 |
Representation Learning for Modulation Recognition of LPI Radar Signals Through Clustering / Representationsinlärning för modulationsigenkänning av LPI-radarsignaler genom klustring. Grancharova, Mila, January 2020 (has links)
Today, there is a demand for reliable ways to perform automatic modulation recognition of Low Probability of Intercept (LPI) radar signals, not least in the defense industry. This study explores the possibility of performing automatic modulation recognition on these signals through clustering, and more specifically how to learn representations of the input signals for this task. A semi-supervised approach using a bootstrapped convolutional neural network classifier for representation learning is proposed. A comparison is made between training the representation learner on raw time series and on spectral representations of the input signals. It is concluded that, overall, the system trained on spectral representations performs better, though both approaches show promise and should be explored further. The proposed system is tested both on known modulation types and on previously unseen modulation types in the task of novelty detection. The results show that the system can successfully identify known modulation types with an adjusted mutual information of 0.86 for signal-to-noise ratios ranging from -10 dB to 10 dB. When introducing previously unseen modulations, up to six modulations can be identified with an adjusted mutual information above 0.85. Furthermore, it is shown that the system can learn to separate LPI radar signals from telecom signals, which are present in most signal environments. / Today, there is a need for reliable automated modulation recognition (AMR) of Low Probability of Intercept (LPI) radar signals, not least in the defense industry. This study explores the possibility of performing AMR of these signals through clustering, and more specifically how representations of the signals should be learned for this purpose. A semi-supervised learning method using a classifier based on convolutional networks is proposed. A comparison is made between a system that trains for representation learning on raw time series and a system that trains on spectral representations of the signals. The results show that the system trained on spectral representations performs better on the whole, but both methods show promising results and should be explored further. The system is tested on signals from both known and previously unknown modulations in order to test its ability to detect new types of modulations. The system identifies known modulations with an adjusted mutual information of 0.86 at noise levels from -10 dB to 10 dB. When previously unknown modulations are introduced to the system, the adjusted mutual information stays above 0.85 for up to six modulations. The study also shows that the system can learn to distinguish LPI radar signals from telecommunication signals, which are common in most signal environments.
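The evaluation loop implied above (embed each signal, cluster the embeddings, and score the clustering against the true modulation labels with adjusted mutual information) can be sketched with scikit-learn. The random-projection feature extractor and the synthetic signals below stand in for the bootstrapped CNN encoder and the LPI radar data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_mutual_info_score
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.default_rng(1)

# placeholder dataset: 600 signals from 6 modulation classes, 256 samples each
labels = rng.integers(0, 6, size=600)
signals = rng.normal(size=(600, 256)) + labels[:, None]    # class-dependent offset

# stand-in for the learned representation (the thesis uses a bootstrapped CNN encoder)
embed = GaussianRandomProjection(n_components=32, random_state=0)
features = embed.fit_transform(signals)

# cluster the representations and compare the clusters with the true modulation types
clusters = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(features)
print("adjusted mutual information:", adjusted_mutual_info_score(labels, clusters))
```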
|
86 |
Deep learning, LSTM and Representation Learning in Empirical Asset Pricing. von Essen, Benjamin, January 2022 (has links)
In recent years, machine learning models have gained traction in the field of empirical asset pricing for their risk premium prediction performance. In this thesis, we build upon the work of [1] by first evaluating models similar to their best-performing model in the same fashion, using the same dataset and measures, and then expanding upon that. We explore the impact of different feature extraction techniques, ranging from simply removing added complexity to representation learning techniques such as incremental PCA and autoencoders. Furthermore, we also introduce recurrent connections with LSTM and combine them with the earlier mentioned representation learning techniques. We significantly outperform [1] in terms of monthly out-of-sample R2, reaching a score of over 3%, by using a condensed version of the dataset, without interaction terms and dummy variables, with a feedforward neural network. However, across the board, all of our models fall short in terms of Sharpe ratio. Even though we find that LSTM works better than the benchmark, it does not outperform the feedforward network using the condensed dataset. We reason that this is because the features already contain a lot of temporal information, such as recent price trends. Overall, the autoencoder-based models perform poorly. While the linear incremental PCA-based models perform better than the nonlinear autoencoder-based ones, they still perform worse than the benchmark. / In recent years, machine learning models have gained credibility in the field of empirical asset pricing for their ability to predict risk premia. In this thesis, we build on the work of [1] by first implementing models that resemble their best-performing model and evaluating them in a similar way, using the same data and measures, and then building further on that. We explore the effects of different feature extraction techniques, from simply removing extra complexity to representation learning techniques such as incremental PCA and autoencoders. Furthermore, we also introduce LSTMs and combine them with the previously mentioned representation learning techniques. My best model performs considerably better than that of [1] in terms of monthly R2 on the test data, reaching a result of over 3%, by using a condensed version of the data, without interaction terms and dummy variables, with a feedforward neural network. Overall, however, all of my models fall short in terms of Sharpe ratio. Even though the LSTM works better than the benchmark, it does not outperform the feedforward network with the condensed dataset. We reason that this is because the input variables already contain a great deal of temporal information, such as recent price trends. Overall, the autoencoder-based models perform poorly. Although the linear incremental-PCA-based models perform better than the nonlinear autoencoder-based ones, they still perform worse than the benchmark.
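A compact sketch of the kind of pipeline discussed above: condense firm characteristics with incremental PCA, fit a feedforward network, and report the monthly out-of-sample R2 measured against a zero forecast, as is common in this literature. The synthetic panel, network size, and component count are placeholder assumptions.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# placeholder panel: 5000 stock-month observations with 100 firm characteristics
X = rng.normal(size=(5000, 100))
returns = X[:, :3] @ np.array([0.02, -0.01, 0.015]) + 0.05 * rng.normal(size=5000)

X_train, X_test = X[:4000], X[4000:]
y_train, y_test = returns[:4000], returns[4000:]

# condense the characteristics with incremental PCA (fitted in mini-batches)
ipca = IncrementalPCA(n_components=20, batch_size=500).fit(X_train)
Z_train, Z_test = ipca.transform(X_train), ipca.transform(X_test)

# simple feedforward network on the condensed features
net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
net.fit(Z_train, y_train)
pred = net.predict(Z_test)

# out-of-sample R2 against a zero forecast
r2_oos = 1.0 - np.sum((y_test - pred) ** 2) / np.sum(y_test ** 2)
print(f"monthly out-of-sample R2: {r2_oos:.3%}")
```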
|
87 |
Action Recognition with Knowledge Transfer. Choi, Jin-Woo, 07 January 2021 (has links)
Recent progress on deep neural networks has shown remarkable action recognition performance from videos. The remarkable performance is often achieved by transfer learning: training a model on a large-scale labeled dataset (source) and then fine-tuning the model on small-scale labeled datasets (targets). However, existing action recognition models do not always generalize well on new tasks or datasets because of the following two reasons. i) Current action recognition datasets have a spurious correlation between action types and background scene types. The models trained on these datasets are biased towards the scene instead of focusing on the actual action. This scene bias leads to poor generalization performance. ii) Directly testing the model trained on the source data on the target data leads to poor performance as the source and target distributions are different. Fine-tuning the model on the target data can mitigate this issue. However, manually labeling small-scale target videos is labor-intensive. In this dissertation, I propose solutions to these two problems. For the first problem, I propose to learn scene-invariant action representations to mitigate the scene bias in action recognition models. Specifically, I augment the standard cross-entropy loss for action classification with 1) an adversarial loss for the scene types and 2) a human mask confusion loss for videos where the human actors are invisible. These two losses encourage learning representations unsuitable for predicting 1) the correct scene types and 2) the correct action types when there is no evidence. I validate the efficacy of the proposed method with transfer learning experiments. I transfer the pre-trained model to three different tasks, including action classification, temporal action localization, and spatio-temporal action detection. The results show consistent improvement over the baselines for every task and dataset. To handle the second problem, I formulate human action recognition as an unsupervised domain adaptation (UDA) problem. In the UDA setting, we have many labeled videos as source data and unlabeled videos as target data. We can use already existing labeled video datasets as source data in this setting. The task is to align the source and target feature distributions so that the learned model can generalize well on the target data. I propose 1) aligning the more important temporal part of each video and 2) encouraging the model to focus on action, not the background scene, to learn domain-invariant action representations. The proposed method is simple and intuitive while achieving state-of-the-art performance without training on a lot of labeled target videos. I then relax the unsupervised target data setting to a sparsely labeled target data setting and explore semi-supervised video action recognition, where we have a lot of labeled videos as source data and sparsely labeled videos as target data. The semi-supervised setting is practical, as we can sometimes afford a small labeling cost for target data. I propose multiple video data augmentation methods to inject photometric, geometric, temporal, and scene invariances into the action recognition model in this setting. The resulting method shows favorable performance on the public benchmarks. / Doctor of Philosophy / Recent progress on deep learning has shown remarkable action recognition performance.
The remarkable performance is often achieved by transferring the knowledge learned from existing large-scale data to the small-scale data specific to applications. However, existing action recognition models do not always work well on new tasks and datasets because of the following two problems. i) Current action recognition datasets have a spurious correlation between action types and background scene types. The models trained on these datasets are biased towards the scene instead of focusing on the actual action. This scene bias leads to poor performance on the new datasets and tasks. ii) Directly testing the model trained on the source data on the target data leads to poor performance as the source and target distributions are different. Fine-tuning the model on the target data can mitigate this issue. However, manually labeling small-scale target videos is labor-intensive. In this dissertation, I propose solutions to these two problems. To tackle the first problem, I propose to learn scene-invariant action representations that mitigate the background scene bias of human action recognition models. Specifically, the proposed method learns representations that cannot predict the scene types and the correct actions when there is no evidence. I validate the proposed method's effectiveness by transferring the pre-trained model to multiple action understanding tasks. The results show consistent improvement over the baselines for every task and dataset. To handle the second problem, I formulate human action recognition as an unsupervised learning problem on the target data. In this setting, we have many labeled videos as source data and unlabeled videos as target data. We can use already existing labeled video datasets as source data in this setting. The task is to align the source and target feature distributions so that the learned model can generalize well on the target data. I propose 1) aligning the more important temporal part of each video and 2) encouraging the model to focus on action, not the background scene. The proposed method is simple and intuitive while achieving state-of-the-art performance without training on a lot of labeled target videos. I then relax the unsupervised target data setting to a sparsely labeled target data setting. Here, we have many labeled videos as source data and sparsely labeled videos as target data. The setting is practical, as we can sometimes afford a small labeling cost for target data. I propose multiple video data augmentation methods to inject color, spatial, temporal, and scene invariances into the action recognition model in this setting. The resulting method shows favorable performance on the public benchmarks.
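The two debiasing losses described above can be sketched in PyTorch as a scene-adversarial term applied through a gradient-reversal layer plus a confusion term that pushes predictions on human-masked clips toward the uniform distribution. The feature extractor, class counts, masking, and loss weights are placeholders, not the dissertation's exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, flips the gradient sign on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad

feat_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 32 * 32, 256), nn.ReLU())
action_head = nn.Linear(256, 50)      # 50 action classes (placeholder)
scene_head = nn.Linear(256, 20)       # 20 scene classes (placeholder)

def debiased_loss(clips, masked_clips, actions, scenes, w_adv=0.5, w_mask=0.5):
    f = feat_net(clips)
    loss_action = F.cross_entropy(action_head(f), actions)

    # adversarial scene loss: the scene head learns to predict the scene, while the
    # reversed gradient drives the features to become scene-uninformative
    loss_scene = F.cross_entropy(scene_head(GradReverse.apply(f)), scenes)

    # human mask confusion loss: with the actor masked out there is no evidence,
    # so push the action prediction toward the uniform distribution (max entropy)
    log_p = F.log_softmax(action_head(feat_net(masked_clips)), dim=1)
    loss_conf = -log_p.mean()         # cross-entropy against a uniform target (up to a constant)

    return loss_action + w_adv * loss_scene + w_mask * loss_conf

clips = torch.randn(4, 3, 8, 32, 32)             # (batch, channels, frames, H, W)
masked = clips * (torch.rand_like(clips) > 0.3)  # crude stand-in for human masking
loss = debiased_loss(clips, masked, torch.randint(0, 50, (4,)), torch.randint(0, 20, (4,)))
loss.backward()
```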
|
88 |
Learning representations for reasoning : generalizing across diverse structures. Zhu, Zhaocheng, 08 1900 (has links)
Reasoning, the ability to draw logical conclusions from existing knowledge, is a defining characteristic of human beings. Together with perception, they constitute the two major themes of artificial intelligence. While deep learning has pushed the limits of perception beyond human performance in computer vision and natural language processing, progress in reasoning domains lags far behind. One fundamental reason is that reasoning problems usually have flexible structures for both knowledge (for example, knowledge graphs) and queries (for example, multi-step queries), and many existing models only work well on the structures seen during training.
In this thesis, we aim to push the limits of reasoning models by designing algorithms that generalize across knowledge and query structures, as well as systems that accelerate development on structured data. This thesis is composed of three parts. In Part I, we study models that can generalize inductively to unseen knowledge graphs, which involve new entity and relation vocabularies. For new entities, we propose a new framework that learns neural operators in a dynamic programming algorithm computing path representations. This framework can be extended to million-scale knowledge graphs by learning a priority function. For relations, we construct a relation graph to capture the interactions between relations, thereby converting new relations into new entities. This allows us to develop a single pre-trained model for arbitrary knowledge graphs. In Part II, we propose two solutions for generalizing across multi-step queries on knowledge graphs and on text, respectively. For knowledge graphs, we show that multi-step queries can be solved by several calls to graph neural networks and fuzzy logic operations. This design enables generalization to new entities and can be integrated with our pre-trained model to support arbitrary knowledge graphs. For text, we design a new algorithm for learning explicit knowledge in the form of textual rules to improve large language models on multi-step queries. In Part III, we propose two systems to facilitate the development of machine learning on structured data. Our open-source library treats structured data as first-class citizens and removes the barrier to developing machine learning algorithms on structured data, including graphs, molecules, and proteins. Our node embedding system solves the GPU memory bottleneck of embedding matrices and scales to graphs with billions of nodes. / Reasoning, the ability to logically draw conclusions from existing knowledge, is a hallmark of humans. Together with perception, they constitute the two major themes of artificial intelligence. While deep learning has pushed the limit of perception beyond human-level performance in computer vision and natural language processing, progress in reasoning domains lags far behind. One fundamental reason is that reasoning problems usually have flexible structures for both knowledge (e.g. knowledge graphs) and queries (e.g. multi-step queries), and many existing models only perform well on structures seen during training.
In this thesis, we aim to push the boundary of reasoning models by devising algorithms that generalize across knowledge and query structures, as well as systems that accelerate development on structured data. This thesis is composed of three parts. In Part I, we study models that can inductively generalize to unseen knowledge graphs, which involve new entity and relation vocabularies. For new entities, we propose a novel framework that learns neural operators in a dynamic programming algorithm computing path representations. This framework can be further scaled to million-scale knowledge graphs by learning a priority function. For relations, we construct a relation graph to capture the interactions between relations, thereby converting new relations into new entities. This enables us to develop a single pre-trained model for arbitrary knowledge graphs. In Part II, we propose two solutions for generalizing across multi-step queries on knowledge graphs and text respectively. For knowledge graphs, we show multi-step queries can be solved by multiple calls to graph neural networks and fuzzy logic operations. This design enables generalization to new entities, and can be integrated with our pre-trained model to accommodate arbitrary knowledge graphs. For text, we devise a new algorithm to learn explicit knowledge as textual rules to improve large language models on multi-step queries. In Part III, we propose two systems to facilitate machine learning development on structured data. Our open-source library treats structured data as first-class citizens and removes the barrier to developing machine learning algorithms on structured data, including graphs, molecules and proteins. Our node embedding system solves the GPU memory bottleneck of embedding matrices and scales to graphs with billions of nodes.
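A simplified PyTorch sketch of the Part I idea of learning neural operators inside a Bellman-Ford-style dynamic program that computes query-conditioned path representations. The relation-specific linear message function, sum aggregation, and scoring head are illustrative simplifications of the framework, not its exact form.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PathRepDP(nn.Module):
    """Dynamic programming over paths: propagate a query-conditioned boundary state
    along typed edges, using a learned operator for each relation type."""
    def __init__(self, num_relations, dim=32, steps=3):
        super().__init__()
        self.rel_op = nn.Embedding(num_relations, dim * dim)   # one linear map per relation
        self.query = nn.Embedding(num_relations, dim)          # initial state for the head entity
        self.score = nn.Linear(dim, 1)
        self.dim, self.steps = dim, steps

    def forward(self, edges, num_nodes, head, rel):
        src, dst, etype = edges                                # parallel edge lists
        rel_idx = torch.tensor([rel])
        is_head = F.one_hot(torch.tensor(head), num_nodes).bool().unsqueeze(-1)
        boundary = is_head * self.query(rel_idx)               # query embedding at the head, zero elsewhere
        h = boundary
        W = self.rel_op(etype).view(-1, self.dim, self.dim)    # (E, dim, dim)
        for _ in range(self.steps):                            # Bellman-Ford-style iterations
            msg = torch.bmm(W, h[src].unsqueeze(-1)).squeeze(-1)
            agg = torch.zeros(num_nodes, self.dim).index_add(0, dst, msg)
            h = torch.relu(agg) + boundary                     # re-inject the boundary condition
        return self.score(h).squeeze(-1)                       # plausibility of (head, rel, ?) per node

# toy graph: 5 nodes, 2 relation types, edges given as (src, dst, relation) lists
src  = torch.tensor([0, 1, 2, 3, 0])
dst  = torch.tensor([1, 2, 3, 4, 2])
etyp = torch.tensor([0, 1, 0, 1, 1])
model = PathRepDP(num_relations=2)
scores = model((src, dst, etyp), num_nodes=5, head=0, rel=1)
print(scores)                                                  # one score per candidate tail entity
```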
|
89 |
Learning Pose and State-Invariant Object Representations for Fine-Grained Recognition and Retrieval. Rohan Sarkar (19065215), 11 July 2024 (has links)
<p dir="ltr">Object Recognition and Retrieval is a fundamental problem in Computer Vision that involves
recognizing objects and retrieving similar object images through visual queries. While
deep metric learning is commonly employed to learn image embeddings for solving such
problems, the representations learned using existing methods are not robust to changes in
viewpoint, pose, and object state, especially for fine-grained recognition and retrieval tasks.
To overcome these limitations, this dissertation aims to learn robust object representations
that remain invariant to such transformations for fine-grained tasks. First, it focuses on
learning dual pose-invariant embeddings to facilitate recognition and retrieval at both the
category and finer object-identity levels by learning category and object-identity specific representations
in separate embedding spaces simultaneously. For this, the PiRO framework is
introduced that utilizes an attention-based dual encoder architecture and novel pose-invariant
ranking losses for each embedding space to disentangle the category and object representations
while learning pose-invariant features. Second, the dissertation introduces ranking
losses that cluster multi-view images of an object together in both the embedding spaces
while simultaneously pulling the embeddings of two objects from the same category closer in
the category embedding space to learn fundamental category-specific attributes and pushing
them apart in the object embedding space to learn discriminative features to distinguish
between them. Third, the dissertation addresses state-invariance and introduces a novel ObjectsWithStateChange
dataset to facilitate research in recognizing fine-grained objects with
state changes involving structural transformations in addition to pose and viewpoint changes.
Fourth, it proposes a curriculum learning strategy to progressively sample object images that
are harder to distinguish for training the model, enhancing its ability to capture discriminative
features for fine-grained tasks amidst state changes and other transformations. Experimental
evaluations demonstrate significant improvements in object recognition and retrieval
performance compared to previous methods, validating the effectiveness of the proposed
approaches across several challenging datasets under various transformations.</p>
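A rough PyTorch sketch of the dual-embedding objective described above: one trunk with two heads, multi-view images of the same object clustered in both spaces, same-category objects pulled together in the category space, and different objects pushed apart in the object-identity space. The margins, squared-distance formulation, and toy encoder are assumptions, not the actual PiRO losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

trunk = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU())
cat_head = nn.Linear(256, 64)         # category embedding space
obj_head = nn.Linear(256, 64)         # object-identity embedding space

def dual_ranking_loss(views_a, views_b, same_category, margin=0.5):
    """views_a, views_b: multi-view image batches of two different objects."""
    za, zb = trunk(views_a), trunk(views_b)
    ca, cb = F.normalize(cat_head(za), dim=1), F.normalize(cat_head(zb), dim=1)
    oa, ob = F.normalize(obj_head(za), dim=1), F.normalize(obj_head(zb), dim=1)

    # cluster the multi-view images of each object in BOTH embedding spaces
    tight = ((ca - ca.mean(0, keepdim=True)).pow(2).sum(1).mean()
             + (oa - oa.mean(0, keepdim=True)).pow(2).sum(1).mean()
             + (cb - cb.mean(0, keepdim=True)).pow(2).sum(1).mean()
             + (ob - ob.mean(0, keepdim=True)).pow(2).sum(1).mean())

    d_cat = (ca.mean(0) - cb.mean(0)).pow(2).sum()
    d_obj = (oa.mean(0) - ob.mean(0)).pow(2).sum()
    if same_category:
        # pull the two objects together in category space, push them apart in object space
        return tight + d_cat + F.relu(margin - d_obj)
    return tight + F.relu(margin - d_cat) + F.relu(margin - d_obj)

a = torch.rand(6, 3, 64, 64)          # six views of object A
b = torch.rand(6, 3, 64, 64)          # six views of object B, same category as A
loss = dual_ranking_loss(a, b, same_category=True)
loss.backward()
```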
|
90 |
Unsupervised representation learning in interactive environments. Racah, Evan, 08 1900 (has links)
Extracting a representation of all the high-level factors of an agent's state from low-level sensory information is an important but difficult task in machine learning. In this thesis, we explore several unsupervised approaches for learning these representations. We apply and analyze existing unsupervised representation learning methods in reinforcement learning environments, and we contribute our own suite of evaluations and our own novel state representation learning method.
In the first chapter of this work, we review and motivate unsupervised representation learning for machine learning in general and for reinforcement learning. We then introduce a relatively new subfield of representation learning: self-supervised learning. We then cover two fundamental approaches to representation learning, generative methods and discriminative methods. More specifically, we focus on a collection of discriminative representation learning methods called contrastive unsupervised representation learning (CURL) methods. We close the first chapter by detailing various approaches for evaluating the usefulness of representations.
In the second chapter, we present a workshop paper in which we evaluate a set of standard self-supervision methods for reinforcement learning problems. We discover that the performance of these representations depends heavily on the dynamics and structure of the environment. Accordingly, we determine that a more systematic study of environments and methods is needed.
Our third chapter covers our second article, Unsupervised State Representation Learning in Atari, where we attempt a more thorough study of representation learning methods in reinforcement learning, as motivated in the second chapter. To facilitate a more thorough evaluation of representations in reinforcement learning, we introduce a suite of 22 fully labelled Atari games. In addition, we choose the representation learning methods to compare in a more systematic way, focusing on a comparison between generative and contrastive methods rather than the less systematically chosen general-purpose methods of the second chapter. Finally, we introduce a new contrastive method, ST-DIM, which excels on these 22 Atari games. / Extracting a representation of all the high-level factors of an agent's state from low-level sensory information is an important but challenging task in machine learning. In this thesis, we will explore several unsupervised approaches for learning these state representations. We apply and analyze existing unsupervised representation learning methods in reinforcement learning environments, as well as contribute our own evaluation benchmark and our own novel state representation learning method.
In the first chapter, we will overview and motivate unsupervised representation learning for machine learning in general and for reinforcement learning. We will then introduce a relatively new subfield of representation learning: self-supervised learning. We will then cover two core representation learning approaches, generative methods and discriminative methods. Specifically, we will focus on a collection of discriminative representation learning methods called contrastive unsupervised representation learning (CURL) methods. We will close the first chapter by detailing various approaches for evaluating the usefulness of representations.
In the second chapter, we will present a workshop paper, where we evaluate a handful of off-the-shelf self-supervised methods in reinforcement learning problems. We discover that the performance of these representations depends heavily on the dynamics and visual structure of the environment. As such, we determine that a more systematic study of environments and methods is required.
Our third chapter covers our second article, Unsupervised State Representation Learning in Atari, where we try to execute a more thorough study of representation learning methods in RL as motivated by the second chapter. To facilitate a more thorough evaluation of representations in RL we introduce a benchmark of 22 fully labelled Atari games. In addition, we choose the representation learning methods for comparison in a more systematic way by focusing on comparing generative methods with contrastive methods, instead of the less systematically chosen off-the-shelf methods from the second chapter. Finally, we introduce a new contrastive method, ST-DIM, which excels at the 22 Atari games.
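The contrastive idea behind methods such as ST-DIM can be sketched with an InfoNCE objective in PyTorch: features at time t should identify the matching observation at time t+1 among the other observations in the batch. The toy encoder and the single global-global contrast are simplifications of the full spatio-temporal (local-feature) objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(               # toy encoder for 84x84 grayscale game frames
    nn.Conv2d(1, 16, 8, stride=4), nn.ReLU(),
    nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(128),
)
proj = nn.Linear(128, 128)             # learned projection used to score pairs

def temporal_infonce(obs_t, obs_tp1):
    """Each obs_t[i] should match obs_tp1[i]; all other obs_tp1[j] act as negatives."""
    z_t = encoder(obs_t)
    z_tp1 = encoder(obs_tp1)
    logits = proj(z_t) @ z_tp1.t()                     # (B, B) similarity scores
    targets = torch.arange(obs_t.size(0))              # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

obs_t = torch.rand(16, 1, 84, 84)                      # batch of frames at time t
obs_tp1 = torch.rand(16, 1, 84, 84)                    # the corresponding next frames
loss = temporal_infonce(obs_t, obs_tp1)
loss.backward()
```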
|