91

Auto-Encoders, Distributed Training and Information Representation in Deep Neural Networks

Alain, Guillaume 10 1900 (has links)
No description available.
92

Self-Supervised Representation Learning for Content Based Image Retrieval

Govindarajan, Hariprasath January 2020 (has links)
Automotive technologies and fully autonomous driving have seen tremendous growth in recent times and have benefitted from extensive deep learning research. State-of-the-art deep learning methods are largely supervised and require labelled data for training. However, the annotation process for image data is time-consuming and costly in terms of human effort. It is therefore of interest to find informative samples for labelling using Content Based Image Retrieval (CBIR). Generally, a CBIR method takes a query image as input and returns a set of images that are semantically similar to the query image. The image retrieval is achieved by transforming images into feature representations in a latent space, where it is possible to reason about image similarity in terms of image content. In this thesis, a self-supervised method is developed to learn feature representations of road scene images. The self-supervised method learns feature representations for images by adapting intermediate convolutional features from an existing deep Convolutional Neural Network (CNN). A contrastive approach based on Noise Contrastive Estimation (NCE) is used to train the feature learning model. For complex images like road scenes, where multiple image aspects can occur simultaneously, it is important to embed all the salient image aspects in the feature representation. To achieve this, the output feature representation is obtained as an ensemble of feature embeddings which are learned by focusing on different image aspects. An attention mechanism is incorporated to encourage each ensemble member to focus on different image aspects. For comparison, a self-supervised model without attention is considered, and a simple dimensionality reduction approach using SVD is treated as the baseline. The methods are evaluated on nine different evaluation datasets using CBIR performance metrics. The datasets correspond to different image aspects and concern the images at different spatial levels: global, semi-global and local. The feature representations learned by the self-supervised methods are shown to perform better than the SVD approach. Taking into account that no labelled data is required for training, learning representations for road scene images using self-supervised methods appears to be a promising direction. Usage of multiple query images to emphasize a query intention is investigated, and a clear improvement in CBIR performance is observed. It is inconclusive whether the addition of an attention mechanism impacts CBIR performance. The attention method shows some positive signs based on qualitative analysis and also performs better than the other methods for one of the evaluation datasets containing a local aspect. This method for learning feature representations is promising but requires further research involving more diverse and complex image aspects.
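To make the contrastive training idea above concrete, the following is a minimal sketch of an NCE-style objective over image embeddings: a query embedding is pulled towards the embedding of a semantically matching image and pushed away from a set of negatives. The tensor shapes, the temperature value and the use of random vectors in place of CNN features are illustrative assumptions, not the configuration used in the thesis (which additionally learns an attention-based ensemble of embeddings).

```python
# Minimal sketch of an NCE-style contrastive objective over image embeddings.
# Shapes and the temperature are illustrative assumptions.
import torch
import torch.nn.functional as F

def nce_contrastive_loss(query_emb, positive_emb, negative_embs, temperature=0.07):
    """query_emb: (D,), positive_emb: (D,), negative_embs: (N, D)."""
    query = F.normalize(query_emb, dim=0)
    pos = F.normalize(positive_emb, dim=0)
    negs = F.normalize(negative_embs, dim=1)

    # Similarity of the query to its positive and to N negatives.
    pos_logit = (query @ pos).unsqueeze(0) / temperature     # (1,)
    neg_logits = (negs @ query) / temperature                # (N,)

    logits = torch.cat([pos_logit, neg_logits]).unsqueeze(0)  # (1, N+1)
    target = torch.zeros(1, dtype=torch.long)                 # positive sits at index 0
    return F.cross_entropy(logits, target)

# Toy usage with random vectors standing in for adapted CNN features.
q, p, n = torch.randn(128), torch.randn(128), torch.randn(32, 128)
loss = nce_contrastive_loss(q, p, n)
```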
93

Towards Privacy and Communication Efficiency in Distributed Representation Learning

Sheikh S Azam (12836108) 10 June 2022 (has links)
Over the past decade, distributed representation learning has emerged as a popular alternative to conventional centralized machine learning training. The increasing interest in distributed representation learning, specifically federated learning, can be attributed to its fundamental property of promoting data privacy and communication savings. While conventional ML encourages aggregating data at a central location (e.g., data centers), distributed representation learning advocates keeping data at the source and instead transmitting model parameters across the network. However, since the advent of deep learning, model sizes have become increasingly large, often comprising millions to billions of parameters, which leads to the problem of communication latency in the learning process. In this thesis, we propose to tackle the problem of communication latency in two different ways: (i) learning private representations of data to enable their sharing, and (ii) reducing the communication latency by minimizing the corresponding long-range communication requirements.

To tackle the former goal, we first study the problem of learning representations that are private yet informative, i.e., providing information about intended "ally" targets while hiding sensitive "adversary" attributes. We propose the Exclusion-Inclusion Generative Adversarial Network (EIGAN), a generalized private representation learning (PRL) architecture that accounts for multiple ally and adversary attributes, unlike existing PRL solutions. We then address the practical constraints of distributed datasets by developing Distributed EIGAN (D-EIGAN), the first distributed PRL method that learns a private representation at each node without transmitting the source data. We theoretically analyze the behavior of adversaries under the optimal EIGAN and D-EIGAN encoders and the impact of dependencies among ally and adversary tasks on the optimization objective. Our experiments on various datasets demonstrate the advantages of EIGAN in terms of performance, robustness, and scalability. In particular, EIGAN outperforms the previous state of the art by a significant accuracy margin (47% improvement), and D-EIGAN's performance is consistently on par with EIGAN under different network settings.

We next tackle the latter objective, reducing the communication latency, and propose two-timescale hybrid federated learning (TT-HF), a semi-decentralized learning architecture that combines the conventional device-to-server communication paradigm for federated learning with device-to-device (D2D) communications for model training. In TT-HF, during each global aggregation interval, devices (i) perform multiple stochastic gradient descent iterations on their individual datasets, and (ii) aperiodically engage in a consensus procedure over their model parameters through cooperative, distributed D2D communications within local clusters. With a new general definition of gradient diversity, we formally study the convergence behavior of TT-HF, resulting in new convergence bounds for distributed ML. We leverage our convergence bounds to develop an adaptive control algorithm that tunes the step size, D2D communication rounds, and global aggregation period of TT-HF over time to target a sublinear convergence rate of O(1/t) while minimizing network resource utilization. Our subsequent experiments demonstrate that TT-HF significantly outperforms the current state of the art in federated learning in terms of model accuracy and/or network energy consumption in different scenarios where local device datasets exhibit statistical heterogeneity. Finally, our numerical evaluations demonstrate robustness against outages caused by fading channels, as well as favorable performance with non-convex loss functions.
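As a rough illustration of the TT-HF training pattern described above (local SGD, aperiodic device-to-device consensus within clusters, periodic device-to-server aggregation), the sketch below shows the control flow in plain NumPy. The toy quadratic loss, the fixed schedules, the gossip-averaging rule and the cluster layout are simplifying assumptions; the thesis instead tunes these quantities with an adaptive control algorithm derived from its convergence bounds.

```python
# Schematic sketch of the TT-HF loop: local SGD steps, aperiodic D2D consensus
# averaging inside each cluster, periodic global aggregation at the server.
import numpy as np

def local_gradient(w, data):
    # Placeholder stochastic gradient of a toy quadratic loss ||w - data||^2.
    return 2.0 * (w - data)

def d2d_consensus(cluster_weights, rounds=2):
    # Simple uniform gossip: repeatedly average each device with its cluster mean.
    for _ in range(rounds):
        mean_w = np.mean(cluster_weights, axis=0)
        cluster_weights = [0.5 * w + 0.5 * mean_w for w in cluster_weights]
    return cluster_weights

def tt_hf(clusters, data, dim=5, global_rounds=3, local_steps=4, lr=0.1):
    weights = {dev: np.zeros(dim) for cluster in clusters for dev in cluster}
    for _ in range(global_rounds):
        for _ in range(local_steps):
            for dev in weights:                    # (i) local SGD on each device
                weights[dev] -= lr * local_gradient(weights[dev], data[dev])
            for cluster in clusters:               # (ii) aperiodic D2D consensus
                if np.random.rand() < 0.5:
                    updated = d2d_consensus([weights[d] for d in cluster])
                    for d, w in zip(cluster, updated):
                        weights[d] = w
        global_model = np.mean(list(weights.values()), axis=0)  # server aggregation
        weights = {dev: global_model.copy() for dev in weights}
    return global_model

clusters = [["a", "b"], ["c", "d", "e"]]
data = {dev: np.random.randn(5) for cluster in clusters for dev in cluster}
model = tt_hf(clusters, data)
```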
94

Data-efficient reinforcement learning with self-predictive representations

Schwarzer, Max 08 1900 (has links)
Data efficiency remains a key challenge in deep reinforcement learning. Although modern techniques have been shown to be capable of attaining high performance in extremely complex tasks, including strategy games such as StarCraft, Chess, Shogi, and Go, as well as in challenging visual domains such as Atari games, doing so generally requires enormous amounts of interaction data, limiting how broadly reinforcement learning can be applied. In this thesis, we propose SPR, a method drawing on recent advances in self-supervised representation learning, designed to enhance the data efficiency of deep reinforcement learning agents. We evaluate this method on the Atari Learning Environment and show that it dramatically improves performance with limited computational overhead. When given roughly the same amount of learning time as human testers, a reinforcement learning agent augmented with SPR achieves super-human performance on 7 out of 26 games, an increase of 350% over the previous state of the art, while also strongly improving mean and median performance. We also evaluate this method on a set of continuous control tasks, showing substantial improvements over previous methods. Chapter 1 introduces the concepts necessary to understand the work presented, including overviews of Deep Reinforcement Learning and Self-Supervised Representation Learning. Chapter 2 contains a detailed description of our contributions towards leveraging self-supervised representation learning to improve data efficiency in reinforcement learning. Chapter 3 provides some conclusions drawn from this work, including a number of proposals for future work.
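The core self-predictive idea behind SPR can be sketched in a few lines: an online encoder and a latent transition model are trained to predict a slowly updated target encoder's embedding of the next observation. The network sizes, the EMA rate and the single-step prediction horizon below are illustrative assumptions rather than the published architecture.

```python
# Minimal sketch of a self-predictive representation objective in the spirit of SPR.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, act_dim, latent_dim = 16, 4, 32

encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
target_encoder = copy.deepcopy(encoder)          # slowly updated EMA copy
transition = nn.Sequential(nn.Linear(latent_dim + act_dim, 64), nn.ReLU(),
                           nn.Linear(64, latent_dim))

def spr_loss(obs, action, next_obs):
    z = encoder(obs)
    z_pred = transition(torch.cat([z, action], dim=-1))
    with torch.no_grad():
        z_target = target_encoder(next_obs)
    # Negative cosine similarity between predicted and target next-step latents.
    return -F.cosine_similarity(z_pred, z_target, dim=-1).mean()

@torch.no_grad()
def update_target(tau=0.01):
    for p, tp in zip(encoder.parameters(), target_encoder.parameters()):
        tp.mul_(1 - tau).add_(tau * p)

# Toy batch of random transitions standing in for replay-buffer samples.
obs, act, nxt = torch.randn(8, obs_dim), torch.randn(8, act_dim), torch.randn(8, obs_dim)
loss = spr_loss(obs, act, nxt)
loss.backward()
update_target()
```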
95

Self-supervision for data interpretability in image classification and sample efficiency in reinforcement learning

Rajkumar, Nitarshan 06 1900 (has links)
Self-Supervised Learning (SSL), or learning representations of data by exploiting the inherent structure present in it without labels, has driven significant progress in machine learning over the past decade, and in computer vision in particular over the past two years. In this work, we explore applications of SSL towards two separate goals: first, as a tool for efficiently interpreting datasets and model decisions, and second, as a tool for pretraining in reinforcement learning (RL) to greatly advance sample efficiency in that setting. Chapter 1 introduces background material necessary to understand the remainder of this thesis. In particular, it provides an overview of Machine Learning, Deep Learning, Self-Supervised Representation Learning, and (Deep) Reinforcement Learning. Chapter 2 briefly detours away from this thesis' focus on self-supervision to examine how the phenomenon of memorization manifests in deep neural networks. These results are then used to partially justify the work presented in Chapter 3, which examines how self-supervision can be used to efficiently uncover structural regularity in training datasets, and to estimate training memorization and the influence of training samples on test samples. Recent experimental work on understanding the importance of memorizing the long tail of data is also revisited. Chapter 4 demonstrates how a combination of SSL pretraining objectives designed for the structure of data in RL can greatly improve sample efficiency, to nearly human-level performance. Furthermore, it is shown that SSL enables the use of larger models, which has historically been a challenge in deep RL. Chapter 5 concludes by reviewing the contributions of this work and discussing future directions.
96

On representation learning for generative models of text

Subramanian, Sandeep 08 1900 (has links)
This thesis takes baby steps in building and understanding neural representation learning systems and generative models for natural language processing. It is presented as a thesis by article and contains four pieces of work. In the first article, we show that multi-task learning can be used to combine the inductive biases of several self-supervised and supervised learning tasks to learn general-purpose, fixed-length distributed sentence representations that achieve strong results on downstream transfer learning tasks without any model fine-tuning. The second article builds on the first and presents a two-step generative model for text that models the distribution of sentence representations to produce novel sentence embeddings, which serve as a high-level "neural outline" that is reconstructed into words with a conditional autoregressive RNN decoder. The third article studies the necessity of disentangled representations for controllable text generation. A large fraction of controllable text generation systems rely on the idea that control over a particular attribute (or style) requires building disentangled representations that separate content and style. We demonstrate that representations produced in previous work that uses domain adversarial training are not disentangled in practice. We then present an approach that does not aim to learn disentangled representations and show that it achieves significantly better results than prior work. In the fourth article, we design transformer language models that learn representations at multiple time scales and show that these can help address the large memory footprint these models typically have. We present three different multi-scale architectures that exhibit favorable trade-offs between perplexity and memory footprint.
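A minimal sketch of the multi-task setup described in the first article is given below: a single shared sentence encoder feeds several task-specific heads, and each training step optimises the loss of one sampled task. The bag-of-embeddings encoder, the particular task heads and all dimensions are illustrative assumptions, not the thesis architecture.

```python
# Minimal sketch of multi-task training of a shared, fixed-length sentence encoder.
import random
import torch
import torch.nn as nn

vocab_size, emb_dim, sent_dim = 1000, 64, 128

embedding = nn.EmbeddingBag(vocab_size, emb_dim)                     # shared word embeddings
encoder = nn.Sequential(nn.Linear(emb_dim, sent_dim), nn.Tanh())     # shared sentence encoder
heads = nn.ModuleDict({
    "nli": nn.Linear(sent_dim, 3),         # e.g. entailment / neutral / contradiction
    "sentiment": nn.Linear(sent_dim, 2),
})
criterion = nn.CrossEntropyLoss()

def train_step(token_ids, labels, task):
    sent = encoder(embedding(token_ids))   # fixed-length sentence representation
    return criterion(heads[task](sent), labels)

# One toy step on a randomly sampled task.
task = random.choice(list(heads))
tokens = torch.randint(0, vocab_size, (16, 10))   # batch of 16 token-id "sentences"
labels = torch.randint(0, heads[task].out_features, (16,))
loss = train_step(tokens, labels, task)
loss.backward()
```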
97

Intersecting Graph Representation Learning and Cell Profiling : A Novel Approach to Analyzing Complex Biomedical Data

Chamyani, Nima January 2023 (has links)
In recent biomedical research, graph representation learning and cell profiling techniques have emerged as transformative tools for analyzing high-dimensional biological data. The integration of these methods, as investigated in this study, has facilitated an enhanced understanding of complex biological systems, consequently improving drug discovery. The research aimed to decipher connections between chemical structures and cellular phenotypes while incorporating other biological information, such as proteins and pathways, into the workflow. To achieve this, the efficacy of machine learning models was examined on classification and regression tasks. The newly proposed graph-level and bio-graph integrative predictors were compared with traditional models. Results demonstrated their potential, particularly in classification tasks. Moreover, the topology of the COVID-19 BioGraph was analyzed, revealing the complex interconnections between chemicals, proteins, and biological pathways. By combining network analysis, graph representation learning, and statistical methods, the study was able to predict active chemical combinations within inactive compounds, thereby exhibiting significant potential for further investigations. Graph-based generative models were also used for molecule generation, opening up further research avenues for finding lead compounds. In conclusion, this study underlines the potential of combining graph representation learning and cell profiling techniques in advancing biomedical research on drug repurposing and drug combination. This integration provides a better understanding of complex biological systems, assists in identifying therapeutic targets, and contributes to optimizing molecule generation for drug discovery. Future investigations should optimize these models and validate the drug combination discovery approach. As these techniques continue to evolve, they hold the potential to significantly impact the future of drug screening, drug repurposing, and drug combinations.
98

Multi-modal Models for Product Similarity : Comparative evaluation of unimodal and multi-modal architectures for product similarity prediction and product retrieval / Multimodala modeller för produktlikhet

Frantzolas, Christos January 2023 (has links)
With the rapid growth of e-commerce, enabling effective product recommendation systems and improving product search for shoppers play a crucial role in driving customer satisfaction. Traditional product retrieval approaches have mainly relied on unimodal models focusing on text data. However, to capture auxiliary context and improve the accuracy of similarity predictions, it is crucial to explore architectures that can leverage additional sources of information, such as images. This thesis compares the performance of multi-modal and unimodal methods for product similarity prediction and product retrieval. Both approaches are applied to two e-commerce datasets, one containing English and another containing Swedish product descriptions. A pre-trained multi-modal model called CLIP is used as a feature extractor, and different models are trained on CLIP embeddings using either text-only, image-only or image-text inputs. An extension of triplet loss with margins is tested, along with various training setups. Given the lack of similarity labels between products, product similarity prediction is studied by measuring the performance of a K-Nearest Neighbour classifier implemented on features extracted by the trained models. The thesis results demonstrate that multi-modal architectures outperform unimodal models in predicting product similarity, and the same is true for product retrieval. Combining textual and visual information seems to lead to more accurate predictions than relying on only one modality. The findings of this research have considerable implications for e-commerce platforms and recommendation systems, providing insights into the effectiveness of multi-modal models for product-related tasks. Overall, the study contributes to the existing body of knowledge by highlighting the advantages of leveraging multiple sources of information for deep learning. It also presents recommendations for designing and implementing effective multi-modal architectures.
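A minimal sketch of the training and evaluation pipeline described above follows: a small projection head is trained on frozen CLIP-style image and text embeddings with a margin-based triplet loss, and product similarity is then assessed with a K-Nearest Neighbour classifier on the learned features. The fusion by concatenation, the margin value, the head dimensions and the random stand-in data are assumptions made for illustration, not the thesis configuration.

```python
# Minimal sketch: triplet-margin training on frozen CLIP-style embeddings + k-NN evaluation.
import torch
import torch.nn as nn
from sklearn.neighbors import KNeighborsClassifier

clip_dim, out_dim = 512, 128
head = nn.Sequential(nn.Linear(2 * clip_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))
triplet = nn.TripletMarginLoss(margin=0.2)

def embed(image_emb, text_emb):
    # Concatenate the two modalities before projecting (one simple fusion choice).
    return head(torch.cat([image_emb, text_emb], dim=-1))

# One toy training step on random anchor / positive / negative CLIP embeddings.
anchor = embed(torch.randn(32, clip_dim), torch.randn(32, clip_dim))
positive = embed(torch.randn(32, clip_dim), torch.randn(32, clip_dim))
negative = embed(torch.randn(32, clip_dim), torch.randn(32, clip_dim))
loss = triplet(anchor, positive, negative)
loss.backward()

# k-NN evaluation on the learned features (labels here are random placeholders).
feats = embed(torch.randn(100, clip_dim), torch.randn(100, clip_dim)).detach().numpy()
labels = torch.randint(0, 5, (100,)).numpy()
knn = KNeighborsClassifier(n_neighbors=5).fit(feats, labels)
accuracy = knn.score(feats, labels)
```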
99

Reasoning with structure : graph neural networks algorithms and applications

Deac, Andreea-Ioana 08 1900 (has links)
Since the deep learning revolution, machine learning has excelled at tasks based on images and text, with many successes made possible under the umbrella of the computer vision and natural language processing fields. However, many problems of interest remain whose input data cannot be expressed in either of these forms without losing information that may be crucial to solving them. For these cases, the field of geometric deep learning was developed, covering the space of more general representations for data whose underlying structure does not match the one-dimensional string of characters (text) or two-dimensional grid (images) format. In this thesis, I focus in particular on graphs. Graphs are ubiquitous data structures underlying virtually all tasks of interest, including natural inputs such as molecules, entity relations such as transportation networks and chip placements, and concept linking in reasoning processes, including algorithms and other theoretical constructs. While modern expressive graph neural network architectures can achieve impressive results on benchmarks like these, their practical application is still plagued by many issues and shortcomings, which this thesis will address. The considerations from these applications set the scene for the following chapters, which focus on tackling the limitations of graph neural networks by proposing new graph learning algorithms. Firstly, I focus on improving graph neural networks for data that requires long-range interactions by building general templates to complement their computation graph. This is followed by graph neural networks for heterophilic data, where edges tend to connect nodes from different classes; in this case, a specialised modification of the computation graph meant to improve homophily alleviates the problem. In the third article, I leverage a strength of graph neural networks: their alignment with dynamic programming. This enables graph neural networks to execute algorithms, based on which I propose a new class of implicit planners for decision making. Lastly, I capitalise on the utility of geometric deep learning in reinforcement learning and extend it beyond GNNs, leveraging rotation-equivariant neural networks in model-based agents.
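For readers unfamiliar with graph neural networks, the following is a minimal sketch of the message-passing computation they build on: each node aggregates transformed feature vectors from its neighbours and combines them with its own features. The sum aggregator, the ReLU update and the toy graph are illustrative choices, not any of the architectures proposed in the thesis.

```python
# Minimal sketch of one message-passing layer of a graph neural network.
import numpy as np

def message_passing_layer(node_feats, edges, w_msg, w_self):
    """node_feats: (N, D); edges: list of (src, dst) pairs; returns updated (N, D_out)."""
    num_nodes, _ = node_feats.shape
    aggregated = np.zeros((num_nodes, w_msg.shape[1]))
    for src, dst in edges:                      # sum messages along each directed edge
        aggregated[dst] += node_feats[src] @ w_msg
    self_term = node_feats @ w_self
    return np.maximum(self_term + aggregated, 0.0)   # ReLU node update

# Toy graph: 4 nodes on a cycle with 8-dimensional features.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
w_msg, w_self = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
h = message_passing_layer(x, edges, w_msg, w_self)
```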
100

Matching Sticky Notes Using Latent Representations / Matchning av klisterlappar med hjälp av latent representation

García San Vicent, Javier January 2022 (has links)
This project addresses the issue of accurately identifying repeated images of sticky notes. Due to environmental conditions and the 3D location of the camera, different pictures taken of sticky notes may look distinct enough that it is hard to determine whether they belong to the same note. More specifically, this thesis aims to create latent representations of these pictures of sticky notes that encode their content, so that all the pictures of the same note have a similar representation that makes it possible to identify them. Those representations must therefore be invariant to lighting conditions, blur and camera position. To that end, a Siamese neural architecture is trained based on data augmentation methods. The method consists of learning to embed two augmented versions of the same image into similar representations. This architecture has been trained with unsupervised learning and fine-tuned with supervised learning to detect whether or not two representations belong to the same note. The performance of ResNet, EfficientNet and Vision Transformers in encoding the images into their representations has been compared across different configurations. The results show that, while the most complex models overfit small amounts of data, the simplest encoders are capable of properly identifying more than 95% of the sticky notes in greyscale. Those models can create invariant representations that are close to each other in the latent space for pictures of the same sticky note. Gathering more data could improve the performance of the model and open up the possibility of applying it to other fields, such as handwritten documents.
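A minimal sketch of the Siamese, augmentation-based training recipe described above follows: two augmented views of the same picture are encoded by a shared network and pushed towards similar embeddings. The tiny convolutional encoder, the specific augmentations and the cosine objective are simplified assumptions; the thesis compares ResNet, EfficientNet and Vision Transformer backbones and adds a supervised fine-tuning stage.

```python
# Minimal sketch of Siamese training on two augmented views of the same picture.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms as T

augment = T.Compose([
    T.RandomResizedCrop(64, scale=(0.6, 1.0)),
    T.ColorJitter(brightness=0.4, contrast=0.4),   # mimic lighting changes
    T.GaussianBlur(kernel_size=5),                 # mimic blur
])

encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64),
)

def siamese_loss(images):
    view_a = torch.stack([augment(img) for img in images])
    view_b = torch.stack([augment(img) for img in images])
    za, zb = encoder(view_a), encoder(view_b)
    # Pull the two views of each picture together in the latent space.
    return -F.cosine_similarity(za, zb, dim=-1).mean()

batch = [torch.rand(3, 96, 96) for _ in range(8)]   # stand-ins for sticky-note photos
loss = siamese_loss(batch)
loss.backward()
```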
