551

Balancing signals for semi-supervised sequence learning

Xu, Ge Ya 12 1900
Recurrent Neural Networks (RNNs) are powerful models that have obtained outstanding achievements in many sequence learning tasks. Despite these accomplishments, RNN models still struggle with long sequences during training. This is because errors propagate backwards from the output to the input layers carrying gradient signals, and with long input sequences, issues like vanishing and exploding gradients can arise. This thesis reviews many current studies and existing architectures designed to circumvent the long-term dependency problems in backpropagation through time (BPTT). Mainly, we focus on the method proposed by Trinh et al. (2018), which uses a semi-supervised learning method to alleviate the long-term dependency problems in BPTT. Despite the good results Trinh et al. (2018)’s model achieved, we suggest that the model can be further improved with a more systematic way of balancing auxiliary signals. In this thesis, we present our paper – RNNs with Private and Shared Representations for Semi-Supervised Learning – which is currently under review for AAAI-2019. We propose a semi-supervised RNN architecture with explicitly designed private and shared representations that regulates the gradient flow from the auxiliary task to the main task. / Les réseaux neuronaux récurrents (RNN) sont des modèles puissants qui ont obtenu des réalisations exceptionnelles dans de nombreuses tâches d’apprentissage séquentiel. Malgré leurs réalisations, les modèles RNN souffrent encore de longues séquences pendant l’entraînement. C’est parce que l’erreur se propage en arrière de la sortie vers les couches d’entrée transportant des signaux de gradient, et avec une longue séquence d’entrée, des problèmes comme la disparition et l’explosion des gradients peuvent survenir. Cette thèse passe en revue de nombreuses études actuelles et architectures existantes conçues pour contourner les problèmes de dépendance à long terme de la rétropropagation dans le temps (BPTT). Nous nous concentrons principalement sur la méthode proposée par Trinh et al. (2018) qui utilise une méthode d’apprentissage semi-supervisée pour atténuer les problèmes de dépendance à long terme dans BPTT. Malgré les bons résultats obtenus avec le modèle de Trinh et al. (2018), nous suggérons que le modèle peut être encore amélioré avec une manière plus systématique d’équilibrer les signaux auxiliaires. Dans cette thèse, nous présentons notre article – RNNs with Private and Shared Representations for Semi-Supervised Learning – qui est actuellement en cours de révision pour AAAI-2019. Nous proposons une architecture RNN semi-supervisée avec des représentations privées et partagées explicitement conçues qui régule le flux de gradient de la tâche auxiliaire à la tâche principale.
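
As a rough, hedged illustration of the private and shared split described in this abstract, the sketch below (not the authors' code) splits an LSTM's hidden state so that the auxiliary loss back-propagates only through a designated shared slice; the vocabulary size, dimensions and next-token auxiliary task are assumptions made for the example.

```python
# Illustrative sketch, not the thesis implementation: the auxiliary head reads only the
# "shared" slice of the hidden state, so auxiliary gradients cannot reach the private units.
import torch
import torch.nn as nn

class PrivateSharedRNN(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128, shared_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.shared_dim = shared_dim
        self.main_head = nn.Linear(hidden_dim, num_classes)   # main task sees private + shared units
        self.aux_head = nn.Linear(shared_dim, vocab_size)     # auxiliary task sees shared units only

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))                   # (batch, time, hidden_dim)
        shared = h[..., :self.shared_dim]                     # auxiliary gradients flow through this slice only
        main_logits = self.main_head(h[:, -1])                # main task uses the full last state
        aux_logits = self.aux_head(shared[:, :-1])            # assumed auxiliary task: predict the next token
        return main_logits, aux_logits

model = PrivateSharedRNN()
tokens = torch.randint(0, 1000, (4, 20))
labels = torch.randint(0, 2, (4,))
main_logits, aux_logits = model(tokens)
loss = nn.functional.cross_entropy(main_logits, labels) \
     + 0.1 * nn.functional.cross_entropy(aux_logits.reshape(-1, 1000), tokens[:, 1:].reshape(-1))
loss.backward()
```
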
552

Self-Supervised Representation Learning for Content Based Image Retrieval

Govindarajan, Hariprasath January 2020
Automotive technologies and fully autonomous driving have seen tremendous growth in recent times and have benefitted from extensive deep learning research. State-of-the-art deep learning methods are largely supervised and require labelled data for training. However, the annotation process for image data is time-consuming and costly in terms of human effort. It is of interest to find informative samples for labelling by Content Based Image Retrieval (CBIR). Generally, a CBIR method takes a query image as input and returns a set of images that are semantically similar to the query image. The image retrieval is achieved by transforming images to feature representations in a latent space, where it is possible to reason about image similarity in terms of image content. In this thesis, a self-supervised method is developed to learn feature representations of road scene images. The self-supervised method learns feature representations for images by adapting intermediate convolutional features from an existing deep Convolutional Neural Network (CNN). A contrastive approach based on Noise Contrastive Estimation (NCE) is used to train the feature learning model. For complex images like road scenes, where multiple image aspects can occur simultaneously, it is important to embed all the salient image aspects in the feature representation. To achieve this, the output feature representation is obtained as an ensemble of feature embeddings which are learned by focusing on different image aspects. An attention mechanism is incorporated to encourage each ensemble member to focus on different image aspects. For comparison, a self-supervised model without attention is considered, and a simple dimensionality reduction approach using SVD is treated as the baseline. The methods are evaluated on nine different evaluation datasets using CBIR performance metrics. The datasets correspond to different image aspects and concern the images at different spatial levels - global, semi-global and local. The feature representations learned by the self-supervised methods are shown to perform better than the SVD approach. Taking into account that no labelled data is required for training, learning representations for road scene images using self-supervised methods appears to be a promising direction. Usage of multiple query images to emphasize a query intention is investigated, and a clear improvement in CBIR performance is observed. It is inconclusive whether the addition of an attention mechanism impacts CBIR performance. The attention method shows some positive signs based on qualitative analysis and also performs better than other methods for one of the evaluation datasets containing a local aspect. This method for learning feature representations is promising but requires further research involving more diverse and complex image aspects.
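
The Noise Contrastive Estimation objective mentioned above can be illustrated with a small InfoNCE-style loss; the encoder outputs, temperature and in-batch negatives below are illustrative assumptions rather than the thesis implementation.

```python
# Minimal InfoNCE-style sketch: each query is contrasted against its positive and the
# other samples in the batch, which act as noise (negative) examples.
import torch
import torch.nn.functional as F

def info_nce(query, positives, temperature=0.07):
    """query, positives: (batch, dim); positives[i] matches query[i], all others are negatives."""
    q = F.normalize(query, dim=1)
    p = F.normalize(positives, dim=1)
    logits = q @ p.t() / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(q.size(0))         # diagonal entries are the positive pairs
    return F.cross_entropy(logits, targets)

q = torch.randn(8, 128, requires_grad=True)   # placeholder embeddings of the query views
p = torch.randn(8, 128)                       # placeholder embeddings of the positive views
loss = info_nce(q, p)
loss.backward()
```
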
553

Evaluating the effects of data augmentations for specific latent features : Using self-supervised learning / Utvärdering av effekterna av datamodifieringar på inlärda representationer : Vid självövervakande maskininlärning

Ingemarsson, Markus, Henningsson, Jacob January 2022 (has links)
Supervised learning requires labeled data, which is cumbersome to produce, making it costly and time-consuming. SimCLR is a self-supervised framework that uses data augmentations to learn without labels. This thesis investigates how well cropping and color-distorting augmentations work for two datasets, MPI3D and Causal3DIdent. The learned representations are evaluated using representation similarity analysis. The data augmentations were meant to make the model learn invariant representations of the object shape in the images, treating shape as content while ignoring unnecessary features and treating them as style. In total, eight models were created, models A-H. A and E were trained using supervised learning as a benchmark for the remaining self-supervised models. B and C learned invariant features of style instead of learning invariant representations of shape. Model D learned invariant representations of shape, although it also regarded style-related factors as content. Models F, G, and H managed to learn invariant representations of shape with varying intensity while regarding the rest of the features as style. The conclusion was that models can learn invariant representations of features related to content using self-supervised learning with the chosen augmentations. However, the augmentation settings must be suitable for the dataset. / Övervakad maskininlärning kräver annoterad data, vilket är dyrt och tidskrävande att producera. SimCLR är ett självövervakande maskininlärningsramverk som använder datamodifieringar för att lära sig utan annoteringar. Detta examensarbete utvärderar hur väl beskärning och färgförvrängande datamodifieringar fungerar för två dataset, MPI3D och Causal3DIdent. De inlärda representationerna utvärderas med hjälp av representativ likhetsanalys. Syftet med examensarbetet var att få de självövervakande maskininlärningsmodellerna att lära sig oföränderliga representationer av objektet i bilderna. Meningen med datamodifieringarna var att påverka modellens lärande så att modellen tolkar objektets form som relevant innehåll, men resterande egenskaper som icke-relevant innehåll. Åtta modeller skapades (A-H). A och E tränades med övervakad inlärning och användes som riktmärke för de självövervakade modellerna. B och C lärde sig oföränderliga representationer som borde ha betraktats som irrelevanta istället för att lära sig form. Modell D lärde sig oföränderliga representationer av form men också irrelevanta representationer. Modellerna F, G och H lyckades lära sig oföränderliga representationer av form med varierande intensitet, samtidigt som de resterande egenskaperna betraktades som irrelevant. Beskärning och färgförvrängande datamodifieringarna gör således att självövervakande modeller kan lära sig oföränderliga representationer av egenskaper relaterade till relevant innehåll. Specifika inställningar för datamodifieringar måste dock vara lämpliga för datasetet.
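
A sketch of the two augmentation families discussed above, composed in the usual SimCLR fashion where each image is augmented twice to form a positive pair; the crop size and jitter strengths are common defaults assumed for illustration, not necessarily the settings used in the thesis.

```python
# Cropping and colour-distortion augmentations composed SimCLR-style.
import torchvision.transforms as T

simclr_augment = T.Compose([
    T.RandomResizedCrop(64),                                    # cropping augmentation (assumed 64x64 images)
    T.RandomHorizontalFlip(),
    T.RandomApply([T.ColorJitter(0.8, 0.8, 0.8, 0.2)], p=0.8),  # colour distortion
    T.RandomGrayscale(p=0.2),
    T.ToTensor(),
])

def two_views(pil_image):
    """Return the two augmented views used as a positive pair."""
    return simclr_augment(pil_image), simclr_augment(pil_image)
```
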
554

Personalizing the post-purchase experience in online sales using machine learning. / Personalisering av efterköpsupplevelsen inom onlineförsäljning med hjälp av maskininlärning.

Kamau, Nganga, Dehoky, Dylan January 2021 (has links)
Advances in machine learning, together with an abundance of available data, have led to an explosion in personalized offerings and the ability to predict what consumers want and need without them having to ask for it. During the last decade, it has become a multi-billion-dollar industry and a capability that many of the leading tech companies rely on in their business model. Indeed, in today's business world, it is not only a capability for competitive advantage, but in many cases a matter of survival. This thesis aims to create a machine learning model able to predict customers interested in an upselling opportunity of changing their payment method after completing a purchase with the Swedish payment solutions company Klarna Bank. Hence, the overall aim is to personalize the customer experience on the confirmation page. Two gradient boosting methods and one deep learning method were trained, evaluated and compared for this task. A logistic regression model was also trained and used as a baseline model. The results showed that all models performed better than the baseline model, with the gradient boosting methods showing the best performance. All of the models were also able to outperform the current solution with no personalization, with the best model reducing the number of false positives by 50%. / Tillgång till stora datamängder har tillsammans med framsteg inom maskininlärning resulterat i en explosionsartad ökning i personifierade erbjudanden och möjligheter att förutspå kunders behov. Det har under det senaste decenniet utvecklats till en multimiljardindustri och en förmåga som många av de ledande techbolagen i världen förlitar sig på i sina verksamheter. I många fall är det till och med en förutsättning för att överleva i dagens industrilandskap. Det här examensarbetet ämnar att skapa en maskininlärningsmodell som är kapabel till att förutspå kunders intresse för att "uppgradera" sin betalmetod efter ett slutfört köp med det svenska betallösningsföretaget Klarna Bank. Konceptet att erbjuda en kund att uppgradera en redan vald produkt eller tjänst är på engelska känt som upselling. Det övergripande syftet för detta projekt är därför att skapa en personifierad kundupplevelse på Klarnas bekräftelsesida. Följaktligen implementerades och utvärderades två så kallade gradient boosting-metoder samt en djupinlärningsmetod. Vidare implementerades även en logistisk regressionsmodell som basmodell för att jämföra de övriga modellerna med. Resultaten visar hur alla modeller överträffade den tillämpade basmodellen, där gradient boosting-metoderna påvisade bättre resultat än djupinlärningsmetoden. Därtill visar alla modeller en förbättring i jämförelse med dagens lösning på Klarnas bekräftelsesida, utan personifiering, där den bästa modellen förbättrade utfallet med 50%.
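
As a hedged sketch of the comparison described above, the snippet below trains a logistic-regression baseline and a gradient-boosting classifier on synthetic data standing in for the purchase features, and counts false positives explicitly; none of it reflects Klarna's actual data or models.

```python
# Baseline vs. gradient boosting on an imbalanced synthetic classification task,
# reporting false positives from the confusion matrix.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model in [("baseline: logistic regression", LogisticRegression(max_iter=1000)),
                    ("gradient boosting", GradientBoostingClassifier())]:
    model.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
    print(f"{name}: false positives = {fp}, true positives = {tp}")
```
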
555

Classifying and Comparing Latent Space Representation of Unstructured Log Data. / Klassificering och jämförelse av latenta rymdrepresentationer av ostrukturerad loggdata.

Sharma, Bharat January 2021 (has links)
This thesis explores and compares various methods for producing vector representations of unstructured log data. Ericsson wanted to investigate machine learning methods to analyze logs produced by their systems to reduce the cost and effort required for manual log analysis. Four NLP methods were used to produce vector embeddings for logs: Doc2Vec, DAN, XLNet, and RoBERTa. In addition, a Random Forest classifier was used to classify those embeddings. The experiments were performed on three different datasets, and the results showed that the performance of the models varied based on the dataset being used. The results also show that in the case of log data, fine-tuning makes the transformer models computationally heavy while the performance gain is very low. RoBERTa without fine-tuning produced the best vector representations for the first and third datasets, whereas DAN had better performance for the second dataset. The study also concluded that the NLP models were able to better understand and classify the third dataset, as it contained more plain-text information in contrast to the more technical and less human-readable datasets. / I den här uppsatsen undersöks och jämförs olika metoder för att skapa vektorrepresentationer av ostrukturerad loggdata. Ericsson vill undersöka om det är möjligt att använda tekniker inom maskininlärning för att analysera loggdata som produceras av deras nuvarande system och på så sätt underlätta och minska kostnaderna för manuell logganalys. Fyra olika språkteknologier undersöks för att skapa vektorrepresentationer av loggdata: Doc2vec, DAN, XLNet och RoBERTa. Dessutom används en Random Forest klassificerare för att klassificera vektorrepresentationerna. Experimenten utfördes på tre olika datamängder och resultaten visade att modellernas prestanda varierade baserat på datauppsättningen som användes. Resultaten visar också att finjustering av transformatormodeller gör dem beräkningskrävande och prestandavinsten är liten. RoBERTa utan finjustering producerade optimala vektorrepresentationer för det första och tredje datasetet, medan DAN hade bättre prestanda för det andra datasetet. Studien visar också att språkmodellerna kunde klassificera det tredje datasetet bättre då det innehöll mer information i klartext jämfört med mer tekniska och mindre lättlästa dataseten.
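
A minimal sketch of the pipeline described above, with frozen RoBERTa embeddings (no fine-tuning) fed to a Random Forest; the toy log lines, labels and the mean-pooling choice are assumptions for illustration, not the thesis setup.

```python
# Frozen RoBERTa embeddings for log lines, classified with a Random Forest.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.ensemble import RandomForestClassifier

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base").eval()

def embed(lines):
    batch = tokenizer(lines, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state         # (batch, tokens, 768)
    mask = batch["attention_mask"].unsqueeze(-1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()   # mean over real (non-padding) tokens

logs = ["connection timeout on node-3", "user login succeeded", "disk quota exceeded"]  # made-up examples
labels = [1, 0, 1]
clf = RandomForestClassifier().fit(embed(logs), labels)
print(clf.predict(embed(["login succeeded for admin"])))
```
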
556

Data-efficient reinforcement learning with self-predictive representations

Schwarzer, Max 08 1900
L'efficacité des données reste un défi majeur dans l'apprentissage par renforcement profond. Bien que les techniques modernes soient capables d'atteindre des performances élevées dans des tâches extrêmement complexes, y compris les jeux de stratégie comme le StarCraft, les échecs, le shogi et le go, ainsi que dans des domaines visuels exigeants comme les jeux Atari, cela nécessite généralement d'énormes quantités de données interactives, limitant ainsi l'application pratique de l'apprentissage par renforcement. Dans ce mémoire, nous proposons la SPR, une méthode inspirée des récentes avancées en apprentissage auto-supervisé de représentations, conçue pour améliorer l'efficacité des données des agents d'apprentissage par renforcement profond. Nous évaluons cette méthode sur l'environement d'apprentissage Atari, et nous montrons qu'elle améliore considérablement les performances des agents avec un surcroît de calcul modéré. Lorsqu'on lui accorde à peu près le même temps d'apprentissage qu'aux testeurs humains, un agent d'apprentissage par renforcement augmenté de SPR atteint des performances surhumaines dans 7 des 26 jeux, une augmentation de 350% par rapport à l'état de l'art précédent, tout en améliorant fortement les performances moyennes et médianes. Nous évaluons également cette méthode sur un ensemble de tâches de contrôle continu, montrant des améliorations substantielles par rapport aux méthodes précédentes. Le chapitre 1 présente les concepts nécessaires à la compréhension du travail présenté, y compris des aperçus de l'apprentissage par renforcement profond et de l'apprentissage auto-supervisé de représentations. Le chapitre 2 contient une description détaillée de nos contributions à l'exploitation de l'apprentissage de représentation auto-supervisé pour améliorer l'efficacité des données dans l'apprentissage par renforcement. Le chapitre 3 présente quelques conclusions tirées de ces travaux, y compris des propositions pour les travaux futurs. / Data efficiency remains a key challenge in deep reinforcement learning. Although modern techniques have been shown to be capable of attaining high performance in extremely complex tasks, including strategy games such as StarCraft, Chess, Shogi, and Go as well as in challenging visual domains such as Atari games, doing so generally requires enormous amounts of interactional data, limiting how broadly reinforcement learning can be applied. In this thesis, we propose SPR, a method drawing from recent advances in self-supervised representation learning designed to enhance the data efficiency of deep reinforcement learning agents. We evaluate this method on the Atari Learning Environment, and show that it dramatically improves performance with limited computational overhead. When given roughly the same amount of learning time as human testers, a reinforcement learning agent augmented with SPR achieves super-human performance on 7 out of 26 games, an increase of 350% over the previous state of the art, while also strongly improving mean and median performance. We also evaluate this method on a set of continuous control tasks, showing substantial improvements over previous methods. Chapter 1 introduces concepts necessary to understand the work presented, including overviews of Deep Reinforcement Learning and Self-Supervised Representation learning. Chapter 2 contains a detailed description of our contributions towards leveraging self-supervised representation learning to improve data-efficiency in reinforcement learning. 
Chapter 3 provides some conclusions drawn from this work, including a number of proposals for future work.
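
A minimal sketch of a self-predictive objective of the kind SPR uses: an online encoder and transition model predict a target encoder's representation of the next observation, compared with a negative cosine similarity. Network sizes, the single-step horizon and the toy data are simplifying assumptions, not the SPR implementation.

```python
# Self-predictive representation loss: predict the target encoder's next-step latent.
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, act_dim, latent_dim = 16, 4, 32
online = nn.Linear(obs_dim, latent_dim)
target = nn.Linear(obs_dim, latent_dim)          # in SPR this would be an EMA copy of the online encoder
transition = nn.Linear(latent_dim + act_dim, latent_dim)
predictor = nn.Linear(latent_dim, latent_dim)

def spr_loss(obs, action, next_obs):
    z = online(obs)
    z_hat = transition(torch.cat([z, action], dim=1))   # predicted next latent
    with torch.no_grad():
        z_next = target(next_obs)                       # target representation (no gradient)
    return -F.cosine_similarity(predictor(z_hat), z_next, dim=1).mean()

loss = spr_loss(torch.randn(8, obs_dim), torch.randn(8, act_dim), torch.randn(8, obs_dim))
loss.backward()
```
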
557

Sistema de gestión y clasificación automática de denuncias ambientales mediante aprendizaje de máquina / Management and automatic classification of environmental complaints system using machine learning

Concepción Tiza, Miguel Angel 04 January 2021
Desde las últimas décadas, el impacto negativo que generan las actividades humanas ha incrementado la importancia de la protección del medio ambiente año tras año tanto en el mundo como en el Perú. Por esta razón, los gobiernos a nivel mundial implementan mecanismos de protección ambiental tales como las denuncias ambientales. Estas permiten a la población informar sobre una posible contaminación ambiental a las autoridades competentes con el fin de que tomen las acciones necesarias, para esto, es necesario que las denuncias sean formuladas, clasificadas y derivadas de forma correcta y oportuna. Sin embargo, para realizar esas tareas de forma correcta se requiere de un amplio conocimiento técnico y legal que pocas personas poseen, esto lleva a que las denuncias ambientales no puedan ser atendidas de forma rápida y eficiente generando malestar en la población afectada. Frente a esta problemática, se propone una solución informática que gestione de forma automática la clasificación y derivación de denuncias ambientales mediante el uso del aprendizaje de máquina. Considerando que la mayoría de las denuncias ambientales consisten en textos se aplica técnicas de procesamiento de lenguaje natural que mediante algoritmos de clasificación de múltiples etiquetas se pueda clasificar automáticamente las denuncias ambientales lo que mejorará los tiempos de atención. / Over the last decades, the negative impact generated by human activities has increased the importance of protecting the environment year after year, both worldwide and in Peru. For this reason, governments worldwide implement environmental protection mechanisms such as environmental complaints. These allow the population to report possible environmental contamination to the competent authorities so that they can take the necessary actions; for this, complaints must be formulated, classified, and routed correctly and in a timely manner. However, performing these tasks correctly requires extensive technical and legal knowledge that few people possess, which means that environmental complaints cannot be dealt with quickly and efficiently, generating discomfort in the affected population. Faced with this problem, a software solution is proposed that automatically manages the classification and routing of environmental complaints using machine learning. Considering that most environmental complaints consist of text, natural language processing techniques and multi-label classification algorithms are applied to classify environmental complaints automatically, which will improve response times. / Tesis
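
A hedged sketch of the approach outlined in the abstract: complaint texts vectorised with TF-IDF and classified with a one-vs-rest multi-label scheme; the example complaints and label set are invented for illustration and do not reflect the thesis data.

```python
# TF-IDF features plus a one-vs-rest multi-label classifier for complaint texts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

complaints = [
    "vertimiento de residuos en el rio cerca de la planta",
    "ruido excesivo y humo de la fabrica durante la noche",
    "quema de basura en terreno abierto",
]
labels = [{"agua"}, {"aire", "ruido"}, {"aire", "residuos"}]   # hypothetical label sets

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)
model = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LogisticRegression(max_iter=1000)))
model.fit(complaints, Y)
pred = model.predict(["humo negro saliendo de una chimenea industrial"])
print(mlb.inverse_transform(pred))
```
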
558

Cooperative security log analysis using machine learning : Analyzing different approaches to log featurization and classification / Kooperativ säkerhetslogganalys med maskininlärning

Malmfors, Fredrik January 2022 (has links)
This thesis evaluates the performance of different machine learning approaches to log classification based on a dataset derived from simulating intrusive behavior towards an enterprise web application. The first experiment consists of performing attacks towards the web app and correlating them with the logs to create a labeled dataset. The second experiment consists of one unsupervised model based on a variational autoencoder and four supervised models based on both conventional feature-engineering techniques with deep neural networks and embedding-based feature techniques followed by long short-term memory (LSTM) architectures and convolutional neural networks. With this dataset, the embedding-based approaches performed much better than the conventional one. The autoencoder did not perform well compared to the supervised models. To conclude, embedding-based approaches show promise even on datasets with characteristics different from natural language.
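
One of the embedding-based supervised models described above can be sketched as token embeddings followed by an LSTM and a binary (intrusive or benign) classification head; the vocabulary size, dimensions and toy batch are assumptions for illustration, not the thesis configuration.

```python
# Token embeddings + LSTM + linear head for binary log classification.
import torch
import torch.nn as nn

class LogLSTMClassifier(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        return self.head(h_n[-1])               # classify from the final hidden state

model = LogLSTMClassifier()
batch = torch.randint(1, 5000, (8, 40))          # 8 tokenised log lines of length 40 (placeholder data)
labels = torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(batch), labels)
loss.backward()
```
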
559

Writer identification using semi-supervised GAN and LSR method on offline block characters

Hagström, Adrian, Stanikzai, Rustam January 2020 (has links)
Block characters are often used when filling out forms, for example when writing one's personal number. The question of whether or not there is recoverable, biometric (identity-related) information within individual digits of handwritten personal numbers is then relevant. This thesis investigates the question by using both handcrafted features and features extracted via Deep learning (DL) models, while successively limiting the number of available training samples. Some recent works using DL have presented semi-supervised methods that use Generative adversarial network (GAN) generated data together with a modified Label smoothing regularization (LSR) function. Using this training method might improve performance over a baseline fully supervised model when doing authentication. This work additionally proposes a novel modified LSR function named Bootstrap label smoothing regularizer (BLSR), designed to mitigate some of the problems of previous methods, and compares it to the others. The DL feature extraction is done by training a ResNet50 model to recognize writers of personal numbers and then extracting the feature vector from the second-to-last layer of the network. Results show a clear indication of recoverable identity-related information within the handwritten (personal number) digits in boxes. Our results indicate an authentication performance, expressed as Equal error rate (EER), of around 25% with handcrafted features. The same performance measured in EER was between 20-30% when using the features extracted from the DL model. The DL methods, while showing potential for greater performance than the handcrafted features, seem to suffer from fluctuating (noisy) results, making conclusions on their use in practice hard to draw. Additionally, when using 1-2 training samples, the handcrafted features easily beat the DL methods. When using the LSR-variant semi-supervised methods, there is no noticeable performance boost, and BLSR gets the second best results among the alternatives.
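
The Equal error rate reported above can be computed from verification scores as the point where the false-acceptance and false-rejection rates cross; the genuine and impostor scores below are random placeholders, not the thesis data.

```python
# Computing EER from genuine (same-writer) and impostor (different-writer) comparison scores.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
genuine_scores = rng.normal(0.7, 0.15, 500)    # same-writer comparisons (label 1)
impostor_scores = rng.normal(0.4, 0.15, 500)   # different-writer comparisons (label 0)

scores = np.concatenate([genuine_scores, impostor_scores])
labels = np.concatenate([np.ones(500), np.zeros(500)])

fpr, tpr, _ = roc_curve(labels, scores)
fnr = 1 - tpr
eer_index = np.argmin(np.abs(fpr - fnr))       # point where false accepts and false rejects are equal
eer = (fpr[eer_index] + fnr[eer_index]) / 2
print(f"EER is approximately {eer:.1%}")
```
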
560

Using supervised learning methods to predict the stop duration of heavy vehicles.

Oldenkamp, Emiel January 2020 (has links)
In this thesis project, we attempt to predict the stop duration of heavy vehicles using data based on GPS positions collected in a previous project. All of the training and prediction is done in AWS SageMaker, and we explore possibilities with Linear Learner, K-Nearest Neighbors and XGBoost, all of which are explained in this paper. Although we were not able to construct a production-grade model within the time frame of the thesis, we were able to show that the potential for such a model does exist given more time, and we suggest some paths one can take to improve on the endpoint of this project.
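
A local, hedged sketch of the kind of XGBoost regression explored here (the thesis itself trained the models in AWS SageMaker); the GPS-derived features and stop-duration targets are synthetic placeholders invented for illustration.

```python
# XGBoost regression on synthetic stop-duration data.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.uniform(0, 24, n),        # hour of day the stop began (assumed feature)
    rng.integers(0, 7, n),        # day of week (assumed feature)
    rng.uniform(0, 300, n),       # distance driven since last stop, km (assumed feature)
])
y = 20 + 3 * X[:, 0] + rng.normal(0, 10, n)   # synthetic stop duration in minutes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("MAE (minutes):", mean_absolute_error(y_te, model.predict(X_te)))
```
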
