About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

TASK DETECTORS FOR PROGRESSIVE SYSTEMS

Maxwell Joseph Jacobson (10669431) 30 April 2021 (has links)
While methods like learning-without-forgetting [11] and elastic weight consolidation [22] accomplish high-quality transfer learning while mitigating catastrophic forgetting, progressive techniques such as DeepMind's progressive neural network accomplish this while completely nullifying forgetting. However, progressive systems of this kind strictly require task labels at test time. In this paper, I introduce a novel task recognizer built from anomaly-detection autoencoders that is capable of detecting the nature of the required task from input data. Alongside a progressive neural network or other progressive learning system, this task-aware network can operate without task labels at run time while maintaining any catastrophic-forgetting reduction measures implemented by the task model.
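The detection idea can be sketched with a deliberately simple stand-in: one anomaly-detection autoencoder per task, where the task whose autoencoder reconstructs an input with the lowest error is taken as the active task. The PCA-based linear "autoencoder" below is an illustrative placeholder, not the thesis's actual architecture.

```python
import numpy as np

class SubspaceAE:
    """Linear stand-in for an anomaly-detection autoencoder:
    encode = project onto the top-k principal components of one task's
    training data; reconstruction error serves as the anomaly score."""
    def fit(self, X, k=1):
        self.mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.V = Vt[:k].T                      # (dim, k) projection
        return self

    def error(self, x):
        z = (x - self.mean) @ self.V           # encode
        r = self.mean + z @ self.V.T           # decode
        return float(np.mean((r - x) ** 2))

def detect_task(autoencoders, x):
    # The task whose autoencoder reconstructs x best is the detected task.
    return min(autoencoders, key=lambda t: autoencoders[t].error(x))
```

In use, one autoencoder is fitted per task's input distribution; at run time `detect_task` supplies the task label that a progressive network would otherwise require.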
72

Quantile Regression Deep Q-Networks for Multi-Agent System Control

Howe, Dustin 05 1900 (has links)
Training autonomous agents that are capable of performing their assigned job without fail is the ultimate goal of deep reinforcement learning. This thesis introduces a dueling Quantile Regression Deep Q-network, in which the network learns the state value quantile function and the advantage quantile function separately. With this network architecture, the agent is able to learn to control simulated robots in the Gazebo simulator. Carefully crafted reward functions and state spaces must be designed for the agent to learn in complex non-stationary environments. When trained for only 100,000 timesteps, the agent is able to reach asymptotic performance in environments with moving and stationary obstacles using only data from the inertial measurement unit, LIDAR, and positional information. Through the use of transfer learning, the agents are also capable of formation control and flocking patterns. The performance of agents with frozen networks is further improved through advice giving in Deep Q-networks, using normalized Q-values and majority voting.
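The dueling split described above can be sketched as follows. This is a generic illustration of how value and advantage quantile estimates are typically recombined (value plus mean-centered advantage, applied per quantile), not the thesis's exact network:

```python
import numpy as np

def dueling_quantile_q(value_q, adv_q):
    """Combine per-quantile value and advantage estimates.
    value_q: (N,) quantiles of the state value V(s)
    adv_q:   (A, N) quantiles of the advantage A(s, a) per action
    Returns (A, N) quantiles of Q(s, a) = V + A - mean_a A."""
    return value_q[None, :] + adv_q - adv_q.mean(axis=0, keepdims=True)

def greedy_action(q_quantiles):
    # Act greedily on the expected Q-value: the mean over quantiles.
    return int(np.argmax(q_quantiles.mean(axis=1)))
```

Subtracting the per-quantile mean advantage is the usual identifiability trick from dueling networks, carried over here to each quantile of the return distribution.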
73

Night Setback Identification of District Heating Substations

Gerima, Kassaye January 2021 (has links)
Energy efficiency of district heating systems is of great interest to energy stakeholders. However, it is not uncommon for district heating systems to fall short of their expected performance due to inappropriate operation. Night setback is one control strategy that has been shown to be unsuitable for well-insulated modern buildings in terms of both economic and energy efficiency. Identifying night setback control is therefore vital for district heating companies to manage heat distribution to their customers smoothly, and this study aims to automate that identification process. The method used in this thesis is a Convolutional Neural Network (CNN) approach based on transfer learning. 133 substations in Oslo are used in this case study to design a machine learning model that classifies a substation's series as night setback or non-night setback. The results show that the proposed method can classify the substations with approximately 97% accuracy and a 91% F1-score, indicating a high potential for deployment in practice to identify night setback control in district heating substations.
74

Transfer Learning on Ultrasound Spectrograms of Weld Joints for Predictive Maintenance

Bergström, Joakim January 2020 (has links)
A big hurdle for many companies looking to adopt machine learning is that current techniques need a huge amount of structured data. One potential way to reduce the need for data is to take advantage of previous knowledge from a related task; this is known as transfer learning. In basic terms, a model trained on existing data is reused for another problem. The purpose of this master thesis is to investigate whether transfer learning can reduce the need for data when facing a new machine learning task: in particular, applying transfer learning to ultrasound spectrograms of weld joints for predictive maintenance. The base for transfer learning is VGGish, a convolutional neural network model trained on audio samples collected from YouTube videos. The pre-trained weights are kept, and the prediction layer is replaced with a new prediction layer consisting of two neurons. The whole model is then re-trained on the ultrasound spectrograms. The dataset is restricted to a minimum of ten and a maximum of 100 training samples. The results are evaluated and compared against a regular convolutional neural network trained on the same data. They show that transfer learning improves the test accuracy over the regular convolutional neural network when the dataset is small. This thesis project concludes that transfer learning can reduce the need for data when facing a new machine learning task, and the results indicate that transfer learning could be useful in industry.
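The transfer setup described (keep the pretrained weights, swap in a two-neuron prediction layer) can be sketched roughly as below. For brevity this stand-in freezes the "pretrained" features and trains only the new softmax head, whereas the thesis re-trains the whole model; the random projection is merely a placeholder for VGGish features, not the real embedding.

```python
import numpy as np

rng = np.random.default_rng(0)
W_PRE = rng.normal(size=(64, 16)) * 0.3   # placeholder "pretrained" weights

def embed(X):
    # Stand-in for the frozen pretrained feature extractor.
    return np.tanh(X @ W_PRE)

def train_head(X, y, classes=2, lr=0.5, epochs=300):
    """Train only the new prediction layer: a softmax over `classes` neurons."""
    F = embed(X)
    W = np.zeros((F.shape[1], classes))
    onehot = np.eye(classes)[y]
    for _ in range(epochs):
        logits = F @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * F.T @ (p - onehot) / len(X)   # cross-entropy gradient step
    return W

def predict(W, X):
    return np.argmax(embed(W_PRE is not None and X) @ W, axis=1)
```

Because only the small head is trained, a handful of labeled spectrograms can suffice; fine-tuning all layers, as in the thesis, trades more data for potentially better fit.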
75

Domain adaptation from 3D synthetic images to real images

Manamasa, Krishna Himaja January 2020 (has links)
Background. Domain adaptation describes a model learning from a source data distribution and performing well on target data. Here, domain adaptation is applied to assembly-line production tasks to perform automatic quality inspection. Objectives. The aim of this master thesis is to apply 3D domain adaptation from synthetic images to real images. It is an attempt to bridge the gap between the two domains (synthetic and real point cloud images) by implementing deep learning models that learn from synthetic 3D point clouds (CAD model images) and perform well on actual 3D point clouds (3D camera images). Methods. Over the course of the thesis project, various methods for understanding and analyzing the data, with the goal of bridging the gap between CAD and CAM data to make them similar, are examined. Literature review and controlled experiment are the research methodologies followed during implementation. In this project, four different deep learning models are trained on the generated data and their performance compared to determine which model performs best. Results. The results are reported through two metrics, accuracy and training time, recorded for each deep learning model during the experiment, and illustrated as graphs for comparative analysis across the models on which the data is trained and tested. PointDAN showed better results, with higher accuracy than the other three models. Conclusions. The results show that domain adaptation from synthetic images to real images is possible with the generated data. PointDAN, a deep learning model that focuses on local and global feature alignment with single-view point data, shows the best results on our data.
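As a concrete taste of what "aligning" two domains means, the sketch below computes the squared maximum mean discrepancy (MMD), one common domain-adaptation criterion: it is small when source and target feature distributions match, and minimizing it pulls the domains together. Note this is a generic illustration; PointDAN itself uses adversarial alignment of local and global features, which is more involved.

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel.
    Small when X and Y are drawn from similar distributions."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)
    return float(k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean())
```

In a domain-adaptation training loop, a term like this (or an adversarial discriminator, as in PointDAN) is added to the task loss so that features from CAD renderings and camera scans become indistinguishable.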
76

Energy Predictions of Multiple Buildings using Bi-directional Long short-term Memory

Gustafsson, Anton, Sjödal, Julian January 2020 (has links)
The process of energy consumption monitoring for a building is time-consuming. Therefore, a feasible approach using transfer learning is presented to decrease the time necessary to extract the required large dataset. The technique applies a bidirectional long short-term memory recurrent neural network using sequence-to-sequence prediction. The idea involves a training phase that extracts information and patterns from a building for which a reasonably sized dataset is available. The validation phase uses a dataset that is not sufficient in size; this dataset was acquired through a related paper, so the results can be validated accordingly. The conducted experiments include four cases involving different strategies in the training and validation phases and different percentages of fine-tuning. Our proposed model generated better scores in terms of prediction performance compared to the related paper.
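The pretrain-then-fine-tune idea can be illustrated with a deliberately simple stand-in: a linear autoregressive predictor rather than a bidirectional LSTM, and a synthetic daily-cycle signal rather than real consumption data. Pretrain on a data-rich building, then fine-tune briefly on the data-poor one; with little target data, the transferred weights beat training from scratch.

```python
import numpy as np

def make_series(rng, n, phase=0.0):
    # Synthetic hourly "consumption": a 24-hour cycle plus noise.
    t = np.arange(n)
    return np.sin(2 * np.pi * t / 24 + phase) + 0.05 * rng.normal(size=n)

def windows(s, w=24):
    # Sliding windows of the past day predict the next hour.
    X = np.stack([s[i:i + w] for i in range(len(s) - w)])
    return X, s[w:]

def fit(X, y, w0=None, lr=0.01, epochs=50):
    # Plain gradient descent on squared error; w0 carries transferred weights.
    w = np.zeros(X.shape[1]) if w0 is None else w0.copy()
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(X)
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))
```

The same mechanics apply to the thesis's setting: the training phase fits on the large dataset, and the validation-phase building only needs enough data for a short fine-tuning run.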
77

Stockidentifiering och estimering av diameterfördelning med djupinlärning / Log Detection and Diameter Distribution Estimation Using Deep Learning

Almlöf, Mattias January 2020 (has links)
Mabema has a product that measures the timber volume of log stacks on trucks. The system is built on an image processing algorithm that finds the silhouettes of the logs in rendered images of the truck stacks. The employer is not entirely satisfied with the performance of the algorithm and wants to investigate whether deep learning can improve the results. This work examines how the diameter distribution in each stack can be estimated using deep learning, and object detection in particular. Two methods are examined: one handles the problem abstractly with deep regression, while the other goes into detail and uses object detection to find log ends. The work also evaluates the possibilities of training these models on data from physical simulations. Using synthetic data for training turns out to be useful, and with transfer learning the synthetic models manage to meet the requirements Biometria places on automated diameter calculation. With object detection, it also proves possible to achieve the same performance as the employer's algorithm, with better log detection, three times as fast or faster.
78

Learning from electrophysiology time series during sleep : from scoring to event detection / Apprentissage à partir de séries temporelles d'électrophysiologie pendant le sommeil : de l'annotation manuelle à la détection automatique d'évènements

Chambon, Stanislas 14 December 2018 (has links)
Sleep is a complex and still poorly understood biological phenomenon. The reference method for characterizing vigilance states during sleep is the polysomnography exam (PSG), which records, non-invasively at the surface of the skin, electrophysiological signals of brain activity (electroencephalography, EEG), eye movement (electro-oculography, EOG) and muscle activity (electromyography, EMG). The recorded signals are then analyzed by a sleep expert who manually annotates the events of interest, such as sleep stages or certain micro-events (EEG grapho-elements). However, manual labeling is time-consuming and subject to the expert's subjectivity. Furthermore, the development of consumer sleep-monitoring wearables that record and automatically process electrophysiological signals, such as the Dreem headband, makes it necessary to automate these labeling tasks.
Machine learning has received growing attention as a way to teach a computer to perform decision tasks from a set of training examples, often with higher predictive performance than classical methods. Advances in deep learning have opened new perspectives for sleep science while raising new technical challenges. Training deep learning algorithms requires large quantities of labeled data, which are not necessarily available for PSG recordings, and these algorithms are highly sensitive to data variability, which is considerable here owing to intra- and inter-subject variability (sleep disorders, aging, healthy versus pathological subjects).
This thesis investigates deep learning algorithms to automate the detection of sleep-related events from raw PSG time series, addressing two main tasks: sleep stage classification and micro-event detection. Particular attention is paid (a) to the quantity of labeled data required to train the proposed algorithms and (b) to the generalization performance of these algorithms on new (variable) data. Specific strategies, based on transfer learning, are designed to cope with the issues related to the scarcity of labeled data and the variability of the data.
79

Learning from Task Heterogeneity in Social Media

January 2019 (has links)
abstract: In recent years, the rise in social media usage, both vertically in terms of the number of users per platform and horizontally in terms of the number of platforms per user, has led to a data explosion. User-generated social media content provides an excellent opportunity to mine data of interest and to build resourceful applications. The rise in the number of healthcare-related social media platforms and the volume of healthcare knowledge available online in the last decade has resulted in increased social media usage for personal healthcare. In the United States, nearly ninety percent of adults in the age group 50-75 have used social media to seek and share health information. Motivated by this growth, this thesis focuses on healthcare-related applications, studies various challenges posed by social media data, and addresses them through novel and effective machine learning algorithms. The major challenges for effectively and efficiently mining social media data to build functional applications include: (1) Data reliability and acceptance: most social media data (especially in the context of healthcare-related social media) is not regulated, and little has been studied on the benefits of healthcare-specific social media; (2) Data heterogeneity: social media data is generated by users with both demographic and geographic diversity; (3) Model transparency and trustworthiness: most existing machine learning models for addressing heterogeneity are black-box models, and few provide explanations for their decisions that would justify trusting them.
In response to these challenges, three main research directions have been investigated in this thesis: (1) Analyzing social media influence on healthcare: studying the real-world impact of social media as a source for offering or seeking support for patients with chronic health conditions; (2) Learning from task heterogeneity: proposing various models and algorithms that are adaptable to new social media platforms and robust to dynamic social media data, specifically for modeling user behaviors, identifying similar actors across platforms, and adapting black-box models to a specific learning scenario; (3) Explaining heterogeneous models: interpreting predictive models in the presence of task heterogeneity. In this thesis, novel algorithms with theoretical analysis from various aspects (e.g., time complexity, convergence properties) have been proposed. The effectiveness and efficiency of the proposed algorithms are demonstrated through comparison with state-of-the-art methods and relevant case studies. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2019
80

Cross Platform Training of Neural Networks to Enable Object Identification by Autonomous Vehicles

January 2019 (has links)
abstract: Autonomous vehicle technology has been evolving for years, since the Automated Highway System Project. However, this technology has been under increased scrutiny ever since an autonomous vehicle killed Elaine Herzberg, who was crossing the street in Tempe, Arizona in March 2018. Recent tests of autonomous vehicles on public roads have faced opposition from nearby residents. Before these vehicles are widely deployed, it is imperative that the general public trust them. For this, the vehicles must be able to identify objects in their surroundings and demonstrate the ability to follow traffic rules while making decisions with human-like moral integrity when confronted with an ethical dilemma, such as an unavoidable crash that will injure either a pedestrian or the passenger. Testing autonomous vehicles in real-world scenarios would pose a threat to people and property alike. A safe alternative is to simulate these scenarios and to test that the resulting programs can work in real-world conditions. Moreover, in order to detect a moral-dilemma situation quickly, the vehicle should be able to identify objects in real time while driving. Toward this end, this thesis investigates the use of cross-platform training for neural networks that perform visual identification of common objects in driving scenarios, using the object detection algorithm Faster R-CNN. The hypothesis is that it is possible to train a neural network model to detect objects from two different domains, simulated or physical, using transfer learning. As a proof of concept, an object detection model is trained via transfer learning on image datasets extracted from CARLA, a virtual driving environment. After bringing the total loss factor to 0.4, the model is evaluated with an IoU metric. The model achieves a precision of 100% and 75% for vehicles and traffic lights respectively; the recall is 84.62% and 75%, respectively.
It is also shown that this model can detect the same classes of objects from other virtual environments and real-world images. Further modifications to the algorithm that may be required to improve performance are discussed as future work. / Dissertation/Thesis / Masters Thesis Mechanical Engineering 2019
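The IoU (intersection over union) metric used in the evaluation above is standard and worth making concrete. A minimal implementation for axis-aligned boxes follows; the `(x1, y1, x2, y2)` coordinate convention is assumed here, not taken from the thesis:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0
```

A detection typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5; precision and recall figures like those reported above follow from that matching rule.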
