71

Strojový překlad mluvené řeči přes fonetickou reprezentaci zdrojové řeči / Spoken Language Translation via Phoneme Representation of the Source Language

Polák, Peter January 2020 (has links)
We refactor the traditional two-step approach to spoken language translation (automatic speech recognition followed by machine translation). Instead of conventional graphemes, we use phonemes as the intermediate speech representation. Starting with the acoustic model, we revise cross-lingual transfer and propose a coarse-to-fine method that provides a further speed-up and performance boost. We then review the translation model, experimenting with source and target encodings and boosting robustness through fine-tuning and transfer across ASR and SLT. We document empirically that this conventional setup with an alternative representation not only performs well on standard test sets but also yields robust transcripts and translations on challenging (e.g., non-native) test sets. Notably, our ASR system outperforms commercial ASR systems.
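As a rough illustration of the two-step setup with phonemes as the intermediate representation, a minimal sketch might look like the following; the acoustic_model and translation_model objects and their methods are purely hypothetical placeholders, not the thesis' actual interfaces:

    def speech_to_phonemes(audio, acoustic_model):
        # ASR step: the acoustic model emits phonemes instead of graphemes,
        # e.g. "HH AH L OW" rather than "hello".
        return acoustic_model.transcribe(audio, units="phonemes")

    def phonemes_to_text(phonemes, translation_model):
        # MT step: a sequence-to-sequence model trained on
        # phoneme-source / target-language-text parallel data.
        return translation_model.translate(" ".join(phonemes))

    def translate_speech(audio, acoustic_model, translation_model):
        phonemes = speech_to_phonemes(audio, acoustic_model)
        return phonemes_to_text(phonemes, translation_model)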
72

Deep Transfer Learning Applied to Time-series Classification for Predicting Heart Failure Worsening Using Electrocardiography

Pan, Xiang 20 April 2020 (has links)
Computational ECG (electrocardiogram) analysis enables accurate, faster diagnosis and early prediction of heart failure related symptoms (heart failure worsening). Machine learning, particularly deep learning, has been applied to ECG data successfully. Previous applications, however, either focused on classifying occurrent, known patterns of on-going heart failure or heart failure related diseases such as arrhythmia, which offer little predictive value beforehand, or relied on pre-processed public database data. Such approaches do not fully capitalize on the potential of deep learning, which directly learns important features from raw input data without relying on a priori knowledge. Here, we present a deep transfer learning pipeline that combines an image-based pretrained deep neural network with manifold learning to predict the precursors of heart failure (heart failure worsening and recurrent heart failure related re-hospitalization) from raw ECG time series recorded by wearable devices. In this dissertation, we used the unprocessed real-life ECG data from the SENTINEL-HF study by Dovancescu et al. to predict the precursors of heart failure worsening. To extract rich features from the ECG time series, we took a deep transfer learning approach in which 1D time series of five heartbeats were transformed into 2D images by the Gramian Angular Summation Field (GASF) and the pretrained VGG19 model was used for feature extraction. We then applied UMAP (Uniform Manifold Approximation and Projection) to capture the manifold of the standardized feature space and reduce its dimension, followed by SVM (Support Vector Machine) training. Using this pipeline, we demonstrated that our classifier predicts heart failure worsening with 92.1% accuracy, 92.9% precision, 92.6% recall and an F1 score of 0.93, bypassing the detection of known abnormal ECG patterns. In conclusion, we demonstrate the feasibility of early heart failure alerts by predicting the precursors of heart failure worsening from raw ECG signals. We expect that our approach provides an innovative way to assess the recovery and the success of the treatment a patient received during the first hospitalization, to predict whether recurrent heart failure is likely to occur, and to evaluate whether the patient should be discharged.
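The described pipeline (GASF imaging, VGG19 feature extraction, UMAP, SVM) could be sketched roughly as follows, assuming the pyts, tensorflow, umap-learn and scikit-learn packages; the array shapes and random placeholder data are illustrative only, not the SENTINEL-HF data or the dissertation's actual code:

    import numpy as np
    from pyts.image import GramianAngularField
    from tensorflow.keras.applications import VGG19
    from tensorflow.keras.applications.vgg19 import preprocess_input
    import umap                                   # umap-learn
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Placeholder data: 1D segments of five heartbeats with binary labels
    # (1 = precursor of heart failure worsening). Shapes are illustrative.
    X_ts = np.random.rand(200, 500)
    y = np.random.randint(0, 2, 200)

    # Step 1: encode each time series as a 2D Gramian Angular Summation Field image.
    gasf = GramianAngularField(image_size=224, method="summation")
    X_img = gasf.fit_transform(X_ts)                  # (n, 224, 224), values in [-1, 1]
    X_img = (X_img + 1.0) / 2.0 * 255.0               # rescale to 0-255
    X_img = np.repeat(X_img[..., None], 3, axis=-1)   # replicate to 3 channels for VGG19
    X_img = preprocess_input(X_img)

    # Step 2: extract features with an ImageNet-pretrained VGG19 (no classification head).
    backbone = VGG19(weights="imagenet", include_top=False, pooling="avg")
    feats = backbone.predict(X_img, verbose=0)        # (n, 512)

    # Step 3: reduce the standardized feature space with UMAP, then train an SVM.
    feats_std = StandardScaler().fit_transform(feats)
    emb = umap.UMAP(n_components=10).fit_transform(feats_std)
    clf = SVC(kernel="rbf").fit(emb, y)
    print("training accuracy:", clf.score(emb, y))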
73

TASK DETECTORS FOR PROGRESSIVE SYSTEMS

Maxwell Joseph Jacobson (10669431) 30 April 2021 (has links)
While methods like learning-without-forgetting [11] and elastic weight consolidation [22] accomplish high-quality transfer learning while mitigating catastrophic forgetting, progressive techniques such as DeepMind's progressive neural network accomplish this while completely nullifying forgetting. However, progressive systems like this strictly require task labels at test time. In this paper, I introduce a novel task recognizer built from anomaly detection autoencoders that is capable of detecting the nature of the required task from the input data. Alongside a progressive neural network or other progressive learning system, this task-aware network can operate without task labels at run time while maintaining any catastrophic forgetting reduction measures implemented by the task model.
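A minimal sketch of such a task recognizer, assuming one small fully-connected autoencoder per task and mean-squared reconstruction error as the anomaly score (the dimensions and architecture are illustrative, not the paper's exact design):

    import torch
    import torch.nn as nn

    class TaskAutoencoder(nn.Module):
        def __init__(self, in_dim=784, hidden=64):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.dec = nn.Sequential(nn.Linear(hidden, in_dim), nn.Sigmoid())

        def forward(self, x):
            return self.dec(self.enc(x))

    def detect_task(x, autoencoders):
        """Return the index of the task whose autoencoder reconstructs x best."""
        errors = []
        with torch.no_grad():
            for ae in autoencoders:
                recon = ae(x)
                errors.append(torch.mean((recon - x) ** 2).item())
        return min(range(len(errors)), key=errors.__getitem__)

    # In practice each autoencoder is trained only on its own task's inputs
    # (training loop omitted); untrained models are used here just to show the
    # selection mechanics. The chosen index would then pick the matching
    # progressive-network column.
    autoencoders = [TaskAutoencoder() for _ in range(3)]
    task_id = detect_task(torch.rand(1, 784), autoencoders)
    print("predicted task:", task_id)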
74

Quantile Regression Deep Q-Networks for Multi-Agent System Control

Howe, Dustin 05 1900 (has links)
Training autonomous agents capable of performing their assigned job without fail is the ultimate goal of deep reinforcement learning. This thesis introduces a dueling Quantile Regression Deep Q-network in which the network learns the state value quantile function and the advantage quantile function separately. With this architecture the agent learns to control simulated robots in the Gazebo simulator. Carefully crafted reward functions and state spaces must be designed for the agent to learn in complex non-stationary environments. When trained for only 100,000 timesteps, the agent is able to reach asymptotic performance in environments with moving and stationary obstacles using only data from the inertial measurement unit, LIDAR, and positional information. Through transfer learning, the agents are also capable of formation control and flocking patterns. The performance of agents with frozen networks is further improved through advice giving in Deep Q-networks, using normalized Q-values and majority voting.
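A hedged sketch of the dueling quantile-regression head described above, with separate state-value-quantile and advantage-quantile streams combined into per-action return quantiles; the layer sizes and quantile count are assumptions, not the thesis' exact configuration:

    import torch
    import torch.nn as nn

    class DuelingQRDQN(nn.Module):
        def __init__(self, obs_dim, n_actions, n_quantiles=51, hidden=128):
            super().__init__()
            self.n_actions, self.n_quantiles = n_actions, n_quantiles
            self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
            # State-value quantile stream: one quantile vector per state.
            self.value = nn.Linear(hidden, n_quantiles)
            # Advantage quantile stream: one quantile vector per action.
            self.advantage = nn.Linear(hidden, n_actions * n_quantiles)

        def forward(self, obs):
            h = self.body(obs)
            v = self.value(h).unsqueeze(1)                              # (B, 1, N)
            a = self.advantage(h).view(-1, self.n_actions, self.n_quantiles)
            # Dueling combination applied to the quantile estimates.
            return v + a - a.mean(dim=1, keepdim=True)                  # (B, A, N)

    net = DuelingQRDQN(obs_dim=24, n_actions=5)
    q_quantiles = net(torch.rand(2, 24))
    q_values = q_quantiles.mean(dim=-1)   # expected Q-values used for greedy action
    print(q_values.shape)                 # torch.Size([2, 5])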
75

Night Setback Identification of District Heating Substations

Gerima, Kassaye January 2021 (has links)
Energy efficiency of district heating systems is of great interest to energy stakeholders. However, it is not uncommon for district heating systems to fail to achieve the expected performance due to inappropriate operation. Night setback is one control strategy that has been shown to be unsuitable for well-insulated modern buildings in terms of both economic and energy efficiency. Identifying night setback control is therefore vital for district heating companies to manage heat distribution to their customers smoothly. This study aims to automate that identification process. The method used in this thesis is a Convolutional Neural Network (CNN) approach based on transfer learning. 133 substations in Oslo are used in this case study to design a machine learning model that classifies a substation's series as night setback or non-night setback. The results show that the proposed method can classify the substations with approximately 97% accuracy and a 91% F1-score, indicating a high potential to be deployed in practice for identifying night setback control in district heating substations.
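One plausible realization of the transfer-learning classifier, assuming the substation heat-load series have already been rendered as fixed-size images; the backbone (MobileNetV2 here) and the image encoding are assumptions, not necessarily the thesis' choices:

    import tensorflow as tf

    # Pretrained backbone, kept frozen; only the new binary head is trained.
    base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                             pooling="avg", input_shape=(224, 224, 3))
    base.trainable = False

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # night setback vs. not
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # x: (n_substations, 224, 224, 3) image-encoded heat-load series, y: 0/1 labels
    # model.fit(x, y, epochs=20, validation_split=0.2)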
76

Transfer Learning on Ultrasound Spectrograms of Weld Joints for Predictive Maintenance

Bergström, Joakim January 2020 (has links)
A big hurdle for many companies wanting to start using machine learning is that current techniques need a huge amount of structured data. One potential way to reduce the need for data is to take advantage of previous knowledge from a related task, so-called transfer learning. In its basic form, a model trained on existing data is reused for another problem. The purpose of this master thesis is to investigate whether transfer learning can reduce the need for data when faced with a new machine learning task, in particular, transfer learning on ultrasound spectrograms of weld joints for predictive maintenance. The base for transfer learning is VGGish, a convolutional neural network model trained on audio samples collected from YouTube videos. The pre-trained weights are kept, and the prediction layer is replaced with a new prediction layer consisting of two neurons. The whole model is re-trained on the ultrasound spectrograms. The dataset is restricted to a minimum of ten and a maximum of 100 training samples. The results are evaluated and compared to a regular convolutional neural network trained on the same data. The results show that transfer learning improves the test accuracy compared to the regular convolutional neural network when the dataset is small. This thesis project concludes that transfer learning can reduce the need for data when faced with a new machine learning task, and the results indicate that transfer learning could be useful in industry.
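The fine-tuning recipe described above (pretrained weights kept, prediction layer replaced by two neurons, whole model retrained on a handful of spectrograms) could be sketched as below; an ImageNet-pretrained VGG16 stands in for VGGish here, since loading VGGish depends on the particular port used:

    import tensorflow as tf

    # Stand-in backbone (assumption: VGG16/ImageNet); 96x64 patches mirror
    # the usual VGGish spectrogram input size.
    backbone = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                           pooling="avg", input_shape=(96, 64, 3))

    # Replace the prediction layer with a new two-neuron layer, as in the thesis,
    # and retrain the whole model (no layers are frozen).
    model = tf.keras.Sequential([
        backbone,
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    backbone.trainable = True

    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])

    # x: (n, 96, 64, 3) ultrasound spectrograms, y: 0/1 weld-quality labels,
    # with n between 10 and 100 training samples as in the thesis.
    # model.fit(x, y, epochs=30, batch_size=8)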
77

Domain adaptation from 3D synthetic images to real images

Manamasa, Krishna Himaja January 2020 (has links)
Background. Domain adaptation describes a model learning from a source data distribution and performing well on target data. Here, domain adaptation is applied to assembly-line production tasks to perform automatic quality inspection. Objectives. The aim of this master thesis is to apply 3D domain adaptation from synthetic images to real images. It is an attempt to bridge the gap between the two domains (synthetic and real point cloud images) by implementing deep learning models that learn from synthetic 3D point clouds (CAD model images) and perform well on actual 3D point clouds (3D camera images). Methods. Over the course of the thesis project, various methods for understanding and analyzing the data, and for bridging the gap between the CAD and camera (CAM) data, are examined. Literature review and a controlled experiment are the research methodologies followed during implementation. We experiment with four different deep learning models on the generated data and compare their performance to determine which model performs best. Results. The results are reported through two metrics, accuracy and training time, obtained for each deep learning model after the experiment. These metrics are illustrated as graphs for a comparative analysis of the models on which the data is trained and tested. PointDAN showed better results, with higher accuracy, than the other three models. Conclusions. The results show that domain adaptation from synthetic images to real images is possible with the generated data. PointDAN, a deep learning model that focuses on local and global feature alignment with single-view point data, shows better results on our data.
78

Energy Predictions of Multiple Buildings using Bi-directional Long short-term Memory

Gustafsson, Anton, Sjödal, Julian January 2020 (has links)
The process of monitoring a building's energy consumption is time-consuming. Therefore, a feasible approach using transfer learning is presented to decrease the time needed to collect the required large dataset. The technique applies a bidirectional long short-term memory recurrent neural network using sequence-to-sequence prediction. The idea involves a training phase that extracts information and patterns from a building for which a reasonably sized dataset is available. The validation phase uses a dataset that is not sufficient in size. This dataset was acquired through a related paper, so the results can be validated accordingly. The conducted experiments include four cases that involve different strategies in the training and validation phases and different percentages of fine-tuning. Our proposed model generated better scores in terms of prediction performance compared to the related paper.
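A hedged sketch of the bidirectional LSTM sequence-to-sequence model and the two-phase transfer described above; the window lengths, layer sizes and which layers are frozen during fine-tuning are assumptions, not the paper's exact configuration:

    import tensorflow as tf
    from tensorflow.keras import layers

    def build_model(in_steps=168, out_steps=24, n_features=1):
        # Sequence-to-sequence bidirectional LSTM: one week of hourly consumption
        # in, one day of hourly consumption out (window sizes are illustrative).
        return tf.keras.Sequential([
            layers.Bidirectional(layers.LSTM(64), input_shape=(in_steps, n_features)),
            layers.RepeatVector(out_steps),
            layers.LSTM(64, return_sequences=True),
            layers.TimeDistributed(layers.Dense(n_features)),
        ])

    model = build_model()
    model.compile(optimizer="adam", loss="mse")

    # Phase 1: train on the building with a reasonably sized dataset.
    # model.fit(x_source, y_source, epochs=50)

    # Phase 2: transfer to the data-poor building, fine-tuning only part of the
    # network (here the encoder is frozen) on the small dataset.
    for layer in model.layers[:1]:
        layer.trainable = False
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
    # model.fit(x_target_small, y_target_small, epochs=20)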
79

Stockidentifiering och estimering av diameterfördelning med djupinlärning / Log Detection and Diameter Distribution Estimation Using Deep Learning

Almlöf, Mattias January 2020 (has links)
Mabema has a product that measures the timber volume of log stacks on trucks. The system is built on an image-processing algorithm that finds the silhouettes of the logs in rendered images of the truck stacks. The employer is not entirely satisfied with the performance of this algorithm and wants to investigate whether deep learning can improve the results. This thesis examines how the diameter distribution of each stack can be estimated with deep learning, and with object detection in particular. Two methods are examined: one treats the problem abstractly with deep regression, while the other works at the detail level and uses object detection to find log ends. The work also evaluates the possibility of training these models on data from physical simulations. Using synthetic data for training proves useful, and with transfer learning the synthetically trained models meet the requirements that Biometria places on automated diameter measurement. With object detection it is also possible to match the performance of the employer's algorithm, with a better log search, three times as fast or faster.
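The object-detection route can be sketched as below: a detector finds log ends in the stack images, and the detected boxes are converted to diameters and aggregated into a distribution; the detector object, the pixel-to-millimetre scale and the diameter classes are hypothetical placeholders, not Mabema's system:

    import numpy as np

    def diameter_distribution(images, detector, mm_per_pixel, bins):
        """Estimate a stack's log-diameter distribution from detected log ends.

        `detector` is a hypothetical object-detection model whose predict()
        returns (x, y, w, h) boxes in pixels for each detected log end;
        `mm_per_pixel` is assumed known from the camera calibration."""
        diameters = []
        for img in images:
            for (x, y, w, h) in detector.predict(img):
                # Approximate the log-end diameter by the mean box side.
                diameters.append(0.5 * (w + h) * mm_per_pixel)
        hist, _ = np.histogram(diameters, bins=bins)
        return hist / max(len(diameters), 1)   # relative frequency per diameter class

    # Example diameter classes in millimetres, purely illustrative:
    # dist = diameter_distribution(stack_images, detector, 0.8,
    #                              bins=np.arange(100, 501, 20))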
80

Learning from electrophysiology time series during sleep : from scoring to event detection / Apprentissage à partir de séries temporelles d'électrophysiologie pendant le sommeil : de l'annotation manuelle à la détection automatique d'évènements

Chambon, Stanislas 14 December 2018 (has links)
Le sommeil est un phénomène biologique universel complexe et encore peu compris. La méthode de référence actuelle pour caractériser les états de vigilance au cours du sommeil est la polysomnographie (PSG) qui enregistre de manière non invasive à la surface de la peau, les modifications électrophysiologiques de l’activité cérébrale (électroencéphalographie, EEG), oculaire (électro-oculographie, EOG) et musculaire (électromyographie, EMG). Traditionnellement, les signaux électrophysiologiques sont ensuite analysés par un expert du sommeil qui annote manuellement les évènements d’intérêt comme les stades de sommeil ou certains micro-évènements (grapho éléments EEG). Toutefois, l’annotation manuelle est chronophage et sujette à la subjectivité de l’expert. De plus, le développement exponentiel d’outils de monitoring du sommeil enregistrant et analysant automatiquement les signaux électrophysiologiques tels que le bandeau Dreem rend nécessaire une automatisation de ces tâches. L’apprentissage machine bénéficie d’une attention croissante car il permet d’apprendre à un ordinateur à réaliser certaines tâches de décision à partir d’un ensemble d’exemples d’apprentissage et d’obtenir des performances de prédictions plus élevées qu’avec les méthodes classiques. Les avancées techniques dans le domaine de l’apprentissage profond ont ouvert de nouvelles perspectives pour la science du sommeil tout en soulevant de nouveaux défis techniques. L’entraînement des algorithmes d’apprentissage profond nécessite une grande quantité de données annotées qui n’est pas nécessairement disponible pour les données PSG. De plus, les algorithmes d’apprentissage sont très sensibles à la variabilité des données qui est non négligeable en ce qui concerne ces données. Cela s’explique par la variabilité intra et inter-sujet (pathologies / sujets sains, âge…). Cette thèse étudie le développement d’algorithmes d’apprentissage profond afin de réaliser deux types de tâches: la prédiction des stades de sommeil et la détection de micro-événements. Une attention particulière est portée (a) sur la quantité de données annotées requise pour l’entraînement des algorithmes proposés et (b) sur la sensibilité de ces algorithmes à la variabilité des données. Des stratégies spécifiques, basées sur l’apprentissage par transfert, sont proposées pour résoudre les problèmes techniques dus au manque de données annotées et à la variabilité des données. / Sleep is a complex and not fully understood biological phenomenon. The traditional way to monitor sleep relies on the polysomnography exam (PSG), which records, non-invasively at the skin surface, the electrophysiological modifications of brain activity (electroencephalography, EEG), ocular activity (electro-oculography, EOG) and muscular activity (electromyography, EMG). The recorded signals are then analyzed by a sleep expert who manually annotates the events of interest, such as sleep stages or certain micro-events. However, manual labeling is time-consuming and prone to expert subjectivity. Furthermore, the development of consumer wearable sleep monitoring devices that record and automatically process electrophysiological signals, such as the Dreem headband, requires the automation of some labeling tasks. Machine learning (ML) has received much attention as a way to teach a computer to perform decision tasks automatically from a set of learning examples. Moreover, the rise of deep learning (DL) algorithms in several fields has opened new perspectives for sleep science, while also raising new concerns: the scarcity of labeled data may hinder training, and the variability of the data may hurt performance. Indeed, sleep data is scarce due to the labeling burden and exhibits intra- and inter-subject variability (due to sleep disorders, aging...). This thesis investigates and proposes ML algorithms to automate the detection of sleep-related events from raw PSG time series. Through the prism of DL, it addresses two main tasks: sleep stage classification and micro-event detection. Particular attention was paid (a) to the quantity of labeled data required to train such algorithms and (b) to the generalization performance of these algorithms on new (variable) data. Specific strategies, based on transfer learning, were designed to cope with the issues related to the scarcity of labeled data and the variability of the data.
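One common transfer-learning strategy for the label-scarcity problem mentioned above is sketched here with an illustrative 1D CNN sleep stager: pretrain on a large labeled cohort, then reuse the feature extractor on a small cohort and retrain only the classification head. The architecture, file name and hyper-parameters are assumptions, not the thesis' models:

    import torch
    import torch.nn as nn

    class SleepStager(nn.Module):
        # Small 1D CNN over a 30-second PSG epoch (architecture is illustrative).
        def __init__(self, n_channels=2, n_classes=5):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(n_channels, 16, kernel_size=50, stride=6), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=8, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x))

    model = SleepStager()
    # 1) Pretrain on a large labeled cohort (not shown) and save the weights.
    # 2) Transfer to a small cohort: reuse the feature extractor, relearn the head.
    # model.load_state_dict(torch.load("pretrained_stager.pt"))
    for p in model.features.parameters():
        p.requires_grad = False                   # keep the pretrained features
    model.classifier = nn.Linear(32, 5)           # fresh head for the new cohort
    optim = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)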
