161

Retention Length and Memory Capacity of Recurrent Neural Networks

Pretorius, Abraham Daniel January 2020 (has links)
Recurrent Neural Networks (RNNs) are variants of neural networks that are able to learn temporal relationships between sequences presented to the network, and they are often employed to learn underlying relationships in time series and sequential data. This dissertation examines the extent of RNNs' memory retention and how it is influenced by different activation functions, network structures and recurrent network types. To investigate memory retention, three approaches (and variants thereof) are used. First, the number of patterns each network is able to retain is measured. Thereafter, the length of retention is investigated. Lastly, the previous experiments are combined to measure the retention of patterns over time. During each investigation, different activation functions and network structures are compared to determine each configuration's effect on memory retention. The dissertation concludes that memory retention of a network is not necessarily improved by adding more parameters. Activation functions have a large effect on the performance of RNNs when retaining patterns, especially temporal patterns. Deeper network structures trade less memory retention per parameter for the ability to model more complex relationships. / Dissertation (MSc (Computer Science))--University of Pretoria, 2020. / Computer Science / MSc (Computer Science) / Unrestricted
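The first kind of experiment can be made concrete in a small script. The following is a minimal sketch (not the author's code; all sizes, the delay, and the training settings are illustrative assumptions) of one way to measure how many patterns a recurrent layer can recall after a delay, comparing a vanilla RNN with an LSTM:

```python
import torch
import torch.nn as nn

def retention_accuracy(cell, K=8, T=10, dim=16, steps=300):
    """Train a recurrent layer to recall which of K patterns it saw T steps ago."""
    rnn = cell(dim, 32, batch_first=True)      # recurrent layer under test
    head = nn.Linear(32, K)                    # classifies the recalled pattern
    opt = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()), lr=1e-3)
    patterns = torch.randn(K, dim)             # fixed bank of random patterns
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        ids = torch.randint(0, K, (64,))
        x = torch.zeros(64, T, dim)
        x[:, 0] = patterns[ids]                # the pattern is shown only at t = 0
        out, _ = rnn(x)
        opt.zero_grad()
        loss_fn(head(out[:, -1]), ids).backward()  # recall after T-1 silent steps
        opt.step()
    with torch.no_grad():
        ids = torch.randint(0, K, (512,))
        x = torch.zeros(512, T, dim)
        x[:, 0] = patterns[ids]
        out, _ = rnn(x)
        return (head(out[:, -1]).argmax(1) == ids).float().mean().item()

for K in (4, 16, 64):                          # grow the pattern bank until recall degrades
    print(K, retention_accuracy(nn.RNN, K=K), retention_accuracy(nn.LSTM, K=K))
```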
162

Algoritmy pro rozpoznávání pojmenovaných entit / Algorithms for named entity recognition

Winter, Luca January 2017 (has links)
The aim of this work is to find out which algorithm is best at recognizing named entities in e-mail messages. The theoretical part surveys the existing tools in this field. The practical part describes the design of two tools built specifically to create new models capable of recognizing named entities in e-mail messages: the first is based on a neural network, and the second uses a CRF graph model. The existing and newly created tools, and their ability to generalize, are compared on a subset of e-mail messages provided by Kiwi.com.
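For context, a CRF sequence labeller of the kind compared here can be sketched in a few lines. This is a minimal illustration assuming the sklearn-crfsuite package; the toy e-mail sentences, features and tags are invented for the example and are not the Kiwi.com data:

```python
import sklearn_crfsuite

def features(sent, i):
    """Hand-crafted per-token features, the standard input form for a CRF tagger."""
    w = sent[i]
    return {
        "lower": w.lower(),
        "is_title": w.istitle(),      # capitalized tokens often start names
        "is_digit": w.isdigit(),
        "prev": sent[i - 1].lower() if i > 0 else "<s>",
        "next": sent[i + 1].lower() if i < len(sent) - 1 else "</s>",
    }

train = [
    (["Hi", "Luca", ",", "your", "flight", "to", "Prague", "is", "confirmed"],
     ["O", "B-PER", "O", "O", "O", "O", "B-LOC", "O", "O"]),
    (["Contact", "Anna", "from", "Kiwi.com", "for", "details"],
     ["O", "B-PER", "O", "B-ORG", "O", "O"]),
]
X = [[features(s, i) for i in range(len(s))] for s, _ in train]
y = [tags for _, tags in train]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, y)

test = ["Hello", "Marek", ",", "see", "you", "in", "Brno"]
print(crf.predict([[features(test, i) for i in range(len(test))]]))
```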
163

Komprese obrazu pomocí neuronových sítí / Image Compression with Neural Networks

Teuer, Lukáš January 2018 (has links)
This document describes image compression using different types of neural networks, including convolutional and recurrent architectures, and gives a detailed description of various network architectures and their inner workings. Experiments are carried out on various network structures and parameters in order to find the properties most appropriate for image compression. New concepts for image compression using neural networks are also proposed and immediately tested. Finally, a network combining the best concepts and components discovered during the experiments is designed.
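The basic building block in learned image compression is an autoencoder with a narrow bottleneck. Below is a minimal sketch under assumed sizes (32x32 RGB inputs, a 16-channel latent code), not the architecture designed in the thesis:

```python
import torch
import torch.nn as nn

class CompressionAE(nn.Module):
    def __init__(self, bottleneck=16):
        super().__init__()
        # Encoder: downsample a 3x32x32 image to a small latent "code".
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x16x16
            nn.Conv2d(32, bottleneck, 4, stride=2, padding=1),     # -> bottleneck x 8x8
        )
        # Decoder: reconstruct the image from the code.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(bottleneck, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = CompressionAE()
x = torch.rand(8, 3, 32, 32)                   # a batch of fake images
loss = nn.functional.mse_loss(model(x), x)     # reconstruction objective
loss.backward()
print(loss.item())
```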
164

Detekce ohně a kouře z obrazového signálu / Image based smoke and fire detection

Ďuriš, Denis January 2020 (has links)
This diploma thesis deals with the detection of fire and smoke from an image signal, using a combination of a convolutional and a recurrent neural network. The machine learning models created in this work contain inception modules and long short-term memory blocks. The research part describes selected machine learning models used to solve the problem of fire detection in static and dynamic image data. As part of the solution, a data set containing videos and still images was created to train the designed neural networks. The results of this approach are evaluated in the conclusion.
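The CNN-plus-RNN combination described here has a simple generic shape: a convolutional extractor runs per frame and an LSTM aggregates the frame features into one decision per clip. A minimal sketch with illustrative layer sizes (the thesis's actual models use inception modules) follows:

```python
import torch
import torch.nn as nn

class FireSmokeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(                        # per-frame feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(32, 64, batch_first=True)    # temporal aggregation
        self.head = nn.Linear(64, 3)                     # none / smoke / fire

    def forward(self, video):                            # (batch, time, 3, H, W)
        b, t = video.shape[:2]
        f = self.cnn(video.flatten(0, 1)).flatten(1)     # (b*t, 32) frame features
        out, _ = self.lstm(f.view(b, t, -1))
        return self.head(out[:, -1])                     # decision at the last frame

clip = torch.rand(2, 8, 3, 64, 64)                       # two fake 8-frame clips
print(FireSmokeNet()(clip).shape)                        # torch.Size([2, 3])
```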
165

Segmentace obrazových dat pomocí grafových neuronových sítí / Image segmentation using graph neural networks

Boszorád, Matej January 2020 (has links)
This diploma thesis describes and implements the design of a graph neural network used for 2D segmentation of neural structures. The first chapter briefly introduces the segmentation problem and divides segmentation techniques according to the principles of the methods they use, giving the essence of each category together with a description of one representative. The second chapter explains graph neural networks (GNNs): it classifies them in general and describes in more detail recurrent graph neural networks (RGNNs) and graph autoencoders, both of which can be used for image segmentation. The specific segmentation solution is based on the message-passing method in RGNNs, which can replace convolution masks in convolutional neural networks; RGNNs also allow a simpler multilayer perceptron topology. The graph autoencoders characterised in the thesis use various methods to better encode graph vertices into Euclidean space. The last part of the thesis deals with the analysis of the problem, the proposal of a specific solution and the evaluation of results. The purpose of the practical part was the implementation of a GNN for image-data segmentation; an advantage of neural networks is the ability to solve different types of segmentation by changing the training data. An RGNN with message passing and node2vec were used to implement the segmentation. RGNN training was performed on graphics cards provided by the school and on Google Colaboratory. Training the RGNN with node2vec was very memory-intensive, so it had to be run on a processor with more than 12 GB of operating memory. As part of the RGNN optimization, training was tested with various loss functions and with changes to the topology and learning parameters. A tree-structure method was developed to use node2vec to improve segmentation, but the results did not confirm an improvement for a small number of iterations. The best outcomes of the practical implementation were evaluated by comparing the tested data with the convolutional neural network U-Net. The results are comparable to U-Net, but further testing is needed to compare these networks. The result of the thesis is the use of an RGNN as a modern solution to the image-segmentation problem, providing a foundation for further research.
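The message-passing operation at the heart of the RGNN approach can be written without any GNN library: each node averages its neighbours' feature vectors and updates its own state with a small learned map. A minimal sketch with an invented four-node toy graph:

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.update = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, h, adj):
        deg = adj.sum(1, keepdim=True).clamp(min=1)     # avoid division by zero
        msg = (adj @ h) / deg                           # mean over neighbour features
        return self.update(torch.cat([h, msg], dim=1))  # combine self state + messages

# Toy graph: four nodes in a chain, eight features each.
adj = torch.tensor([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
h = torch.randn(4, 8)
layer = MessagePassingLayer(8)
for _ in range(3):          # repeated passes spread information along the chain
    h = layer(h, adj)
print(h.shape)              # torch.Size([4, 8])
```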
166

Forecasting Atmospheric Turbulence Conditions From Prior Environmental Parameters Using Artificial Neural Networks: An Ensemble Study

Grose, Mitchell 18 May 2021 (has links)
No description available.
167

Réseaux de neurones récurrents pour le traitement automatique de la parole / Speech processing using recurrent neural networks

Gelly, Grégory 22 September 2017 (has links)
Automatic speech processing has been an active field of research since the 1950s. Within this field, the main area of research is automatic speech recognition, but simpler tasks such as speech activity detection, language identification or speaker identification are also of great interest to the community. The most recent breakthrough in speech processing came around 2010, when speech recognition systems using deep neural networks (DNNs) drastically improved the state of the art. Inspired by these gains and by the work of Alex Graves on recurrent neural networks (RNNs), we explore the possibilities these models offer on realistic data for two tasks: speech activity detection and spoken language identification. RNNs seemed better suited than DNNs to the temporal sequences of the speech signal. In this thesis, we focus on a specific RNN model, the Long Short-Term Memory (LSTM), which mitigates many of the difficulties that arise when training a standard RNN. We augment this model and propose optimization methods that lead to significant performance gains in speech activity detection and language identification. More specifically, we introduce a loss function dedicated to each task: a WER-like loss to train a speech activity detection system so as to minimize the word error rate of a downstream speech recognition system, and two methods to successfully train a multiclass neural-network classifier for tasks such as spoken language identification, the first based on a divide-and-conquer approach and the second on an angular proximity loss function. Both yield performance gains and also speed up training.
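As an illustration of the second method, an angular proximity loss can be written in a cosine form: each class gets a target direction, and the network is penalised by the angle between its output and the correct direction. This is one plausible formulation, not necessarily the exact loss defined in the thesis:

```python
import torch
import torch.nn.functional as F

def angular_proximity_loss(outputs, targets, class_dirs):
    """Penalise the angle between each output vector and its class direction."""
    o = F.normalize(outputs, dim=1)               # unit-length embeddings
    c = F.normalize(class_dirs[targets], dim=1)   # unit-length class targets
    return (1 - (o * c).sum(dim=1)).mean()        # 0 when perfectly aligned

class_dirs = torch.eye(5)                         # 5 languages, orthogonal directions
outputs = torch.randn(8, 5, requires_grad=True)   # fake network embeddings
targets = torch.randint(0, 5, (8,))
loss = angular_proximity_loss(outputs, targets, class_dirs)
loss.backward()
print(loss.item())
```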
168

Single asset trading: a recurrent reinforcement learning approach

Nikolic, Marko January 2020 (has links)
Asset trading using machine learning has become popular within the financial industry in recent years, as can be seen, for instance, in the large share of daily trading volume that is executed by automated algorithms. This thesis presents a recurrent reinforcement learning model for trading a single asset. The benefits and drawbacks of the model, and its derivation, are presented. Different parameters of the model are calibrated and tuned, both with a traditional split between training and test data sets and with the help of nested cross-validation. The results of the single-asset trading model are compared to a benchmark strategy of buying the underlying asset and holding it for a long period regardless of its volatility. The proposed model outperforms the buy-and-hold strategy on three of the four stocks selected for the experiment. Additionally, the model's returns are sensitive to changes in the number of epochs, the parameter m, the learning rate and the training/test ratio.
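Recurrent reinforcement learning for trading, in the spirit of Moody and Saffell's formulation, makes the position a function of recent returns and the previous position, and adjusts the weights to increase trading profit. The numpy sketch below uses fake returns, a crude finite-difference update and illustrative parameters (including a lookback window m, assumed here to be the parameter m mentioned above):

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.normal(0.0005, 0.01, 1000)         # fake daily returns
m, lr, cost = 5, 0.05, 0.0002              # lookback, learning rate, transaction fee
w = np.zeros(m + 2)                        # weights for [m returns, previous F, bias]

def positions(w):
    """F_t = tanh(w . [recent returns, F_{t-1}, 1]), a position in [-1, 1]."""
    F = np.zeros(len(r))
    for t in range(m, len(r)):
        x = np.concatenate([r[t - m:t], [F[t - 1], 1.0]])
        F[t] = np.tanh(w @ x)
    return F

def avg_profit(w):
    F = positions(w)
    R = F[:-1] * r[1:] - cost * np.abs(np.diff(F))   # profit minus trading costs
    return R.mean()

for _ in range(200):                       # crude finite-difference gradient ascent
    g = np.zeros_like(w)
    base = avg_profit(w)
    for i in range(len(w)):
        e = np.zeros_like(w); e[i] = 1e-4
        g[i] = (avg_profit(w + e) - base) / 1e-4
    w += lr * g

print("trained avg daily profit:", avg_profit(w))
print("buy-and-hold avg daily return:", r.mean())
```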
169

Omni SCADA intrusion detection

Gao, Jun 11 May 2020 (has links)
We investigate a deep-learning-based omni intrusion detection system (IDS) for supervisory control and data acquisition (SCADA) networks that is capable of detecting both temporally uncorrelated and correlated attacks. Among the IDSs developed in this work, a feedforward neural network (FNN) detects temporally uncorrelated attacks at an F1 of 99.967±0.005% but correlated attacks as low as 58±2%. In contrast, a long short-term memory (LSTM) network detects correlated attacks at 99.56±0.01% and uncorrelated attacks at 99.3±0.1%. Combining the LSTM and FNN through an ensemble approach further improves the IDS performance, with an F1 of 99.68±0.04% regardless of the temporal correlations among the data packets. / Graduate
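The ensemble idea is simply to fuse the two detectors' scores, letting the FNN handle individual packets and the LSTM handle packet sequences. A minimal sketch with untrained toy models and an assumed 20-dimensional packet feature vector; the averaging rule is one common choice, not necessarily the exact fusion used in this work:

```python
import torch
import torch.nn as nn

fnn = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1))
lstm = nn.LSTM(20, 32, batch_first=True)
lstm_head = nn.Linear(32, 1)

def ensemble_prob(window):                           # window: (batch, time, 20)
    p_fnn = torch.sigmoid(fnn(window[:, -1]))        # FNN scores the last packet only
    out, _ = lstm(window)
    p_lstm = torch.sigmoid(lstm_head(out[:, -1]))    # LSTM scores the whole sequence
    return 0.5 * (p_fnn + p_lstm)                    # simple average of the two scores

x = torch.rand(4, 10, 20)                            # four fake 10-packet windows
print(ensemble_prob(x).squeeze(1))                   # one attack probability per window
```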
170

Learning Long Temporal Sequences in Spiking Networks by Multiplexing Neural Oscillations

Vincent-Lamarre, Philippe 17 December 2019 (has links)
Many living organisms have the ability to reliably execute complex behaviors and cognitive processes. In many cases, such tasks are generated in the absence of any ongoing external input that could drive the activity of the underlying neural populations. For instance, writing the word "time" requires a precise sequence of muscle contractions in the hand and wrist, so there must be patterns of activity in the responsible brain areas that are endogenously generated every time an individual performs this action. Whereas how such a neural code is transformed into the target motor sequence is a question of its own, the code's origin is perhaps even more puzzling. Most models of cortical and sub-cortical circuits suggest that many of their neural populations are chaotic: very small amounts of noise, such as a single additional action potential in one neuron of a network, can lead to completely different patterns of activity. Reservoir computing was one of the first frameworks to provide an efficient way for biologically relevant neural networks to learn complex temporal tasks in the presence of chaos. We show that although reservoirs (i.e., recurrent neural networks) are robust to noise, they are extremely sensitive to some forms of structural perturbation, such as removing one neuron out of thousands. We propose an alternative to these models in which the source of autonomous activity no longer originates in the reservoir but in a set of oscillating networks projecting to it. In our simulations, this solution produces rich patterns of activity and leads to networks that are resistant to both noise and structural perturbations. The model can learn a wide variety of temporal tasks such as interval timing, motor control, speech production and spatial navigation.
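The proposed architecture can be caricatured in a few lines of numpy: a fixed random reservoir is driven by a bank of sinusoidal oscillators rather than left to generate its own chaotic dynamics, and a linear readout is trained to produce the target sequence. All sizes, frequencies and the ridge-regression readout below are illustrative assumptions, not the spiking model of the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
N, steps = 300, 1000
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))     # fixed random recurrent weights
freqs = np.array([1.0, 2.3, 3.7, 5.1])          # oscillator bank (arbitrary frequencies)
W_in = rng.normal(0, 1, (N, len(freqs)))        # oscillator-to-reservoir projection

t = np.linspace(0, 10, steps)
drive = np.sin(2 * np.pi * freqs[None, :] * t[:, None])    # (steps, 4) oscillations
target = np.sin(2 * np.pi * 0.5 * t) * np.cos(2 * np.pi * 1.1 * t)  # sequence to learn

x = np.zeros(N)
states = np.zeros((steps, N))
for k in range(steps):
    x = np.tanh(W @ x + W_in @ drive[k])        # oscillators, not chaos, set the clock
    states[k] = x

# Ridge-regression readout maps reservoir states to the target sequence.
ridge = 1e-4
w_out = np.linalg.solve(states.T @ states + ridge * np.eye(N), states.T @ target)
print("readout MSE:", np.mean((states @ w_out - target) ** 2))
```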
