  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Algoritmy pro rozpoznávání pojmenovaných entit / Algorithms for named entities recognition

Winter, Luca January 2017 (has links)
The aim of this work is to find out which algorithm is the best at recognizing named entities in e-mail messages. The theoretical part explains the existing tools in this field. The practical part describes the design of two tools specifically designed to create new models capable of recognizing named entities in e-mail messages. The first tool is based on a neural network and the second tool uses a CRF graph model. The existing and newly created tools and their ability to generalize are compared on a subset of e-mail messages provided by Kiwi.com.
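As a loose illustration of the CRF route mentioned in the abstract (not the author's tool), below is a minimal sketch using the sklearn-crfsuite library; the toy sentence, tags and feature function are hypothetical.

    import sklearn_crfsuite

    def word_features(sent, i):
        # Hand-crafted token features, a common starting point for CRF-based NER.
        w = sent[i]
        return {
            "lower": w.lower(),
            "is_title": w.istitle(),
            "is_upper": w.isupper(),
            "is_digit": w.isdigit(),
            "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
            "next": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
        }

    # Hypothetical training data: tokenised e-mail sentences with BIO entity tags.
    train_sents = [["Kiwi.com", "confirmed", "the", "flight", "to", "Prague"]]
    train_tags = [["B-ORG", "O", "O", "O", "O", "B-LOC"]]

    X = [[word_features(s, i) for i in range(len(s))] for s in train_sents]
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
    crf.fit(X, train_tags)
    print(crf.predict(X))   # predicted tag sequence for each sentence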
162

Komprese obrazu pomocí neuronových sítí / Image Compression with Neural Networks

Teuer, Lukáš January 2018 (has links)
This document describes image compression using different types of neural networks, and discusses network features such as convolutional and recurrent layers. It contains a detailed description of various neural network architectures and their inner workings. In addition, experiments are carried out on various network structures and parameters in order to find the properties most suitable for image compression. New concepts for image compression with neural networks are also proposed and immediately tested. Finally, a network combining the best concepts and components discovered during the experiments is designed.
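As a rough sketch of the general idea rather than any architecture evaluated in the thesis, the toy PyTorch convolutional autoencoder below compresses an image into a low-channel bottleneck and reconstructs it; all layer sizes are arbitrary.

    import torch
    import torch.nn as nn

    class ConvAutoencoder(nn.Module):
        """Toy compressor: the bottleneck tensor acts as the compressed code."""
        def __init__(self, code_channels=8):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),    # H/2
                nn.Conv2d(32, code_channels, 3, stride=2, padding=1),   # H/4 bottleneck
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(code_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = ConvAutoencoder()
    x = torch.rand(1, 3, 64, 64)                   # dummy RGB image batch
    loss = nn.functional.mse_loss(model(x), x)     # reconstruction error drives training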
163

Detekce ohně a kouře z obrazového signálu / Image based smoke and fire detection

Ďuriš, Denis January 2020 (has links)
This diploma thesis deals with the detection of fire and smoke from an image signal. The approach combines convolutional and recurrent neural networks: the machine learning models created in this work contain inception modules and long short-term memory (LSTM) blocks. The research part describes selected machine learning models used to solve the problem of fire detection in static and dynamic image data. As part of the solution, a data set containing videos and still images was created to train the designed neural networks. The results of this approach are evaluated in the conclusion.
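To make the combination concrete, here is a hedged PyTorch sketch of a per-frame CNN feeding an LSTM over a video clip; the thesis uses inception modules and its own data set, whereas this toy version uses plain convolutions and invented shapes.

    import torch
    import torch.nn as nn

    class FireSmokeNet(nn.Module):
        """Per-frame CNN features fed to an LSTM; sigmoid gives a fire/smoke probability."""
        def __init__(self, feat_dim=64, hidden=32):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                nn.Flatten(), nn.Linear(32, feat_dim),
            )
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, clips):                    # clips: (batch, time, 3, H, W)
            b, t = clips.shape[:2]
            feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
            _, (h, _) = self.lstm(feats)
            return torch.sigmoid(self.head(h[-1]))   # probability per clip

    model = FireSmokeNet()
    prob = model(torch.rand(2, 8, 3, 64, 64))        # two dummy 8-frame clips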
164

Segmentace obrazových dat pomocí grafových neuronových sítí / Image segmentation using graph neural networks

Boszorád, Matej January 2020 (has links)
This diploma thesis describes and implements the design of a graph neural network used for 2D segmentation of neural structure. The first chapter briefly introduces the problem of segmentation and divides segmentation techniques according to the principles of the methods they use; each category is summarised and illustrated with one representative technique. The second chapter explains graph neural networks (GNNs): it classifies graph neural networks in general and describes in more detail recurrent graph neural networks (RGNNs) and graph autoencoders, both of which can be used for image segmentation. The specific image segmentation solution is based on the message passing method in RGNNs, which can replace the convolution masks of convolutional neural networks; RGNNs also allow a simpler multilayer perceptron topology. The second type of graph neural network characterised in the thesis is the graph autoencoder, which uses various methods to better encode graph vertices into Euclidean space. The last part of the thesis deals with the analysis of the problem, the proposal of a specific solution and the evaluation of results. The purpose of the practical part was the implementation of a GNN for image data segmentation; an advantage of using neural networks is the ability to solve different types of segmentation simply by changing the training data. An RGNN with message passing and node2vec were used as the GNN implementations for the segmentation problem. RGNN training was performed on graphics cards provided by the school and on Google Colaboratory. Training the RGNN with node2vec was very memory intensive, so it had to be run on a processor with more than 12 GB of operating memory. As part of the RGNN optimisation, training was tested with various loss functions and with changes to the topology and learning parameters. A tree-structure method was developed to use node2vec to improve segmentation, but the results did not confirm an improvement for a small number of iterations. The best outcomes of the practical implementation were evaluated by comparing the tested data with the convolutional neural network U-Net. The results are comparable to U-Net, but further testing is needed to compare these neural networks. The result of the thesis is the use of RGNNs as a modern solution to the problem of image segmentation and a foundation for further research.
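To make the message-passing idea concrete, the sketch below implements one recurrent message-passing layer over a dense adjacency matrix in PyTorch; it is not the thesis's implementation, and the toy graph, feature sizes and readout are hypothetical.

    import torch
    import torch.nn as nn

    class MessagePassingLayer(nn.Module):
        """One round of message passing: each node aggregates transformed neighbour states."""
        def __init__(self, dim):
            super().__init__()
            self.msg = nn.Linear(dim, dim)    # builds messages from neighbour features
            self.upd = nn.GRUCell(dim, dim)   # recurrently updates each node's state

        def forward(self, h, adj):
            # h: (num_nodes, dim) node features; adj: (num_nodes, num_nodes) 0/1 adjacency
            deg = adj.sum(1, keepdim=True).clamp(min=1)
            messages = adj @ self.msg(h) / deg        # mean over neighbours
            return self.upd(messages, h)

    # Toy graph: 4 pixels/superpixels connected in a chain.
    adj = torch.tensor([[0., 1., 0., 0.],
                        [1., 0., 1., 0.],
                        [0., 1., 0., 1.],
                        [0., 0., 1., 0.]])
    h = torch.rand(4, 8)
    layer = MessagePassingLayer(8)
    for _ in range(3):                    # a few recurrent message-passing iterations
        h = layer(h, adj)
    seg_logits = nn.Linear(8, 2)(h)       # per-node (per-pixel) class scores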
165

Forecasting Atmospheric Turbulence Conditions From Prior Environmental Parameters Using Artificial Neural Networks: An Ensemble Study

Grose, Mitchell 18 May 2021 (has links)
No description available.
166

Réseaux de neurones récurrents pour le traitement automatique de la parole / Speech processing using recurrent neural networks

Gelly, Grégory 22 September 2017 (has links)
Automatic speech processing has been an active field of research since the 1950s. Within this field the main area of research is automatic speech recognition, but simpler tasks such as speech activity detection, language identification or speaker identification are also of great interest to the community. The most recent breakthrough in speech processing appeared around 2010, when speech recognition systems using deep neural networks (DNNs) drastically improved the state of the art. Inspired by these gains and by the work of Alex Graves on recurrent neural networks (RNNs), we explore the possibilities these models offer on realistic data for two different tasks: speech activity detection and spoken language identification. In this work we focus on a specific RNN model, the Long Short-Term Memory (LSTM), which mitigates many of the difficulties that arise when training a standard RNN. We augment this model and introduce optimization methods that lead to significant performance gains for speech activity detection and language identification. More specifically, we introduce a WER-like loss function to train a speech activity detection system so as to minimize the word error rate of a downstream speech recognition system. We also introduce two different methods to successfully train a multiclass neural-network classifier for tasks such as language identification: the first is based on a divide-and-conquer approach and the second on an angular proximity loss function. Both yield performance gains and also speed up the training process.
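As a rough, non-authoritative sketch of the kind of model described, here is a bidirectional LSTM for frame-level speech/non-speech labelling in PyTorch, together with a cosine-based stand-in for an angular-proximity-style objective; the thesis's actual simili-WER and angular proximity losses are not reproduced, and all dimensions are illustrative.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpeechActivityLSTM(nn.Module):
        """Bidirectional LSTM labelling each acoustic frame as speech / non-speech."""
        def __init__(self, n_feats=40, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_feats, hidden, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, 2)

        def forward(self, frames):          # frames: (batch, time, n_feats)
            seq, _ = self.lstm(frames)
            return self.out(seq)            # per-frame speech/non-speech logits

    def angular_proximity_loss(embeddings, class_centres, labels):
        """Crude stand-in: pull each embedding towards its class centre on the unit sphere."""
        e = F.normalize(embeddings, dim=1)
        c = F.normalize(class_centres, dim=1)
        return (1 - (e * c[labels]).sum(dim=1)).mean()

    logits = SpeechActivityLSTM()(torch.rand(2, 100, 40))      # two 100-frame utterances
    loss = angular_proximity_loss(torch.rand(5, 16), torch.rand(3, 16),
                                  torch.tensor([0, 1, 2, 0, 1]))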
167

Single asset trading: a recurrent reinforcement learning approach

Nikolic, Marko January 2020 (has links)
Asset trading using machine learning has become popular within the financial industry in recent years, as can be seen in the large share of daily trading volume that is executed by automatic algorithms. This thesis presents a recurrent reinforcement learning model for trading a single asset. The benefits and drawbacks of the model, together with its derivation, are presented. Different parameters of the model are calibrated and tuned, both with a traditional split between training and test data and with nested cross-validation. The results of the single-asset trading model are compared to the benchmark strategy of buying the underlying asset and holding it for a long period of time regardless of its volatility. The proposed model outperforms the buy-and-hold strategy on three of the four stocks selected for the experiment. Additionally, the returns of the model are sensitive to changes in the number of epochs, the parameter m, the learning rate and the training/test ratio.
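For illustration only, the NumPy sketch below follows the usual recurrent reinforcement learning formulation for single-asset trading (the position depends on recent returns and on the previous position, with a transaction-cost penalty); the thesis's exact model, features and tuning procedure are not given here, and the weights would normally be learned by gradient ascent on a performance measure such as the Sharpe ratio.

    import numpy as np

    def trade(prices, w, u, b, delta=0.001, window=5):
        """Recurrent trader: position F_t = tanh(w.x_t + u*F_prev + b) in [-1, 1]."""
        r = np.diff(prices) / prices[:-1]                  # simple returns
        F_prev, rewards = 0.0, []
        for t in range(window, len(r)):
            x = r[t - window:t]                            # recent-return features
            F = np.tanh(np.dot(w, x) + u * F_prev + b)
            rewards.append(F_prev * r[t] - delta * abs(F - F_prev))  # return minus cost
            F_prev = F
        rewards = np.array(rewards)
        return rewards.mean() / (rewards.std() + 1e-9), rewards.sum()  # Sharpe-like, total

    rng = np.random.default_rng(0)
    prices = 100 * np.cumprod(1 + 0.001 * rng.standard_normal(500))    # synthetic price path
    print(trade(prices, w=0.1 * rng.standard_normal(5), u=0.1, b=0.0))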
168

Omni SCADA intrusion detection

Gao, Jun 11 May 2020 (has links)
We investigate a deep-learning-based omni intrusion detection system (IDS) for supervisory control and data acquisition (SCADA) networks that is capable of detecting both temporally uncorrelated and correlated attacks. Among the IDSs developed in this work, a feedforward neural network (FNN) detects temporally uncorrelated attacks at an F1 score of 99.967±0.005% but correlated attacks at only 58±2%. In contrast, a long short-term memory (LSTM) network detects correlated attacks at 99.56±0.01% and uncorrelated attacks at 99.3±0.1%. Combining the LSTM and the FNN through an ensemble approach further improves the IDS performance, with an F1 score of 99.68±0.04% regardless of the temporal correlations among the data packets.
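The abstract does not say how the two detectors are combined; as one plausible reading, the PyTorch sketch below soft-votes the attack probabilities of an FNN (per flow) and an LSTM (per packet sequence). The feature sizes, networks and threshold are hypothetical.

    import torch
    import torch.nn as nn

    # Stand-ins for trained detectors: an FNN scoring flow features, an LSTM scoring sequences.
    fnn = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
    lstm = nn.LSTM(20, 32, batch_first=True)
    lstm_head = nn.Sequential(nn.Linear(32, 1), nn.Sigmoid())

    def ensemble_score(flow_features, packet_sequence, w=0.5):
        """Soft voting: weighted average of the two detectors' attack probabilities."""
        p_fnn = fnn(flow_features)                 # (batch, 1)
        _, (h, _) = lstm(packet_sequence)          # packet_sequence: (batch, time, 20)
        p_lstm = lstm_head(h[-1])                  # (batch, 1)
        return w * p_fnn + (1 - w) * p_lstm

    score = ensemble_score(torch.rand(4, 20), torch.rand(4, 16, 20))
    alerts = score > 0.5                           # flag traffic above the alarm threshold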
169

Learning Long Temporal Sequences in Spiking Networks by Multiplexing Neural Oscillations

Vincent-Lamarre, Philippe 17 December 2019 (has links)
Many living organisms have the ability to execute complex behaviours and cognitive processes reliably. In many cases, such tasks are generated in the absence of an ongoing external input that could drive the activity of the underlying neural populations. For instance, writing the word "time" requires a precise sequence of muscle contractions in the hand and wrist, so some pattern of activity in the brain areas responsible for this behaviour must be endogenously generated every time an individual performs the action. Whereas the question of how such a neural code is transformed into the target motor sequence is a question of its own, its origin is perhaps even more puzzling. Most models of cortical and sub-cortical circuits suggest that many of their neural populations are chaotic: very small amounts of noise, such as an additional action potential in one neuron of a network, can lead to completely different patterns of activity. Reservoir computing is one of the first frameworks to provide an efficient way for biologically relevant neural networks to learn complex temporal tasks in the presence of chaos. We show that although reservoirs (i.e. recurrent neural networks) are robust to noise, they are extremely sensitive to some forms of structural perturbation, such as removing one neuron out of thousands. We propose an alternative to these models in which the source of autonomous activity no longer originates from the reservoir but from a set of oscillating networks projecting to it. In our simulations, this solution produces rich patterns of activity and leads to networks that are resistant to both noise and structural perturbations. The model can learn a wide variety of temporal tasks such as interval timing, motor control, speech production and spatial navigation.
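As a much-simplified, rate-based stand-in for the proposed architecture (the thesis works with spiking networks), the NumPy sketch below drives a random recurrent network with a bank of oscillators and fits an offline ridge-regression readout in place of online learning; all sizes, frequencies and the target trajectory are arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)
    N, T, dt = 300, 1000, 1e-3
    t = np.arange(T) * dt

    # Bank of oscillators at different frequencies: the "clock" projected into the reservoir.
    freqs = np.array([1.0, 2.3, 3.7, 5.1, 7.9])                    # Hz, arbitrary
    osc = np.sin(2 * np.pi * freqs[None, :] * t[:, None])          # (T, n_osc)

    W_rec = 1.2 * rng.standard_normal((N, N)) / np.sqrt(N)         # recurrent weights
    W_in = 0.5 * rng.standard_normal((N, osc.shape[1]))            # oscillator projections

    x, states = np.zeros(N), np.zeros((T, N))
    for k in range(T):
        x = np.tanh(W_rec @ x + W_in @ osc[k])                     # reservoir update
        states[k] = x

    target = np.sin(2 * np.pi * 0.8 * t) * np.exp(-t)              # toy "motor" trajectory
    lam = 1e-2                                                     # ridge penalty
    W_out = np.linalg.solve(states.T @ states + lam * np.eye(N), states.T @ target)
    print("readout error:", np.mean((states @ W_out - target) ** 2))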
170

Machine Learning, Game Theory Algorithms, and Medium Access Protocols for 5G and Internet-of-Thing (IoT) Networks

Elkourdi, Mohamed 25 March 2019 (has links)
In the first part of this dissertation, a novel medium access protocol for Internet of Things (IoT) networks is introduced. The Internet of Things, the network of physical devices embedded with sensors, actuators, and connectivity, is being accelerated into the mainstream by the emergence of 5G wireless networking. This work presents an uncoordinated non-orthogonal random-access protocol, an enhancement to the recently introduced slotted ALOHA-NOMA (SAN) protocol, that provides high throughput while being matched to the low-complexity requirements and the sporadic traffic pattern of IoT devices. Under ideal conditions it has been shown that SAN, using power-domain orthogonality, can significantly increase the throughput by using successive interference cancellation (SIC) to enable correct reception of multiple simultaneously transmitted signals. For this ideal performance, the enhanced SAN receiver adaptively learns the number of active devices (which is not known a priori) using a form of multi-hypothesis testing. For small numbers of simultaneous transmissions, it is shown that there can be a substantial throughput gain of 5.5 dB relative to slotted ALOHA (SA) for a transmission probability of 0.07 and up to 3 active transmitters. As a further enhancement, the SAN with beamforming (BF-SAN) protocol is proposed. BF-SAN uses beamforming to significantly improve the throughput to 1.31, compared with 0.36 in conventional slotted ALOHA, when 6 active IoT devices can be successfully separated using 2×2 MIMO and a SIC receiver with 3 optimum power levels. The simulation results further show that the proposed protocol achieves higher throughput than SAN with a lower average channel access delay. In the second part of this dissertation a novel machine learning (ML) approach is applied to proactive mobility management in 5G virtual cell (VC) wireless networks. Providing seamless mobility and a uniform user experience, independent of location, is an important challenge for 5G wireless networks. The combination of Coordinated Multipoint (CoMP) networks and virtual cells (VCs) is expected to play an important role in achieving high throughput independent of the mobile's location by mitigating inter-cell interference and enhancing cell-edge user throughput. User-specific VCs will distinguish the physical cell from a broader area where the user can roam without the need for handoff and may communicate with any base station (BS) in the VC area. However, this requires rapid decision making for the formation of VCs. In this work, a novel algorithm based on a form of recurrent neural network (RNN) called the gated recurrent unit (GRU) is used to predict the triggering condition for forming VCs by enabling CoMP transmission. Simulation results show that, based on sequences of received signal strength (RSS) values from different mobile nodes used to train the RNN, the future RSS values from the closest three BSs can be accurately predicted using the GRU, which is then used to make proactive decisions on enabling CoMP transmission and forming VCs. Finally, the last part of this dissertation applies Bayesian games to cell selection / user association in 5G heterogeneous networks to achieve the 5G goal of low-latency communication.
Expanding the cellular ecosystem to support an immense number of connected devices, and creating a platform that accommodates a wide range of emerging services with different traffic types and Quality of Service (QoS) metrics, are among 5G's headline features. One of the key 5G performance metrics is ultra-low latency, which enables new delay-sensitive use cases. Several network architectural amendments are proposed to achieve the 5G ultra-low-latency objective. With these paradigm shifts in system architecture, it is of cardinal importance to rethink the cell selection / user association process to achieve substantial improvement in system performance over the conventional maximum signal-to-interference-plus-noise ratio (Max-SINR) and Cell Range Expansion (CRE) algorithms employed in Long Term Evolution-Advanced (LTE-Advanced). In this work, a novel Bayesian cell selection / user association algorithm, incorporating the access nodes' capabilities and the user equipment (UE) traffic type, is proposed in order to maximize the probability of proper association and consequently enhance the system performance in terms of achieved latency. Simulation results show that the Bayesian game approach attains the 5G low end-to-end latency target with a probability exceeding 80%.
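To make the RSS-prediction step more tangible, here is a small PyTorch GRU that maps a window of past RSS readings from three base stations to the next reading; the network sizes, the dummy data and the triggering rule below are assumptions rather than the dissertation's configuration.

    import torch
    import torch.nn as nn

    class RSSPredictor(nn.Module):
        """GRU that predicts the next RSS value per base station from a window of past values."""
        def __init__(self, n_bs=3, hidden=32):
            super().__init__()
            self.gru = nn.GRU(n_bs, hidden, batch_first=True)
            self.out = nn.Linear(hidden, n_bs)

        def forward(self, rss_window):       # rss_window: (batch, time, n_bs)
            _, h = self.gru(rss_window)
            return self.out(h[-1])           # predicted next RSS for each base station

    model = RSSPredictor()
    history = -80.0 * torch.rand(8, 20, 3)                   # dummy RSS traces in dBm
    next_rss = model(history)
    trigger_comp = next_rss.max(dim=1).values < -75.0        # hypothetical CoMP trigger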
