About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Learning Sparse Recurrent Neural Networks in Language Modeling

Shao, Yuanlong 25 September 2014 (has links)
No description available.
2

Toward a Brain-like Memory with Recurrent Neural Networks

Salihoglu, Utku 12 November 2009 (has links)
For the last twenty years, several assumptions have been expressed in the fields of information processing, neurophysiology and cognitive sciences. First, neural networks and their dynamical behaviors in terms of attractors are the natural way adopted by the brain to encode information. Any information item to be stored in the neural network should be coded in one way or another in one of the dynamical attractors of the brain, and retrieved by stimulating the network to trap its dynamics in the desired item's basin of attraction. The second view shared by neural network researchers is to base the learning of the synaptic matrix on a local Hebbian mechanism. The third assumption is the presence of chaos and the benefit gained by its presence. Chaos, although very simply produced, inherently possesses an infinite number of cyclic regimes that can be exploited for coding information. Moreover, the network spontaneously and randomly wanders around these unstable regimes, thus rapidly proposing alternative responses to external stimuli, and is easily able to switch from one of these potential attractors to another in response to any incoming stimulus. Finally, since their introduction sixty years ago, cell assemblies have proved to be a powerful paradigm for brain information processing. After their introduction in artificial intelligence, cell assemblies became commonly used in computational neuroscience as a neural substrate for content-addressable memories. Based on these assumptions, this thesis provides a computer model of a neural network simulation of a brain-like memory. It first shows experimentally that the more information is to be stored in robust cyclic attractors, the more chaos appears as a regime in the background, erratically itinerating among brief appearances of these attractors. Chaos does not appear to be the cause but the consequence of the learning; it is, however, a helpful consequence that widens the network's encoding capacity. To learn the information to be stored, two supervised iterative Hebbian learning algorithms are proposed. One leaves the semantics of the attractors to be associated with the feeding data unprescribed, while the other defines it a priori. Both algorithms show good results, even though the first one is more robust and has a greater storing capacity. Building on these promising results, a biologically plausible alternative to these algorithms is proposed, using cell assemblies as the substrate for information. Even though this idea is not new, the mechanisms underlying the formation of cell assemblies are poorly understood and, so far, there is no biologically plausible algorithm that can explain how external stimuli can be stored online in cell assemblies. This thesis provides such a solution, combining a fast Hebbian/anti-Hebbian learning of the network's recurrent connections for the creation of new cell assemblies, and a slower feedback signal which stabilizes the cell assemblies by learning the feed-forward input connections. This last mechanism is inspired by the retroaxonal hypothesis.
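The thesis's iterative supervised algorithms are not spelled out in the abstract, but the basic idea of storing patterns as attractors of a recurrent network with a Hebbian rule can be sketched with a minimal Hopfield-style example. This is a simplification of the cyclic-attractor setting described above; the network size and the one-shot outer-product rule are illustrative assumptions, not the thesis's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store a few random +/-1 patterns as fixed-point attractors with a Hebbian rule.
# (The thesis stores cyclic attractors with an iterative supervised rule; this
# one-shot outer-product version only illustrates the attractor idea.)
n_units, n_patterns = 64, 4
patterns = rng.choice([-1.0, 1.0], size=(n_patterns, n_units))

W = (patterns.T @ patterns) / n_units     # Hebbian outer-product weights
np.fill_diagonal(W, 0.0)                  # no self-connections

def recall(cue, steps=20):
    """Let the network dynamics settle from a (possibly corrupted) cue."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)                # synchronous update
        s[s == 0] = 1.0
    return s

# Corrupt 15% of one stored pattern and check that its basin of attraction recovers it.
cue = patterns[0].copy()
flip = rng.choice(n_units, size=int(0.15 * n_units), replace=False)
cue[flip] *= -1
print("overlap with stored pattern:", float(recall(cue) @ patterns[0]) / n_units)
```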
3

An explainable method for prediction of sepsis in ICUs using deep learning

Baghaei, Kourosh T 30 April 2021 (has links)
As a complicated and lethal medical emergency, sepsis is not easy to diagnose until it is too late to take any life-saving action. Early prediction of sepsis in ICUs may reduce the inpatient mortality rate. Although deep learning models can predict the outcome of ICU stays with high accuracy, the opacity of such neural networks reduces their reliability. This is particularly true in ICU settings, where time is not on the doctors' side and every mistake increases the risk of patient mortality. Therefore, it is crucial for the predictive model to provide some form of reasoning in addition to the prediction itself, so that the medical staff can avoid acting on false alarms. To address this problem, we propose adding an attention layer to a deep recurrent neural network that can learn the relative importance of each parameter of the multivariate ICU-stay data. Our approach provides explainability through the attention mechanism. We compare our method with several state-of-the-art methods and show the superiority of our approach in providing explanations.
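The abstract does not give the exact architecture, but the general idea of an attention layer over a recurrent encoder of multivariate ICU time series, whose per-time-step weights can be read as an explanation, can be sketched as follows. This is a minimal forward-pass illustration with random, untrained parameters; all sizes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy dimensions: T time steps of ICU measurements, d variables, h hidden units.
T, d, h = 48, 10, 32
x = rng.normal(size=(T, d))                 # one ICU stay (hourly vitals/labs, standardized)

# Randomly initialized parameters; in practice these are learned end to end.
Wx, Wh = rng.normal(0, 0.1, (h, d)), rng.normal(0, 0.1, (h, h))
v = rng.normal(0, 0.1, h)                   # attention scoring vector
w_out = rng.normal(0, 0.1, h)

# Simple tanh RNN encoder (the thesis uses a deeper recurrent network).
states = np.zeros((T, h))
s = np.zeros(h)
for t in range(T):
    s = np.tanh(Wx @ x[t] + Wh @ s)
    states[t] = s

# Attention: one weight per time step; the weights double as an explanation
# of which hours contributed most to the prediction.
alpha = softmax(states @ v)                 # shape (T,)
context = alpha @ states                    # weighted sum of hidden states
risk = 1.0 / (1.0 + np.exp(-(w_out @ context)))

print("predicted sepsis risk:", round(float(risk), 3))
print("most influential time steps:", np.argsort(alpha)[-3:][::-1])
```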
4

MIMO Channel Prediction Using Recurrent Neural Networks

Potter, Chris, Kosbar, Kurt, Panagos, Adam 10 1900 (has links)
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California / Adaptive modulation is a communication technique capable of maximizing throughput while guaranteeing a fixed symbol error rate (SER). However, this technique requires instantaneous channel state information at the transmitter. This can be obtained by predicting channel states at the receiver and feeding them back to the transmitter. Existing algorithms used to predict single-input single-output (SISO) channels with recurrent neural networks (RNN) are extended to multiple-input multiple-output (MIMO) channels for use with adaptive modulation and their performance is demonstrated in several examples.
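To make the prediction setup concrete, each complex MIMO channel matrix can be flattened into a real feature vector and a predictor asked for the next channel state from past ones. In the sketch below a least-squares linear predictor stands in for the recurrent network, and a first-order Gauss-Markov fading model stands in for the true channel; both are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# First-order Gauss-Markov (AR(1)) fading model for each entry of a 2x2 MIMO channel;
# a stand-in for a Jakes-type Doppler process, just to have a correlated sequence.
n_t, n_r, T = 2, 2, 2000
a = 0.98                                          # temporal correlation
H = np.zeros((T, n_r, n_t), dtype=complex)
H[0] = (rng.normal(size=(n_r, n_t)) + 1j * rng.normal(size=(n_r, n_t))) / np.sqrt(2)
for t in range(1, T):
    w = (rng.normal(size=(n_r, n_t)) + 1j * rng.normal(size=(n_r, n_t))) / np.sqrt(2)
    H[t] = a * H[t - 1] + np.sqrt(1 - a**2) * w

# Flatten each channel matrix into a real feature vector (real and imaginary parts),
# the same representation a recurrent predictor would consume.
X = np.concatenate([H.reshape(T, -1).real, H.reshape(T, -1).imag], axis=1)

# One-step-ahead prediction: a least-squares linear predictor stands in for the
# recurrent network, only to make the task and its evaluation concrete.
A, _, _, _ = np.linalg.lstsq(X[0:1500], X[1:1501], rcond=None)
pred = X[1500:-1] @ A
nmse = np.mean((pred - X[1501:]) ** 2) / np.mean(X[1501:] ** 2)
print("one-step prediction NMSE:", round(float(nmse), 4))
```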
5

Multimodal Affective Computing Using Temporal Convolutional Neural Network and Deep Convolutional Neural Networks

Ayoub, Issa 24 June 2019 (has links)
Affective computing has gained significant attention from researchers in the last decade due to the wide variety of applications that can benefit from this technology. Often, researchers describe affect using emotional dimensions such as arousal and valence. Valence refers to the spectrum of negative to positive emotions, while arousal determines the level of excitement. Describing emotions through continuous dimensions (e.g. valence and arousal) allows us to encode subtle and complex affects, as opposed to discrete emotions such as the six basic emotions: happiness, anger, fear, disgust, sadness and neutral. Recognizing spontaneous and subtle emotions remains a challenging problem for computers. In our work, we employ two modalities of information: video and audio. Hence, we extract visual and audio features using deep neural network models. Given that emotions are time-dependent, we apply the Temporal Convolutional Neural Network (TCN) to model the variations in emotions. Additionally, we investigate an alternative model that combines a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN). Given our inability to fit the latter deep model into main memory, we divide the RNN into smaller segments and propose a scheme to back-propagate gradients across all segments. We configure the hyperparameters of all models using Gaussian processes to obtain a fair comparison between the proposed models. Our results show that TCN outperforms RNN for the recognition of the arousal and valence emotional dimensions. Therefore, we propose the adoption of TCN for emotion detection problems as a baseline method for future work. Our experimental results show that TCN outperforms all RNN-based models, yielding a concordance correlation coefficient of 0.7895 (vs. 0.7544) on valence and 0.8207 (vs. 0.7357) on arousal on the validation set of the SEWA emotion-prediction dataset.
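For reference, the concordance correlation coefficient quoted above is the standard agreement measure for continuous emotion prediction; a small implementation of its usual definition, CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2), is shown below.

```python
import numpy as np

def concordance_cc(pred, gold):
    """Concordance correlation coefficient between predicted and gold-standard
    continuous annotations (e.g. per-frame valence or arousal)."""
    pred, gold = np.asarray(pred, float), np.asarray(gold, float)
    mp, mg = pred.mean(), gold.mean()
    vp, vg = pred.var(), gold.var()
    cov = np.mean((pred - mp) * (gold - mg))
    return 2 * cov / (vp + vg + (mp - mg) ** 2)

# Perfect agreement gives 1.0; a constant prediction gives 0.0.
t = np.linspace(0, 10, 200)
print(concordance_cc(np.sin(t), np.sin(t)))          # 1.0
print(concordance_cc(np.zeros_like(t), np.sin(t)))   # 0.0
```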
6

Redes neurais recorrentes para produção de sequências temporais / Recurrent neural networks for production of temporal sequences

D\'Arbo Junior, Hélio 20 March 1998 (has links)
Dois problemas de planejamento de trajetórias são tratados nesta dissertação, sendo um discreto e outro contínuo. O problema discreto consiste em estabelecer todos os estados intermediários de uma trajetória para levar um conjunto de quatro blocos de uma posição inicial à uma posição meta. O problema contínuo consiste em planejar e controlar a trajetória do braço mecânico PUMA 560. A classe de modelos que se utilizou nesta dissertação foram os modelos parcialmente recorrentes. O problema discreto foi utilizado com a finalidade de comparar os seis modelos propostos, buscando obter um modelo com bom desempenho para resolução de problemas de produção de seqüências temporais. Para o problema contínuo aplicou-se apenas o modelo que apresentou melhor desempenho na resolução do problema discreto. Em ambos os casos são apresentados como entrada para a rede, o ponto inicial e o ponto meta. Dois tipos de testes foram aplicados as arquiteturas: teste de produção e de generalização de seqüências temporais. Para cada problema foram criados quatro tipos distintos de trajetórias, com graus de complexidades diferentes. Para o problema discreto, em média, a arquitetura com realimentação da camada de saída para a camada de entrada e da camada de entrada para ela mesma, todos-para-todos, foi a que apresentou menor número de épocas e também os menores valores de erro durante o treinamento. Foi o único que conseguiu recuperar todos os padrões treinados e de forma geral apresentou melhor capacidade de generalização. Por isto, este modelo foi escolhido para ser aplicado na resolução do problema contínuo, tendo bom desempenho, conseguindo reproduzir as trajetórias treinadas com grande precisão. Para o problema discreto todos os modelos apresentaram baixa capacidade de generalização. Para o problema contínuo o modelo abordado apresentou-se de forma satisfatória mediante o acréscimo de ruído. / Two trajectory planning problems are discussed in this work, one of them discrete and the other continuous. The discrete problem consists in establishing all the intermediate states of a trajectory to move a set of four blocks from an initial to a goal position. The continuous problem consists in planning and controlling the trajectory of the PUMA 560 mechanical arm. The class of models used in this work was the partially recurrent models. The discrete problem was used in order to compare the six proposed models, aiming at a model with good performance for the resolution of temporal sequence production problems. For the continuous problem, only the model that presented the best performance in solving the discrete problem was applied. In both problems the initial and goal points are presented as input to the network. Two types of tests were applied to the architectures: production and generalization of temporal sequences. Four distinct types of trajectories with different complexity levels were created for each problem. On average, for the discrete problem, the architecture with all-to-all feedback from the output layer to the input layer and from the input layer to itself presented the lowest number of epochs as well as the lowest error values during training. It was the only model that managed to recover all the trained patterns and, in general, it presented the best generalization capacity. For this reason, this model was chosen to be applied to the continuous problem. It performed well in producing the mechanical arm trajectories, managing to reproduce the trained trajectories with great accuracy. For the discrete problem, all the models presented low generalization capacity. For the continuous problem, the model considered performed satisfactorily under the addition of noise.
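The best-performing architecture, with feedback from the output layer to the input layer and a self-connection on that fed-back context, can be sketched as a Jordan-style forward pass that produces a sequence of intermediate states from an initial and a goal point. The sketch below is untrained and its sizes are illustrative; it is not the dissertation's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions are illustrative: the network receives the initial and goal points
# and must emit the sequence of intermediate states, one per time step.
d_state, d_hidden, n_steps = 4, 16, 10

# Static input: start and goal; recurrent input: the previous output fed back,
# plus a self-connection on that context (the "output -> input, input -> input"
# feedback of the best-performing architecture, here left untrained for brevity).
W_in  = rng.normal(0, 0.3, (d_hidden, 2 * d_state + d_state))
W_out = rng.normal(0, 0.3, (d_state, d_hidden))
W_ctx = rng.normal(0, 0.3, (d_state, d_state))        # context self-connection

def produce_sequence(start, goal):
    context = start.copy()
    sequence = []
    for _ in range(n_steps):
        u = np.concatenate([start, goal, context])
        h = np.tanh(W_in @ u)
        y = np.tanh(W_out @ h)                          # next intermediate state
        sequence.append(y)
        context = np.tanh(W_ctx @ context) + y          # feed the output back
    return np.stack(sequence)

traj = produce_sequence(rng.normal(size=d_state), rng.normal(size=d_state))
print(traj.shape)   # (n_steps, d_state)
```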
7

Recurrent neural networks for time-series prediction.

Brax, Christoffer January 2000 (has links)
Recurrent neural networks have been used for time-series prediction with good results. In this dissertation, recurrent neural networks are compared with time-delayed feed-forward networks, feed-forward networks and linear regression models on a prediction task. The data used in all experiments is real-world sales data containing two kinds of segments: campaign segments and non-campaign segments. The task is to predict sales during campaigns. It is evaluated whether more accurate predictions can be made when using only the campaign segments of the data. Throughout the project, a knowledge discovery process identified in the literature has been used to provide a structured work process. The results show that the recurrent network is not better than the other evaluated algorithms; in fact, the time-delayed feed-forward neural network gave the best predictions. The results also show that more accurate predictions could be made when using only information from the campaign segments.
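The "time-delayed" part of the winning model is simply a fixed window of past values presented to a feed-forward predictor. The sketch below shows that windowing on synthetic sales data with a campaign indicator, using a linear least-squares readout as a stand-in for the network; the data, lag length and sizes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic weekly sales with a campaign effect, standing in for the real data.
T = 300
campaign = (rng.random(T) < 0.2).astype(float)
sales = 100 + 10 * np.sin(np.arange(T) / 7) + 25 * campaign + rng.normal(0, 3, T)

def make_windows(series, flags, lag=8):
    """Turn the series into (delayed inputs, next value) pairs: the 'time-delayed'
    part of a time-delayed feed-forward network is exactly this fixed input window."""
    X = np.array([np.concatenate([series[t - lag:t], flags[t - lag:t + 1]])
                  for t in range(lag, len(series))])
    y = series[lag:]
    return X, y

X, y = make_windows(sales, campaign)
split = 250
# A linear least-squares readout stands in for the feed-forward network here,
# purely to show how the windowed data is used and scored.
X1 = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(X1[:split], y[:split], rcond=None)
pred = X1[split:] @ w
print("test MAE:", round(float(np.mean(np.abs(pred - y[split:]))), 2))
```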
8

Αυτόματος έλεγχος συστημάτων με ανατροφοδοτούμενα νευρωνικά δίκτυα / Automatic control of systems with recurrent neural networks

Γιαννόπουλος, Σπυρίδων 21 January 2009 (has links)
Today the study of artificial neural networks is a mature scientific field. The first neural network models appeared between 1940 and 1950, starting with the basic McCulloch-Pitts neuron model and the first algorithm for training a single neuron, Frank Rosenblatt's well-known Perceptron. Today there is a multitude of neural models following various learning paradigms, such as supervised and unsupervised learning. This work consists of six chapters, starting from the basic concepts of artificial neural networks and continuing through the analysis of recurrent neural networks and their use in systems control, also presenting various applications. The first, introductory chapter states the basic principles of artificial neural networks and their correspondence to the biological neuron, and gives a brief historical overview. Chapter 2 deals with recurrent neural networks: a definition of recurrent neural networks (RNN) is given, the main and most popular types are listed, and their operation is briefly analyzed. The third chapter covers the training of neural networks and the various training algorithms, starting from the training algorithm of the simplest neural network, the Perceptron, and ending with the Back-Propagation algorithm. The fourth chapter addresses systems control and the use of neural and recurrent neural networks in it; the various modeling approaches and control structures (supervised control, inverse control, adaptive linear control, etc.) are analyzed. The last two chapters present an example of using a simple recurrent network (SRN) for control and, finally, applications of neural networks in various domains.
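A minimal sketch of the simple recurrent network (SRN) mentioned above, with the hidden layer receiving a copy of its own previous activation, might look like the following; this is a forward pass only, with illustrative sizes and untrained weights, not the controller built in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal Elman-style simple recurrent network (SRN): the hidden layer receives
# the current input together with its own previous activation. A controller would
# be trained, e.g. with back-propagation through time, to map reference and error
# signals to control actions; here the weights stay random for brevity.
d_in, d_hidden, d_out = 3, 8, 1
W_in  = rng.normal(0, 0.3, (d_hidden, d_in))
W_ctx = rng.normal(0, 0.3, (d_hidden, d_hidden))
W_out = rng.normal(0, 0.3, (d_out, d_hidden))

def srn_rollout(inputs):
    h = np.zeros(d_hidden)                  # context units start at zero
    outputs = []
    for u in inputs:
        h = np.tanh(W_in @ u + W_ctx @ h)   # hidden state carries the memory
        outputs.append(W_out @ h)
    return np.array(outputs)

# Example: feed a short sequence of (setpoint, measured output, error) triples.
seq = rng.normal(size=(20, d_in))
print(srn_rollout(seq).shape)               # (20, 1)
```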
9

Dynamical systems theory for transparent symbolic computation in neuronal networks

Carmantini, Giovanni Sirio January 2017 (has links)
In this thesis, we explore the interface between symbolic and dynamical system computation, with particular regard to dynamical system models of neuronal networks. In doing so, we adhere to a definition of computation as the physical realization of a formal system, where we say that a dynamical system performs a computation if a correspondence can be found between its dynamics on a vectorial space and the formal system’s dynamics on a symbolic space. Guided by this definition, we characterize computation in a range of neuronal network models. We first present a constructive mapping between a range of formal systems and Recurrent Neural Networks (RNNs), through the introduction of a Versatile Shift and a modular network architecture supporting its real-time simulation. We then move on to more detailed models of neural dynamics, characterizing the computation performed by networks of delay-pulse-coupled oscillators supporting the emergence of heteroclinic dynamics. We show that a correspondence can be found between these networks and Finite-State Transducers, and use the derived abstraction to investigate how noise affects computation in this class of systems, unveiling a surprising facilitatory effect on information transmission. Finally, we present a new dynamical framework for computation in neuronal networks based on the slow-fast dynamics paradigm, and discuss the consequences of our results for future work, specifically for what concerns the fields of interactive computation and Artificial Intelligence.
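As a reminder of the symbolic side of that correspondence, a finite-state transducer is just a table of (state, input) to (next state, output) transitions. The toy machine below is made up for illustration and is not taken from the thesis; it emits 'b' whenever the current input symbol repeats the previous one.

```python
# A tiny finite-state transducer (FST), the symbolic object the thesis puts in
# correspondence with the network dynamics. States: q0 = start, s0 = "last symbol
# was 0", s1 = "last symbol was 1".
transitions = {
    # (state, input) -> (next state, output)
    ("q0", "0"): ("s0", "a"),
    ("q0", "1"): ("s1", "a"),
    ("s0", "0"): ("s0", "b"),
    ("s0", "1"): ("s1", "a"),
    ("s1", "0"): ("s0", "a"),
    ("s1", "1"): ("s1", "b"),
}

def run_fst(inputs, state="q0"):
    outputs = []
    for symbol in inputs:
        state, out = transitions[(state, symbol)]
        outputs.append(out)
    return "".join(outputs), state

out, final = run_fst("011001")
print(out, final)   # aababa s1
```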
