1

Optical Wavefront Prediction with Reservoir Computing

Weddell, Stephen John January 2010 (has links)
Over the last four decades there has been considerable research into improving the imaging of exo-atmospheric objects through air turbulence from ground-based instruments. Whilst such research was initially motivated by military purposes, the benefits to the astronomical community have been significant. A key topic in this research is isoplanatism. The isoplanatic angle is the angular separation between two point-source objects within which independently measured wavefront perturbations can be considered equivalent. In classical adaptive optics, perturbations from a point-source reference, such as a bright natural guide star, are used to partially negate the perturbations distorting an image of a fainter, nearby science object. Various techniques, such as atmospheric tomography, maximum a posteriori (MAP) estimation, and parameterised modelling, have been used to estimate wavefront perturbations when the distortion function is spatially variant, i.e., when angular separations exceed the isoplanatic angle, θ₀, where θ₀ ≈ 10 μrad for mild distortion at visual wavelengths. However, the effectiveness of such techniques also depends on a priori knowledge of turbulence profiles and configuration data. This dissertation describes a new method for estimating the eigenvalues that comprise wavefront perturbations over a wide spatial field. To reduce dependency on prior knowledge of specific configurations, machine learning is used: a recurrent neural network is trained using a posteriori wavefront ensembles from multiple point-source objects. Using a spatiotemporal framework for prediction, the eigenvalues, in terms of Zernike polynomials, are used to reconstruct the spatially variant point-spread function (SVPSF) for image restoration. The overall aim is to counter the adverse effects of atmospheric turbulence on images of extended astronomical objects.
The method outlined in this thesis combines optical wavefront sensing using multiple natural guide stars with a reservoir-based artificial neural network. The network is used to predict aberrations caused by atmospheric turbulence that degrade the images of faint science objects. A modified geometric wavefront sensor was used to simultaneously measure phase perturbations from multiple point-source reference objects in the pupil. A specialised recurrent neural network (RNN) was used to learn the spatiotemporal effects of phase perturbations measured from several source references. Modal expansions, in terms of Zernike coefficients, were used to build time-series ensembles that defined wavefront maps of point-source reference objects. These ensembles served two purposes: first, to train an RNN with a spatiotemporal training algorithm; second, new data ensembles presented to the trained RNN were used to estimate the wavefront map of science objects over a wide field. Both simulations and experiments were used to evaluate this method. The results showed that by employing three or more source references over an angular separation of 24 μrad from a target, and given mild turbulence with a Fried coherence length of 20 cm, the normalised mean squared error of low-order Zernike modes could be estimated to within 0.086. A key benefit of estimating phase perturbations from a time series of short-exposure point-spread functions (PSFs) is that the long-exposure PSF can then be determined. By summing successive, corrected, short-exposure frames, high-resolution images of the science object can be obtained. The method was shown to predict a contiguous series of short-exposure aberrations as a phase screen was moved over a simulated aperture. By quantifying the temporal decorrelation of atmospheric turbulence in terms of Taylor's hypothesis, long-exposure estimates of the PSF were obtained.
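The modal reconstruction step of this abstract — summing coefficient-weighted Zernike polynomials into a wavefront map over the pupil — can be illustrated with a short sketch. This is not the author's code: only a handful of Noll-normalised low-order modes are included, and the coefficients and grid size are arbitrary.

```python
import numpy as np

def low_order_zernikes(rho, theta):
    """A handful of low-order Zernike modes (Noll normalisation)."""
    return {
        "tip":      2.0 * rho * np.cos(theta),
        "tilt":     2.0 * rho * np.sin(theta),
        "defocus":  np.sqrt(3.0) * (2.0 * rho**2 - 1.0),
        "astig_0":  np.sqrt(6.0) * rho**2 * np.cos(2.0 * theta),
        "astig_45": np.sqrt(6.0) * rho**2 * np.sin(2.0 * theta),
    }

def wavefront_map(coeffs, n=64):
    """Sum coefficient-weighted Zernike modes into a phase map on the pupil."""
    x = np.linspace(-1.0, 1.0, n)
    xx, yy = np.meshgrid(x, x)
    rho, theta = np.hypot(xx, yy), np.arctan2(yy, xx)
    modes = low_order_zernikes(rho, theta)
    phase = sum(c * modes[name] for name, c in coeffs.items())
    phase[rho > 1.0] = np.nan        # modes are undefined outside the unit pupil
    return phase

# Hypothetical coefficients, e.g. as an RNN might predict them:
wf = wavefront_map({"tip": 0.1, "defocus": 0.3})
```

In the thesis's setting, the coefficients fed to `wavefront_map` would be the time-series predictions of the recurrent network rather than fixed values.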
2

Optimization of Reservoir Computing with PSO

Sergio, Anderson Tenório 07 March 2013 (has links)
Reservoir Computing (RC) is an Artificial Neural Network paradigm with important real-world applications. RC uses an architecture similar to Recurrent Neural Networks for temporal processing, with the advantage of not needing to train the weights of the hidden layer. In general, the RC concept is based on randomly constructing a recurrent network (the reservoir) whose weights are left unchanged. After this phase, a linear regression function is used to train the system's output. The nonlinear dynamic transformation provided by the reservoir is sufficient for the output layer to extract the output signals with a simple linear mapping, making training considerably faster. However, like conventional neural networks, Reservoir Computing has some problems. It can be computationally expensive, several parameters influence its efficiency, and it is unlikely that random weight generation combined with training the output layer by simple linear regression is the ideal solution for generalizing the data. PSO is an optimization algorithm with some advantages over other global search techniques: it is simple to implement and, in some cases, converges faster at lower computational cost.
This dissertation investigated the use of PSO (and two of its extensions, EPUS-PSO and APSO) to optimize the global parameters, architecture, and reservoir weights of an RC network, applied to time series forecasting. The results showed that optimizing Reservoir Computing with PSO, as well as with the selected extensions, performed satisfactorily on all datasets studied: benchmark time series and wind-energy datasets. The optimization outperformed several works in the literature, establishing itself as an important approach to the time series forecasting problem.
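The optimization loop described above can be sketched in a toy setting. This is an illustration, not the dissertation's implementation: a plain global-best PSO tunes just two global parameters (spectral radius and input scaling) of a small echo state network on a one-step sine-prediction task; the bounds, PSO coefficients, and task are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def esn_nmse(spectral_radius, input_scaling, n_res=50):
    """Normalised error of a small ESN on one-step-ahead sine prediction.

    A fresh random reservoir is drawn per call, so the objective is noisy;
    the dissertation's method also evolves the reservoir weights themselves.
    """
    u = np.sin(0.2 * np.arange(400))
    W_in = input_scaling * rng.uniform(-1.0, 1.0, n_res)
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    x, states = np.zeros(n_res), []
    for t in range(len(u) - 1):
        x = np.tanh(W @ x + W_in * u[t])
        states.append(x.copy())
    X, target = np.asarray(states)[100:], u[101:]   # discard warm-up
    w_out = np.linalg.lstsq(X, target, rcond=None)[0]  # linear readout
    return np.mean((X @ w_out - target) ** 2) / np.var(target)

# Plain global-best PSO over (spectral radius, input scaling).
n_particles, n_iter = 8, 10
pos = rng.uniform([0.1, 0.1], [1.5, 1.0], (n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([esn_nmse(*p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.05, 2.0)
    vals = np.array([esn_nmse(*p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()
```

The extensions the dissertation studies (EPUS-PSO, APSO) adapt the swarm size and coefficients during the search; the skeleton of the loop stays the same.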
3

Design and fabrication of micro-resonators for the realization of a neuromorphic chip

Mejaouri, Salim January 2018 (has links)
With transistor miniaturization reaching its limits, alternative data-processing technologies are now widely studied. In this context, we are developing a mechanical neural-network architecture capable of efficiently solving non-trivial problems such as classification or the prediction of chaotic functions. This architecture is inspired by work on recurrent neural networks (RNNs), and in particular on reservoir computing. The device is a network of anharmonic MEMS oscillators, which makes it compact and low-power. Doubly clamped silicon beams were chosen to realize the device, as they have been widely studied and are simple to implement. We present here the experimental work on the nonlinear MEMS that will later be used to build the device. Numerical simulations of the network first identified the requirements on the resonators' dynamics. The resonators were then designed to meet these requirements as closely as possible. An efficient mechanical coupling was developed to link the oscillators. Finite-element analyses were carried out to accurately predict the behavior of the coupled resonators in the linear and nonlinear regimes. A fast and simple microfabrication process was developed. Finally, the structures were characterized optically and electrically. The experimental results agree with the simulations, suggesting that our approach is suitable for the design and fabrication of a neuromorphic device.
4

Reservoir computing based on delay-dynamical systems

Appeltant, Lennert 22 May 2012 (has links)
Today, except for raw mathematical operations, our brain functions much faster and more efficiently than any supercomputer. It is precisely this form of information processing in neural networks that inspires researchers to create systems that mimic the brain's information processing capabilities. In this thesis we propose a novel approach to implement these alternative computer architectures, based on delayed feedback. We show that one single nonlinear node with delayed feedback can replace a large network of nonlinear nodes. First we numerically investigate the architecture and performance of delayed feedback systems as information processing units. Then we elaborate on electronic and opto-electronic implementations of the concept. Next to evaluating their performance for standard benchmarks, we also study task-independent properties of the system, extracting information on how to further improve the initial scheme. Finally, some simple modifications are suggested, yielding improvements in terms of speed or performance.
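The core idea — replacing a network of nonlinear nodes by one node and a delay line, read out at "virtual node" positions — can be sketched with a discretized toy model in the spirit of this scheme. The mask, parameter values, and recall task below are illustrative assumptions, not the thesis's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def delay_reservoir_states(u, n_virtual=50, eta=0.8, gamma=0.5, eps=0.5):
    """States of a single nonlinear node with delayed feedback.

    The delay line is divided into n_virtual 'virtual nodes'. Each input
    sample is held for one delay period and multiplied by a fixed random
    mask; the node's low-pass response couples neighbouring virtual nodes.
    """
    mask = rng.choice([-1.0, 1.0], n_virtual)
    prev = np.zeros(n_virtual)            # virtual-node values one delay ago
    states = np.empty((len(u), n_virtual))
    carry = 0.0                           # node value entering the interval
    for t, sample in enumerate(u):
        cur = np.empty(n_virtual)
        for i in range(n_virtual):
            # inertia (1 - eps) couples adjacent virtual nodes; the tanh
            # mixes delayed self-feedback with the masked input
            carry = (1 - eps) * carry + eps * np.tanh(
                eta * prev[i] + gamma * mask[i] * sample)
            cur[i] = carry
        states[t] = cur
        prev = cur
    return states

# Linear readout trained by ridge regression to recall the previous input.
u = rng.uniform(0.0, 0.5, 300)
target = np.roll(u, 1)                    # y(t) = u(t - 1)
X = delay_reservoir_states(u)
w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(X.shape[1]), X.T @ target)
pred = X @ w
nmse = np.mean((pred[1:] - target[1:]) ** 2) / np.var(target[1:])
```

Only the readout weights `w` are trained; the single node, its delay, and the input mask stay fixed, which is what makes hardware implementations of the concept attractive.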
5

High performance optical reservoir computing based on spatially extended systems

Pauwels, Jaël 08 September 2021 (has links) (PDF)
In this thesis we study photonic computation within the framework of reservoir computing. Inspired by the insight that the human brain processes information by generating patterns of transient neuronal activity excited by input sensory signals, reservoir computing exploits the transient dynamics of an analogue nonlinear dynamical system to solve tasks that are hard to solve by algorithmic approaches. Harnessing the massive parallelism offered by optics, we consider a generic class of nonlinear dynamical systems which are suitable for reservoir computing and which we label photonic computing liquids. These are spatially extended systems which exhibit dispersive or diffractive signal coupling and nonlinear signal distortion. We demonstrate that a wide range of optical systems meet these requirements and allow for elegant and performant implementations of optical reservoirs. These advances address the limitations of current photonic reservoirs in terms of scalability, ease of implementation and the transition towards truly all-optical computing systems. We start with an abstract presentation of a photonic computing liquid and an in-depth analysis of what makes these kinds of systems function as potent reservoir computers. We then present an experimental study of two photonic reservoir computers, the first based on a diffractive free-space cavity, the second based on a fiber-loop cavity. These systems allow us to validate the promising concept of photonic computing liquids, to investigate the effects of symmetries in the neural interconnectivity and to demonstrate the effectiveness of weak and distributed optical nonlinearities. We also investigate the ability to recover performance lost due to uncontrolled parameter variations in unstable operating environments by introducing an easily scalable way to expand a reservoir's output layer. Finally, we show how to exploit random diffraction in a strongly dispersive optical system, including applications in optical telecommunications. In the conclusion we discuss future perspectives and identify the characteristics of the optical systems that we consider most promising for the future of photonic reservoir computing.
6

Development and Analysis of non-standard Echo State Networks

Steiner, Peter 14 March 2024 (has links)
The field of deep learning has experienced rapid growth in recent years with the development of more powerful hardware and new architectures such as the Convolutional Neural Network (CNN), the transformer, and Long Short-Term Memory (LSTM) cells. Models for many different use cases have been successfully published, and deep learning has found its way into many everyday applications. However, one of the major drawbacks of complex models based on CNNs or LSTMs is their resource-hungry nature: they require large amounts of labeled data and consume substantial energy. The data problem is partially addressed by a growing number of methods that can deal with unlabeled data. In this thesis, Echo State Network (ESN) models, a variant of the Recurrent Neural Network (RNN), are studied because they offer a way to address the aforementioned problems of many deep learning architectures. On the one hand, they can easily be trained using linear regression, which is a simple, efficient, and well-established training method. On the other hand, since they are relatively easy to generate in their basic form, ESN models are interesting candidates for investigating new training methods, especially unsupervised learning techniques, which can later find their way into deep learning methods, making them more efficient and easier to train. First, a general ESN model is decomposed into building blocks that can be flexibly combined to form new architectures. Using an example dataset, basic ESN models with randomly initialized weights are first introduced, optimized, and evaluated. Then, deterministic ESN models are considered, in which the influence of random initialization is reduced. It is shown that these architectures have a lower computational complexity while still showing performance comparable to the basic ESN models. It is also shown that deterministic ESN models can be used to build hierarchical ESN architectures.
Then, unsupervised training methods for the different building blocks of the ESN model are introduced, illustrated, and evaluated in a comparative study with basic and deterministic ESN architectures as a baseline. Based on a broad variety of benchmark datasets for time-series classification and various audio processing tasks, it is shown that the ESN models proposed in this thesis can achieve results similar to the state-of-the-art approaches in the respective field. Furthermore, use cases are identified for which specific ESN models should be preferred, and the limitations of the different training methods are discussed. It is also shown that there is a research gap between the umbrella topics of Reservoir Computing and Deep Learning that needs to be filled in the future.
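The building-block decomposition described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the thesis's toolbox: the block names, parameter values, and sine task are assumptions, chosen only to show how an input block, a recurrent block, and a linear readout compose into an ESN.

```python
import numpy as np

rng = np.random.default_rng(42)

class InputToNode:
    """Random input projection (one building block)."""
    def __init__(self, n_in, n_nodes, scale=0.5):
        self.W_in = scale * rng.uniform(-1.0, 1.0, (n_nodes, n_in))
    def transform(self, U):
        return U @ self.W_in.T

class NodeToNode:
    """Recurrent reservoir block: leaky-integrated random recurrence."""
    def __init__(self, n_nodes, spectral_radius=0.9, leak=0.3):
        W = rng.uniform(-0.5, 0.5, (n_nodes, n_nodes))
        self.W = W * (spectral_radius / np.max(np.abs(np.linalg.eigvals(W))))
        self.leak = leak
    def transform(self, H):
        x = np.zeros(H.shape[1])
        out = np.empty_like(H)
        for t in range(len(H)):
            x = (1 - self.leak) * x + self.leak * np.tanh(H[t] + self.W @ x)
            out[t] = x
        return out

def ridge_readout(X, y, alpha=1e-4):
    """Closed-form ridge regression for the output weights."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

# ESN = InputToNode -> NodeToNode -> readout (an ELM would skip NodeToNode).
U = np.sin(0.1 * np.arange(500))[:, None]
y = np.sin(0.1 * (np.arange(500) + 1))        # one-step-ahead target
states = NodeToNode(100).transform(InputToNode(1, 100).transform(U))
w_out = ridge_readout(states[100:], y[100:])  # discard the warm-up
nmse = np.mean((states[100:] @ w_out - y[100:]) ** 2) / np.var(y[100:])
```

Swapping blocks in and out of this pipeline is exactly what makes the deterministic, hierarchical, and unsupervised variants in the thesis expressible in one framework.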
7

Theory and modeling of complex nonlinear delay dynamics applied to neuromorphic computing

Penkovsky, Bogdan 21 June 2017 (has links)
The thesis develops a novel approach to the design of a reservoir computer, one of the challenges of modern science and technology. It consists of two parts, both connected by the correspondence between optoelectronic delayed-feedback systems and spatio-temporal nonlinear dynamics. In the first part (Chapters 1 and 2), this correspondence is used in a fundamental perspective, studying the self-organized patterns known as chimera states, observed for the first time in purely temporal systems as a consequence of this work. The study of chimera states may shed light on mechanisms occurring in many structurally similar high-dimensional systems such as neural systems or power grids. In the second part (Chapters 3 and 4), the same spatio-temporal analogy is exploited from an applied perspective, designing and implementing a brain-inspired information processing device: a real-time digital reservoir computer is constructed in FPGA hardware. The implementation utilizes delay dynamics and realizes input as well as output layers to obtain an autonomous cognitive computing system.
8

A Method for the Design and Training of Reservoir Computing Applied to Time Series Forecasting

FERREIRA, Aida Araújo 31 January 2011 (has links)
Instituto Federal de Educação, Ciência e Tecnologia de Pernambuco / Reservoir Computing is a type of recurrent neural network that allows black-box modeling of (nonlinear) dynamical systems. In contrast with other recurrent neural network approaches, in Reservoir Computing there is no need to train the input-layer weights or the internal weights of the network (the reservoir); only the output-layer weights (the readout) are trained. However, the parameters and topology of the network must be tuned to create an optimal reservoir suited to a given application. In this work, a method called RCDESIGN was created to find the best reservoir for the task of time series forecasting. The method combines an evolutionary algorithm with Reservoir Computing and searches simultaneously for the best parameter values, network topology, and weights, without rescaling the reservoir weight matrix by its spectral radius. The idea of tuning the spectral radius within the unit circle of the complex plane comes from linear systems theory, which clearly shows that stability is necessary to obtain useful responses in linear systems. This argument, however, does not necessarily apply to nonlinear systems, which is the case of Reservoir Computing. The method also considers Reservoir Computing in its full nonlinearity, since it allows the use of all possible connections rather than only the mandatory ones. The results obtained with the proposed method are compared with two different methods.
O primeiro, chamado neste trabalho de Busca RS, utiliza algoritmo genético para otimizar os principais parâmetros de Reservoir Computing, que são: tamanho do reservoir, raio espectral e densidade de conexão. O segundo, chamado neste trabalho de Busca TR, utiliza algoritmo genético para otimizar a topologia e pesos de Reservoir Computing baseado no raio espectral. Foram utilizadas sete séries clássicas para realizar a validação acadêmica da aplicação do método proposto à tarefa de previsão de séries temporais. Um estudo de caso foi desenvolvido para verificar a adequação do método proposto ao problema de previsão da velocidade horária dos ventos na região nordeste do Brasil. A geração eólica é uma das fontes renováveis de energia com o menor custo de produção e com a maior quantidade de recursos disponíveis. Dessa forma, a utilização de modelos eficientes de previsão da velocidade dos ventos e da geração eólica pode reduzir as dificuldades de operação de um sistema elétrico composto por fontes tradicionais de energia e pela fonte eólica
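The readout-only training scheme described in the abstract above can be illustrated with a minimal echo state network sketch. The toy sine-prediction task, network sizes and ridge-regression readout are assumptions of this note, not the dissertation's setup; note that RCDESIGN itself evolves parameters, topology and weights *without* the spectral-radius rescaling shown here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task: one-step-ahead prediction of a sine wave.
T = 500
u = np.sin(0.2 * np.arange(T + 1))
inputs, targets = u[:-1], u[1:]

n_res = 100                                     # reservoir size
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))       # input weights: never trained
W = rng.uniform(-0.5, 0.5, (n_res, n_res))      # reservoir weights: never trained

# Conventional ESN practice rescales W to spectral radius < 1;
# the thesis argues this step is not necessary for nonlinear reservoirs.
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

# Drive the reservoir and collect its states.
x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in[:, 0] * inputs[t])
    states[t] = x

# Only the readout is trained (ridge regression), after a washout period.
washout, ridge = 100, 1e-6
S, y = states[washout:], targets[washout:]
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)

pred = states @ W_out
mse = np.mean((pred[washout:] - y) ** 2)
print(f"training MSE: {mse:.2e}")
```

An evolutionary method such as the one described would wrap this kind of build-and-train step inside a fitness evaluation, searching over parameters, topology and weights instead of fixing them as above.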
9

Predicting and Controlling Complex Dynamical Systems

January 2020 (has links)
abstract: Complex dynamical systems are systems with many interacting components that usually exhibit nonlinear dynamics. Such systems arise in a wide range of disciplines, including the physical, biological, and social sciences. Because of their large number of interacting components, they tend to be very high-dimensional, and their intrinsic nonlinear dynamics gives rise to remarkably rich behavior, such as bifurcations, synchronization, chaos, and solitons. Developing methods to predict and control these systems has long been a challenge and an active research area. My research concentrates on predicting and controlling tipping points (saddle-node bifurcations) in complex ecological systems and on comparing linear and nonlinear control methods in complex dynamical systems. Moreover, I use advanced artificial neural networks to predict chaotic spatiotemporal dynamical systems. Complex networked systems can exhibit a tipping point (a "point of no return") at which a total collapse occurs. Using complex mutualistic networks in ecology as a prototype class of systems, I carry out a dimension reduction to arrive at an effective two-dimensional (2D) system whose two dynamical variables correspond to the average pollinator and plant abundances, respectively. Using 59 empirical mutualistic networks extracted from real data, I demonstrate that the 2D model can accurately predict the occurrence of a tipping point even in the presence of stochastic disturbances. I also develop an ecologically feasible strategy to manage/control the tipping point by maintaining the abundance of a particular pollinator species at a constant level, which essentially removes the hysteresis associated with tipping points.
I also find that the nodal importance rankings for nonlinear and linear control exhibit opposite trends: for the former, large-degree nodes are more important, whereas for the latter the importance scale is tilted towards the small-degree nodes, strongly suggesting that linear controllability is irrelevant to these systems. Focusing on a class of recurrent neural networks - reservoir computing systems, which have recently been exploited for model-free prediction of nonlinear dynamical systems - I uncover a surprising phenomenon: the emergence of an interval in the spectral radius of the neural network in which the prediction error is minimized. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2020
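The reported spectral-radius interval can be probed with a hedged sketch. The toy signal, network sizes and the `one_step_error` helper below are illustrative assumptions, not the dissertation's experiments: a fixed random reservoir is rescaled to each candidate spectral radius and the out-of-sample one-step prediction error is recorded:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy quasi-periodic signal standing in for a chaotic series (assumption).
T = 400
u = np.sin(0.3 * np.arange(T + 1)) * np.cos(0.05 * np.arange(T + 1))
inputs, targets = u[:-1], u[1:]

n_res = 80
W_in = rng.uniform(-0.5, 0.5, n_res)
W0 = rng.standard_normal((n_res, n_res))
W0 /= max(abs(np.linalg.eigvals(W0)))           # normalize to spectral radius 1

def one_step_error(rho: float) -> float:
    """Train a ridge readout at reservoir spectral radius rho; return test MSE."""
    W = rho * W0
    x = np.zeros(n_res)
    states = np.empty((T, n_res))
    for t in range(T):
        x = np.tanh(W @ x + W_in * inputs[t])
        states[t] = x
    split, washout, ridge = 300, 50, 1e-6
    S, y = states[washout:split], targets[washout:split]
    W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)
    pred = states[split:] @ W_out
    return float(np.mean((pred - targets[split:]) ** 2))

radii = np.linspace(0.1, 1.5, 15)
errors = [one_step_error(r) for r in radii]
best = radii[int(np.argmin(errors))]
print(f"lowest test MSE at spectral radius ~ {best:.2f}")
```

Plotting `errors` against `radii` on this kind of sweep is how one would look for a low-error interval, rather than a single optimal value, of the spectral radius.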
10

Learning Long Temporal Sequences in Spiking Networks by Multiplexing Neural Oscillations

Vincent-Lamarre, Philippe 17 December 2019 (has links)
Many living organisms can execute complex behaviors and cognitive processes reliably. In many cases, such tasks are generated in the absence of an ongoing external input that could drive the activity of the underlying neural populations. For instance, writing the word "time" requires a precise sequence of muscle contractions in the hand and wrist. Some pattern of activity in the areas of the brain responsible for this behaviour must be endogenously generated every time an individual performs the action. While how such a neural code is transformed into the target motor sequence is a question in its own right, its origin is perhaps even more puzzling. Most models of cortical and sub-cortical circuits suggest that many of their neural populations are chaotic: very small amounts of noise, such as a single additional action potential in one neuron of a network, can lead to completely different patterns of activity. Reservoir computing was one of the first frameworks to provide an efficient way for biologically relevant neural networks to learn complex temporal tasks in the presence of chaos. We showed that although reservoirs (i.e., recurrent neural networks) are robust to noise, they are extremely sensitive to some forms of structural perturbation, such as the removal of one neuron out of thousands. We proposed an alternative to these models in which the source of autonomous activity no longer originates from the reservoir but from a set of oscillating networks projecting to it. In our simulations, we show that this solution produces rich patterns of activity and leads to networks that are resistant to both noise and structural perturbations. The model can learn a wide variety of temporal tasks, such as interval timing, motor control, speech production and spatial navigation.
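The architectural idea above, a reservoir driven by an external oscillator bank rather than by its own autonomous activity, can be sketched in a rate-based (non-spiking) toy model. The oscillator frequencies, sizes and scalings below are illustrative assumptions; the sketch only shows that an input-driven reservoir forgets perturbations of its internal state:

```python
import numpy as np

rng = np.random.default_rng(2)

n_res = 100
W = rng.standard_normal((n_res, n_res))
W *= 0.8 / max(abs(np.linalg.eigvals(W)))       # echo-state-style scaling

# A small bank of oscillators at distinct frequencies drives the reservoir,
# standing in (very loosely) for the oscillating networks of the thesis.
freqs = np.array([0.11, 0.23, 0.37, 0.53])
W_osc = rng.uniform(-0.5, 0.5, (n_res, len(freqs)))

def run(x0: np.ndarray, T: int = 300) -> np.ndarray:
    """Simulate the oscillator-driven reservoir from initial state x0."""
    x, traj = x0.copy(), []
    for t in range(T):
        drive = np.sin(freqs * t)
        x = np.tanh(W @ x + W_osc @ drive)
        traj.append(x.copy())
    return np.array(traj)

# Two runs from very different initial states converge onto the same
# input-driven trajectory: the state perturbation is washed out.
a = run(np.zeros(n_res))
b = run(rng.standard_normal(n_res))
dist = np.linalg.norm(a - b, axis=1)
print(f"state distance: start {dist[0]:.2f}, end {dist[-1]:.2e}")
```

In a spiking implementation the robustness argument is subtler, but the same intuition applies: when the driving oscillations, not the reservoir's own chaotic dynamics, carry the temporal reference, internal perturbations decay instead of compounding.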
