1

Development and Analysis of non-standard Echo State Networks

Steiner, Peter 14 March 2024 (has links)
The field of deep learning has experienced rapid growth in recent years with the development of more powerful hardware and new architectures such as the Convolutional Neural Network (CNN), the transformer, and networks of Long Short-Term Memory (LSTM) cells. Models for many different use cases have been published successfully, and deep learning has found its way into many everyday applications. However, one of the major drawbacks of complex models based on CNNs or LSTMs is their resource-hungry nature, namely the need for large amounts of labeled data and excessive energy consumption. The need for labeled data, at least, is partially addressed by introducing more and more methods that can deal with unlabeled data. In this thesis, Echo State Network (ESN) models, a variant of the Recurrent Neural Network (RNN), are studied because they offer a way to address these problems of many deep learning architectures. On the one hand, they can be trained easily using linear regression, which is a simple, efficient, and well-established training method. On the other hand, since they are relatively easy to generate in their basic form, ESN models are interesting candidates for investigating new training methods, especially unsupervised learning techniques, which can later find their way into deep learning methods and make them more efficient and easier to train. First, a general ESN model is decomposed into building blocks that can be flexibly combined to form new architectures. Using an example dataset, basic ESN models with randomly initialized weights are first introduced, optimized, and evaluated. Then, deterministic ESN models are considered, in which the influence of random initialization is reduced. It is shown that these architectures have a lower computational complexity while still achieving a performance comparable to the basic ESN models. It is also shown that deterministic ESN models can be used to build hierarchical ESN architectures. Next, unsupervised training methods for the different building blocks of the ESN model are introduced, illustrated, and evaluated in a comparative study with basic and deterministic ESN architectures as baselines. On a broad variety of benchmark datasets for time-series classification and various audio processing tasks, it is shown that the ESN models proposed in this thesis achieve results similar to state-of-the-art approaches in the respective fields. Furthermore, use cases are identified for which specific models should be preferred, and the limitations of the different training methods are discussed. Finally, it is shown that a research gap remains between the umbrella topics of Reservoir Computing and deep learning that needs to be filled in the future.

Table of contents: 1 Introduction; 2 Echo State Network; 3 Building blocks of Echo State Networks; 4 Basic, deterministic and hierarchical Echo State Networks; 5 Unsupervised Training of the Input Weights in Echo State Networks; 6 Unsupervised Training of the Recurrent Weights in Echo State Networks; 7 Multivariate time series classification with non-standard Echo State Networks; 8 Application of Echo State Networks to audio signals; 9 Conclusion and Future Work; Bibliography.
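A minimal sketch of the kind of basic ESN the abstract describes: a fixed random reservoir whose only trained part is a linear readout fitted by ridge regression. All sizes, scalings, and the toy task are illustrative assumptions, not the hyper-parameters used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and scalings; not taken from the thesis.
n_in, n_res, washout = 1, 200, 100
spectral_radius, input_scaling, ridge = 0.9, 0.5, 1e-6

# Fixed random input and recurrent weights (the "reservoir").
W_in = rng.uniform(-input_scaling, input_scaling, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # rescale to target spectral radius

def run_reservoir(u):
    """Collect reservoir states for a 1-D input sequence u."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a noisy sine wave.
t = np.arange(2000)
u = np.sin(0.1 * t) + 0.05 * rng.standard_normal(t.size)
X = run_reservoir(u[:-1])[washout:]   # reservoir states after a washout period
y = u[1:][washout:]                   # one-step-ahead targets

# Readout trained by ridge regression (closed form), the only trained part of the ESN.
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```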
2

Modeling and Characterization of Dynamic Changes in Biological Systems from Multi-platform Genomic Data

Zhang, Bai 30 September 2011 (has links)
Biological systems constantly evolve and adapt in response to changed environments and external stimuli at the molecular and genomic levels. Building statistical models that characterize such dynamic changes in biological systems is one of the key objectives in bioinformatics and computational biology. Recent advances in high-throughput genomic and molecular profiling technologies, such as gene expression and copy number microarrays, provide ample opportunities to study cellular activities at the individual gene and network levels. The aim of this dissertation is to formulate mathematically the dynamic changes in biological networks and DNA copy numbers, to develop machine learning algorithms that learn these statistical models from high-throughput biological data, and to demonstrate their applications in systems biology studies. The first part (Chapters 2-4) of the dissertation focuses on dynamic changes taking place at the biological network level. Biological networks are context-specific and dynamic in nature. Under different conditions, different regulatory components and mechanisms are activated and the topology of the underlying gene regulatory network changes. We report a differential dependency network (DDN) analysis to detect statistically significant topological changes in transcriptional networks between two biological conditions. Further, we formalize and extend the DDN approach into an effective learning strategy for extracting structural changes in graphical models using l1-regularization-based convex optimization. We discuss the key properties of this formulation and introduce an efficient implementation based on the block coordinate descent algorithm. Another type of dynamic change in biological networks is the observation that a group of genes involved in certain biological functions or processes coordinates its response to outside stimuli, producing distinct time-course patterns. We apply the echo state network, a new architecture of recurrent neural networks, to model temporal gene expression patterns, and analyze the theoretical properties of echo state networks with random matrix theory. The second part (Chapter 5) of the dissertation focuses on changes at the DNA copy number level, especially in cancer cells. Somatic DNA copy number alterations (CNAs) are key genetic events in the development and progression of human cancers, and frequently contribute to tumorigenesis. We propose a statistically principled in silico approach, Bayesian Analysis of COpy number Mixtures (BACOM), to accurately detect the genomic deletion type, estimate normal tissue contamination, and accordingly recover the true copy number profile in cancer cells. / Ph. D.
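The differential dependency idea can be illustrated with a small, hypothetical sketch: fit an l1-regularized (Lasso) neighbourhood regression for one gene under each of two conditions and compare the resulting sparsity patterns. This is only a simplified stand-in for the DDN formulation, not the authors' implementation; the data, gene indices, and threshold are synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_samples, n_genes = 120, 8

# Two synthetic conditions in which gene 0 depends on different regulators.
X_a = rng.standard_normal((n_samples, n_genes))
X_b = rng.standard_normal((n_samples, n_genes))
X_a[:, 0] = 0.8 * X_a[:, 1] + 0.1 * rng.standard_normal(n_samples)   # condition A: gene 1 -> gene 0
X_b[:, 0] = 0.8 * X_b[:, 2] + 0.1 * rng.standard_normal(n_samples)   # condition B: gene 2 -> gene 0

def neighbours(X, target, alpha=0.05):
    """Sparse set of predictors of one gene, via l1-regularized regression."""
    mask = np.arange(X.shape[1]) != target
    coef = Lasso(alpha=alpha).fit(X[:, mask], X[:, target]).coef_
    return set(np.flatnonzero(mask)[np.abs(coef) > 1e-3])

edges_a = neighbours(X_a, target=0)
edges_b = neighbours(X_b, target=0)
print("regulators of gene 0, condition A:", edges_a)
print("regulators of gene 0, condition B:", edges_b)
print("rewired edges (differential dependencies):", edges_a ^ edges_b)
```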
3

Spectrum Management in Dynamic Spectrum Access: A Deep Reinforcement Learning Approach

Song, Hao January 2019 (has links)
Dynamic spectrum access (DSA) is a promising technology to mitigate spectrum shortage and improve spectrum utilization. However, DSA users face two fundamental issues: interference coordination between DSA users and protection of primary users (PUs). These two issues are very challenging, since generally there is no powerful infrastructure in DSA networks to support centralized control. As a result, DSA users have to perform spectrum management, including spectrum access and power allocation, independently and without accurate channel state information. In this thesis, a novel spectrum management approach is proposed in which Q-learning, a type of reinforcement learning, is utilized to enable DSA users to carry out effective spectrum management individually and intelligently. For more efficient processing, powerful neural networks (NNs) are employed to implement the Q-learning process, the so-called deep Q-network (DQN). Furthermore, I also investigate the optimal way to construct the DQN, considering both the performance of wireless communications and the difficulty of NN training. Finally, extensive simulation studies are conducted to demonstrate the effectiveness of the proposed spectrum management approach. / Generally, in dynamic spectrum access (DSA) networks, cooperation and centralized control are unavailable, and DSA users have to carry out wireless transmissions individually. DSA users have to learn other users' behaviors by sensing and analyzing the wireless environment, so that they can adjust their parameters properly and carry out effective wireless transmissions. In this thesis, machine learning and deep learning technologies are leveraged in DSA networks to enable appropriate and intelligent spectrum management, including both spectrum access and power allocation. Accordingly, a novel spectrum management framework utilizing deep reinforcement learning is proposed, in which deep reinforcement learning is employed to accurately learn wireless environments and generate optimal spectrum management strategies that adapt to the variations of those environments. Due to the model-free nature of reinforcement learning, DSA users only need to interact directly with the environment to obtain optimal strategies, rather than relying on accurate channel estimation. In this thesis, Q-learning, a type of reinforcement learning, is adopted to design the spectrum management framework. For more efficient and accurate learning, powerful neural networks (NNs) are employed to combine Q-learning and deep learning, also referred to as a deep Q-network (DQN). The selection of the NN is crucial for the performance of the DQN, since different types of NNs possess different properties and are applicable to different application scenarios. Therefore, the optimal way to construct the DQN is also analyzed and studied in this thesis. Finally, extensive simulation studies demonstrate that the proposed spectrum management framework enables users to perform proper spectrum management and achieve better performance.
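A minimal, hypothetical sketch of the Q-learning idea behind such a framework: a DSA user repeatedly picks one of a few channels, is rewarded when the channel is free of the primary user, and updates a value table. The thesis uses a deep Q-network over a richer state space; this stateless, tabular toy with made-up occupancy probabilities only illustrates the update rule and the epsilon-greedy policy.

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels = 4
pu_busy_prob = np.array([0.9, 0.6, 0.3, 0.1])   # hypothetical PU occupancy per channel

q = np.zeros(n_channels)          # value of selecting each channel
alpha, epsilon = 0.1, 0.1

for step in range(5000):
    # epsilon-greedy channel selection
    a = rng.integers(n_channels) if rng.random() < epsilon else int(np.argmax(q))
    # reward: +1 for a successful transmission, -1 for colliding with the PU
    r = -1.0 if rng.random() < pu_busy_prob[a] else 1.0
    # Q-learning style update (stateless here, so the target is just the reward)
    q[a] += alpha * (r - q[a])

print("learned channel values:", np.round(q, 2))
print("preferred channel:", int(np.argmax(q)))   # should be the least-occupied channel
```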
4

[en] USE OF ARTIFICIAL NEURAL NETWORK MODELS FOR FAULT DETECTION AND DIAGNOSIS OF TENNESSEE EASTMAN PROCESS / [pt] USO DE MODELOS DE REDES NEURAIS ARTIFICIAIS PARA DETECÇÃO DE FALHAS NO PROCESSO TENNESSEE EASTMAN

DANIEL LERNER 18 March 2019 (has links)
[en] Humanity is experiencing the Fourth Industrial Revolution, characterized by the global implementation of the internet, the use of artificial intelligence, and the automation of processes. The last of these is of great importance for the chemical industry, since its development has allowed a significant increase in the amount of data stored daily, which in turn has generated a demand for analyzing this data. This enormous flow of information has made the system more and more complex, with randomly occurring process faults that, if identified, could help improve the process and prevent accidents. A solution not yet common in industry, but with great potential to identify these process faults, is artificial intelligence. To address this issue, the present work performs fault detection and diagnosis in industrial processes through artificial neural network modeling. The database was obtained using the Tennessee Eastman process benchmark, implemented in Matlab 2017b, which is designed to simulate a complete chemical plant. The huge amount of data generated by the process made it possible to run the simulation in a Big Data context. For data modeling, both traditional feedforward neural networks and recurrent networks (the Elman network and the Echo State Network) were applied. The results indicated that the feedforward and Elman networks obtained the better performances as measured by the coefficient of determination (R2). The first model obtained its best topology with 37x60x70x1, the trainlm training algorithm, tansig activation functions for the two intermediate layers, and an output layer activated by the purelin function, with an R2 of 88.69 percent. The Elman network model presented its best topology with 37x45x55x1, the trainlm training algorithm, tansig activation functions for the two intermediate layers, and an output layer activated by the purelin function, with an R2 of 83.63 percent. It was concluded that the analyzed networks can be used for predictive fault control in industrial processes and can be applied to chemical plants in the future.
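The networks above are compared by the coefficient of determination R2. A short sketch of how that score is computed, using scikit-learn and a generic MLP regressor as a stand-in for the MATLAB feedforward network; the topology loosely mirrors the 37x60x70x1 reported above, while the data and target are synthetic placeholders rather than Tennessee Eastman variables.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic stand-in for process data: 37 sensor variables, one fault indicator.
X = rng.standard_normal((2000, 37))
y = np.tanh(X[:, 0] - 0.5 * X[:, 5] + 0.2 * X[:, 12]) + 0.05 * rng.standard_normal(2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Two tanh hidden layers, roughly echoing the 37x60x70x1 topology above.
model = MLPRegressor(hidden_layer_sizes=(60, 70), activation="tanh",
                     max_iter=2000, random_state=0).fit(X_tr, y_tr)

# R^2 = 1 - SS_res / SS_tot, evaluated on held-out data.
print("R2 on held-out data:", r2_score(y_te, model.predict(X_te)))
```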
5

[en] NEUROEVOLUTIONARY MODELS WITH ECHO STATE NETWORKS APPLIED TO SYSTEM IDENTIFICATION / [pt] MODELOS NEUROEVOLUCIONÁRIOS COM ECHO STATE NETWORKS APLICADOS À IDENTIFICAÇÃO DE SISTEMAS

PAULO ROBERTO MENESES DE PAIVA 11 January 2019 (has links)
[en] Through System Identification techniques it is possible to obtain a mathematical model of a dynamic system from its input/output data alone. Because of their intrinsically dynamic behavior and their simple and fast training procedure, Echo State Networks (ESNs), a kind of recurrent neural network, are advantageous for System Identification. However, ESNs have global parameters that must be tuned in order to achieve good performance on a given task, and a randomly initialized reservoir is not always ideal in terms of performance. Because they can, in theory, obtain good solutions with few evaluations, the Real-Coded Quantum-Inspired Evolutionary Algorithm (QIEA-R) and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) are efficient evolutionary alternatives for optimizing ESN global parameters and/or weights. This work therefore proposes a neuro-evolutionary method that automatically defines an ESN for System Identification problems. The method first finds the best ESN global parameters using the QIEA-R or CMA-ES and then, in a second stage, selects the best reservoir, either through a second optimization focused on some of the reservoir weights or through a simple choice among networks with random reservoirs. The method was applied to 9 benchmark problems in System Identification, showing good results when compared to traditional methods.
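A hedged sketch of the kind of evolutionary hyper-parameter search described above, using a simple (1+lambda) evolution strategy in place of the QIEA-R or CMA-ES algorithms; `evaluate_esn` is a hypothetical fitness function (in practice the validation error of an ESN built with the candidate global parameters), replaced here by a toy function with a known optimum.

```python
import numpy as np

rng = np.random.default_rng(4)

def evaluate_esn(spectral_radius, leak_rate, input_scaling):
    """Hypothetical fitness: validation error of an ESN with these global parameters.
    Replaced here by a smooth toy function with a known optimum at (0.9, 0.3, 0.5)."""
    return (spectral_radius - 0.9) ** 2 + (leak_rate - 0.3) ** 2 + (input_scaling - 0.5) ** 2

# (1 + lambda) evolution strategy over the three global parameters.
parent = np.array([0.5, 0.5, 1.0])
sigma, n_offspring = 0.2, 10
best = evaluate_esn(*parent)

for generation in range(50):
    offspring = parent + sigma * rng.standard_normal((n_offspring, 3))
    offspring = np.clip(offspring, 1e-3, 2.0)            # keep parameters in a sane range
    fitness = np.array([evaluate_esn(*o) for o in offspring])
    if fitness.min() < best:                              # keep the best child if it improves
        best, parent = fitness.min(), offspring[np.argmin(fitness)]

print("best (spectral radius, leak rate, input scaling):", np.round(parent, 3))
```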
6

[pt] ESNPREDICTOR: FERRAMENTA DE PREVISÃO DE SÉRIES TEMPORAIS BASEADA EM ECHO STATE NETWORKS OTIMIZADAS POR ALGORITMOS GENÉTICOS E PARTICLE SWARM OPTIMIZATION / [en] ESNPREDICTOR: TIME SERIES FORECASTING APPLICATION BASED ON ECHO STATE NETWORKS OPTIMIZED BY GENETICS ALGORITHMS AND PARTICLE SWARM OPTIMIZATION

CAMILO VELASCO RUEDA 18 June 2015 (has links)
[en] Time series forecasting is critical to short-, medium-, and long-term decision making in several areas, such as the electrical sector, the stock market, meteorology, and industry. Different techniques and models exist today for producing these forecasts, but statistical tools are the most widely used, mainly because they offer greater interpretability through their underlying mathematical models. However, computational intelligence techniques are increasingly being applied to time series forecasting, with Artificial Neural Networks (ANNs) and Fuzzy Inference Systems (FIS) standing out. Recently a new type of ANN was created, the Echo State Network (ESN), which differs from classic ANNs by having a randomly connected hidden layer called the Reservoir. The Reservoir is activated by the network inputs and by its own previous states, generating the Echo State effect and giving the network more dynamism and better performance on tasks of a temporal nature. One difficulty with these networks is the presence of several parameters, such as the Spectral Radius, Reservoir Size, and Connection Percentage, which must be calibrated for the ESN to deliver good results. Therefore, the main objective of this work is to develop a computational tool capable of forecasting time series, based on ESNs, with automatic parameter adjustment by Particle Swarm Optimization (PSO) and Genetic Algorithms (GA), making it easier for the user to apply. The developed tool offers an intuitive and friendly graphical interface, both for modeling the ESN and for performing any pre-processing on the series to be forecast.
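A small sketch of the PSO-based parameter adjustment such a tool automates, over the three parameters named above (spectral radius, reservoir size, connection percentage). The fitness function is a hypothetical placeholder for the forecasting error of an ESN built with those values, and the swarm settings are generic defaults, not the ones used by ESNPredictor.

```python
import numpy as np

rng = np.random.default_rng(5)

def forecast_error(spectral_radius, reservoir_size, connectivity):
    """Hypothetical placeholder for the validation error of an ESN forecaster."""
    return ((spectral_radius - 0.95) ** 2
            + (reservoir_size / 1000.0 - 0.4) ** 2
            + (connectivity - 0.1) ** 2)

# Particle Swarm Optimization over (spectral radius, reservoir size, connection %).
lo, hi = np.array([0.1, 50, 0.01]), np.array([1.5, 1000, 0.5])
n_particles, w, c1, c2 = 20, 0.7, 1.5, 1.5

pos = rng.uniform(lo, hi, (n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([forecast_error(*p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]

for it in range(100):
    r1, r2 = rng.random((2, n_particles, 3))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([forecast_error(*p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("best (spectral radius, reservoir size, connection %):", np.round(gbest, 3))
```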
7

[en] ESN-GA-SRG HYBRID MODEL: AN OPTIMIZATION AND TOPOLOGY SELECTION APPROACH IN ECHO STATE NETWORKS FOR TIME SERIES FORECASTING / [pt] MODELO HÍBRIDO ESN-GA-SRG: UMA ABORDAGEM DE OTIMIZAÇÃO E SELEÇÃO DE TOPOLOGIAS EM ECHO STATE NETWORKS PARA PREVISÃO DE SÉRIES TEMPORAIS

CESAR HERNANDO VALENCIA NINO 05 January 2023 (has links)
[en] The use of computational intelligence models for multi-step time series forecasting has produced results that make these models viable alternatives for this type of problem. Driven by computational requirements and performance improvements, new research areas have recently emerged in the scientific community. This is the case of Reservoir Computing, which opens new fields of study for recurrent neural networks, which in the past were not widely used because of their training complexity and high computational cost. This area includes models such as the Liquid State Machine and Echo State Networks, which provide a new understanding of dynamic processing in recurrent networks and propose training methods with low computational cost. The research focus of this work is the optimization of global parameters in the design of Echo State Networks. Although Echo State Networks have been studied by recognized researchers, they still exhibit behavior that is not fully understood, partly because of their dynamic nature, but also because of the lack of studies that deepen the understanding of the generated states. Building on the Separation Ratio Graph model for performance analysis, a new model, called ESN-GA-SRG, is proposed, which combines ESNs with global parameter optimization by GA and reservoir topology selection through state analysis using SRG. The performance of this new model is evaluated on forecasting the 11 series that make up the reduced version of the NN3 Forecasting Competition and 36 series from the M3 competition, selected according to sampling periodicity, skewness, seasonality, and stationarity. The performance of the ESN-GA-SRG model in forecasting these time series was superior in most cases, with statistical significance, when compared with other models in the literature.
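The state analysis above hinges on how well reservoir states separate different inputs. A hedged sketch of one simple separation measure (the ratio of inter-class to intra-class distances of final reservoir states) follows; it illustrates the general idea only and is not the Separation Ratio Graph model itself, and all sizes and signals are synthetic assumptions.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)
n_res = 100

W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # rescale to spectral radius 0.9

def final_state(u):
    """Final reservoir state after driving the network with the sequence u."""
    x = np.zeros(n_res)
    for u_t in u:
        x = np.tanh(W_in[:, 0] * u_t + W @ x)
    return x

# Two classes of toy input sequences (different frequencies), several noisy examples each.
t = np.arange(200)
class_a = [np.sin(0.10 * t) + 0.05 * rng.standard_normal(t.size) for _ in range(5)]
class_b = [np.sin(0.25 * t) + 0.05 * rng.standard_normal(t.size) for _ in range(5)]
states_a = [final_state(u) for u in class_a]
states_b = [final_state(u) for u in class_b]

def pair_dists(xs, ys=None):
    """Pairwise distances within one set (ys=None) or between two sets."""
    if ys is None:
        return [np.linalg.norm(a - b) for a, b in combinations(xs, 2)]
    return [np.linalg.norm(a - b) for a in xs for b in ys]

intra = np.mean(pair_dists(states_a) + pair_dists(states_b))
inter = np.mean(pair_dists(states_a, states_b))
print("separation ratio (inter-class / intra-class):", round(inter / intra, 3))
```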
8

Dynamics and correlations in sparse signal acquisition

Charles, Adam Shabti 08 June 2015 (has links)
One of the most important capabilities of engineered and biological systems is the ability to acquire and interpret information from the surrounding world accurately and on time-scales relevant to the tasks critical to system performance. This classical concept of efficient signal acquisition has been a cornerstone of signal processing research, spawning traditional sampling theorems (e.g. Shannon-Nyquist sampling), efficient filter designs (e.g. the Parks-McClellan algorithm), novel VLSI chipsets for embedded systems, and optimal tracking algorithms (e.g. Kalman filtering). Traditional techniques have made minimal assumptions on the actual signals that were being measured and interpreted, essentially only assuming a limited bandwidth. While these assumptions have provided the foundational works in signal processing, recently the ability to collect and analyze large datasets has allowed researchers to see that many important signal classes have much more regularity than merely having finite bandwidth. One of the major advances of modern signal processing is to greatly improve on classical signal processing results by leveraging more specific signal statistics. By assuming even very broad classes of signals, signal acquisition and recovery can be greatly improved in regimes where classical techniques are extremely pessimistic. One of the most successful signal assumptions that has gained popularity in recent years is the notion of sparsity. Under the sparsity assumption, the signal is assumed to be composed of a small number of atomic signals from a potentially large dictionary. This limit on the underlying degrees of freedom (the number of atoms used), as opposed to the ambient dimension of the signal, has allowed for improved signal acquisition, in particular when the number of measurements is severely limited. While techniques for leveraging sparsity have been explored extensively in many contexts, work in this regime typically concentrates on static measurement systems, which result in static measurements of static signals. Many systems, however, have non-trivial dynamic components, either in the measurement system's operation or in the nature of the signal being observed. Due to the promising prior work leveraging sparsity for signal acquisition and the large number of dynamical systems and signals in many important applications, it is critical to understand whether sparsity assumptions are compatible with dynamical systems. Therefore, this work seeks to understand how dynamics and sparsity can be used jointly in various aspects of signal measurement and inference. Specifically, this work looks at three different ways that dynamical systems and sparsity assumptions can interact. In terms of measurement systems, we analyze a dynamical neural network that accumulates signal information over time. We prove a series of bounds on the length of the input signal driving the network that can be recovered from the values at the network nodes [1-9]. We also analyze sparse signals that are generated via a dynamical system (i.e. a series of correlated, temporally ordered, sparse signals). For this class of signals, we present a series of inference algorithms that leverage both dynamics and sparsity information, improving the potential for signal recovery in a host of applications [10-19]. As an extension of dynamical filtering, we show how these dynamic filtering ideas can be expanded to the broader class of spatially correlated signals. Specifically, we explore how sparsity and spatial correlations can improve the inference of material distributions and spectral super-resolution in hyperspectral imagery [20-25]. Finally, we analyze dynamical systems that perform optimization routines for sparsity-based inference. We analyze a networked system driven by a continuous-time differential equation and show that such a system is capable of recovering a large variety of different sparse signal classes [26-30].
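A brief sketch of the kind of sparsity-based inference this thesis builds on: recovering a sparse vector from a small number of linear measurements by l1-regularized least squares, solved here with the iterative soft-thresholding algorithm (ISTA). It is a generic illustration under made-up problem sizes, not the dynamic-filtering algorithms developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m, k = 200, 60, 5                 # ambient dimension, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x_true + 0.01 * rng.standard_normal(m)

# ISTA: minimize 0.5 * ||y - A x||^2 + lam * ||x||_1
lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

print("support recovered:",
      set(np.flatnonzero(np.abs(x) > 0.05)) == set(np.flatnonzero(x_true)))
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```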
9

Maximalizace výpočetní síly neuroevolucí / Maximizing Computational Power by Neuroevolution

Matzner, Filip January 2016 (has links)
Echo state networks represent a special type of recurrent neural network. Recent papers have stated that echo state networks maximize their computational performance at the transition between order and chaos, the so-called edge of chaos. This work confirms this statement in a comprehensive set of experiments. Afterwards, the best-performing echo state network is compared to a network evolved via neuroevolution. The evolved network outperforms the best echo state network; however, the evolution consumes significant computational resources. By combining the best of both worlds, the simplicity of echo state networks and the performance of evolved networks, a new model called locally connected echo state networks is proposed. The results of this thesis may have an impact on future designs of echo state networks and the efficiency of their implementation. Furthermore, the findings may improve the understanding of biological brain tissue.
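A hedged numerical illustration of the edge-of-chaos observation, not the experiments of the thesis: sweep the spectral radius of a random reservoir and measure how fast two trajectories started from nearly identical states diverge under the same input drive. Below the transition the perturbation dies out (negative exponent); above it, it grows. Sizes, inputs, and thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
n_res, T = 200, 500
u = rng.uniform(-1, 1, T)                 # common random input drive
W0 = rng.standard_normal((n_res, n_res))
W0 /= max(abs(np.linalg.eigvals(W0)))     # unit spectral radius, rescaled below
W_in = rng.uniform(-0.1, 0.1, n_res)

def divergence(radius, eps=1e-8):
    """Growth rate of a tiny state perturbation for a reservoir with the given spectral radius."""
    W = radius * W0
    x1 = np.zeros(n_res)
    v = rng.standard_normal(n_res)
    x2 = x1 + eps * v / np.linalg.norm(v)   # perturbation of norm eps
    for u_t in u:
        x1 = np.tanh(W @ x1 + W_in * u_t)
        x2 = np.tanh(W @ x2 + W_in * u_t)
    d = np.linalg.norm(x1 - x2)
    return np.log(max(d, 1e-300) / eps) / T  # rough Lyapunov-style exponent

for radius in (0.5, 0.9, 1.0, 1.1, 1.5):
    print(f"spectral radius {radius:.1f}: divergence exponent {divergence(radius):+.4f}")
```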
10

Applying Reservoir Computing for Driver Behavior Analysis and Traffic Flow Prediction in Intelligent Transportation Systems

Sethi, Sanchit 05 June 2024 (has links)
In the realm of autonomous vehicles, ensuring safety through advanced anomaly detection is crucial. This thesis integrates Reservoir Computing with temporal-aware data analysis to enhance driver behavior assessment and traffic flow prediction. Our approach combines Reservoir Computing with autoencoder-based feature extraction to analyze driving metrics from vehicle sensors, capturing complex temporal patterns efficiently. Additionally, we extend our analysis to forecast traffic flow dynamics within road networks using the same framework. We evaluate our model using the PEMS-BAY and METR-LA datasets, which encompass diverse traffic scenarios, along with a GPS dataset of 10,000 taxis that provides real-world driving dynamics. Through a support vector machine (SVM) algorithm, we categorize drivers based on their performance, offering insights for tailored anomaly detection strategies. This research advances anomaly detection for autonomous vehicles, promoting safer driving experiences and the evolution of vehicle safety technologies. By integrating Reservoir Computing with temporal-aware data analysis, this thesis contributes to both driver behavior assessment and traffic flow prediction, addressing critical aspects of autonomous vehicle systems. / Master of Science / Our cities are constantly growing, and traffic congestion is a major challenge. This project explores how innovative technology can help us predict traffic patterns and develop smarter management strategies. Inspired by the rigorous safety systems being developed for self-driving cars, we'll delve into the world of machine learning. By combining advanced techniques for identifying unusual traffic patterns with tools that analyze data over time, we'll gain a deeper understanding of traffic flow and driver behavior. We'll utilize data collected by car sensors, such as speed and turning patterns, not only to predict traffic jams but also to see how drivers react in different situations. However, our project has a broader scope than just traffic flow. We aim to leverage this framework to understand driver behavior in general, with a particular focus on its implications for self-driving vehicles. Through meticulous data analysis and sophisticated algorithms, we can categorize drivers based on their performance. This valuable information can be used to develop improved methods for detecting risky situations, ultimately leading to safer roads and smoother traffic flow for everyone. To ensure the effectiveness of our approach, we'll rigorously test it using real-world GPS data from taxi fleets and nationally recognized traffic datasets. By harnessing the power of machine learning and tools that can adapt to changing data patterns, this project has the potential to revolutionize traffic management in cities. This paves the way for a future with safer roads, less congestion, and a more positive experience for everyone who lives in and travels through our bustling urban centers.
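A hedged, end-to-end sketch of the pipeline described above: drive a fixed random reservoir with per-trip sensor sequences (speed, turning rate), use the time-averaged reservoir state as a feature vector, and classify driver type with an SVM. Data, labels, and sizes are synthetic placeholders, not the PEMS-BAY, METR-LA, or taxi GPS data used in the thesis, and the autoencoder stage is omitted.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
n_res, trip_len = 100, 300

W_in = rng.uniform(-0.5, 0.5, (n_res, 2))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # rescale to spectral radius 0.9

def reservoir_features(trip):
    """Time-averaged reservoir state for one trip (T x 2 array of speed, turn rate)."""
    x, acc = np.zeros(n_res), np.zeros(n_res)
    for u_t in trip:
        x = np.tanh(W_in @ u_t + W @ x)
        acc += x
    return acc / len(trip)

def synthetic_trip(aggressive):
    """Toy trip: aggressive drivers show larger speed swings and sharper turns."""
    scale = 1.5 if aggressive else 0.7
    speed = scale * np.abs(np.cumsum(rng.standard_normal(trip_len))) / 10.0
    turn = scale * rng.standard_normal(trip_len)
    return np.column_stack([speed, turn])

labels = rng.integers(0, 2, 200)                       # 0 = calm, 1 = aggressive
features = np.array([reservoir_features(synthetic_trip(bool(l))) for l in labels])

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```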
