1 |
Probabilistic models for melodic sequences. Spiliopoulou, Athina. January 2013.
Structure is one of the fundamentals of music, yet the complexity arising from the vast number of possible variations of musical elements such as rhythm, melody, harmony, key, texture and form, along with their combinations, makes music modelling a particularly challenging task for machine learning. The research presented in this thesis focuses on the problem of learning a generative model for melody directly from musical sequences belonging to the same genre. Our goal is to develop probabilistic models that can automatically capture the complex statistical dependencies evident in music without the need to incorporate significant domain-specific knowledge. At all stages we avoid making assumptions specific to music and consider models that can be readily applied to different music genres and easily adapted to other sequential data domains. We develop the Dirichlet Variable-Length Markov Model (Dirichlet-VMM), a Bayesian formulation of the Variable-Length Markov Model (VMM), in which smoothing is performed in a systematic probabilistic manner. The model is a general-purpose, dictionary-based predictor with a formal smoothing technique and is shown to perform significantly better than the standard VMM in melody modelling. Motivated by the ability of the Restricted Boltzmann Machine (RBM) to extract high-quality latent features in an unsupervised manner, we next develop the Time-Convolutional Restricted Boltzmann Machine (TC-RBM), a novel adaptation of the Convolutional RBM for modelling sequential data. We show that the TC-RBM learns descriptive musical features such as chords, octaves and typical melody-movement patterns. To deal with the non-stationarity of music, we develop the Variable-gram Topic Model, which employs the Dirichlet-VMM to parametrise the topic distributions. The Dirichlet-VMM models the local temporal structure, while the latent topics represent different music regimes.
The model makes no assumptions specific to music, but it is particularly suitable in this context, as it couples the latent-topic formalism with an expressive model of contextual information.
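The core idea behind a variable-length Markov predictor with Dirichlet (pseudo-count) smoothing can be illustrated with a toy sketch. The class below is a simplified illustration under assumed details (a fixed maximum order, a single symmetric pseudo-count `alpha`, and back-off to the longest seen context), not the thesis model:

```python
from collections import defaultdict

class DirichletVMM:
    """Toy variable-length Markov predictor with Dirichlet (additive) smoothing.

    A simplified sketch of the idea, not the thesis implementation: counts are
    kept for every context up to `max_order`, and prediction backs off to the
    longest context observed in training, smoothing each conditional
    distribution with pseudo-count `alpha`.
    """

    def __init__(self, alphabet, max_order=3, alpha=0.5):
        self.alphabet = list(alphabet)
        self.max_order = max_order
        self.alpha = alpha
        self.counts = defaultdict(lambda: defaultdict(int))

    def fit(self, sequence):
        for i, symbol in enumerate(sequence):
            for k in range(self.max_order + 1):
                if i - k < 0:
                    break
                context = tuple(sequence[i - k:i])
                self.counts[context][symbol] += 1

    def predict_proba(self, context):
        context = tuple(context[-self.max_order:])
        # Back off to the longest suffix of the context seen during training.
        while context not in self.counts and context:
            context = context[1:]
        table = self.counts[context]
        total = sum(table.values()) + self.alpha * len(self.alphabet)
        return {s: (table[s] + self.alpha) / total for s in self.alphabet}

# Hypothetical usage on a short pitch-class sequence.
model = DirichletVMM("CDEFGAB", max_order=2)
model.fit(list("CDECDECDEFG"))
probs = model.predict_proba(list("CD"))
# After context "C D", the smoothed distribution should favour "E".
```

The pseudo-count `alpha` plays the role of a Dirichlet prior: contexts with few observations stay close to uniform, while well-observed contexts approach their empirical conditionals.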
|
2 |
Interpreting Faces with Neurally Inspired Generative Models. Susskind, Joshua Matthew. 31 August 2011.
Becoming a face expert takes years of learning and development. Many research programs are devoted to studying face perception, particularly given its prerequisite role in social interaction, yet its fundamental neural operations are poorly understood. One reason is that there are many possible explanations for a change in facial appearance, such as lighting, expression, or identity. Despite general agreement that the brain extracts multiple layers of feature detectors arranged into hierarchies to interpret causes of sensory information, very little work has been done to develop computational models of these processes, especially for complex stimuli like faces. The studies presented in this thesis used nonlinear generative models developed within machine learning to solve several face perception problems. Applying a deep hierarchical neural network, we showed that it is possible to learn representations capable of perceiving facial actions, expressions, and identities, better than similar non-hierarchical architectures. We then demonstrated that a generative architecture can be used to interpret high-level neural activity by synthesizing images in a top-down pass. Using this approach we showed that deep layers of a network can be activated to generate faces corresponding to particular categories. To facilitate training models to learn rich and varied facial features, we introduced a new expression database with the largest number of labeled faces collected to date. We found that a model trained on these images learned to recognize expressions comparably to human observers. Next we considered models trained on pairs of images, making it possible to learn how faces change appearance to take on different expressions. Modeling higher-order associations between images allowed us to efficiently match images of the same type according to a learned pairwise similarity measure. 
These models performed well on several tasks, including matching expressions and identities, and demonstrated performance superior to competing models. In sum, these studies showed that neural networks that extract highly nonlinear features from images using architectures inspired by the brain can solve difficult face perception tasks with minimal guidance by human experts.
|
4 |
An intelligent search for feature interactions using Restricted Boltzmann Machines. Bertholds, Alexander; Larsson, Emil. January 2013.
Klarna uses a logistic regression to estimate the probability that an e-store customer will default on the credit it has been given. The logistic regression is a linear statistical model and therefore cannot detect non-linearities in the data. The aim of this project has been to develop a program that can be used to find suitable non-linear interaction variables. This can be achieved using a Restricted Boltzmann Machine, an unsupervised neural network whose hidden nodes can be used to model the distribution of the data. By using the hidden nodes as new variables in the logistic regression it is possible to see which nodes have the greatest impact on the probability-of-default estimates. The contents of the hidden nodes, corresponding to different parts of the data distribution, can be used to find suitable interaction variables that allow non-linearities to be modelled. It was possible to find the data distribution using the Restricted Boltzmann Machine, and adding its hidden nodes to the logistic regression improved the model's ability to predict the probability of default. The hidden nodes could be used to create interaction variables that improve Klarna's internal models for credit-risk estimation. / [Swedish abstract, translated:] Klarna uses a logistic regression to estimate the probability that an e-commerce customer will not pay their invoices after being granted credit. The logistic regression is a linear model and can therefore not detect non-linearities in the data. The goal of this project has been to develop a program that can be used to find suitable non-linear interaction variables. By introducing these into the logistic regression it becomes possible to detect non-linearities in the data and thereby improve the probability estimates. The developed program uses Restricted Boltzmann Machines, a type of unsupervised neural network, whose hidden nodes can be used to find the distribution of the data.
By using the hidden nodes in the logistic regression it is possible to see which parts of the distribution are most important for the probability estimates. The contents of the hidden nodes, which correspond to different parts of the data distribution, can be used to find suitable interaction variables. It was possible to find the distribution of the data using a Restricted Boltzmann Machine, and its hidden nodes improved the probability estimates from the logistic regression. The hidden nodes could be used to create interaction variables that improve Klarna's internal credit-risk models.
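The two-stage pipeline described above, unsupervised RBM features feeding a logistic regression, can be approximated with standard scikit-learn components. The sketch below uses synthetic binary data with an XOR-style target as a stand-in for Klarna's proprietary credit data; all sizes and hyperparameters are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Synthetic binary "customer feature" data with a non-linear (XOR-like)
# target, standing in for the real credit data, which is not public.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 10)).astype(float)
y = np.logical_xor(X[:, 0] > 0, X[:, 1] > 0).astype(int)

# The RBM's hidden units act as learned, potentially non-linear features;
# the logistic regression then operates on those features instead of (or in
# addition to) the raw inputs, as in the project described above.
pipeline = Pipeline([
    ("rbm", BernoulliRBM(n_components=16, learning_rate=0.05,
                         n_iter=30, random_state=0)),
    ("logreg", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X, y)
acc = pipeline.score(X, y)
```

Inspecting the fitted logistic-regression coefficients then shows which hidden nodes carry the most weight, which is the signal the authors use to hunt for interaction variables.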
|
5 |
An Evolutionary Approximation to Contrastive Divergence in Convolutional Restricted Boltzmann Machines. McCoppin, Ryan R. January 2014.
No description available.
|
6 |
Study of Critical Phenomena with Monte Carlo and Machine Learning Techniques. Azizi, Ahmadreza. 08 July 2020.
Dynamical properties of non-equilibrium systems, similar to equilibrium ones, have been shown to obey robust time scaling laws which have enriched the concept of physical universality classes. In the first part of this Dissertation, we present the results of our investigations of some of the critical dynamical properties of systems belonging to the Voter or the Directed Percolation (DP) universality class. To be more precise, we focus on the aging properties of two-state and three-state Potts models with absorbing states and we determine temporal scaling of autocorrelation and autoresponse functions.
We propose a novel microscopic model which exhibits non-equilibrium critical points belonging to the Voter, DP and Ising Universality classes. We argue that our model has properties similar to the Generalized Voter Model (GVM) in its Langevin description. Finally, we study the time evolution of the width of interfaces separating different absorbing states.
The second part of this Dissertation is devoted to the applications of Machine Learning models in physical systems. First, we show that a trained Convolutional Neural Network (CNN) using configurations from the Ising model with conserved magnetization is able to find the location of the critical point. Second, using as our training dataset configurations of Ising models with conserved or non-conserved magnetization obtained in importance sampling Monte Carlo simulations, we investigate the physical properties of configurations generated by the Restricted Boltzmann Machine (RBM) model.
The first part of this research was sponsored by the US Army Research Office and was accomplished under Grant Number W911NF-17-1-0156.
The second part of this work was supported by the United States National Science Foundation through grant DMR-1606814. / Doctor of Philosophy / Physical systems with equilibrium states contain common properties with which they are categorized in different universality classes. Similar to these equilibrium systems, non-equilibrium systems may obey robust scaling laws and lie in different dynamic universality classes. In the first part of this Dissertation, we investigate the dynamical properties of two important dynamic universality classes, the Directed Percolation universality class and the Generalized Voter universality class. These two universality classes include models with absorbing states. A good example of an absorbing state is found in the contact process for epidemic spreading when all individuals are infected. We also propose a microscopic model with tunable parameters which exhibits phase transitions belonging to the Voter, Directed Percolation and Ising universality classes. To identify these universality classes, we measure specific dynamic and static quantities, such as interface density at different values of the tunable parameters and show that the physical properties of these quantities are identical to what is expected for the different universal classes.
The second part of this Dissertation is devoted to the application of Machine Learning models in physical systems. Considering physical system configurations as input dataset for our machine learning pipeline, we extract properties of the input data through our machine learning models. As a supervised learning model, we use a deep neural network model and train it using configurations from the Ising model with conserved dynamics. Finally, we address the question whether generative models in machine learning (models that output objects that are similar to inputs) are able to produce new configurations with properties similar to those obtained from given physical models. To this end we train a well known generative model, the Restricted Boltzmann Machine (RBM), on Ising configurations with either conserved or non-conserved magnetization at different temperatures and study the properties of configurations generated by RBM.
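The importance-sampling Monte Carlo step that produces Ising training configurations for the RBM can be sketched as a standard single-spin-flip Metropolis simulation of the 2D Ising model. Lattice size, temperature and sweep count below are illustrative choices, not the dissertation's settings:

```python
import numpy as np

def metropolis_ising(L=16, T=2.5, sweeps=200, seed=0):
    """Generate one Ising configuration on an L x L lattice with periodic
    boundaries via Metropolis single-spin flips (J = 1, k_B = 1).
    A minimal sketch of the kind of importance-sampling Monte Carlo used to
    build an RBM training set, not the thesis code."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            # Energy change for flipping spin (i, j):
            # dE = 2 * s_ij * sum(nearest neighbours)
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nb
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1
    return spins

config = metropolis_ising()
magnetisation = abs(config.mean())
```

Sampling many such configurations at different temperatures yields the labelled snapshots that a CNN or RBM can then be trained on.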
|
7 |
Learning Latent Temporal Manifolds for Recognition and Prediction of Multiple Actions in Streaming Videos using Deep Networks. Nair, Binu Muraleedharan. 03 June 2015.
No description available.
|
8 |
Generative neural networks with structure (Réseaux de neurones génératifs avec structure). Côté, Marc-Alexandre. January 2017.
This thesis focuses on generative models in machine learning. Two new neural-network-based models are proposed. The first has an internal representation on which a structure is imposed in order to organise the learned features. The second exploits the topological structure of the observed data and takes it into account during the generative phase.
The thesis also presents one of the first applications of machine learning to the problem of brain tractography. To this end, a recurrent neural network is applied to diffusion data to obtain a representation of white-matter fibres as sequences of points in three dimensions.
|
9 |
Neurocomputational model for learning, memory consolidation and schemas. Dupuy, Nathalie. January 2018.
This thesis investigates how through experience the brain acquires and stores memories, and uses these to extract and modify knowledge. This question is being studied by both computational and experimental neuroscientists as it is of relevance for neuroscience, but also for artificial systems that need to develop knowledge about the world from limited, sequential data. It is widely assumed that new memories are initially stored in the hippocampus, and later are slowly reorganised into distributed cortical networks that represent knowledge. This memory reorganisation is called systems consolidation. In recent years, experimental studies have revealed complex hippocampal-neocortical interactions that have blurred the lines between the two memory systems, challenging the traditional understanding of memory processes. In particular, the prior existence of cortical knowledge frameworks (also known as schemas) was found to speed up learning and consolidation, which seemingly is at odds with previous models of systems consolidation. However, the underlying mechanisms of this effect are not known. In this work, we present a computational framework to explore potential interactions between the hippocampus, the prefrontal cortex, and associative cortical areas during learning as well as during sleep. To model the associative cortical areas, where the memories are gradually consolidated, we have implemented an artificial neural network (Restricted Boltzmann Machine) so as to get insight into potential neural mechanisms of memory acquisition, recall, and consolidation. We analyse the network's properties using two tasks inspired by neuroscience experiments. The network gradually built a semantic schema in the associative cortical areas through the consolidation of multiple related memories, a process promoted by hippocampal-driven replay during sleep. 
To explain the experimental data we suggest that, as the neocortical schema develops, the prefrontal cortex extracts characteristics shared across multiple memories. We call this information a meta-schema. In our model, the semantic schema and meta-schema in the neocortex are used to compute consistency, conflict and novelty signals. We propose that the prefrontal cortex uses these signals to modulate memory formation in the hippocampus during learning, which in turn influences consolidation during sleep replay. Together, these results provide a theoretical framework to explain experimental findings and produce predictions for hippocampal-neocortical interactions during learning and systems consolidation.
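The role of the Restricted Boltzmann Machine as a gradually consolidating memory store can be illustrated with a minimal CD-1 training loop followed by cue-driven pattern completion, loosely analogous to replay-driven recall. The network sizes, learning rate and the two toy "memories" below are assumptions for illustration, not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny binary RBM trained with one-step contrastive divergence (CD-1).
# A schematic stand-in for the associative-cortex module described above.
n_visible, n_hidden, lr = 6, 4, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Two toy "memories" to be stored in the network's weights.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=float)

for _ in range(2000):
    v0 = data[rng.integers(len(data))]
    p_h0 = sigmoid(v0 @ W + b_h)                    # bottom-up recognition
    h0 = (rng.random(n_hidden) < p_h0).astype(float)
    p_v1 = sigmoid(h0 @ W.T + b_v)                  # top-down reconstruction
    p_h1 = sigmoid(p_v1 @ W + b_h)
    # CD-1 update: data statistics minus reconstruction statistics.
    W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
    b_v += lr * (v0 - p_v1)
    b_h += lr * (p_h0 - p_h1)

# Recall: present a partial cue and let the network complete the memory.
cue = np.array([1, 1, 0, 0, 0, 0], dtype=float)
recalled = sigmoid(sigmoid(cue @ W + b_h) @ W.T + b_v)
```

After training, the partial cue drives the hidden units toward the representation of the first memory, so the top-down pass fills in its missing elements, a toy analogue of pattern completion during recall.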
|
10 |
Exploration of Neural Coding in Rat's Agranular Medial and Agranular Lateral Cortices during Learning of a Directional Choice Task. January 2014.
Animals learn to choose a proper action among alternatives according to the circumstances. Through trial and error, animals improve their odds by making correct associations between their behavioral choices and external stimuli. While there is an extensive literature on the theory of learning, it is still unclear how individual neurons and neural networks adapt as learning progresses. In this dissertation, single units in the medial and lateral agranular (AGm and AGl) cortices were recorded as rats learned a directional choice task. The task required the rat to make a left/right lever press if a light cue appeared on the left/right side of the interface panel. Behavioral analysis showed that the rat's movement parameters during performance of directional choices became stereotyped very quickly (2-3 days), while learning to solve the directional choice problem took weeks. The entire learning process was further broken down into three stages, each comprising a similar number of recording sessions (days). Single-unit firing-rate analysis revealed that 1) directional rate modulation was observed in both cortices; 2) the average mean rate between left and right trials in the neural ensemble each day did not change significantly across the three learning stages; and 3) the rate difference between left and right trials of the ensemble did not change significantly either. In addition, for either left or right trials, the trial-to-trial firing variability of single neurons did not change significantly over the three stages. To explore the spatiotemporal neural pattern of the recorded ensemble, support vector machines (SVMs) were constructed each day to decode the direction of choice in single trials. Improved classification accuracy indicated enhanced discriminability between the neural patterns of left and right choices as learning progressed.
When a restricted Boltzmann machine (RBM) was used to extract features from the neural activity patterns, the results further supported the idea that neural firing patterns adapted during the three learning stages to facilitate the neural coding of directional choices. Taken together, these findings suggest a spatiotemporal neural coding scheme in the rat AGl and AGm ensemble that may be responsible for, and contribute to, learning the directional choice task. / Dissertation/Thesis / Ph.D. Electrical Engineering 2014
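The single-trial decoding step can be sketched with scikit-learn's SVC on synthetic firing-rate data. The neuron counts, Poisson rates and tuning offsets below are hypothetical stand-ins for the recorded AGm/AGl ensemble, chosen only to make the example self-contained:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the recorded ensemble: per-trial firing-rate
# vectors (n_trials x n_neurons) with a small direction-dependent shift on
# a subset of "tuned" units. The real analysis used the recorded data.
rng = np.random.default_rng(1)
n_trials, n_neurons = 120, 30
direction = rng.integers(0, 2, n_trials)          # 0 = left, 1 = right
rates = rng.poisson(5.0, size=(n_trials, n_neurons)).astype(float)
rates[:, :8] += 2.0 * direction[:, None]          # directionally tuned units

# Linear SVM decoder, evaluated by cross-validation as one would per session.
decoder = SVC(kernel="linear")
accuracy = cross_val_score(decoder, rates, direction, cv=5).mean()
# Chance level is 0.5; the tuned units should push accuracy well above that.
```

Tracking this cross-validated accuracy session by session is one simple way to quantify how discriminable the left/right ensemble patterns become over learning.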
|