About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Connectionist variable binding architectures

Stark, Randall J. January 1993 (has links)
No description available.
2

Adaptively-Halting RNN for Tunable Early Classification of Time Series

Hartvigsen, Thomas 11 November 2018 (has links)
Early time series classification is the task of predicting the class label of a time series before it is observed in its entirety. In time-sensitive domains where information is collected over time it is worth sacrificing some classification accuracy in favor of earlier predictions, ideally early enough for actions to be taken. However, since accuracy and earliness are contradictory objectives, a solution to this problem must find a task-dependent trade-off. There are two common state-of-the-art methods. The first involves an analyst selecting a timestep at which all predictions must be made. This does not capture earliness on a case-by-case basis, so if the selected timestep is too early, all later signals are missed, and if a signal happens early, the classifier still waits to generate a prediction. The second method is the exhaustive search for signals, which encodes no timing information and is not scalable to high dimensions or long time series. We design the first early classification model called EARLIEST to tackle this multi-objective optimization problem, jointly learning (1) to decide at which time step to halt and generate predictions and (2) how to classify the time series. Each of these is learned based on the task and data features. We achieve an analyst-controlled balance between the goals of earliness and accuracy by pairing a recurrent neural network that learns to classify time series as a supervised learning task with a stochastic controller network that learns a halting policy as a reinforcement learning task. The halting policy dictates sequential decisions, one per timestep, of whether or not to halt the recurrent neural network and classify the time series early. This pairing of networks optimizes a global objective function that incorporates both earliness and accuracy.
We validate our method via critical clinical prediction tasks in the MIMIC III database from the Beth Israel Deaconess Medical Center along with another publicly available time series classification dataset. We show that EARLIEST outperforms two state-of-the-art LSTM-based early classification methods. Additionally, we dig deeper into our model's performance using a synthetic dataset, which shows that EARLIEST learns to halt when it observes signals without having explicit access to signal locations. The contributions of this work are three-fold. First, our method is the first neural network-based solution to early classification of time series, bringing the recent successes of deep learning to this problem. Second, we present the first reinforcement-learning-based solution to the unsupervised nature of early classification, learning the underlying distributions of signals through trial and error, without access to this information. Third, we propose the first joint optimization of earliness and accuracy, allowing learning of complex relationships between these contradictory goals.
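The abstract does not include code, but the halting mechanism it describes can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' EARLIEST implementation; all weights and shapes are invented): a toy recurrent update paired with a controller that samples a Bernoulli stop/continue decision at every timestep.

```python
import numpy as np

def earliest_sketch(series, w_h, w_halt, w_cls, rng):
    """Run a toy RNN over `series`; at each step a controller turns the
    hidden state into a halting probability and samples a stop/continue
    decision.  Returns the predicted class and the halting timestep."""
    h = np.zeros(w_h.shape[0])
    t = 0
    for t, x_t in enumerate(series):
        h = np.tanh(w_h @ h + x_t)                    # toy recurrent update
        p_halt = 1.0 / (1.0 + np.exp(-(w_halt @ h)))  # controller: halt prob.
        if rng.random() < p_halt:                     # sample halting decision
            break                                     # classify early
    logits = w_cls @ h
    return int(np.argmax(logits)), t
```

In the actual model the two networks are trained jointly, with the controller rewarded under an objective that trades accuracy against the halting step; here the weights are fixed random matrices purely to show the control flow.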
3

A developmental approach to the study of affective bonds for human-robot interaction

Hiolle, Antoine January 2015 (has links)
Robotic agents are set to play an increasingly large role in our everyday lives. To be successfully integrated in our environment, robots will need to develop and display adaptive, robust, and socially suitable behaviours. To tackle these issues, the robotics research community has invested considerable effort in modelling robotic architectures inspired by research on living systems, from ethology to developmental psychology. Following a similar approach, this thesis presents the research results of the modelling and experimental testing of robotic architectures based on affective and attachment bonds between young infants and their primary caregiver. I follow a bottom-up approach to the modelling of such bonds, examining how they can promote the situated development of an autonomous robot. Specifically, the models used and the results from the experiments carried out in laboratory settings and with naive users demonstrate the impact such affective bonds have on the learning outcomes of an autonomous robot and on the perception and behaviour of humans. This research highlights the importance of the interplay between the dynamics of the regulatory behaviours performed by a robot and the responsiveness of the human partner. The coupling of such signals and behaviours in an attachment-like dyad determines the nature of the outcomes for the robot, in terms of learning or the satisfaction of other needs. The experiments carried out also demonstrate how the attachment system can help a robot adapt its own social behaviour to that of the human partners, as infants are thought to do during their development.
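As a toy illustration of the regulation dynamic described above (this is not Hiolle's architecture; the function, constants, and variable names are invented for the sketch), one can model an arousal level that rises with novel stimuli and is damped in proportion to the caregiver's responsiveness:

```python
def arousal_trace(novelty, responsiveness, decay=0.1, comfort_gain=0.5):
    """Toy homeostatic loop: each novel stimulus raises the robot's
    arousal; a caregiver's comforting response (scaled by their
    responsiveness) lowers it.  Returns the arousal level after each step."""
    arousal, trace = 0.0, []
    for n in novelty:
        arousal += n                          # novelty raises arousal
        comfort = responsiveness * arousal    # caregiver comforts in proportion
        arousal = max(0.0, (1.0 - decay) * arousal - comfort_gain * comfort)
        trace.append(arousal)
    return trace
```

Under this toy dynamic a responsive partner keeps arousal bounded while an unresponsive one lets it accumulate, mirroring the dyadic coupling the thesis emphasises.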
4

Functional Sensory Representations of Natural Stimuli: the Case of Spatial Hearing

Mlynarski, Wiktor 21 January 2015 (has links)
In this thesis I attempt to explain mechanisms of neuronal coding in the auditory system as a form of adaptation to the statistics of natural stereo sounds. To this end I analyse recordings of real-world auditory environments and construct novel statistical models of these data. I further compare regularities present in natural stimuli with known, experimentally observed neuronal mechanisms of spatial hearing. In a more general perspective, I use the binaural auditory system as a starting point to consider the notion of the function implemented by sensory neurons. In particular I argue for two closely related tenets: 1. The function of sensory neurons cannot be fully elucidated without understanding the statistics of the natural stimuli they process. 2. The function of sensory representations is determined by redundancies present in the natural sensory environment. I present evidence in support of the first tenet by describing and analysing the marginal statistics of natural binaural sound. I compare the observed, empirical distributions with knowledge from reductionist experiments. This comparison allows one to argue that the complexity of the spatial-hearing task in the natural environment is much higher than analytic, physics-based predictions suggest. I discuss the possibility that early brain stem circuits such as the LSO and MSO do not "compute sound localization" as is often claimed in the experimental literature. I propose that they instead perform a signal transformation, which constitutes the first step of a complex inference process. To support the second tenet I develop a hierarchical statistical model, which learns a joint sparse representation of amplitude and phase information from natural stereo sounds. I demonstrate that the learned higher-order features reproduce properties of auditory cortical neurons when probed with spatial sounds. The reproduced aspects were hypothesized to be a manifestation of a fine-tuned computation specific to the sound-localization task.
Here it is demonstrated that they instead reflect redundancies present in the natural stimulus. Taken together, the results presented in this thesis suggest that efficient coding is a useful strategy for discovering structures (redundancies) in the input data. Their meaning has to be determined by the organism via environmental feedback.
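To make the notion of binaural statistics concrete, here is a small self-contained sketch (not taken from the thesis) that estimates two classic spatial-hearing cues from a stereo snippet: the interaural level difference and the interaural time difference.

```python
import numpy as np

def interaural_cues(left, right, fs):
    """Estimate two classic binaural cues from a stereo snippet:
    the interaural level difference (ILD, in dB) and the interaural
    time difference (ITD, in seconds, via the cross-correlation peak).
    With np.correlate(left, right), a negative ITD means the right
    channel lags the left."""
    ild = 10.0 * np.log10(np.mean(left ** 2) / np.mean(right ** 2))
    lags = np.arange(-len(left) + 1, len(left))
    itd = lags[np.argmax(np.correlate(left, right, mode="full"))] / fs
    return ild, itd
```

These are exactly the analytic, physics-based cues the thesis contrasts with the much richer statistics of real environments, where reverberation and overlapping sources make such clean estimates unreliable.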
5

Structural priors in deep neural networks

Ioannou, Yani Andrew January 2018 (has links)
Deep learning has in recent years come to dominate the previously separate fields of research in machine learning, computer vision, natural language understanding and speech recognition. Despite breakthroughs in training deep networks, there remains a lack of understanding of both the optimization and structure of deep networks. The approach advocated by many researchers in the field has been to train monolithic networks with excess complexity and strong regularization --- an approach that leaves much to be desired in efficiency. Instead we propose that carefully designing networks in consideration of our prior knowledge of the task and learned representation can improve the memory and compute efficiency of state-of-the-art networks, and even improve generalization --- what we propose to denote as structural priors. We present two such novel structural priors for convolutional neural networks, and evaluate them in state-of-the-art image classification CNN architectures. The first of these methods proposes to exploit our knowledge of the low-rank nature of most filters learned for natural images by structuring a deep network to learn a collection of mostly small, low-rank filters. The second addresses the channel extents of convolutional filters, by learning filters with limited channel extents. The size of these channel-wise basis filters increases with the depth of the model, giving a novel sparse connection structure that resembles a tree root. Both methods are found to improve the generalization of these architectures while also decreasing the size and increasing the efficiency of their training and test-time computation. Finally, we present work towards conditional computation in deep neural networks, moving towards a method of automatically learning structural priors in deep networks.
We propose a new discriminative learning model, conditional networks, which jointly exploits the accurate representation learning of deep neural networks and the efficient conditional computation of decision trees. Conditional networks yield smaller models, and offer test-time flexibility in the trade-off of computation vs. accuracy.
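The low-rank prior can be illustrated concretely: a rank-1 k×k filter is the outer product of two length-k vectors, so its response can be computed with two 1-D filters (2k parameters instead of k²). A minimal sketch (illustrative only, not the thesis code; the example kernel is a Sobel edge filter chosen for familiarity):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Plain 'valid' 2-D cross-correlation, written out for clarity."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A rank-1 3x3 kernel is the outer product of a vertical and a horizontal
# 1-D filter: 3 + 3 = 6 parameters instead of 9 for the full kernel.
v = np.array([1.0, 2.0, 1.0])     # vertical smoothing
hz = np.array([-1.0, 0.0, 1.0])   # horizontal derivative
rank1 = np.outer(v, hz)           # the equivalent full 3x3 (Sobel) kernel
```

Applying `hz` along rows and then `v` along columns produces exactly the same output as one pass with the full kernel, which is the separability the first structural prior exploits at scale.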
6

Automatické rozpoznání akordů pomocí hlubokých neuronových sítí / Automatic Chord Recognition Using Deep Neural Networks

Nodžák, Petr January 2020 (has links)
This work deals with automatic chord recognition using neural networks. The problem was split into two subproblems: experimentally finding the most suitable acoustic model, and experimentally finding the most suitable language model. It was solved iteratively: first a suboptimal solution to the first subproblem was found, and then to the second. A total of 19 acoustic and 12 language models were built. Ten training datasets were created for the acoustic models and three for the language models. In total, over 200 models were trained. The best results were achieved with acoustic models based on convolutional networks combined with language models based on recurrent networks with LSTM modules.
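As a point of contrast with the neural approach, a classic baseline for chord recognition is template matching on chroma vectors. The following sketch (illustrative only, not the thesis's CNN/LSTM models) scores a 12-bin chroma vector against the 24 major and minor triad templates:

```python
import numpy as np

# Binary 12-bin chroma templates for triads rooted at C; other roots
# are obtained by rolling the template.
MAJOR = np.array([1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0], dtype=float)  # root, maj 3rd, 5th
MINOR = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0], dtype=float)  # root, min 3rd, 5th
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def recognize_chord(chroma):
    """Score a 12-bin chroma vector against all 24 major/minor triad
    templates and return the best-matching chord label."""
    best, best_score = None, -np.inf
    for root in range(12):
        for quality, tmpl in (("", MAJOR), ("m", MINOR)):
            score = chroma @ np.roll(tmpl, root)
            if score > best_score:
                best, best_score = NOTES[root] + quality, score
    return best
```

In the thesis's terms, such frame-wise scoring plays the role of a (very crude) acoustic model; the LSTM language model then smooths these per-frame decisions using chord-transition regularities.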
