11

Reconstruction of gene regulatory networks from postgenomic data

Werhli, Adriano Velasque January 2007 (has links)
An important problem in systems biology is the inference of biochemical pathways and regulatory networks from postgenomic data. The recent substantial increase in the availability of such data has stimulated interest in inferring networks and pathways from the data themselves. The main interests of this thesis are the application, evaluation and improvement of machine learning methods applied to the reverse engineering of biochemical pathways and networks. The thesis starts with the application of an established method to newly available gene expression data related to the interferon pathway of the human immune system, in order to identify active subpathways under different experimental conditions. The thesis continues with the comparative evaluation of various machine learning methods (Relevance networks, Graphical Gaussian Models, Bayesian networks) using observational and interventional data from cytometry experiments as well as simulated data from a gold-standard network. The thesis also extends and improves existing methods to include biological prior knowledge under the Bayesian approach in order to increase the accuracy of the predicted networks, and it quantifies to what extent the reconstruction accuracy can be improved in this way.
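A minimal sketch (not the thesis's exact formulation) of how biological prior knowledge can enter Bayesian network structure inference: a prior belief matrix B encodes soft knowledge about edges, a hyperparameter beta controls its influence, and candidate graphs are scored by this prior together with a data term inside a Metropolis-Hastings sampler. The data-likelihood function and the toy matrices below are placeholders, not a real network score.

```python
import numpy as np

def prior_energy(G, B):
    """Energy of graph G (binary adjacency matrix) relative to prior beliefs B
    (entries in [0, 1]): low energy means G agrees with the prior."""
    return np.abs(B - G).sum()

def log_prior(G, B, beta):
    """Unnormalised log prior over graphs, P(G) proportional to exp(-beta * E(G))."""
    return -beta * prior_energy(G, B)

def log_likelihood(G, data):
    """Placeholder data term: reward edges whose endpoints are strongly correlated.
    A real implementation would use a proper Bayesian network score (e.g. BGe/BDe)."""
    corr = np.corrcoef(data, rowvar=False)
    return (G * np.abs(corr)).sum()

def mcmc_structure_sampling(data, B, beta=1.0, n_iter=5000, seed=0):
    """Metropolis-Hastings over structures: propose single-edge flips and accept
    with probability min(1, exp(delta log posterior)); return edge frequencies."""
    rng = np.random.default_rng(seed)
    n = data.shape[1]
    G = np.zeros((n, n), dtype=int)
    score = log_likelihood(G, data) + log_prior(G, B, beta)
    edge_freq = np.zeros((n, n))
    for _ in range(n_iter):
        i, j = rng.integers(n, size=2)
        if i == j:
            continue
        G_new = G.copy()
        G_new[i, j] = 1 - G_new[i, j]          # flip one edge
        new_score = log_likelihood(G_new, data) + log_prior(G_new, B, beta)
        if np.log(rng.random()) < new_score - score:
            G, score = G_new, new_score
        edge_freq += G
    return edge_freq / n_iter

# Toy usage: 3 genes, prior knowledge favouring an edge 0 -> 1
data = np.random.default_rng(1).normal(size=(100, 3))
B = np.array([[0, 0.9, 0.1], [0.1, 0, 0.1], [0.1, 0.1, 0]])
print(mcmc_structure_sampling(data, B, beta=2.0).round(2))
```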
12

Flexible cross layer design for improved quality of service in MANETs

Kiourktsidis, Ilias January 2011 (has links)
Mobile Ad hoc Networks (MANETs) are becoming increasingly important because of their unique connectivity characteristics. Several delay-sensitive applications are starting to appear in these kinds of networks. An issue of concern is therefore how to guarantee Quality of Service (QoS) in such a constantly changing communication environment. The classical QoS-aware solutions that have been used until now in wired and infrastructure wireless networks are unable to achieve the necessary performance in MANETs. The specialized protocols designed for multihop ad hoc networks offer basic connectivity with limited delay awareness, and the mobility factor in MANETs makes them even more unsuitable for use. Several protocols and solutions have been emerging in almost every layer of the protocol stack. The majority of the research efforts agree that, in such a dynamic environment, optimizing the performance of the protocols requires additional information about the status of the network to be available. Hence, many cross layer design approaches have appeared on the scene. Cross layer design has major advantages and the necessity to utilize such a design is clear. However, cross layer design conceals risks such as architecture instability and design inflexibility. The aggressive use of cross layer design results in an excessive increase in the cost of deployment and complicates both maintenance and upgrade of the network. The use of autonomous protocols, such as bio-inspired mechanisms and algorithms that are resilient to the unavailability of cross layer information, can reduce the dependence on cross layer design. In addition, properties such as the prediction of dynamic conditions and adaptation to them are quite important. The design of a routing decision algorithm based on Bayesian inference for the prediction of path quality is proposed here, and its accurate prediction capabilities and efficient use of the plethora of cross layer information are presented. Furthermore, an adaptive mechanism based on the Genetic Algorithm (GA) is used to control the flow of data in the transport layer. This flow control mechanism inherits the GA's optimization capabilities without needing to know any details about the network conditions, thus reducing the dependence on cross layer information. Finally, it is illustrated how Bayesian inference can be used to suggest configuration parameter values to other protocols in different layers in order to improve their performance.
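A minimal, hypothetical sketch of the kind of Bayesian path-quality prediction described above (not the thesis's actual algorithm): each candidate path keeps a Beta posterior over its packet delivery success rate, updated from observed delivery outcomes, and the routing decision picks the path with the best current estimate. The class and the toy delivery rates are illustrative assumptions.

```python
import numpy as np

class PathQualityEstimator:
    """Beta-Bernoulli posterior over a path's delivery success probability.
    Starts from a uniform Beta(1, 1) prior and updates with each outcome."""
    def __init__(self):
        self.alpha = 1.0  # successes + 1
        self.beta = 1.0   # failures + 1

    def update(self, delivered: bool):
        if delivered:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def mean(self) -> float:
        """Posterior mean of the delivery probability."""
        return self.alpha / (self.alpha + self.beta)

    def sample(self, rng) -> float:
        """Thompson-sampling style draw, useful for exploring alternative routes."""
        return rng.beta(self.alpha, self.beta)

def choose_route(estimators, rng, explore=True):
    """Pick the path index with the best sampled (or mean) quality estimate."""
    scores = [e.sample(rng) if explore else e.mean() for e in estimators]
    return int(np.argmax(scores))

# Toy usage: two candidate paths with assumed true delivery rates 0.9 and 0.6
rng = np.random.default_rng(0)
true_rates = [0.9, 0.6]
paths = [PathQualityEstimator(), PathQualityEstimator()]
for _ in range(200):
    chosen = choose_route(paths, rng)
    paths[chosen].update(rng.random() < true_rates[chosen])
print([round(p.mean(), 2) for p in paths])
```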
13

Adapting deep neural networks as models of human visual perception

McClure, Patrick January 2018 (has links)
Deep neural networks (DNNs) have recently been used to solve complex perceptual and decision tasks. In particular, convolutional neural networks (CNNs) have been extremely successful for visual perception. In addition to performing well on the trained object recognition task, these CNNs also model brain data throughout the visual hierarchy better than previous models. However, these DNNs are still far from completely explaining visual perception in the human brain. In this thesis, we investigated two methods with the goal of improving DNNs' capabilities to model human visual perception: (1) deep representational distance learning (RDL), a method for driving representational spaces in deep nets into alignment with other (e.g. brain) representational spaces, and (2) variational DNNs that use sampling to perform approximate Bayesian inference. In the first investigation, RDL successfully transferred information from a teacher model to a student DNN. This was achieved by driving the student DNN's representational distance matrix (RDM), which characterises the representational geometry, into alignment with that of the teacher. This led to a significant increase in test accuracy on machine learning benchmarks. In the future, we plan to use this method to simultaneously train DNNs to perform complex tasks and to predict neural data. In the second investigation, we showed that sampling during learning and inference using simple Bernoulli- and Gaussian-based noise improved a CNN's representation of its own uncertainty for object recognition. We also found that sampling during learning and inference with Gaussian noise improved how well CNNs predict human behavioural data for image classification. While these methods alone do not fully explain human vision, they allow for training CNNs that better model several features of human visual perception.
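A minimal sketch, on an assumed toy architecture with random weights, of the Bernoulli-sampling idea described above: keeping dropout active at test time and averaging softmax outputs over several stochastic forward passes gives both a prediction and a spread that reflects the network's uncertainty. This is a generic Monte Carlo dropout illustration, not the thesis's exact models.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def stochastic_forward(x, W1, b1, W2, b2, p_drop=0.5):
    """One forward pass with Bernoulli (dropout) sampling on the hidden layer."""
    h = np.maximum(0.0, x @ W1 + b1)                 # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop              # Bernoulli noise
    h = h * mask / (1.0 - p_drop)                    # inverted dropout scaling
    return softmax(h @ W2 + b2)

def predict_with_uncertainty(x, params, n_samples=50):
    """Average class probabilities over repeated stochastic passes;
    the per-class standard deviation is a crude uncertainty estimate."""
    probs = np.stack([stochastic_forward(x, *params) for _ in range(n_samples)])
    return probs.mean(axis=0), probs.std(axis=0)

# Toy usage with random weights: 10-dim input, 32 hidden units, 3 classes
params = (rng.normal(size=(10, 32)), np.zeros(32),
          rng.normal(size=(32, 3)), np.zeros(3))
x = rng.normal(size=(1, 10))
mean_probs, std_probs = predict_with_uncertainty(x, params)
print(mean_probs.round(3), std_probs.round(3))
```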
14

Novel methods for biological network inference : an application to circadian Ca2+ signaling network

Jin, Junyang January 2018 (has links)
Biological processes involve complex biochemical interactions among a large number of species such as cells, RNA, proteins and metabolites. Learning these interactions is essential for interfering artificially with biological processes in order to, for example, improve crop yield, develop new therapies, and predict how cells or organisms respond to genetic or environmental perturbations. For a biological process, two pieces of information are of most interest. For a particular species, the first step is to learn which other species regulate it. This reveals topology and causality. The second step involves learning the precise mechanisms of how this regulation occurs. This step reveals the dynamics of the system. Applying this process to all species leads to the complete dynamical network. Systems biology is making considerable efforts to learn biological networks at low experimental cost. The main goal of this thesis is to develop advanced methods to build models for biological networks, taking the circadian system of Arabidopsis thaliana as a case study. A variety of network inference approaches have been proposed in the literature to study dynamic biological networks. However, many successful methods either require prior knowledge of the system or focus mainly on topology. This thesis presents novel methods that identify both network topology and dynamics, and do not depend on prior knowledge. Hence, the proposed methods are applicable to general biological networks. These methods are initially developed for linear systems and, at the cost of higher computational complexity, can also be applied to nonlinear systems. Overall, we propose four methods with increasing computational complexity: one-to-one, combined group and element sparse Bayesian learning (GESBL), the kernel method and the reversible jump Markov chain Monte Carlo method (RJMCMC). All methods are tested with challenging dynamical network simulations (including feedback, random networks, different levels of noise and numbers of samples), and realistic models of the circadian system of Arabidopsis thaliana. These simulations show that, while the one-to-one method scales to the whole genome, the kernel method and the RJMCMC method are superior for smaller networks. They are robust to tuning variables and able to provide stable performance. The simulations also imply the advantage of GESBL and RJMCMC over the state-of-the-art method. We envision that the estimated models can benefit a wide range of research. For example, they can locate biological compounds responsible for human disease through mathematical analysis and help predict the effectiveness of new treatments.
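A minimal stand-in sketch (not GESBL or RJMCMC themselves) of the general idea of inferring both topology and dynamics from time-series data: assume approximately linear dynamics dx/dt ≈ A x, estimate derivatives numerically, and fit each row of A with an L1-penalised regression so that only a few regulators per species survive. The simulated system and all parameter values are made up for illustration.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iter=200):
    """Plain coordinate-descent Lasso: min ||y - Xw||^2 / (2n) + lam * ||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    col_norms = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]          # partial residual
            w[j] = soft_threshold(X[:, j] @ r / n, lam) / max(col_norms[j], 1e-12)
    return w

def infer_network(traj, dt, lam=0.05):
    """Estimate a sparse interaction matrix A from a trajectory (T x p array),
    using finite-difference derivatives as regression targets."""
    dxdt = (traj[1:] - traj[:-1]) / dt
    X = traj[:-1]
    return np.stack([lasso_coordinate_descent(X, dxdt[:, i], lam)
                     for i in range(traj.shape[1])])

# Toy usage: simulate a 3-species linear system and recover its sparse A
rng = np.random.default_rng(0)
A_true = np.array([[-1.0, 0.0, 0.8],
                   [ 0.9, -1.0, 0.0],
                   [ 0.0, 0.7, -1.0]])
x, dt, traj = rng.normal(size=3), 0.01, []
for _ in range(2000):
    traj.append(x.copy())
    x = x + dt * (A_true @ x) + 0.01 * rng.normal(size=3)
A_hat = infer_network(np.array(traj), dt)
print(A_hat.round(2))
```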
15

Environmental effects on social learning and its feedback on individual and group level interactions

Smolla, Marco January 2017 (has links)
Through social learning, animals acquire information from others, such as skills and knowledge about the environment. High-fidelity transmission of locally adaptive information can lead to population-specific traits, or cultural traits, which are fundamental to the emergence of culture. Despite social learning being widespread in the animal kingdom, culture is rare in nature. This thesis investigates the evolution, ecology, and dynamics of social learning, to increase our understanding of why species differ in their ability to generate and accumulate cultural traits, and ultimately how complex human culture emerged. Chapter 2 introduces a novel computational model that explicitly incorporates competition into the social learning context. The model predicts that social learning is most adaptive where resources are unevenly distributed and stable through time, even if individuals compete for limited resources. The model provides an explanation for reports of animals disregarding social information, even when it is available. Testing these predictions, Chapter 3 presents a bumblebee foraging experiment. The results support the theoretical predictions, showing that foragers use social information to find rewarding flowers, even if social cues indicate competition. Chapter 4 further examines the trade-off between access to social information and competition. Individuals that are central in a learning network have more opportunities to acquire information from others, but also face an increased likelihood of engaging in competition. The results of this model suggest that, across different learning contexts, centrality is only beneficial for dominant individuals, because dominance can mitigate the effect of competition. This also shows that individual phenotypic differences affect the utility of social information. Chapter 5 uses a dynamic network model approach to test whether these differences modulate the structure of learning networks and, by extension, of the population. The model shows that this is the case and that, where social learning is favoured by the environment, networks are more structured. Chapter 6 studies the drivers behind individual differences in social learning. The chapter focusses on reports of sex differences in social information use and finds that they can be explained by differences in risk-taking behaviour. The results highlight the importance of the feedback between learning individuals, and how this shapes social learning dynamics at the individual as well as the population level.
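A minimal agent-based sketch, under assumptions of my own (patch payoffs, strategy mix, population size), of the kind of model described for Chapter 2: agents either copy the currently most popular patch or explore at random while competing for shared patches, and the relative payoff of social learning depends on how unevenly resources are distributed. This is a simplified illustration, not the thesis's model.

```python
import numpy as np

def simulate(n_agents=50, n_patches=20, skew=3.0, social_frac=0.5,
             rounds=200, seed=0):
    """Toy foraging model: 'social' agents copy the currently most-visited patch,
    'individual' agents sample patches at random; patch payoffs are split among
    all agents that chose the same patch (competition)."""
    rng = np.random.default_rng(seed)
    # Skewed resource distribution: larger `skew` means more uneven patch values
    patch_value = rng.gamma(shape=1.0 / skew, scale=skew, size=n_patches)
    is_social = rng.random(n_agents) < social_frac
    payoff = np.zeros(n_agents)
    last_choices = rng.integers(n_patches, size=n_agents)
    for _ in range(rounds):
        popular = np.bincount(last_choices, minlength=n_patches).argmax()
        choices = np.where(is_social, popular,
                           rng.integers(n_patches, size=n_agents))
        counts = np.bincount(choices, minlength=n_patches)
        payoff += patch_value[choices] / counts[choices]   # share the patch
        last_choices = choices
    return payoff[is_social].mean(), payoff[~is_social].mean()

social_mean, individual_mean = simulate(skew=5.0)
print(f"social: {social_mean:.1f}, individual: {individual_mean:.1f}")
```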
16

Approximate inference : new visions

Li, Yingzhen January 2018 (has links)
Nowadays machine learning (especially deep learning) techniques are being incorporated into many intelligent systems affecting the quality of human life. The ultimate purpose of these systems is to perform automated decision making, and in order to achieve this, predictive systems need to return estimates of their confidence. Powered by the rules of probability, Bayesian inference is the gold-standard method for coherent reasoning under uncertainty. It is generally believed that intelligent systems following the Bayesian approach can better incorporate uncertainty information for reliable decision making, and be less vulnerable to attacks such as data poisoning. Critically, the success of Bayesian methods in practice, including the recent resurgence of Bayesian deep learning, relies on fast and accurate approximate Bayesian inference applied to probabilistic models. These approximate inference methods perform (approximate) Bayesian reasoning at a relatively low cost in terms of time and memory, thus allowing the principles of Bayesian modelling to be applied to many practical settings. However, more work needs to be done to scale approximate Bayesian inference methods to big systems such as deep neural networks and large-scale datasets such as ImageNet. In this thesis we develop new algorithms towards addressing the open challenges in approximate inference. In the first part of the thesis we develop two new approximate inference algorithms, drawing inspiration from the well-known expectation propagation and message passing algorithms. Both approaches provide a unifying view of existing variational methods from different algorithmic perspectives. We also demonstrate that they lead to better-calibrated inference results for complex models such as neural network classifiers and deep generative models, and scale to large datasets containing hundreds of thousands of data points. In the second theme of the thesis we propose a new research direction for approximate inference: developing algorithms for fitting posterior approximations of arbitrary form, by rethinking the fundamental principles of Bayesian computation and the necessity of algorithmic constraints in traditional inference schemes. We specify four algorithmic options for the development of such new-generation approximate inference methods, one of which is further investigated and applied to Bayesian deep learning tasks.
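A minimal sketch, with an assumed toy model, of the kind of approximate inference discussed above: a Gaussian approximation to the posterior over a single parameter is fitted by stochastic gradient ascent on a Monte Carlo estimate of the evidence lower bound (ELBO), with reparameterised samples. The model, the finite-difference gradients and all settings are simplifications chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: y_i ~ Normal(theta, 1), prior theta ~ Normal(0, 10^2)
data = rng.normal(loc=2.5, scale=1.0, size=50)

def log_joint(theta, y, prior_std=10.0):
    """log p(y, theta) up to additive constants."""
    return (-0.5 * ((y - theta) ** 2).sum()
            - 0.5 * (theta / prior_std) ** 2)

def elbo_grad(mu, log_std, y, n_samples=16):
    """Monte Carlo gradient of the ELBO w.r.t. (mu, log_std) via reparameterised
    samples, estimated here with finite differences for brevity."""
    eps = rng.normal(size=n_samples)
    def elbo(m, ls):
        theta = m + np.exp(ls) * eps                      # reparameterised samples
        entropy = ls + 0.5 * np.log(2 * np.pi * np.e)     # Gaussian entropy
        return np.mean([log_joint(t, y) for t in theta]) + entropy
    h = 1e-4
    g_mu = (elbo(mu + h, log_std) - elbo(mu - h, log_std)) / (2 * h)
    g_ls = (elbo(mu, log_std + h) - elbo(mu, log_std - h)) / (2 * h)
    return g_mu, g_ls

mu, log_std, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    g_mu, g_ls = elbo_grad(mu, log_std, data)
    mu, log_std = mu + lr * g_mu, log_std + lr * g_ls
print(f"approx posterior: mean={mu:.2f}, std={np.exp(log_std):.3f}")
# For this conjugate model the exact posterior mean is close to data.mean()
```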
17

A developmental approach to the study of affective bonds for human-robot interaction

Hiolle, Antoine January 2015 (has links)
Robotic agents are expected to play an increasingly large role in our everyday lives. To be successfully integrated into our environment, robots will need to develop and display adaptive, robust, and socially suitable behaviours. To tackle these issues, the robotics research community has invested a considerable amount of effort in modelling robotic architectures inspired by research on living systems, from ethology to developmental psychology. Following a similar approach, this thesis presents the research results of the modelling and experimental testing of robotic architectures based on affective and attachment bonds between young infants and their primary caregiver. I follow a bottom-up approach to the modelling of such bonds, examining how they can promote the situated development of an autonomous robot. Specifically, the models used and the results from the experiments carried out in laboratory settings and with naive users demonstrate the impact such affective bonds have on the learning outcomes of an autonomous robot and on the perception and behaviour of humans. This research leads to an emphasis on the importance of the interplay between the dynamics of the regulatory behaviours performed by a robot and the responsiveness of the human partner. The coupling of such signals and behaviours in an attachment-like dyad determines the nature of the outcomes for the robot, in terms of learning or the satisfaction of other needs. The experiments carried out also demonstrate how the attachment system can help a robot adapt its own social behaviour to that of the human partners, as infants are thought to do during their development.
18

Functional Sensory Representations of Natural Stimuli: the Case of Spatial Hearing

Mlynarski, Wiktor 21 January 2015 (has links)
In this thesis I attempt to explain mechanisms of neuronal coding in the auditory system as a form of adaptation to the statistics of natural stereo sounds. To this end I analyse recordings of real-world auditory environments and construct novel statistical models of these data. I further compare regularities present in natural stimuli with known, experimentally observed neuronal mechanisms of spatial hearing. In a more general perspective, I use the binaural auditory system as a starting point to consider the notion of function implemented by sensory neurons. In particular, I argue for two closely related tenets: 1. The function of sensory neurons cannot be fully elucidated without understanding the statistics of the natural stimuli they process. 2. The function of sensory representations is determined by redundancies present in the natural sensory environment. I present evidence in support of the first tenet by describing and analysing marginal statistics of natural binaural sound. I compare the observed, empirical distributions with knowledge from reductionist experiments. Such a comparison allows me to argue that the complexity of the spatial hearing task in the natural environment is much higher than analytic, physics-based predictions. I discuss the possibility that early brain stem circuits such as the LSO and MSO do not "compute sound localization", as is often claimed in the experimental literature. I propose that instead they perform a signal transformation which constitutes the first step of a complex inference process. To support the second tenet I develop a hierarchical statistical model, which learns a joint sparse representation of amplitude and phase information from natural stereo sounds. I demonstrate that the learned higher-order features reproduce properties of auditory cortical neurons when probed with spatial sounds. These reproduced aspects had been hypothesized to be a manifestation of a fine-tuned computation specific to the sound-localization task; here it is demonstrated that they instead reflect redundancies present in the natural stimulus. Taken together, the results presented in this thesis suggest that efficient coding is a strategy useful for discovering structures (redundancies) in the input data. Their meaning has to be determined by the organism via environmental feedback.
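A minimal sketch of the sparse-coding principle underlying the hierarchical model described above, run on made-up data rather than natural stereo recordings: a dictionary of features is learned so that each signal segment is reconstructed from only a few active components (an L1 penalty), which is one standard way redundancies in the input end up reflected in the learned representation. The optimisation scheme and all sizes are illustrative assumptions.

```python
import numpy as np

def sparse_code(X, D, lam, n_steps=100, lr=0.05):
    """Infer sparse codes S minimising ||X - S D||^2 + lam * ||S||_1
    by proximal gradient descent (ISTA)."""
    S = np.zeros((X.shape[0], D.shape[0]))
    for _ in range(n_steps):
        grad = (S @ D - X) @ D.T
        S = S - lr * grad
        S = np.sign(S) * np.maximum(np.abs(S) - lr * lam, 0.0)  # soft threshold
    return S

def learn_dictionary(X, n_atoms=8, lam=0.1, epochs=30, lr=0.1, seed=0):
    """Alternate between sparse coding and gradient updates of the dictionary."""
    rng = np.random.default_rng(seed)
    D = rng.normal(size=(n_atoms, X.shape[1]))
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    for _ in range(epochs):
        S = sparse_code(X, D, lam)
        D -= lr * (S.T @ (S @ D - X)) / len(X)
        D /= np.linalg.norm(D, axis=1, keepdims=True)   # keep atoms unit-norm
    return D

# Toy usage: signals built from a few hidden "sources" plus noise
rng = np.random.default_rng(1)
true_atoms = rng.normal(size=(4, 16))
codes = rng.normal(size=(500, 4)) * (rng.random((500, 4)) < 0.3)  # sparse activations
X = codes @ true_atoms + 0.01 * rng.normal(size=(500, 16))
D = learn_dictionary(X, n_atoms=8)
print(D.shape)  # (8, 16): learned feature dictionary
```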
19

Structural priors in deep neural networks

Ioannou, Yani Andrew January 2018 (has links)
Deep learning has in recent years come to dominate the previously separate fields of research in machine learning, computer vision, natural language understanding and speech recognition. Despite breakthroughs in training deep networks, there remains a lack of understanding of both the optimization and structure of deep networks. The approach advocated by many researchers in the field has been to train monolithic networks with excess complexity and strong regularization, an approach that leaves much to be desired in terms of efficiency. Instead, we propose that carefully designing networks in consideration of our prior knowledge of the task and learned representation can improve the memory and compute efficiency of state-of-the-art networks, and even improve generalization: what we propose to denote as structural priors. We present two such novel structural priors for convolutional neural networks, and evaluate them in state-of-the-art image classification CNN architectures. The first of these methods exploits our knowledge of the low-rank nature of most filters learned for natural images by structuring a deep network to learn a collection of mostly small, low-rank filters. The second addresses the channel extents of convolutional filters, by learning filters with limited channel extents. The size of these channel-wise basis filters increases with the depth of the model, giving a novel sparse connection structure that resembles a tree root. Both methods are found to improve the generalization of these architectures while also decreasing the size and increasing the efficiency of their training and test-time computation. Finally, we present work towards conditional computation in deep neural networks, moving towards a method of automatically learning structural priors in deep networks. We propose a new discriminative learning model, conditional networks, that jointly exploits the accurate representation learning capabilities of deep neural networks and the efficient conditional computation of decision trees. Conditional networks yield smaller models, and offer test-time flexibility in the trade-off between computation and accuracy.
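A minimal sketch, with made-up layer sizes, of the arithmetic behind these structural priors: replacing a full k x k convolution with a low-rank pair (k x 1 followed by 1 x k) or restricting each filter's channel extent via grouping both cut the weight count, which is the efficiency argument made above. The specific layer dimensions are illustrative only.

```python
def conv_params(c_in, c_out, kh, kw, groups=1):
    """Weight count of a 2-D convolution layer (bias ignored)."""
    return (c_in // groups) * c_out * kh * kw

def full_conv(c_in, c_out, k):
    return conv_params(c_in, c_out, k, k)

def low_rank_conv(c_in, c_out, k, rank):
    """k x k filters factorised as a (k x 1) conv to `rank` channels
    followed by a (1 x k) conv to `c_out` channels."""
    return conv_params(c_in, rank, k, 1) + conv_params(rank, c_out, 1, k)

def grouped_conv(c_in, c_out, k, groups):
    """Filters see only c_in / groups input channels each (limited channel extent)."""
    return conv_params(c_in, c_out, k, k, groups)

# Hypothetical layer: 256 -> 256 channels, 3x3 filters
c, k = 256, 3
print("full      :", full_conv(c, c, k))               # 589,824 weights
print("low-rank  :", low_rank_conv(c, c, k, rank=64))  # 98,304 weights
print("grouped x8:", grouped_conv(c, c, k, groups=8))  # 73,728 weights
```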
20

Data-driven language understanding for spoken dialogue systems

Mrkšić, Nikola January 2018 (has links)
Spoken dialogue systems provide a natural conversational interface to computer applications. In recent years, the substantial improvements in the performance of speech recognition engines have helped shift the research focus to the next component of the dialogue system pipeline: the one in charge of language understanding. The role of this module is to translate user inputs into accurate representations of the user goal, in a form that can be used by the system to interact with the underlying application. The challenges include the modelling of linguistic variation, speech recognition errors and the effects of dialogue context. Recently, the focus of language understanding research has moved to making use of word embeddings induced from large textual corpora using unsupervised methods. The work presented in this thesis demonstrates how these methods can be adapted to overcome the limitations of language understanding pipelines currently used in spoken dialogue systems. The thesis starts with a discussion of the pros and cons of language understanding models used in modern dialogue systems. Most models in use today are based on the delexicalisation paradigm, where exact string matching supplemented by a list of domain-specific rephrasings is used to recognise users' intents and update the system's internal belief state. This is followed by an attempt to use pretrained word vector collections to automatically induce domain-specific semantic lexicons, which are typically hand-crafted to handle lexical variation and account for a plethora of system failure modes. The results highlight the deficiencies of distributional word vectors, which must be overcome to make them useful for downstream language understanding models. The thesis next shifts focus to overcoming the language understanding models' dependency on semantic lexicons. To achieve that, the proposed Neural Belief Tracking (NBT) model forsakes the use of standard one-hot n-gram representations used in Natural Language Processing in favour of distributed representations of user utterances, dialogue context and domain ontologies. The NBT model makes use of external lexical knowledge embedded in semantically specialised word vectors, obviating the need for domain-specific semantic lexicons. Subsequent work focuses on semantic specialisation, presenting an efficient method for injecting external lexical knowledge into word vector spaces. The proposed Attract-Repel algorithm boosts the semantic content of existing word vectors while simultaneously inducing high-quality cross-lingual word vector spaces. Finally, NBT models powered by specialised cross-lingual word vectors are used to train multilingual belief tracking models. These models operate across many languages at once, providing an efficient method for bootstrapping language understanding models for lower-resource languages with limited training data.
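A minimal sketch of the kind of vector-space specialisation the Attract-Repel algorithm performs, on toy vectors rather than real pretrained embeddings and with a deliberately simplified cost: synonym pairs are pulled together and antonym pairs pushed apart with margin-based updates, after which the vectors are renormalised. The update rule, margins and toy vocabulary here are my own simplifications, not the published algorithm.

```python
import numpy as np

def specialise(vectors, synonyms, antonyms, margin_syn=0.6, margin_ant=0.0,
               lr=0.05, epochs=100):
    """Simplified attract/repel updates on a {word: vector} dict.
    Synonyms are attracted until their cosine similarity exceeds `margin_syn`;
    antonyms are repelled until their similarity falls below `margin_ant`."""
    V = {w: v / np.linalg.norm(v) for w, v in vectors.items()}
    for _ in range(epochs):
        for a, b in synonyms:
            if V[a] @ V[b] < margin_syn:          # not similar enough yet
                V[a] = V[a] + lr * (V[b] - V[a])
                V[b] = V[b] + lr * (V[a] - V[b])
        for a, b in antonyms:
            if V[a] @ V[b] > margin_ant:          # too similar
                V[a] = V[a] - lr * V[b]
                V[b] = V[b] - lr * V[a]
        V = {w: v / np.linalg.norm(v) for w, v in V.items()}  # renormalise
    return V

# Toy usage with hypothetical 4-d "word vectors"
rng = np.random.default_rng(0)
vecs = {w: rng.normal(size=4) for w in ["cheap", "inexpensive", "expensive"]}
out = specialise(vecs, synonyms=[("cheap", "inexpensive")],
                 antonyms=[("cheap", "expensive")])
print(round(out["cheap"] @ out["inexpensive"], 2),
      round(out["cheap"] @ out["expensive"], 2))
```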
