  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Pattern Formation in Cellular Automaton Models - Characterisation, Examples and Analysis / Musterbildung in Zellulären Automaten Modellen - Charakterisierung, Beispiele und Analyse

Dormann, Sabine, 26 October 2000
Cellular automata (CA) are fully discrete dynamical systems. Space is represented by a regular lattice while time proceeds in finite steps. Each cell of the lattice is assigned a state, chosen from a finite set of "values". The states of the cells are updated synchronously according to a local interaction rule, whereby each cell obeys the same rule. Formal definitions of deterministic, probabilistic and lattice-gas CA are presented. With the so-called mean-field approximation, any CA model can be transformed into a deterministic model with continuous state space. CA rules which characterise movement, single-component growth and many-component interactions are designed and explored. It is demonstrated that lattice-gas CA offer a suitable tool for modelling such processes and for analysing them by means of the corresponding mean-field approximation. In particular, two types of many-component interactions in lattice-gas CA models are introduced and studied. The first CA captures in abstract form the essential ideas of activator-inhibitor interactions in biological systems. Despite the automaton's simplicity, self-organised formation of stationary spatial patterns emerging from a randomly perturbed uniform state is observed (Turing patterns). In the second CA, the rules are designed to mimic the dynamics of excitable systems; this automaton self-organises into spiral waves and target patterns. Properties of both pattern-formation processes can be well captured by a linear stability analysis of the corresponding nonlinear mean-field (Boltzmann) equations.
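The synchronous local-rule update described in the abstract can be sketched for the simplest case, a one-dimensional binary CA with periodic boundaries. The rule number (Wolfram's rule 110) and the lattice size here are illustrative choices, not models from the thesis:

```python
import numpy as np

def step(cells, rule=110):
    """One synchronous update of a 1-D binary CA with periodic boundary.

    Each cell's next state depends on (left neighbour, itself, right
    neighbour), looked up in the 8-entry rule table (Wolfram numbering).
    """
    left = np.roll(cells, 1)
    right = np.roll(cells, -1)
    idx = 4 * left + 2 * cells + right   # neighbourhood as a 3-bit index
    table = (rule >> np.arange(8)) & 1   # bit k of `rule` is the new state
    return table[idx]

cells = np.zeros(11, dtype=int)
cells[5] = 1                             # single seed in the middle
cells = step(cells)                      # every cell updated simultaneously
```

Because all cells apply the same table to their local neighbourhood at once, the update is exactly the synchronous, homogeneous rule the definition above describes.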
2

Time Series Analysis informed by Dynamical Systems Theory

Schumacher, Johannes, 11 June 2015
This thesis investigates time series analysis tools for prediction, as well as for the detection and characterization of dependencies, informed by dynamical systems theory. Emphasis is placed on the role of delays with respect to information processing in dynamical systems and on their effect on causal interactions between systems. Three main features characterize this work. First, time series are assumed to be measurements of complex deterministic systems; as a result, the functional mappings underlying the statistical models in all methods are justified by concepts from dynamical systems theory, and differential topology is employed to bridge the gap between that theory and data. Second, the Bayesian paradigm of statistical inference is used to formalize uncertainty by means of a consistent theoretical apparatus with axiomatic foundation. Third, the statistical models are strongly informed by modern nonlinear concepts from machine learning and nonparametric modeling, such as Gaussian process theory; consequently, unbiased approximations of the functional mappings implied by the prior system-level analysis can be achieved. Applications are considered primarily in computational neuroscience but extend to generic time series measurements.
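Gaussian process prediction of the kind invoked in this abstract can be sketched as a plain RBF-kernel posterior mean over a deterministic signal. The sine "time series", kernel length scale and jitter below are toy assumptions for illustration, not the thesis's models:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

# Toy deterministic series: observations of sin(t) at known times.
t_train = np.linspace(0, 10, 50)
y_train = np.sin(t_train)
t_test = np.array([5.05])                # unobserved time inside the range

# Posterior mean of a zero-mean GP conditioned on the observations.
K = rbf(t_train, t_train) + 1e-6 * np.eye(len(t_train))  # jitter for stability
k_star = rbf(t_test, t_train)
mean = k_star @ np.linalg.solve(K, y_train)
```

For a smooth deterministic signal the posterior mean interpolates the measurements closely, which is the nonparametric "unbiased approximation of a functional mapping" role the abstract assigns to GP models.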
3

Information Processing in Neural Networks: Learning of Structural Connectivity and Dynamics of Functional Activation

Finger, Holger Ewald, 16 March 2017
Adaptability and flexibility are among the most important human characteristics. Learning from new experiences enables adaptation by changing the structural connectivity of the brain through plasticity mechanisms. But the human brain can also adapt to new tasks and situations in a matter of milliseconds through dynamic coordination of functional activation. To understand how this flexibility is achieved in the computations performed by neural networks, we have to understand how the relatively fixed structural backbone interacts with the functional dynamics. In this thesis, I will analyze the interactions between structural network connectivity and functional activations at different levels of abstraction and across spatial and temporal scales. One of the big questions in neuroscience is how functional interactions in the brain can adapt instantly to different tasks while the brain structure remains almost static. To improve our knowledge of the neural mechanisms involved, I will first analyze how dynamics in functional brain activations can be simulated based on the structural brain connectivity obtained with diffusion tensor imaging. In particular, I will show that a dynamic model of functional connectivity in the human cortex is more predictive of empirically measured functional connectivity than a stationary model of functional dynamics. More specifically, simulations of a coupled oscillator model predict 54% of the variance in the empirically measured EEG functional connectivity. Hypotheses of temporal coding have been proposed for the computational role of these dynamic oscillatory interactions on fast timescales; such oscillatory interactions play a role in the dynamic coordination between brain areas as well as between cortical columns or individual cells.
Here I will extend neural network models, which learn unsupervised from the statistics of natural stimuli, with phase variables that allow temporal coding in distributed representations. The analysis shows that synchronization of these phase variables provides a useful mechanism for binding of activated neurons, contextual coding, and figure-ground segregation. Importantly, these results could also provide new insights for improving deep learning methods for machine learning tasks. Dynamic coordination in neural networks also has a large influence on behavior and cognition. In a behavioral experiment, we analyzed multisensory integration between a native and an augmented sense: blindfolded participants had to estimate their rotation angle based on their native vestibular input and the augmented information. Our results show that subjects alternate between these modalities, indicating that they dynamically coordinate the information transfer of the involved brain regions. Dynamic coordination is also highly relevant for the consolidation and retrieval of associative memories. In this regard, I investigated the beneficial effects of sleep for memory consolidation in an electroencephalography (EEG) study. The results demonstrate that sleep leads to reduced event-related theta and gamma power in the cortical EEG during the retrieval of associative memories, which could indicate the consolidation of information from hippocampal to neocortical networks. This highlights that cognitive flexibility comprises both dynamic organization on fast timescales and structural changes on slow timescales. Overall, the computational and empirical experiments demonstrate how the brain evolved into a system that can flexibly adapt to new situations in a matter of milliseconds.
This flexibility in information processing is enabled by an effective interplay between the structure of the neural network, the functional activations, and the dynamic interactions on fast time scales.
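The coupled-oscillator modelling this abstract refers to can be sketched as a Kuramoto-type simulation of phase oscillators coupled through a structural graph. The random network, coupling strength, and natural frequencies below are illustrative assumptions, not the thesis's fitted model:

```python
import numpy as np

def kuramoto_step(theta, omega, A, K=0.5, dt=0.01):
    """One Euler step of Kuramoto oscillators coupled via adjacency A.

    theta: current phases; omega: natural frequencies;
    A[i, j]: structural coupling from oscillator j to oscillator i.
    """
    diff = theta[None, :] - theta[:, None]   # diff[i, j] = theta_j - theta_i
    return theta + dt * (omega + K * (A * np.sin(diff)).sum(axis=1))

rng = np.random.default_rng(0)
n = 8
A = (rng.random((n, n)) < 0.4).astype(float)   # toy structural connectivity
np.fill_diagonal(A, 0)
theta = rng.uniform(0, 2 * np.pi, n)           # random initial phases
omega = rng.normal(0, 0.1, n)                  # similar natural frequencies
for _ in range(5000):
    theta = kuramoto_step(theta, omega, A)
r = abs(np.exp(1j * theta).mean())             # phase coherence in [0, 1]
```

Functional connectivity in such models is then read off from the phase relations that the fixed structural matrix `A` induces, which is the structure-function interplay the abstract describes.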
