11

Visual multistability: influencing factors and analogies to auditory streaming

Wegner, Thomas 03 May 2023 (has links)
Sensory inputs can be ambiguous. A physically constant stimulus that induces several perceptual alternatives is called multistable. Many factors can influence perception, and in this thesis I investigate those that affect visual multistability. All presented studies use a pattern-component rivalry stimulus consisting of two gratings drifting in opposite directions (the plaid stimulus). It induces either an "integrated" percept of a single moving plaid (the pattern) or a "segregated" percept of two overlaid gratings (the components). One study (chapter 2) investigates how the parameters of the plaid stimulus shape perception, with particular emphasis on the first percept; specifically, it addresses how the angle enclosed between the gratings (the opening angle) affects perception at stimulus onset and during prolonged viewing. The effects persist even when the stimulus is rotated. On a more abstract level, it is shown that percepts can influence each other over time (chapter 3; see the simulation sketch after the contents listing below), which underlines the importance of instructions and report mode: which percepts observers are asked to report at all, which they can report as separate entities, and which are pooled into the same response option. On a further abstract level (predictability of a stimulus change, chapter 5), it is shown that transferring effects from one modality to another (here, from audition to vision) requires careful choice of stimulus parameters. In this context, we argue for wider use of sequential stopping rules (SSRs, chapter 4), especially in studies where effect sizes are hard to estimate a priori. This thesis contributes to the field of visual multistability by providing novel experimental insights into pattern-component rivalry and by linking these findings to data on sequential dependencies, to the optimization of experimental designs, and to models and results from another sensory modality.

Contents:
Bibliographic description
Acknowledgments
Contents
Collaborations
List of Figures
List of Tables
1. Introduction
1.1. Tristability
1.2. Two or more interpretations?
1.3. Multistability in different modalities
1.3.1. Auditory multistability
1.3.2. Haptic multistability
1.3.3. Olfactory multistability
1.4. Multistability with several interpretations
1.5. Measuring multistability
1.5.1. The optokinetic nystagmus
1.5.2. Pupillometry
1.5.3. Measuring auditory multistability
1.5.4. Crossmodal multistability
1.6. Factors governing multistability
1.6.1. Manipulations that do not involve the stimulus
1.6.2. Manipulation of the stimulus
1.6.2.1. Factors affecting the plaid stimulus
1.6.2.2. Factors affecting the auditory streaming stimulus
1.7. Goals of this thesis
1.7.1. Overview of the thesis
2. Parameter dependence in visual pattern-component rivalry at onset and during prolonged viewing
2.1. Introduction
2.2. Methods
2.2.1. Participants
2.2.2. Setup
2.2.3. Stimuli
2.2.4. Procedure
2.2.5. Analysis
2.2.6. (Generalized) linear mixed-effects models
2.3. Results
2.3.1. Experiment 1
2.3.1.1. Relative number of integrated percepts
2.3.1.2. Generalized linear mixed-effects model
2.3.1.3. Dominance durations
2.3.1.4. Linear mixed-effects models
2.3.1.5. Control: Disambiguated trials
2.3.1.6. Time course of percept reports at onset
2.3.1.7. Eye movements
2.3.2. Experiment 2
2.3.2.1. Relative number of percepts
2.3.2.2. Generalized linear mixed-effects model
2.3.2.3. Dominance durations
2.3.2.4. Linear mixed-effects model
2.3.2.5. Control: Disambiguated trials
2.3.2.6. Time course of percept reports at onset
2.3.2.7. Eye movements
2.4. Discussion
2.5. Appendix
2.5.1. Appendix A
3. Perceptual history
3.1. Markov chains
3.1.1. Markov chains of order 1 and 2
3.2. Testing for Markov chains
3.2.1. The method of Naber and colleagues (2010)
3.2.1.1. The method
3.2.1.2. Advantages and disadvantages of the method
3.2.2. Further methods for testing Markov chains
3.3. Summary and discussion
4. Sequential stopping rules
4.1. The COAST rule
4.2. The CLAST rule
4.3. The variable criteria sequential stopping rule
4.4. Discussion
4.5. Using the vcSSR when transferring an effect from audition to vision
5. Predictability in visual multistability
5.1. Pretests
5.2. Predictability effects in visual pattern-component rivalry
5.2.1. Introduction
5.2.2. Methods
5.2.2.1. Participants
5.2.2.2. Setup
5.2.2.3. Stimuli
5.2.2.4. Conditions
5.2.2.5. Design and procedure
5.2.2.6. Analysis
5.2.3. Results
5.2.3.1. Valid reports
5.2.3.2. Verification of reports by eye movements
5.2.3.3. Onset latency
5.2.3.4. Dominance durations
5.2.3.5. Relative dominance of the segregated percept
5.2.4. Discussion
6. General discussion
6.1. Reporting percepts
6.1.1. Providing two versus three response options
6.1.2. Stimuli with more than three percepts
6.1.3. When to pool percepts together and when not
6.1.4. Leaving out percepts
6.1.5. Measuring (unreported) percepts
6.2. Comparing influencing factors on different levels
6.3. The use of the vcSSR
6.4. Valid reports
6.5. Conclusion
References
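The sequential dependencies that chapter 3 investigates can be made concrete with a toy model in which successive percept reports form a first-order Markov chain over the two percepts. The following Python sketch is purely illustrative: the two-state reduction, the transition probabilities, and all names are our own assumptions, not code or data from the thesis.

import numpy as np

# Toy model: percept reports in pattern-component rivalry as a
# first-order Markov chain over two states:
# 0 = "integrated" (plaid), 1 = "segregated" (gratings).
# The transition probabilities are invented illustration values.
rng = np.random.default_rng(0)

P = np.array([[0.7, 0.3],    # P[i, j]: probability that percept j
              [0.4, 0.6]])   # is reported right after percept i

def simulate_percepts(n_reports, p_first=(0.5, 0.5)):
    # Draw a sequence of percept reports from the order-1 chain.
    seq = [rng.choice(2, p=p_first)]
    for _ in range(n_reports - 1):
        seq.append(rng.choice(2, p=P[seq[-1]]))
    return np.array(seq)

def empirical_transitions(seq):
    # Estimate the transition matrix from an observed report sequence.
    counts = np.zeros((2, 2))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

reports = simulate_percepts(10_000)
print(empirical_transitions(reports))   # close to P for long sequences

If percept reports carried no history (order 0), both rows of the estimated transition matrix would coincide with the overall percept frequencies; comparing the rows is the simplest probe for first-order dependence, which is the intuition behind the Markov-chain tests discussed in chapter 3.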
12

Causal Models over Infinite Graphs and their Application to the Sensorimotor Loop: Stochastic Aspects and Gradient-Based Optimal Control

Bernigau, Holger 27 April 2015 (has links) (PDF)
Motivation and background

The enormous range of capabilities that every human acquires over a lifetime is probably among the most remarkable and fascinating aspects of life. Learning has therefore attracted broad interest from scientists in fields as different as philosophy, biology, sociology, educational science, computer science, and mathematics. This thesis focuses on the information-theoretic and mathematical aspects of learning. We are interested in the learning process of an agent (for example a human, an animal, a robot, an economic institution, or a state) that interacts with its environment. Common models for this interaction are Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs), in which learning is cast as maximization of the expectation of a predefined reward function. However, to formulate general principles (such as a formal definition of curiosity-driven learning, or the avoidance of unpleasant situations) in a rigorous way, it is desirable to have a theoretical framework for optimizing more complex functionals of the underlying process law, for example the entropy of certain sensor values or their mutual information. Optimization of the latter quantity (also known as predictive information; see the formula below) has been investigated intensively, both theoretically and in computer simulations, by N. Ay, R. Der, K. Zahedi, and G. Martius. In this thesis, we develop a mathematical theory for learning in the sensorimotor loop beyond expected reward maximization.

Approaches and results

This thesis covers four topics related to the theory of learning in the sensorimotor loop. First, we specify the model of an agent interacting with its environment, with or without learning. This interaction naturally gives rise to complex causal dependencies, and since we are interested in asymptotic properties of learning algorithms, infinite time horizons must be considered. It turns out that the well-understood theory of causal networks known from the machine learning literature is not powerful enough for this purpose. We therefore extend important theorems on causal networks to infinite graphs and general state spaces, using analytical methods from measure-theoretic probability and the theory of discrete-time stochastic processes. Furthermore, we prove a generalization of the strong Markov property from Markov processes to infinite causal networks.

Secondly, we develop a new idea for a projected stochastic constrained optimization algorithm. A discrete gradient-ascent algorithm can generally be used to generate an iterative sequence that converges to the stationary points of a given optimization problem, but when the optimization takes place over a compact subset of a vector space, the iterative sequence may leave the constraint set. One way to cope with this is to project all points back to the constraint set by Euclidean best-approximation, which is sometimes difficult to compute; a concrete example is optimization over the unit ball in a matrix space equipped with the operator norm. Our idea is to back-project using quasi-projectors other than the Euclidean best-approximation. In the matrix example there is another canonical way to keep the iterative sequence in the constraint set: whenever a point leaves the unit ball, it is divided by its norm (see the sketch below). For a given target function, however, this procedure may introduce spurious stationary points on the boundary. We show that this problem can be circumvented by using a gradient tailored to the quasi-projector used for back-projection: we state a general compatibility condition between a quasi-projector and the metric used for gradient ascent, prove convergence of the stochastic iterative sequences, and provide an appropriate metric for the unit-ball example.

Thirdly, we define and motivate a class of learning problems in the sensorimotor loop that is more general than the usual expected reward maximization, illustrated by numerous examples (expected reward maximization, maximization of the predictive information, maximization of the entropy, and minimization of the variance of a given reward function). We also provide stationarity conditions together with appropriate gradient formulas.

Last but not least, we prove convergence of a stochastic optimization algorithm (as considered in the second topic) applied to a general learning problem (as considered in the third topic): the learning algorithm converges to the set of stationary points. Among other cases, the proof covers the convergence of an improved version of an algorithm for maximizing the predictive information proposed by N. Ay, R. Der, and K. Zahedi. We also investigate an application to linear Gaussian dynamics, where the policies are encoded by the unit ball in a space of matrices equipped with the operator norm.
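For reference, the "predictive information" mentioned above is commonly defined in the literature as the mutual information between past and future sensor values. In a simple one-step discrete form (our notation, which need not match the thesis's):

I_{\mathrm{pred}} = I(S_t; S_{t+1}) = \sum_{s,s'} p(s,s') \, \log \frac{p(s,s')}{p(s)\, p(s')} = H(S_{t+1}) - H(S_{t+1} \mid S_t)

Maximizing I_pred thus rewards sensor processes that are both varied (high entropy H(S_{t+1})) and predictable from their own past (low conditional entropy).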
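The back-projection idea ("whenever a point leaves the unit ball, it is divided by its norm") can be illustrated with a minimal numerical sketch. Everything concrete below (the target function, the step sizes, the matrix size) is an invented example; in particular, the sketch uses a plain Euclidean gradient, whereas the thesis's contribution is precisely a gradient and metric tailored to the quasi-projector so that no spurious boundary stationary points arise.

import numpy as np

rng = np.random.default_rng(1)

def op_norm(M):
    # Operator (spectral) norm: the largest singular value of M.
    return np.linalg.norm(M, ord=2)

def noisy_grad(M, A):
    # Noisy gradient of an illustrative concave target f(M) = -||M - A||_F^2.
    return -2.0 * (M - A) + 0.1 * rng.standard_normal(M.shape)

A = np.array([[0.5, 0.0],
              [0.0, -0.5]])            # arbitrary maximizer, chosen inside the ball
M = np.zeros((2, 2))                   # start at the center of the constraint set

for t in range(1, 5001):
    M = M + (1.0 / t) * noisy_grad(M, A)   # Robbins-Monro step sizes
    n = op_norm(M)
    if n > 1.0:                        # iterate left the operator-norm unit ball:
        M = M / n                      #   back-project by dividing by its norm

print(M)                               # close to A after many iterations
print(op_norm(M))                      # <= 1, i.e., inside the constraint set

With the maximizer inside the ball, the norm-division projection is harmless; the spurious-stationary-point issue the thesis addresses arises when the unconstrained ascent direction keeps pushing the iterate outward, so that the projection and the plain gradient cancel each other on the boundary.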
13

Causal Models over Infinite Graphs and their Application to the Sensorimotor Loop: General Stochastic Aspects and Gradient Methods for Optimal Control

Bernigau, Holger 04 July 2015 (has links)
Abstract identical to the preceding record.
