1

Finding the Maximizers of the Information Divergence from an Exponential Family

Rauh, Johannes, 19 October 2011
The subject of this thesis is the maximization of the information divergence from an exponential family on a finite set, a problem first formulated by Nihat Ay. A special case is the maximization of the mutual information or the multiinformation between different parts of a composite system. My thesis contributes mainly to the mathematical aspects of the optimization problem. A reformulation is found that relates the maximization of the information divergence to the maximization of an entropic quantity defined on the normal space of the exponential family. This reformulation simplifies calculations in concrete cases and gives theoretical insight into the general problem. A second emphasis of the thesis is on examples that demonstrate how the theoretical results can be applied in particular cases. Third, my thesis contains first results on the characterization of exponential families with a small maximum value of the information divergence.
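Stated compactly (a standard formulation of the problem; the notation below is ours, not quoted from the thesis): given an exponential family E of probability distributions on a finite set X, one seeks

```latex
\max_{P \in \mathcal{P}(\mathcal{X})} D(P \,\|\, \mathcal{E}),
\qquad
D(P \,\|\, \mathcal{E}) \;=\; \min_{Q \in \overline{\mathcal{E}}}
  \sum_{x \in \mathcal{X}} P(x) \log \frac{P(x)}{Q(x)},
```

where \mathcal{P}(\mathcal{X}) is the probability simplex and \overline{\mathcal{E}} is the closure of the family, over which the minimum is attained.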
2

Local online learning of coherent information

Der, Ralf; Smyth, Darragh, 11 December 2018
One of the goals of perception is to learn to respond to coherence across space, time and modality. Here we present an abstract framework for the local online unsupervised learning of this coherent information using multi-stream neural networks. The processing units distinguish between feedforward inputs projected from the environment and the lateral, contextual inputs projected from the processing units of other streams. The contextual inputs are used to guide learning towards coherent cross-stream structure. The goal of all the learning algorithms described is to maximize the predictability between each unit output and its context. Many local cost functions may be applied, e.g. mutual information, relative entropy, squared error and covariance. Theoretical and simulation results indicate that, of these, the covariance rule (1) is the only rule that specifically links and learns only those streams with coherent information, (2) can be robustly approximated by a Hebbian rule, and (3) remains stable in the presence of input noise, in the absence of pairwise input correlations, and when discovering locally less informative components that are coherent globally. In accordance with the parallel nature of the biological substrate, we also show that all the rules scale up with the number of streams.
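As an illustration of the covariance rule described above (a minimal sketch under our own assumptions, not the authors' code: the unit is linear, the context is a scalar signal from a second stream, and running means stand in for true expectations):

```python
import numpy as np

def covariance_step(w, x, c, x_mean, c_mean, lr=0.05, decay=0.99):
    """One online covariance-rule step for a single linear unit y = w @ x,
    where c is the contextual input from another stream. Ascending the
    gradient of cov(y, c) in w gives dw ~ (c - c_mean) * (x - x_mean)."""
    x_mean = decay * x_mean + (1 - decay) * x   # running mean of the input
    c_mean = decay * c_mean + (1 - decay) * c   # running mean of the context
    w = w + lr * (c - c_mean) * (x - x_mean)    # Hebbian-like covariance update
    w = w / (np.linalg.norm(w) + 1e-12)         # renormalize to bound the weights
    return w, x_mean, c_mean

# Toy usage: both streams see a shared source s plus independent noise,
# so only the coherent (shared) component should drive learning.
rng = np.random.default_rng(0)
w, x_mean, c_mean = rng.standard_normal(4), np.zeros(4), 0.0
for _ in range(5000):
    s = rng.standard_normal()                            # shared, coherent source
    x = s * np.ones(4) + 0.5 * rng.standard_normal(4)    # stream 1 input
    c = s + 0.5 * rng.standard_normal()                  # context from stream 2
    w, x_mean, c_mean = covariance_step(w, x, c, x_mean, c_mean)
print(w)  # approaches the normalized all-ones direction of the shared signal
```

Because the update involves only the fluctuations of input and context, a stream with no coherent component produces zero expected weight change, which is the selectivity property claimed in (1).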
3

Finding the Maximizers of the Information Divergence from an Exponential Family

Rauh, Johannes, 09 January 2011
The subject of this thesis is the maximization of the information divergence from an exponential family on a finite set, a problem first formulated by Nihat Ay. A special case is the maximization of the mutual information or the multiinformation between different parts of a composite system. My thesis contributes mainly to the mathematical aspects of the optimization problem. A reformulation is found that relates the maximization of the information divergence to the maximization of an entropic quantity defined on the normal space of the exponential family. This reformulation simplifies calculations in concrete cases and gives theoretical insight into the general problem. A second emphasis of the thesis is on examples that demonstrate how the theoretical results can be applied in particular cases. Third, my thesis contains first results on the characterization of exponential families with a small maximum value of the information divergence.

Contents:
1. Introduction
2. Exponential families
  2.1. Exponential families, the convex support and the moment map
  2.2. The closure of an exponential family
  2.3. Algebraic exponential families
  2.4. Hierarchical models
3. Maximizing the information divergence from an exponential family
  3.1. The directional derivatives of D(*|E)
  3.2. Projection points and kernel distributions
  3.3. The function DE
  3.4. The first order optimality conditions of DE
  3.5. The relation between D(*|E) and DE
  3.6. Computing the critical points
  3.7. Computing the projection points
4. Examples
  4.1. Low-dimensional exponential families
    4.1.1. Zero-dimensional exponential families
    4.1.2. One-dimensional exponential families
    4.1.3. One-dimensional exponential families on four states
    4.1.4. Other low-dimensional exponential families
  4.2. Partition models
  4.3. Exponential families with max D(*|E) = log(2)
  4.4. Binary i.i.d. models and binomial models
5. Applications and Outlook
  5.1. Principles of learning, complexity measures and constraints
  5.2. Optimally approximating exponential families
  5.3. Asymptotic behaviour of the empirical information divergence
A. Polytopes and oriented matroids
  A.1. Polytopes
  A.2. Oriented matroids
Bibliography
Index
Glossary of notations
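As a concrete numerical illustration of the special case mentioned in the abstract above (our own sketch, not code from the thesis; the model choice and names are ours): for the independence model E of two binary variables, D(P||E) equals the mutual information of the two variables, and its maximum over all joint distributions is log(2), attained e.g. at the uniform distribution on {00, 11}, matching the bound discussed in section 4.3.

```python
import numpy as np
from scipy.optimize import minimize

def mutual_information(p):
    """D(P || E) for E the independence model of two binary variables:
    p is a length-4 joint distribution (p00, p01, p10, p11), and the
    rI-projection onto E is the product of the marginals."""
    p = np.asarray(p, dtype=float).reshape(2, 2)
    q = np.outer(p.sum(axis=1), p.sum(axis=0))  # product of marginals
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def neg_mi(theta):
    """Negative MI under a softmax parameterization, so the search is unconstrained."""
    p = np.exp(theta - theta.max())
    return -mutual_information(p / p.sum())

print(mutual_information([0.5, 0, 0, 0.5]))  # exactly log(2) ~ 0.6931
best = min((minimize(neg_mi, np.random.randn(4), method="Nelder-Mead")
            for _ in range(20)), key=lambda r: r.fun)
print(-best.fun)  # the numerical search approaches log(2) from below
```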
