51

Uncertainty Assessment of Hydrogeological Models Based on Information Theory

De Aguinaga, José Guillermo 17 August 2011 (has links) (PDF)
There is a great deal of uncertainty in hydrogeological modeling. Overparametrized models increase uncertainty since the information of the observations is distributed through all of the parameters. The present study proposes a new option to reduce this uncertainty. A way to achieve this goal is to select a model which provides good performance with as few calibrated parameters as possible (a parsimonious model) and to calibrate it using many sources of information. Akaike's Information Criterion (AIC), proposed by Hirotugu Akaike in 1973, is a statistical-probabilistic criterion based on information theory which allows us to select a parsimonious model. AIC formulates the problem of parsimonious model selection as an optimization problem across a set of proposed conceptual models. The AIC assessment is relatively new in groundwater modeling, and applying it with different sources of observations presents a challenge. In this dissertation, important findings in the application of AIC in hydrogeological modeling using different sources of observations are discussed. AIC is tested on groundwater models using three sets of synthetic data: hydraulic pressure, horizontal hydraulic conductivity, and tracer concentration. In the present study, the impact of the following factors is analyzed: number of observations, types of observations, and order of calibrated parameters. These analyses reveal not only that the number of observations determines how complex a model can be but also that their diversity allows for further complexity in the parsimonious model. However, a truly parsimonious model was only achieved when the order of calibrated parameters was properly considered. This means that parameters which provide bigger improvements in model fit should be considered first.
The approach to obtain a parsimonious model applying AIC with different types of information was successfully applied to an unbiased lysimeter model using two different types of real data: evapotranspiration and seepage water. With this additional independent model assessment it was possible to underpin the general validity of this AIC approach.
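As a rough illustration of the model-selection idea in this abstract, the sketch below scores hypothetical candidate groundwater models with the common least-squares form of AIC. The model names, residual sums of squares, and observation count are invented for the example and are not taken from the thesis; the point is only that AIC trades goodness of fit against the number of calibrated parameters.

```python
import math

def aic_least_squares(rss, n_obs, k_params):
    """Akaike's Information Criterion for a least-squares fit.

    Uses the common Gaussian-error form AIC = n*ln(RSS/n) + 2k,
    where the 2k term penalizes each calibrated parameter.
    """
    return n_obs * math.log(rss / n_obs) + 2 * k_params

# Hypothetical candidate models: (name, residual sum of squares, #parameters).
candidates = [
    ("2-zone conductivity", 4.1, 2),
    ("5-zone conductivity", 3.2, 5),
    ("9-zone conductivity", 3.0, 9),
]
n = 50  # assumed number of observations

scores = {name: aic_least_squares(rss, n, k) for name, rss, k in candidates}
best = min(scores, key=scores.get)  # lowest AIC = most parsimonious choice
```

In this toy setting the 9-zone model fits slightly better than the 5-zone model, yet its four extra parameters cost more than the fit improvement is worth, so AIC selects the 5-zone model.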
52

Binaural signal processing for source localization and noise reduction with applications to mobile robotics

Schölling, Björn January 2009 (has links)
Also published as: Darmstadt, Techn. Univ., dissertation, 2009
53

Subgraph Covers: An Information Theoretic Approach to Motif Analysis in Networks

Wegner, Anatol Eugen 02 April 2015 (has links)
A large number of complex systems can be modelled as networks of interacting units. From a mathematical point of view the topology of such systems can be represented as graphs of which the nodes represent individual elements of the system and the edges interactions or relations between them. In recent years networks have become a principal tool for analyzing complex systems in many different fields. This thesis introduces an information theoretic approach for finding characteristic connectivity patterns of networks, also called network motifs. Network motifs are sometimes also referred to as basic building blocks of complex networks. Many real world networks contain a statistically surprising number of certain subgraph patterns called network motifs. In biological and technological networks motifs are thought to contribute to the overall function of the network by performing modular tasks such as information processing. Therefore, methods for identifying network motifs are of great scientific interest. In the prevalent approach to motif analysis network motifs are defined to be subgraphs that occur significantly more often in a network when compared to a null model that preserves certain features of the network. However, defining appropriate null models and sampling these has proven to be challenging. This thesis introduces an alternative approach to motif analysis which looks at motifs as regularities of a network that can be exploited to obtain a more efficient representation of the network. The approach is based on finding a subgraph cover that represents the network using minimal total information. Here, a subgraph cover is a set of subgraphs such that every edge of the graph is contained in at least one subgraph in the cover while the total information of a subgraph cover is the information required to specify the connectivity patterns occurring in the cover together with their position in the graph. 
The thesis also studies the connection between motif analysis and random graph models for networks. Developing random graph models that incorporate high densities of triangles and other motifs has long been a goal of network research. In recent years, two such models have been proposed. However, their applications have remained limited because of the lack of a method for fitting such models to networks. In this thesis, we address this problem by showing that these models can be formulated as ensembles of subgraph covers and that total information optimal subgraph covers can be used to match networks with such models. Moreover, these models can be solved analytically for many of their properties, allowing for more accurate modelling of networks in general. Finally, the thesis also analyzes the problem of finding a total information optimal subgraph cover with respect to its computational complexity. The problem turns out to be NP-hard; hence, we propose a greedy heuristic for it. Empirical results for several real world networks from different fields are presented. In order to test the presented algorithm we also consider some synthetic networks with predetermined motif structure.
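The following sketch illustrates the flavor of a greedy subgraph cover: every edge must be covered, and a motif (here, a triangle) is preferred whenever describing it once is cheaper than describing its edges individually. The per-subgraph costs are invented toy numbers standing in for the thesis's total-information measure, and the edge/triangle representation is an assumption for the example.

```python
import itertools

def triangles(edges):
    """Enumerate triangles of an undirected graph as frozensets of their three edges."""
    nodes = {u for e in edges for u in e}
    tris = []
    for a, b, c in itertools.combinations(sorted(nodes), 3):
        es = [frozenset((a, b)), frozenset((b, c)), frozenset((a, c))]
        if all(e in edges for e in es):
            tris.append(frozenset(es))
    return tris

def greedy_cover(edges, cost_edge=1.0, cost_triangle=2.0):
    """Greedy heuristic: cover every edge, taking a triangle (toy cost 2.0)
    whenever it is cheaper than its three edges separately (toy cost 3.0)."""
    uncovered = set(edges)
    cover, total_cost = [], 0.0
    for tri in triangles(edges):
        if tri <= uncovered:          # triangle still fully uncovered
            cover.append(("triangle", tri))
            uncovered -= tri
            total_cost += cost_triangle
    for e in uncovered:               # remaining edges covered singly
        cover.append(("edge", e))
        total_cost += cost_edge
    return cover, total_cost

# A triangle 0-1-2 plus a pendant edge 2-3.
edges = {frozenset(p) for p in [(0, 1), (1, 2), (0, 2), (2, 3)]}
cover, cost = greedy_cover(edges)     # one triangle + one single edge
```

The triangle is chosen because it represents three edges at the cost of two, which is the intuition behind motifs yielding a more efficient representation of the network.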
54

Uncertainty Assessment of Hydrogeological Models Based on Information Theory

De Aguinaga, José Guillermo 03 December 2010 (has links)
There is a great deal of uncertainty in hydrogeological modeling. Overparametrized models increase uncertainty since the information of the observations is distributed through all of the parameters. The present study proposes a new option to reduce this uncertainty. A way to achieve this goal is to select a model which provides good performance with as few calibrated parameters as possible (parsimonious model) and to calibrate it using many sources of information. Akaike’s Information Criterion (AIC), proposed by Hirotugu Akaike in 1973, is a statistic-probabilistic criterion based on the Information Theory, which allows us to select a parsimonious model. AIC formulates the problem of parsimonious model selection as an optimization problem across a set of proposed conceptual models. The AIC assessment is relatively new in groundwater modeling and it presents a challenge to apply it with different sources of observations. In this dissertation, important findings in the application of AIC in hydrogeological modeling using different sources of observations are discussed. AIC is tested on ground-water models using three sets of synthetic data: hydraulic pressure, horizontal hydraulic conductivity, and tracer concentration. In the present study, the impact of the following factors is analyzed: number of observations, types of observations and order of calibrated parameters. These analyses reveal not only that the number of observations determine how complex a model can be but also that its diversity allows for further complexity in the parsimonious model. However, a truly parsimonious model was only achieved when the order of calibrated parameters was properly considered. This means that parameters which provide bigger improvements in model fit should be considered first. 
The approach to obtain a parsimonious model applying AIC with different types of information was successfully applied to an unbiased lysimeter model using two different types of real data: evapotranspiration and seepage water. With this additional independent model assessment it was possible to underpin the general validity of this AIC approach. / Hydrogeologische Modellierung ist von erheblicher Unsicherheit geprägt. Überparametrisierte Modelle erhöhen die Unsicherheit, da gemessene Informationen auf alle Parameter verteilt sind. Die vorliegende Arbeit schlägt einen neuen Ansatz vor, um diese Unsicherheit zu reduzieren. Eine Möglichkeit, um dieses Ziel zu erreichen, besteht darin, ein Modell auszuwählen, das ein gutes Ergebnis mit möglichst wenigen Parametern liefert („parsimonious model“), und es zu kalibrieren, indem viele Informationsquellen genutzt werden. Das 1973 von Hirotugu Akaike vorgeschlagene Informationskriterium, bekannt als Akaike-Informationskriterium (engl. Akaike’s Information Criterion; AIC), ist ein statistisches Wahrscheinlichkeitskriterium basierend auf der Informationstheorie, welches die Auswahl eines Modells mit möglichst wenigen Parametern erlaubt. AIC formuliert das Problem der Entscheidung für ein gering parametrisiertes Modell als ein modellübergreifendes Optimierungsproblem. Die Anwendung von AIC in der Grundwassermodellierung ist relativ neu und stellt eine Herausforderung in der Anwendung verschiedener Messquellen dar. In der vorliegenden Dissertation werden maßgebliche Forschungsergebnisse in der Anwendung des AIC in hydrogeologischer Modellierung unter Anwendung unterschiedlicher Messquellen diskutiert. AIC wird an Grundwassermodellen getestet, bei denen drei synthetische Datensätze angewendet werden: Wasserstand, horizontale hydraulische Leitfähigkeit und Tracer-Konzentration. Die vorliegende Arbeit analysiert den Einfluss folgender Faktoren: Anzahl der Messungen, Arten der Messungen und Reihenfolge der kalibrierten Parameter. 
Diese Analysen machen nicht nur deutlich, dass die Anzahl der gemessenen Parameter die Komplexität eines Modells bestimmt, sondern auch, dass seine Diversität weitere Komplexität für gering parametrisierte Modelle erlaubt. Allerdings konnte ein solches Modell nur erreicht werden, wenn eine bestimmte Reihenfolge der kalibrierten Parameter berücksichtigt wurde. Folglich sollten zuerst jene Parameter in Betracht gezogen werden, die deutliche Verbesserungen in der Modellanpassung liefern. Der Ansatz, ein gering parametrisiertes Modell durch die Anwendung des AIC mit unterschiedlichen Informationsarten zu erhalten, wurde erfolgreich auf einen Lysimeterstandort übertragen. Dabei wurden zwei unterschiedliche reale Messwertarten genutzt: Evapotranspiration und Sickerwasser. Mit Hilfe dieser weiteren, unabhängigen Modellbewertung konnte die Gültigkeit dieses AIC-Ansatzes gezeigt werden.
55

Finding the Maximizers of the Information Divergence from an Exponential Family

Rauh, Johannes 09 January 2011 (has links)
The subject of this thesis is the maximization of the information divergence from an exponential family on a finite set, a problem first formulated by Nihat Ay. A special case is the maximization of the mutual information or the multiinformation between different parts of a composite system. My thesis contributes mainly to the mathematical aspects of the optimization problem. A reformulation is found that relates the maximization of the information divergence to the maximization of an entropic quantity defined on the normal space of the exponential family. This reformulation simplifies calculations in concrete cases and gives theoretical insight into the general problem. A second emphasis of the thesis is on examples that demonstrate how the theoretical results can be applied in particular cases. Third, my thesis contains first results on the characterization of exponential families with a small maximum value of the information divergence.

Contents:
1. Introduction
2. Exponential families
  2.1. Exponential families, the convex support and the moment map
  2.2. The closure of an exponential family
  2.3. Algebraic exponential families
  2.4. Hierarchical models
3. Maximizing the information divergence from an exponential family
  3.1. The directional derivatives of D(·|E)
  3.2. Projection points and kernel distributions
  3.3. The function DE
  3.4. The first order optimality conditions of DE
  3.5. The relation between D(·|E) and DE
  3.6. Computing the critical points
  3.7. Computing the projection points
4. Examples
  4.1. Low-dimensional exponential families
    4.1.1. Zero-dimensional exponential families
    4.1.2. One-dimensional exponential families
    4.1.3. One-dimensional exponential families on four states
    4.1.4. Other low-dimensional exponential families
  4.2. Partition models
  4.3. Exponential families with max D(·|E) = log(2)
  4.4. Binary i.i.d. models and binomial models
5. Applications and Outlook
  5.1. Principles of learning, complexity measures and constraints
  5.2. Optimally approximating exponential families
  5.3. Asymptotic behaviour of the empirical information divergence
A. Polytopes and oriented matroids
  A.1. Polytopes
  A.2. Oriented matroids
Bibliography
Index
Glossary of notations
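A small numerical illustration of the special case mentioned in the abstract: for the independence model of two binary variables, the divergence D(p|E) equals the mutual information of p (the closest point of the independence family is the product of the marginals), and a perfectly correlated distribution attains the maximum value log(2). The 2x2 joint-table encoding below is an assumption made for the example.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p||q) over a shared finite support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def divergence_from_independence(joint):
    """D(p || E) for E the independence model of two binary variables.

    The rI-projection of p onto E is the product of p's marginals,
    so the divergence equals the mutual information of p.
    """
    px = [joint[0][0] + joint[0][1], joint[1][0] + joint[1][1]]
    py = [joint[0][0] + joint[1][0], joint[0][1] + joint[1][1]]
    p = [joint[x][y] for x in (0, 1) for y in (0, 1)]
    q = [px[x] * py[y] for x in (0, 1) for y in (0, 1)]
    return kl(p, q)

perfectly_correlated = [[0.5, 0.0], [0.0, 0.5]]   # x == y with probability 1
d = divergence_from_independence(perfectly_correlated)  # attains log(2)
```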
56

Causal Models over Infinite Graphs and their Application to the Sensorimotor Loop: Stochastic Aspects and Gradient-Based Optimal Control

Bernigau, Holger 27 April 2015 (has links) (PDF)
Motivation and background: The enormous range of capabilities that every human learns throughout life is probably among the most remarkable and fascinating aspects of life. Learning has therefore drawn great interest from scientists working in very different fields like philosophy, biology, sociology, educational sciences, computer sciences and mathematics. This thesis focuses on the information theoretical and mathematical aspects of learning. We are interested in the learning process of an agent (which can be, for example, a human, an animal, a robot, an economic institution or a state) that interacts with its environment. Common models for this interaction are Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs). Learning is then considered to be the maximization of the expectation of a predefined reward function. In order to formulate general principles (like a formal definition of curiosity-driven learning or avoidance of unpleasant situations) in a rigorous way, it might be desirable to have a theoretical framework for the optimization of more complex functionals of the underlying process law. This might include the entropy of certain sensor values or their mutual information. An optimization of the latter quantity (also known as predictive information) has been investigated intensively, both theoretically and experimentally using computer simulations, by N. Ay, R. Der, K. Zahedi and G. Martius. In this thesis, we develop a mathematical theory for learning in the sensorimotor loop beyond expected reward maximization. Approaches and results: This thesis covers four different topics related to the theory of learning in the sensorimotor loop. First of all, we need to specify the model of an agent interacting with the environment, with or without learning. This interaction naturally results in complex causal dependencies.
Since we are interested in asymptotic properties of learning algorithms, it is necessary to consider infinite time horizons. It turns out that the well-understood theory of causal networks known from the machine learning literature is not powerful enough for our purpose. Therefore we extend important theorems on causal networks to infinite graphs and general state spaces using analytical methods from measure theoretic probability theory and the theory of discrete time stochastic processes. Furthermore, we prove a generalization of the strong Markov property from Markov processes to infinite causal networks. Secondly, we develop a new idea for a projected stochastic constraint optimization algorithm. Generally a discrete gradient ascent algorithm can be used to generate an iterative sequence that converges to the stationary points of a given optimization problem. Whenever the optimization takes place over a compact subset of a vector space, it is possible that the iterative sequence leaves the constraint set. One possibility to cope with this problem is to project all points to the constraint set using Euclidean best-approximation. The latter is sometimes difficult to calculate. A concrete example is an optimization over the unit ball in a matrix space equipped with operator norm. Our idea consists of a back-projection using quasi-projectors different from the Euclidean best-approximation. In the matrix example, there is another canonical way to force the iterative sequence to stay in the constraint set: Whenever a point leaves the unit ball, it is divided by its norm. For a given target function, this procedure might introduce spurious stationary points on the boundary. We show that this problem can be circumvented by using a gradient that is tailored to the quasi-projector used for back-projection. 
We state a general technical compatibility condition between a quasi-projector and a metric used for gradient ascent, prove convergence of stochastic iterative sequences and provide an appropriate metric for the unit-ball example. Thirdly, a class of learning problems in the sensorimotor loop is defined and motivated. This class of problems is more general than the usual expected reward maximization and is illustrated by numerous examples (like expected reward maximization, maximization of the predictive information, maximization of the entropy and minimization of the variance of a given reward function). We also provide stationarity conditions together with appropriate gradient formulas. Last but not least, we prove convergence of a stochastic optimization algorithm (as considered in the second topic) applied to a general learning problem (as considered in the third topic). It is shown that the learning algorithm converges to the set of stationary points. Among others, the proof covers the convergence of an improved version of an algorithm for the maximization of the predictive information as proposed by N. Ay, R. Der and K. Zahedi. We also investigate an application to a linear Gaussian dynamic, where the policies are encoded by the unit-ball in a space of matrices equipped with operator norm.
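The back-projection idea described above can be sketched in a few lines: ascend the gradient, and whenever the iterate leaves the unit ball, divide it by its norm. The concrete objective (a linear function with a known maximizer on the ball) and the step size are assumptions for illustration; note that for the Euclidean ball this quasi-projector happens to coincide with the Euclidean best-approximation, whereas the thesis's subtleties arise for sets like operator-norm balls of matrices, where the two differ.

```python
import numpy as np

def projected_ascent(grad, x0, steps=500, lr=0.05):
    """Gradient ascent over the Euclidean unit ball with the simple
    'divide by the norm' quasi-projector as back-projection."""
    x = x0.copy()
    for _ in range(steps):
        x = x + lr * grad(x)
        n = np.linalg.norm(x)
        if n > 1.0:          # iterate left the constraint set: back-project
            x = x / n
    return x

# Maximize <c, x> over the unit ball; the maximizer is c/||c||.
c = np.array([3.0, 4.0])
x_star = projected_ascent(lambda x: c, np.zeros(2))  # -> [0.6, 0.8]
```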
57

Causal Models over Infinite Graphs and their Application to the Sensorimotor Loop: General Stochastic Aspects and Gradient Methods for Optimal Control

Bernigau, Holger 04 July 2015 (has links)
58

Detecting and quantifying causality from time series of complex systems

Runge, Jakob 18 August 2014 (has links)
Today's scientific world produces a vastly growing and technology-driven abundance of time series data of such complex dynamical systems as the Earth's climate, the brain, or the global economy. In the climate system, multiple processes (e.g., the El Niño-Southern Oscillation (ENSO) or the Indian Monsoon) interact in a complex, intertwined way involving teleconnections and feedback loops. Using the data to reconstruct the causal mechanisms underlying these interactions is one way to better understand such complex systems, especially given the infinite-dimensional complexity of the underlying physical equations. In this thesis, two main research questions are addressed: (i) How can general causal interactions be practically detected from multivariate time series? (ii) How can the strength of causal interactions between multiple processes be quantified in a well-interpretable way? In the first part of this thesis, the theory of detecting and quantifying general (linear and nonlinear) causal interactions is developed alongside the important practical issues of estimation. To quantify causal interactions, a physically motivated, information-theoretic formalism is introduced. The formalism is extensively tested numerically and substantiated by rigorous mathematical results. In the second part of this thesis, the novel methods are applied to test and generate hypotheses on causal interactions in climate time series covering the 20th century up to the present. The results yield insights into the Walker circulation and teleconnections of the ENSO system, for example with the Indian Monsoon. Further, in an exploratory way, a global surface pressure dataset is analyzed to identify key processes that drive and govern interactions in the global atmosphere. Finally, it is shown how quantifying interactions can be used to determine possible structural changes, termed tipping points, and as optimal predictors, here applied to the prediction of ENSO.
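To give a feel for detecting directed interactions from time series, here is a deliberately simple linear stand-in (Granger-style variance reduction) for the information-theoretic measures the thesis develops; the synthetic coupled series, lag of 1, and coefficients are invented for the example. The thesis's formalism is more general (nonlinear, conditional), but the asymmetry it detects is the same.

```python
import numpy as np

def granger_gain(x, y, lag=1):
    """Fraction of variance in y explained by past x beyond past y alone.

    A linear proxy for causal-strength measures such as transfer entropy:
    positive gain means x's past improves prediction of y.
    """
    past_y, past_x, target = y[:-lag], x[:-lag], y[lag:]
    # Restricted model: predict y from its own past only.
    A = np.column_stack([past_y, np.ones_like(past_y)])
    r_res = target - A @ np.linalg.lstsq(A, target, rcond=None)[0]
    # Full model: add the past of x.
    B = np.column_stack([past_y, past_x, np.ones_like(past_y)])
    r_full = target - B @ np.linalg.lstsq(B, target, rcond=None)[0]
    return 1.0 - np.var(r_full) / np.var(r_res)

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.zeros(2000)
for t in range(1, 2000):          # y is driven by lagged x
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()

gain_xy = granger_gain(x, y)      # large: x's past predicts y
gain_yx = granger_gain(y, x)      # near zero: no influence back
```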
59

Multiuser Transmission in Code Division Multiple Access Mobile Communications Systems

Irmer, Ralf 28 June 2005 (has links) (PDF)
Code Division Multiple Access (CDMA) is the technology used in all third generation cellular communications networks, and it is a promising candidate for the definition of fourth generation standards. The wireless mobile channel is usually frequency-selective causing interference among the users in one CDMA cell. Multiuser Transmission (MUT) algorithms for the downlink can increase the number of supportable users per cell, or decrease the necessary transmit power to guarantee a certain quality-of-service. Transmitter-based algorithms exploiting the channel knowledge in the transmitter are also motivated by information theoretic results like the Writing-on-Dirty-Paper theorem. The signal-to-noise ratio (SNR) is a reasonable performance criterion for noise-dominated scenarios. Using linear filters in the transmitter and the receiver, the SNR can be maximized with the proposed Eigenprecoder. Using multiple transmit and receive antennas, the performance can be significantly improved. The Generalized Selection Combining (GSC) MIMO Eigenprecoder concept enables reduced complexity transceivers. Methods eliminating the interference completely or minimizing the mean squared error exist for both the transmitter and the receiver. The maximum likelihood sequence detector in the receiver minimizes the bit error rate (BER), but it has no direct transmitter counterpart. The proposed Minimum Bit Error Rate Multiuser Transmission (TxMinBer) minimizes the BER at the detectors by transmit signal processing. This nonlinear approach uses the knowledge of the transmit data symbols and the wireless channel to calculate a transmit signal optimizing the BER with a transmit power constraint by nonlinear optimization methods like sequential quadratic programming (SQP). The performance of linear and nonlinear MUT algorithms with linear receivers is compared at the example of the TD-SCDMA standard. 
The interference problem can be solved with all MUT algorithms, but the TxMinBer approach requires less transmit power to support a certain number of users. The high computational complexity of MUT algorithms is also an important issue for their practical real-time application. The exploitation of structural properties of the system matrix reduces the complexity of the linear MUT methods significantly. Several efficient methods to invert the system matrix are shown and compared. Proposals to reduce the complexity of the Minimum Bit Error Rate Multiuser Transmission method are made, including a method that avoids the power constraint by phase-only optimization. The complexity of the nonlinear methods is still some orders of magnitude higher than that of the linear MUT algorithms, but further research on this topic and the increasing processing power of integrated circuits will eventually allow their better performance to be exploited. / Code Division Multiple Access (CDMA) is used in all third-generation cellular mobile communications systems and is a promising candidate for future technologies. The network capacity, i.e. the number of users per radio cell, is limited by the interference arising between the users. For the uplink from the mobile terminals to the base station, the interference can be reduced by multiuser detection methods in the receiver. For the downlink, which carries the higher data rates of multimedia applications, the transmit signal can be predistorted in the transmitter so that the influence of the interference is minimized. The information-theoretic motivation for this is provided by the Writing-on-Dirty-Paper theorem. The signal-to-noise ratio (SNR) is a suitable performance criterion in noise-dominated scenarios. With transmit and receive filters, the SNR can be maximized by the proposed Eigenprecoder. 
By employing multiple antennas at the transmitter and receiver, the performance can be increased significantly. The Generalized Selection MIMO Eigenprecoder enables transceivers with reduced complexity. For both the receiver and the transmitter, methods exist that eliminate the interference completely or minimize the mean squared error. The maximum-likelihood receiver minimizes the bit error probability (BER) but has no corresponding counterpart in the transmitter. The Minimum Bit Error Rate Multiuser Transmission (TxMinBer) proposed in this thesis minimizes the BER at the detector by transmit signal processing. This nonlinear method uses knowledge of the data symbols and the mobile radio channel to generate a transmit signal that minimizes the BER subject to a transmit power constraint. Nonlinear optimization methods such as sequential quadratic programming (SQP) are used for this purpose. The performance of linear and nonlinear MUT algorithms with linear receivers is compared using the example of the TD-SCDMA standard. The interference problem can be solved with all of the investigated methods, but the TxMinBer method requires the least transmit power to support a given number of users. The high computational complexity of the MUT algorithms is an important issue for implementation in real-time systems. By exploiting structural properties of the system matrices, the complexity of the linear MUT methods can be reduced significantly. Various methods for inverting the system matrices are presented and compared. Proposals are made to reduce the complexity of the Minimum Bit Error Rate Multiuser Transmission, among others by avoiding the transmit power constraint through restricting the optimization to the phases of the transmit signal vector. 
The complexity of the nonlinear methods is some orders of magnitude higher than that of the linear methods. Further research on this topic, together with the growing processing power of integrated circuits, will in the future allow the better performance of the nonlinear MUT methods to be exploited.
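The TxMinBer idea above, shaping the transmit signal so that the detectors' bit error rate is minimized under a power constraint, can be sketched as a small constrained optimization. The sketch below is a toy real-valued simplification (my own construction, not the thesis' TD-SCDMA formulation): for known BPSK symbols b and a known channel matrix H, it minimizes a Q-function BER proxy over the transmit vector x subject to a power budget, with SciPy's SLSQP solver standing in for SQP.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import erfc

rng = np.random.default_rng(1)

K, N = 4, 6                      # users, transmit chips (toy sizes)
H = rng.standard_normal((K, N))  # known downlink channel matrix
b = np.array([1., -1., 1., 1.])  # known BPSK data symbols
P = float(N)                     # transmit power budget
sigma = 0.5                      # noise std deviation at the detectors

def ber_proxy(x):
    # Sum of per-user error probabilities Q(b_k * (Hx)_k / sigma),
    # with Q(t) = 0.5 * erfc(t / sqrt(2)).
    margins = b * (H @ x) / sigma
    return 0.5 * erfc(margins / np.sqrt(2)).sum()

cons = {"type": "ineq", "fun": lambda x: P - x @ x}  # ||x||^2 <= P

# Zero-forcing start, scaled onto the power budget.
x0 = np.linalg.pinv(H) @ b
x0 *= np.sqrt(P) / np.linalg.norm(x0)

res = minimize(ber_proxy, x0, method="SLSQP", constraints=cons)
x = res.x

assert x @ x <= P + 1e-6                       # power constraint holds
assert ber_proxy(x) <= ber_proxy(x0) + 1e-12   # no worse than zero forcing
```

Because the objective uses the actual data symbols, the optimizer can spend power where detection margins are weakest; that symbol-by-symbol dependence is exactly what makes the approach nonlinear and costly compared with a fixed linear precoder.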
60

Coding Theorem and Memory Conditions for Abstract Channels with Time Structure / Kodierungstheorem und Gedächtniseigenschaften für abstrakte Kanäle mit Zeitstruktur

Mittelbach, Martin 02 June 2015 (has links) (PDF)
In the first part of this thesis, we generalize a coding theorem and a converse of Kadota and Wyner (1972) to abstract channels with time structure. As a main contribution we prove the coding theorem for a significantly weaker condition on the channel output memory, called total ergodicity for block-i.i.d. inputs. We achieve this result mainly by introducing an alternative characterization of information rate capacity. We show that the ψ-mixing condition (asymptotic output-memorylessness), used by Kadota and Wyner, is quite restrictive, in particular for the important class of Gaussian channels. In fact, we prove that for Gaussian channels the ψ-mixing condition is equivalent to finite output memory. Moreover, we derive a weak converse for all stationary channels with time structure. Intersymbol interference as well as input constraints are taken into account in a flexible way. Due to the direct use of outer measures and a derivation of an adequate version of Feinstein’s lemma we are able to avoid the standard extension of the channel input σ-algebra and obtain a more transparent derivation. We aim at a presentation from an operational perspective and consider an abstract framework, which enables us to treat discrete- and continuous-time channels in a unified way. In the second part, we systematically analyze infinite output memory conditions for abstract channels with time structure. We exploit the connections to the rich field of strongly mixing random processes to derive a hierarchy for the nonequivalent infinite channel output memory conditions in terms of a sequence of implications. The ergodic-theoretic memory condition used in the proof of the coding theorem and the ψ-mixing condition employed by Kadota and Wyner (1972) are shown to be part of this taxonomy. In addition, we specify conditions for the channel under which memory properties of a random process are invariant when the process is passed through the channel. 
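For orientation, one standard way the strong-mixing literature quantifies asymptotic memorylessness of this kind is the ψ-mixing coefficient of a stationary process (X_t); the channel-specific definitions in the thesis may differ in detail, but the flavor is:

```latex
\psi(n) \;=\; \sup\Bigl\{\, \Bigl|\tfrac{P(A \cap B)}{P(A)\,P(B)} - 1\Bigr|
 \;:\; A \in \sigma(X_t,\ t \le 0),\; B \in \sigma(X_t,\ t \ge n),\;
 P(A)P(B) > 0 \,\Bigr\},
```

and the process is ψ-mixing if ψ(n) → 0 as n → ∞. In the standard hierarchy this is among the strongest mixing conditions, implying the weaker φ-, β-, and α-mixing conditions, which is consistent with the thesis' point that ψ-mixing is quite restrictive.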
In the last part, we investigate cascade and integration channels with regard to mixing conditions as well as properties required in the context of the coding theorem. The results are useful for studying many physically relevant channel models and allow a component-based analysis of the overall channel. We consider a number of examples including composed models and deterministic as well as random filter channels. Finally, an application of strong mixing conditions from statistical signal processing involving the Fourier transform of stationary random sequences is discussed, and a list of further applications is given. / In the first part of the thesis, a coding theorem and a corresponding converse of Kadota and Wyner (1972) are generalized to abstract channels with time structure. As the main contribution, the coding theorem is proved under a significantly weaker condition on the channel output memory, the so-called total ergodicity for block-i.i.d. inputs. This result is achieved mainly through an alternative characterization of the information rate capacity. It is shown that the ψ-mixing condition (asymptotic memorylessness at the channel output) used by Kadota and Wyner is quite restrictive, in particular for the important class of Gaussian channels. Indeed, for Gaussian channels it is proved that the ψ-mixing condition is equivalent to finite memory at the channel output. Furthermore, a weak converse is proved for all stationary channels with time structure. Both intersymbol interference and input constraints are taken into account in a general and flexible form. Due to the direct use of outer measures and the derivation of an adapted version of Feinstein's lemma, it is possible to dispense with the standard extension of the σ-algebra at the channel input, which makes the presentation more transparent and simpler. An operational perspective is pursued throughout. 
The use of an abstract model allows discrete-time and continuous-time channels to be treated in a unified way. In the second part of the thesis, conditions for infinite memory at the channel output are analyzed systematically for abstract channels with time structure. Exploiting the connections to the rich field of strongly mixing random processes, a hierarchy in the form of a sequence of implications between the various memory conditions is derived. The ergodic-theoretic memory property used in the proof of the coding theorem and the ψ-mixing condition of Kadota and Wyner (1972) are part of the derived taxonomy. Furthermore, conditions on the channel are specified under which properties of random processes at the channel input are preserved when the processes are transformed by the channel. In the last part of the thesis, both integration channels and cascades of channels are analyzed with respect to mixing conditions as well as further channel properties relevant to the coding theorem. The results obtained are useful for studying many physically relevant channel models and allow a component-based treatment of composed channels. A number of examples are investigated, including deterministic channels, random filters, and models composed of these. Finally, applications from further areas, for example statistical signal processing, are discussed. In particular, the Fourier transform of stationary random processes is considered in connection with strong mixing conditions.
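The signal-processing application mentioned last, Fourier transforms of stationary random sequences, can be illustrated with a small sketch (my own toy example, not taken from the thesis): for a stationary AR(1) sequence, a standard mixing example, the periodogram ordinates at the Fourier frequencies satisfy the exact Parseval identity that ties the spectral decomposition back to the sample variance.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stationary AR(1): X_t = phi * X_{t-1} + eps_t, a classic mixing process.
phi, n = 0.5, 1024
eps = rng.standard_normal(n + 100)
x = np.zeros(n + 100)
for t in range(1, n + 100):
    x[t] = phi * x[t - 1] + eps[t]
x = x[100:]          # drop burn-in, keep a stationary stretch
x = x - x.mean()     # center for the periodogram

# Periodogram at the Fourier frequencies lambda_j = 2*pi*j/n.
I = np.abs(np.fft.fft(x)) ** 2 / (2 * np.pi * n)

# Parseval: (2*pi/n) * sum_j I(lambda_j) equals the sample variance exactly.
assert np.isclose((2 * np.pi / n) * I.sum(), x.var())
```

For mixing sequences like this one, distinct Fourier coefficients are additionally asymptotically uncorrelated, which is the kind of statement the strong-mixing machinery discussed above is used to justify.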
