11

On the evolution of random discrete structures

Osthus, Deryk Simeon 26 April 2000 (has links)
This is an updated version of a restricted publication: 10.18452/14561. Reason for the restriction: personal data removed from the cover page. / In this thesis, we study the evolution of random discrete structures. Such evolution processes usually fit into the following general framework. Initially (say at time 0), we start with a very simple structure (e.g. a graph on n vertices with no edges) and a set of "building blocks" (e.g. the set of edges of the complete graph on n vertices).
As time increases, we randomly add more and more elements from our set of building blocks. The basic question which we shall investigate is the following: what are the likely properties of the random structure produced by the process at any given time? Although this thesis is concerned with the evolution of random discrete structures, the results obtained can also be summarized according to the following keywords: Random greedy algorithms: we study the output of a random greedy algorithm which, for a given graph H, produces a random H-free graph. Extremal results: improving on previous bounds, we prove the existence of graphs with high girth and high chromatic number. Asymptotic enumeration: we prove sharp asymptotic bounds on the number of triangle-free graphs with n vertices and m edges for a large range of m. Probabilistic versions of "classical" theorems: we prove a probabilistic version of Sperner's theorem on finite sets.
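The random greedy H-free process described in this abstract can be sketched for the simplest case H = K3 (triangles). The following is a toy illustration only — the function name and interface are made up here, not taken from the thesis: edges are tried in a uniformly random order and each one is kept unless it would close a triangle, which is exactly adding random building blocks while preserving H-freeness.

```python
import random

def random_greedy_triangle_free(n, seed=None):
    """Toy sketch of a random greedy H-free process with H = K3:
    scan all possible edges in uniformly random order, keeping an
    edge unless it would close a triangle."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    candidates = [(u, v) for u in range(n) for v in range(u + 1, n)]
    rng.shuffle(candidates)
    edges = []
    for u, v in candidates:
        # adding (u, v) closes a triangle iff u and v share a neighbour
        if adj[u] & adj[v]:
            continue
        adj[u].add(v)
        adj[v].add(u)
        edges.append((u, v))
    return edges

g = random_greedy_triangle_free(30, seed=1)
```

By construction the final graph is triangle-free and maximal: every edge that is absent at the end was rejected because it had a common neighbour at scan time, and that common neighbour persists.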
12

Ergodicity of PCA : equivalence between spatial and temporal mixing conditions

Louis, Pierre-Yves January 2004 (has links)
For a general attractive probabilistic cellular automaton (PCA) on S^(Z^d), we prove that the (time-)convergence of this Markovian parallel dynamics towards equilibrium, exponentially fast in the uniform norm, is equivalent to a condition (A). This condition means exponential decay of the influence from the boundary for the invariant measures of the system restricted to finite boxes. For a class of reversible PCA dynamics on {-1,+1}^(Z^d) with a naturally associated Gibbsian potential ρ, we prove that a (spatial) weak mixing condition (WM) for ρ implies the validity of assumption (A); thus exponential (time-)ergodicity of these dynamics towards the unique Gibbs measure associated to ρ holds. For some particular examples we show that exponential ergodicity holds as soon as there is no phase transition.
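A Markovian parallel (synchronous) dynamics of this kind can be illustrated with a toy example — not the model analysed in this thesis: an attractive PCA on a ring of ±1 spins, where at each step every site updates independently given the previous configuration, with a probability of becoming +1 that is monotone in the local field (which is what "attractive" requires).

```python
import math
import random

def pca_step(config, beta, rng):
    """One synchronous update of a toy attractive PCA on a ring of
    +-1 spins: each site independently becomes +1 with probability
    sigmoid(beta * sum of its two neighbours), monotone in the field."""
    n = len(config)
    new = []
    for i in range(n):
        s = config[(i - 1) % n] + config[(i + 1) % n]
        p = 1.0 / (1.0 + math.exp(-beta * s))  # monotone in s => attractive
        new.append(1 if rng.random() < p else -1)
    return new

rng = random.Random(0)
x = [rng.choice([-1, 1]) for _ in range(50)]
for _ in range(100):
    x = pca_step(x, beta=2.0, rng=rng)
```

Unlike sequential Glauber dynamics, all sites update simultaneously; large beta corresponds to low noise, where the ergodicity question becomes delicate.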
13

Korrektes Schliessen bei unvollständiger Information : Anwendung des Prinzips der maximalen Entropie in einem probabilistischen Expertensystem / Correct Reasoning under Incomplete Information: Application of the Maximum Entropy Principle in a Probabilistic Expert System

Meyer, Carl-Heinz. January 1998 (has links)
Also a doctoral dissertation: Hagen, 1997. / Includes bibliography.
14

Large Deviations for Brownian Intersection Measures

Mukherjee, Chiranjib 27 July 2011 (has links)
We consider p independent Brownian motions in ℝ^d. We assume that p ≥ 2 and p(d - 2) < d. Let ℓ_t denote the intersection measure of the p paths by time t, i.e., the random measure on ℝ^d that assigns to any measurable set A ⊂ ℝ^d the amount of intersection local time of the motions spent in A by time t. Earlier results of Chen derived the logarithmic asymptotics of the upper tails of the total mass ℓ_t(ℝ^d) as t → ∞. In this paper, we derive a large-deviation principle for the normalised intersection measure t^(-p) ℓ_t on the set of positive measures on some open bounded set B ⊂ ℝ^d as t → ∞ before exiting B. The rate function is explicit and gives some rigorous meaning, in this asymptotic regime, to the understanding that the intersection measure is the pointwise product of the densities of the normalised occupation time measures of the p motions. Our proof makes the classical Donsker-Varadhan principle for the latter applicable to the intersection measure. A second version of our principle is proved for the motions observed until the individual exit times from B, conditional on a large total mass in some compact set U ⊂ B. This extends earlier studies of the intersection measure by König and Mörters.
15

Probability and Heat Kernel Estimates for Lévy(-Type) Processes

Kühn, Franziska 05 December 2016 (has links) (PDF)
In this thesis, we present a new existence result for Lévy-type processes. Lévy-type processes behave locally like a Lévy process, but the Lévy triplet may depend on the current position of the process. They can be characterized by their so-called symbol; this is the analogue of the characteristic exponent in the Lévy case. Using a parametrix construction, we prove the existence of Lévy-type processes with a given symbol under weak assumptions on the regularity of the symbol. Applications range from existence results for stable-like processes and mixed processes to uniqueness results for Lévy-driven stochastic differential equations. Moreover, we discuss sufficient conditions for the existence of moments of Lévy-type processes and derive estimates for fractional moments.
16

Probability and Heat Kernel Estimates for Lévy(-Type) Processes

Kühn, Franziska 25 November 2016 (has links)
In this thesis, we present a new existence result for Lévy-type processes. Lévy-type processes behave locally like a Lévy process, but the Lévy triplet may depend on the current position of the process. They can be characterized by their so-called symbol; this is the analogue of the characteristic exponent in the Lévy case. Using a parametrix construction, we prove the existence of Lévy-type processes with a given symbol under weak assumptions on the regularity of the symbol. Applications range from existence results for stable-like processes and mixed processes to uniqueness results for Lévy-driven stochastic differential equations. Moreover, we discuss sufficient conditions for the existence of moments of Lévy-type processes and derive estimates for fractional moments.
17

Lévy-Type Processes under Uncertainty and Related Nonlocal Equations

Hollender, Julian 17 October 2016 (has links) (PDF)
The theoretical study of nonlinear expectations is the focus of attention for applications in a variety of different fields — often with the objective to model systems under incomplete information. Especially in mathematical finance, advances in the theory of sublinear expectations (also referred to as coherent risk measures) lay the theoretical foundation for modern approaches to evaluations under the presence of Knightian uncertainty. In this book, we introduce and study a large class of jump-type processes for sublinear expectations, which can be interpreted as Lévy-type processes under uncertainty in their characteristics. Moreover, we establish an existence and uniqueness theory for related nonlinear, nonlocal Hamilton-Jacobi-Bellman equations with non-dominated jump terms.
18

Causal Models over Infinite Graphs and their Application to the Sensorimotor Loop / Kausale Modelle über unendlichen Grafen und deren Anwendung auf die sensomotorische Schleife - stochastische Aspekte und gradientenbasierte optimale Steuerung

Bernigau, Holger 27 April 2015 (has links) (PDF)
Motivation and background: The enormous number of capabilities that every human learns throughout life is probably among the most remarkable and fascinating aspects of life. Learning has therefore drawn much interest from scientists working in very different fields such as philosophy, biology, sociology, educational science, computer science and mathematics. This thesis focuses on the information-theoretical and mathematical aspects of learning. We are interested in the learning process of an agent (which can be, for example, a human, an animal, a robot, an economic institution or a state) that interacts with its environment. Common models for this interaction are Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs). Learning is then considered to be the maximization of the expectation of a predefined reward function. In order to formulate general principles (like a formal definition of curiosity-driven learning or the avoidance of unpleasant situations) in a rigorous way, it might be desirable to have a theoretical framework for the optimization of more complex functionals of the underlying process law. These might include the entropy of certain sensor values or their mutual information. An optimization of the latter quantity (also known as predictive information) has been investigated intensively, both theoretically and experimentally using computer simulations, by N. Ay, R. Der, K. Zahedi and G. Martius. In this thesis, we develop a mathematical theory for learning in the sensorimotor loop beyond expected reward maximization.
Approaches and results: This thesis covers four different topics related to the theory of learning in the sensorimotor loop. First of all, we need to specify the model of an agent interacting with its environment, either with or without learning. This interaction naturally results in complex causal dependencies. Since we are interested in asymptotic properties of learning algorithms, it is necessary to consider infinite time horizons. It turns out that the well-understood theory of causal networks known from the machine-learning literature is not powerful enough for our purpose. We therefore extend important theorems on causal networks to infinite graphs and general state spaces, using analytical methods from measure-theoretic probability and the theory of discrete-time stochastic processes. Furthermore, we prove a generalization of the strong Markov property from Markov processes to infinite causal networks.
Secondly, we develop a new idea for a projected stochastic constrained optimization algorithm. Generally, a discrete gradient-ascent algorithm can be used to generate an iterative sequence that converges to the stationary points of a given optimization problem. Whenever the optimization takes place over a compact subset of a vector space, the iterative sequence may leave the constraint set. One way to cope with this problem is to project all points back onto the constraint set using Euclidean best approximation, but the latter is sometimes difficult to calculate. A concrete example is an optimization over the unit ball in a matrix space equipped with the operator norm. Our idea consists of a back-projection using quasi-projectors different from the Euclidean best approximation. In the matrix example, there is another canonical way to force the iterative sequence to stay in the constraint set: whenever a point leaves the unit ball, it is divided by its norm. For a given target function, this procedure might introduce spurious stationary points on the boundary. We show that this problem can be circumvented by using a gradient that is tailored to the quasi-projector used for back-projection. We state a general technical compatibility condition between a quasi-projector and a metric used for gradient ascent, prove convergence of stochastic iterative sequences, and provide an appropriate metric for the unit-ball example.
Thirdly, a class of learning problems in the sensorimotor loop is defined and motivated. This class of problems is more general than the usual expected reward maximization and is illustrated by numerous examples (such as expected reward maximization, maximization of the predictive information, maximization of the entropy, and minimization of the variance of a given reward function). We also provide stationarity conditions together with appropriate gradient formulas.
Last but not least, we prove convergence of a stochastic optimization algorithm (as considered in the second topic) applied to a general learning problem (as considered in the third topic). It is shown that the learning algorithm converges to the set of stationary points. Among other things, the proof covers the convergence of an improved version of an algorithm for the maximization of the predictive information as proposed by N. Ay, R. Der and K. Zahedi. We also investigate an application to a linear Gaussian dynamic, where the policies are encoded by the unit ball in a space of matrices equipped with the operator norm.
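The renormalisation back-projection described in this abstract (whenever a point leaves the unit ball, divide it by its norm) can be sketched in a minimal Euclidean toy example. This is an illustrative sketch only, not the thesis's algorithm: for the simple linear target used here, ascent with renormalisation started at the origin converges to the true maximiser, although, as the abstract notes, for general targets this back-projection can create spurious stationary points on the boundary.

```python
import math

def project_unit_ball(x):
    """Back-projection: if the iterate leaves the Euclidean unit ball,
    divide it by its norm (a quasi-projector, not best approximation)."""
    nrm = math.sqrt(sum(c * c for c in x))
    return [c / nrm for c in x] if nrm > 1.0 else x

def projected_gradient_ascent(grad, x0, step=0.1, iters=500):
    """Gradient ascent over the unit ball with renormalisation."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi + step * gi for xi, gi in zip(x, g)]
        x = project_unit_ball(x)  # renormalise instead of best-approximating
    return x

# maximise the linear functional f(x) = a . x over the unit ball;
# the maximiser is a / |a| = (0.6, 0.8) for a = (3, 4)
a = [3.0, 4.0]
x_star = projected_gradient_ascent(lambda x: a, [0.0, 0.0])
```

Note that even in this linear example the antipodal point -a/|a| is a fixed point of the renormalised iteration, a small instance of the spurious boundary stationary points the compatibility condition is designed to rule out.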
19

Causal Models over Infinite Graphs and their Application to the Sensorimotor Loop: General Stochastic Aspects and Gradient Methods for Optimal Control

Bernigau, Holger 04 July 2015 (has links)
Motivation and background: The enormous number of capabilities that every human learns throughout life is probably among the most remarkable and fascinating aspects of life. Learning has therefore drawn much interest from scientists working in very different fields such as philosophy, biology, sociology, educational science, computer science and mathematics. This thesis focuses on the information-theoretical and mathematical aspects of learning. We are interested in the learning process of an agent (which can be, for example, a human, an animal, a robot, an economic institution or a state) that interacts with its environment. Common models for this interaction are Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs). Learning is then considered to be the maximization of the expectation of a predefined reward function. In order to formulate general principles (like a formal definition of curiosity-driven learning or the avoidance of unpleasant situations) in a rigorous way, it might be desirable to have a theoretical framework for the optimization of more complex functionals of the underlying process law. These might include the entropy of certain sensor values or their mutual information. An optimization of the latter quantity (also known as predictive information) has been investigated intensively, both theoretically and experimentally using computer simulations, by N. Ay, R. Der, K. Zahedi and G. Martius. In this thesis, we develop a mathematical theory for learning in the sensorimotor loop beyond expected reward maximization.
Approaches and results: This thesis covers four different topics related to the theory of learning in the sensorimotor loop. First of all, we need to specify the model of an agent interacting with its environment, either with or without learning. This interaction naturally results in complex causal dependencies. Since we are interested in asymptotic properties of learning algorithms, it is necessary to consider infinite time horizons. It turns out that the well-understood theory of causal networks known from the machine-learning literature is not powerful enough for our purpose. We therefore extend important theorems on causal networks to infinite graphs and general state spaces, using analytical methods from measure-theoretic probability and the theory of discrete-time stochastic processes. Furthermore, we prove a generalization of the strong Markov property from Markov processes to infinite causal networks.
Secondly, we develop a new idea for a projected stochastic constrained optimization algorithm. Generally, a discrete gradient-ascent algorithm can be used to generate an iterative sequence that converges to the stationary points of a given optimization problem. Whenever the optimization takes place over a compact subset of a vector space, the iterative sequence may leave the constraint set. One way to cope with this problem is to project all points back onto the constraint set using Euclidean best approximation, but the latter is sometimes difficult to calculate. A concrete example is an optimization over the unit ball in a matrix space equipped with the operator norm. Our idea consists of a back-projection using quasi-projectors different from the Euclidean best approximation. In the matrix example, there is another canonical way to force the iterative sequence to stay in the constraint set: whenever a point leaves the unit ball, it is divided by its norm. For a given target function, this procedure might introduce spurious stationary points on the boundary. We show that this problem can be circumvented by using a gradient that is tailored to the quasi-projector used for back-projection. We state a general technical compatibility condition between a quasi-projector and a metric used for gradient ascent, prove convergence of stochastic iterative sequences, and provide an appropriate metric for the unit-ball example.
Thirdly, a class of learning problems in the sensorimotor loop is defined and motivated. This class of problems is more general than the usual expected reward maximization and is illustrated by numerous examples (such as expected reward maximization, maximization of the predictive information, maximization of the entropy, and minimization of the variance of a given reward function). We also provide stationarity conditions together with appropriate gradient formulas.
Last but not least, we prove convergence of a stochastic optimization algorithm (as considered in the second topic) applied to a general learning problem (as considered in the third topic). It is shown that the learning algorithm converges to the set of stationary points. Among other things, the proof covers the convergence of an improved version of an algorithm for the maximization of the predictive information as proposed by N. Ay, R. Der and K. Zahedi. We also investigate an application to a linear Gaussian dynamic, where the policies are encoded by the unit ball in a space of matrices equipped with the operator norm.
20

Lévy-Type Processes under Uncertainty and Related Nonlocal Equations

Hollender, Julian 12 October 2016 (has links)
The theoretical study of nonlinear expectations is the focus of attention for applications in a variety of different fields — often with the objective to model systems under incomplete information. Especially in mathematical finance, advances in the theory of sublinear expectations (also referred to as coherent risk measures) lay the theoretical foundation for modern approaches to evaluations under the presence of Knightian uncertainty. In this book, we introduce and study a large class of jump-type processes for sublinear expectations, which can be interpreted as Lévy-type processes under uncertainty in their characteristics. Moreover, we establish an existence and uniqueness theory for related nonlinear, nonlocal Hamilton-Jacobi-Bellman equations with non-dominated jump terms.
