1

Stability, dissipativity, and optimal control of discontinuous dynamical systems

Sadikhov, Teymur 06 April 2015
Discontinuous dynamical systems and multiagent systems are encountered in numerous engineering applications. This dissertation develops stability and dissipativity theory for nonlinear dynamical systems with discontinuous right-hand sides, optimality results for discontinuous feedback controllers for Filippov dynamical systems, almost consensus protocols for multiagent systems with inaccurate sensor measurements, and adaptive estimation algorithms using multiagent network identifiers. In particular, we present stability results for discontinuous dynamical systems using nonsmooth Lyapunov theory. Then, we develop a constructive feedback control law for discontinuous dynamical systems based on the existence of a nonsmooth control Lyapunov function defined in the sense of Clarke generalized gradients and set-valued Lie derivatives. Furthermore, we develop dissipativity notions and extended Kalman-Yakubovich-Popov conditions, and apply these results to develop feedback interconnection stability results for discontinuous systems. In addition, we derive guaranteed gain, sector, and disk margins for nonlinear optimal and inverse optimal discontinuous feedback regulators that minimize a nonlinear-nonquadratic performance functional for Filippov dynamical systems. Then, we provide connections between dissipativity and optimality of nonlinear discontinuous controllers for Filippov dynamical systems. Furthermore, we address the consensus problem for a group of agent robots with uncertain interagent measurement data, and show that the agents reach an almost consensus state and converge to a set centered at the centroid of the agents' initial locations. Finally, we develop an adaptive estimation framework, predicated on multiagent network identifiers with undirected and directed graph topologies, that identifies the system state and plant parameters online.
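For orientation, the nonsmooth-analysis objects named in this abstract can be sketched as follows. These are the standard textbook definitions (Filippov set-valued map, Clarke generalized gradient, set-valued Lie derivative); the notation is assumed here and is not quoted from the dissertation.

```latex
% Standard definitions, sketched for orientation; notation assumed,
% not quoted from the dissertation.
% A discontinuous system \dot{x} = f(x) is replaced by the Filippov
% differential inclusion
\[
  \dot{x}(t) \in \mathcal{F}[f](x(t)), \qquad
  \mathcal{F}[f](x) = \bigcap_{\delta > 0} \; \bigcap_{\mu(S) = 0}
  \overline{\mathrm{co}}\, f\!\left(B_\delta(x) \setminus S\right),
\]
% where the inner intersection runs over all Lebesgue-null sets S.
% For a locally Lipschitz V with Clarke generalized gradient
% \partial V(x), the set-valued Lie derivative is
\[
  \mathcal{L}_{\mathcal{F}} V(x) =
  \left\{ a \in \mathbb{R} : \exists\, v \in \mathcal{F}[f](x)
  \text{ such that } a = p^{\mathsf T} v \;\; \forall\, p \in \partial V(x) \right\},
\]
% and requiring \max \mathcal{L}_{\mathcal{F}} V(x) \le 0 (where nonempty)
% yields Lyapunov stability along all Filippov solutions.
```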
2

Linear and Non-linear Deformations of Stochastic Processes

Strandell, Gustaf January 2003
This thesis consists of three papers on the following topics in functional analysis and probability theory: Riesz bases and frames, weakly stationary stochastic processes, and the analysis of set-valued stochastic processes. In the first paper we investigate Uniformly Bounded Linearly Stationary (UBLS) stochastic processes from the point of view of the theory of Riesz bases. By regarding these stochastic processes as generalized Riesz bases we are able to gain some new insight into their structure. Special attention is paid to regular UBLS processes as well as to perturbations of weakly stationary processes. An infinite sequence of subspaces of a Hilbert space is called regular if it is decreasing and zero is the only element in its intersection. In the second paper we ask for conditions under which the regularity of a sequence of subspaces is preserved when the sequence undergoes a deformation by a linear and bounded operator. Linear, bounded and surjective operators are closely linked with frames, and we also investigate when a frame is a regular sequence of vectors. A multiprocess is a stochastic process whose values are compact sets. As generalizations of the class of subharmonic processes and the class of subholomorphic processes introduced by Thomas Ransford, in the third paper of this thesis we introduce the general notions of a gauge of processes and a multigauge of multiprocesses. Compositions of multiprocesses with multifunctions are discussed, and the boundary crossing property, related to the intermediate-value property, is investigated for general multiprocesses. Time changes of multiprocesses are investigated in the environment of multigauges, and we give a multiprocess version of the Dambis-Dubins-Schwarz theorem.
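Readers outside functional analysis may find the standard definitions behind this abstract helpful; the following inequalities are textbook material, stated here for orientation and not taken from the thesis.

```latex
% Textbook definitions, stated for orientation only.
% A sequence (f_n) in a Hilbert space H is a frame if there exist
% constants 0 < A <= B < \infty with
\[
  A \,\|x\|^{2} \;\le\; \sum_{n} \left| \langle x, f_n \rangle \right|^{2}
  \;\le\; B \,\|x\|^{2}
  \qquad \text{for all } x \in H.
\]
% A Riesz basis is the image of an orthonormal basis under a bounded
% invertible operator; every Riesz basis is a frame, while a frame may
% be redundant and hence fail to be a basis.
```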
3

Causal Models over Infinite Graphs and their Application to the Sensorimotor Loop: General Stochastic Aspects and Gradient Methods for Optimal Control

Bernigau, Holger 27 April 2015
Motivation and background

The enormous range of capabilities that every human acquires over a lifetime is probably among the most remarkable and fascinating aspects of life. Learning has therefore drawn a great deal of interest from scientists working in very different fields, such as philosophy, biology, sociology, the educational sciences, computer science and mathematics. This thesis focuses on the information-theoretic and mathematical aspects of learning. We are interested in the learning process of an agent (which can be, for example, a human, an animal, a robot, an economic institution or a state) that interacts with its environment. Common models for this interaction are Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs). Learning is then considered to be the maximization of the expectation of a predefined reward function. In order to formulate general principles (like a formal definition of curiosity-driven learning or the avoidance of unpleasant situations) in a rigorous way, it is desirable to have a theoretical framework for the optimization of more complex functionals of the underlying process law. These might include the entropy of certain sensor values or their mutual information. Optimization of the latter quantity (also known as predictive information) has been investigated intensively, both theoretically and experimentally using computer simulations, by N. Ay, R. Der, K. Zahedi and G. Martius. In this thesis, we develop a mathematical theory for learning in the sensorimotor loop beyond expected reward maximization.

Approaches and results

This thesis covers four different topics related to the theory of learning in the sensorimotor loop. First of all, we need to specify the model of an agent interacting with its environment, either with or without learning. This interaction naturally results in complex causal dependencies. Since we are interested in asymptotic properties of learning algorithms, it is necessary to consider infinite time horizons. It turns out that the well-understood theory of causal networks known from the machine learning literature is not powerful enough for our purpose. We therefore extend important theorems on causal networks to infinite graphs and general state spaces, using analytical methods from measure-theoretic probability and the theory of discrete-time stochastic processes. Furthermore, we prove a generalization of the strong Markov property from Markov processes to infinite causal networks.

Secondly, we develop a new idea for a projected stochastic constraint optimization algorithm. Generally, a discrete gradient ascent algorithm can be used to generate an iterative sequence that converges to the stationary points of a given optimization problem. Whenever the optimization takes place over a compact subset of a vector space, the iterative sequence may leave the constraint set. One way to cope with this problem is to project all points back to the constraint set using Euclidean best approximation, but the latter is sometimes difficult to calculate; a concrete example is optimization over the unit ball in a matrix space equipped with the operator norm. Our idea consists of a back-projection using quasi-projectors different from the Euclidean best approximation. In the matrix example, there is another canonical way to force the iterative sequence to stay in the constraint set: whenever a point leaves the unit ball, it is divided by its norm. For a given target function, this procedure may introduce spurious stationary points on the boundary. We show that this problem can be circumvented by using a gradient that is tailored to the quasi-projector used for back-projection. We state a general technical compatibility condition between a quasi-projector and a metric used for gradient ascent, prove convergence of stochastic iterative sequences, and provide an appropriate metric for the unit-ball example.

Thirdly, a class of learning problems in the sensorimotor loop is defined and motivated. This class of problems is more general than the usual expected reward maximization and is illustrated by numerous examples (such as expected reward maximization, maximization of the predictive information, maximization of the entropy, and minimization of the variance of a given reward function). We also provide stationarity conditions together with appropriate gradient formulas.

Last but not least, we prove convergence of a stochastic optimization algorithm (as considered in the second topic) applied to a general learning problem (as considered in the third topic). It is shown that the learning algorithm converges to the set of stationary points. Among other things, the proof covers the convergence of an improved version of an algorithm for the maximization of the predictive information as proposed by N. Ay, R. Der and K. Zahedi. We also investigate an application to linear Gaussian dynamics, where the policies are encoded by the unit ball in a space of matrices equipped with the operator norm.
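The norm-division back-projection described in the second topic admits a very compact sketch. The following Python snippet is an illustrative assumption on our part (made-up objective, step sizes, and function names), and it deliberately uses the plain Euclidean gradient, i.e. the naive variant whose spurious boundary stationary points the thesis addresses with a tailored metric; it is not the thesis's actual algorithm.

```python
import numpy as np

def spectral_norm(M):
    # Operator (spectral) norm: the largest singular value of M.
    return np.linalg.norm(M, 2)

def projected_stochastic_ascent(grad_estimate, M0, steps=1000, seed=0):
    """Stochastic gradient ascent over the operator-norm unit ball.

    grad_estimate(M, rng) returns a noisy estimate of the objective's
    gradient at M. Back-projection: whenever an iterate leaves the unit
    ball, it is divided by its norm (the quasi-projector discussed in
    the abstract, not the Euclidean best approximation).
    """
    rng = np.random.default_rng(seed)
    M = M0.copy()
    for n in range(1, steps + 1):
        step = 1.0 / n                      # Robbins-Monro step sizes
        M = M + step * grad_estimate(M, rng)
        nrm = spectral_norm(M)
        if nrm > 1.0:                       # back-project onto the ball
            M = M / nrm
    return M

# Toy example (illustrative only): maximize <A, M> subject to
# ||M||_op <= 1, with the gradient observed through Gaussian noise.
A = np.array([[1.0, 0.5], [0.0, -1.0]])
noisy_grad = lambda M, rng: A + 0.1 * rng.standard_normal(M.shape)
M_star = projected_stochastic_ascent(noisy_grad, np.zeros((2, 2)))
print(spectral_norm(M_star))  # close to 1: the maximum sits on the boundary
```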