About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

The robust and typical behaviour of spatio-temporal dynamical systems

Campbell, Kevin Matthew January 1996 (has links)
No description available.
2

Design of Nonlinear Controllers for Systems with Mismatched Perturbations

Chang, Yaote 18 January 2007 (has links)
In this dissertation, four nonlinear controllers are proposed for different classes of multi-input multi-output (MIMO) systems with matched and mismatched perturbations. All the plants to be controlled contain input uncertainty. The technique of the adaptive sliding mode control (ASMC) scheme is introduced first in order to solve the regulation or tracking problems. By applying adaptive techniques to the design of a novel sliding surface as well as to the design of the sliding mode controller, one can not only guarantee that the reaching mode is achieved in finite time, but also suppress the mismatched perturbations once the system is in the sliding mode. Secondly, a block backstepping design methodology is proposed in Chapter 5 to solve the regulation problem. Adaptive mechanisms are employed in the virtual input controllers, so that the mismatched perturbations can be tackled and the proposed robust controller can guarantee the stability of the controlled systems. All these control schemes are designed by means of the Lyapunov stability theorem. Each robust controller contains two parts. The first part eliminates measurable feedback signals of the plant, and the second part is an adaptive control mechanism capable of adapting some unknown constants embedded in the least upper bounds of the perturbations, so that knowledge of the least upper bounds of the matched and mismatched perturbations is not required and asymptotic stability can still be achieved. Several numerical examples and industrial applications are presented to demonstrate the feasibility of the proposed control schemes.
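The abstract does not reproduce any controller equations, but the flavor of adaptive sliding mode control can be conveyed with a minimal sketch for a second-order plant with a matched disturbance. The plant, the sliding surface, the gains, and the adaptation law below are generic illustrative assumptions, not the controllers designed in the dissertation.

```python
import numpy as np

# Minimal adaptive sliding mode control sketch (illustrative only):
# plant    x1' = x2,  x2' = u + d(t)   with unknown bounded disturbance d
# surface  s  = c*x1 + x2
# control  u  = -c*x2 - k*sign(s)      with adaptive gain  k' = gamma*|s|
c, gamma, dt = 2.0, 5.0, 1e-3
x1, x2, k = 1.0, 0.0, 0.0                 # initial state and adaptive gain

for step in range(int(10.0 / dt)):
    t = step * dt
    d = 0.5 * np.sin(3.0 * t)             # unknown matched perturbation
    s = c * x1 + x2                        # sliding variable
    u = -c * x2 - k * np.sign(s)           # equivalent part + switching part
    k += gamma * abs(s) * dt               # gain adaptation (no bound on d required)
    x1 += x2 * dt
    x2 += (u + d) * dt

print(f"final state: x1={x1:.4f}, x2={x2:.4f}, adapted gain k={k:.2f}")
```

Once the sliding variable s is kept at zero, the remaining dynamics reduce to x1' = -c*x1, which is why suppressing the reaching phase in finite time yields the regulation behavior described above.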
3

Design of Adaptive Sliding Mode Controllers for Mismatched Perturbed Systems with Application to Underactuated Systems

Ho, Chao-Heng 25 July 2011 (has links)
A methodology for designing an adaptive sliding mode controller for a class of nonlinear systems with matched and mismatched perturbations is proposed in this thesis. A specially designed sliding surface function is presented first, whose coefficients are determined by using the Lyapunov stability theorem and the linear matrix inequality (LMI) optimization technique. Without requiring the upper bounds of the matched perturbations, the controller with embedded adaptive mechanisms is then designed, also by using the Lyapunov stability theorem. The proposed control scheme not only drives the trajectories of the controlled systems to reach the sliding surface in finite time, but also suppresses the mismatched perturbations once the controlled systems are in the sliding mode, achieving asymptotic stability. In addition, the proposed control scheme can be directly applied to a class of underactuated systems. A numerical example and a practical experiment are given to demonstrate the feasibility of the proposed control scheme.
4

Design of Adaptive Block Backstepping Controllers for Semi-Strict feedback Systems with Delays

Huang, Pei-Chia 19 January 2012 (has links)
In this thesis, an adaptive backstepping control scheme is proposed for a class of multi-input perturbed systems with time-varying delays to solve regulation problems. The systems to be controlled contain n blocks of dynamic equations; hence n-1 virtual input controllers are designed from the first block to the (n-1)th block, and the backstepping controller is designed from the last block. In addition, adaptive mechanisms are embedded in each virtual input controller and in the proposed controller, so that the least upper bounds of the perturbations need not be known beforehand. Furthermore, the dynamic equations of the systems to be controlled need not satisfy the strict-feedback form, and the upper bounds of the time delays as well as their derivatives need not be known in advance either. The resultant controlled systems guarantee asymptotic stability in accordance with the Lyapunov stability theorem. Finally, a numerical example and a practical application are given to demonstrate the feasibility of the proposed control scheme.
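The recursive block-by-block construction summarized here can be sketched for the simplest two-block case. The chain of integrators, the gains, and the virtual control below are textbook illustrative choices, not the delayed, perturbed systems treated in the thesis.

```python
import numpy as np

# Two-step backstepping sketch for the strict-feedback chain (illustrative only):
#   x1' = x2,  x2' = u
# Step 1: virtual control alpha1 = -k1*x1 would stabilize x1 if x2 tracked alpha1.
# Step 2: z2 = x2 - alpha1; choose u so z2 -> 0, which in turn drives x1 -> 0.
k1, k2, dt = 2.0, 3.0, 1e-3
x1, x2 = 1.0, -0.5

for _ in range(int(10.0 / dt)):
    alpha1 = -k1 * x1                      # virtual input for the first block
    z2 = x2 - alpha1                       # tracking error of the second block
    alpha1_dot = -k1 * x2                  # d/dt alpha1 = -k1 * x1' = -k1 * x2
    u = -k2 * z2 - x1 + alpha1_dot         # backstepping control from the last block
    x1 += x2 * dt
    x2 += u * dt

print(f"regulated state: x1={x1:.5f}, x2={x2:.5f}")
```

With the Lyapunov function V = x1^2/2 + z2^2/2, this choice of u gives V' = -k1*x1^2 - k2*z2^2, which is the standard argument the block-backstepping construction generalizes.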
5

Design of Decentralized Adaptive Backstepping Tracking Controllers for Large-Scale Uncertain Systems

Chang, Yu-Yi 01 February 2012 (has links)
Based on the Lyapunov stability theorem, a decentralized adaptive backstepping tracking control scheme for a class of perturbed large-scale systems in non-strict feedback form is presented in this thesis to solve tracking problems. First, the dynamic equations of the plant to be controlled are transformed into equations in semi-strict feedback form. Then a decentralized tracking controller is designed based on the backstepping control methodology so that the outputs of the controlled system are capable of tracking the desired signals generated from a reference model. In addition, by utilizing adaptive mechanisms embedded in the backstepping controller, one need not acquire the upper bounds of the perturbations and the interconnections in advance. The resultant control scheme is able to guarantee the stability of the whole large-scale system, and the tracking precision may be adjusted through the design parameters. Finally, one numerical example and one practical example are presented to demonstrate the applicability of the proposed design technique.
6

Sliding Mode Control Design for Mismatched Uncertain Switched Systems

Liu, Hong-Yi 15 February 2012 (has links)
Based on the Lyapunov stability theorem, a sliding mode control design methodology is proposed in this thesis for a class of perturbed switched systems. The control of the systems is restricted to switching between two different constant values. New sliding mode reaching conditions are proposed for the controllers so that the controlled systems can enter the sliding mode in finite time. Once the switched control system is in the sliding mode, the stability of the system is guaranteed by choosing a suitable sliding surface. In addition, a method for alleviating the infinite switching phenomenon is also provided in this thesis. Finally, a numerical example and a practical example with computer simulation results are given to demonstrate the feasibility of the proposed control scheme.
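The key restriction here, a control that can only take two constant values, can be illustrated with a minimal relay-type simulation. The plant, the two control levels, and the sliding surface below are illustrative assumptions and are not the reaching conditions or the anti-chattering method developed in the thesis.

```python
import numpy as np

# Relay control restricted to two constant values (illustrative sketch only):
#   x1' = x2,  x2' = u + d(t),   u in {U_MINUS, U_PLUS}
# The control switches on the sign of the sliding variable s = c*x1 + x2.
U_PLUS, U_MINUS, c, dt = 3.0, -3.0, 1.0, 1e-4
x1, x2 = 1.0, 0.0
switches, u_prev = 0, U_MINUS

for step in range(int(10.0 / dt)):
    t = step * dt
    d = 0.3 * np.sin(2.0 * t)              # bounded disturbance
    s = c * x1 + x2
    u = U_MINUS if s > 0 else U_PLUS       # two-valued switching law
    switches += int(u != u_prev)
    u_prev = u
    x1 += x2 * dt
    x2 += (u + d) * dt

print(f"x1={x1:.3f}, x2={x2:.3f}, number of control switches={switches}")
```

The switch counter makes the infinite (high-frequency) switching phenomenon visible: near the surface the relay toggles at nearly every step, which is exactly the behavior the abstract's alleviation method targets.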
7

Design of the nth Order Adaptive Integral Variable Structure Derivative Estimator

Shih, Wei-Che 17 January 2009 (has links)
Based on the Lyapunov stability theorem, a methodology for designing an nth order adaptive integral variable structure derivative estimator (AIVSDE) is proposed in this thesis. The proposed derivative estimator is not only an improved version of the existing AIVSDE, but can also be used to estimate the nth derivative of a smooth signal which has continuous and bounded derivatives up to order n+1. Analysis results show that adjusting some of the parameters can facilitate the derivative estimation of signals with higher-frequency noise. An adaptive algorithm is incorporated in the estimation scheme for tracking the unknown upper bounds of the input signal as well as its derivatives. The stability of the proposed derivative estimator is guaranteed, and a comparison between a recently proposed high-order sliding mode derivative estimator and the AIVSDE is also demonstrated.
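The AIVSDE itself is not reproduced in the abstract. As a point of reference for the high-order sliding mode estimators it is compared against, a first-order robust differentiator of the Levant type can be sketched as follows; the test signal, gains, and discretization are illustrative assumptions, not the thesis's estimator.

```python
import numpy as np

# First-order robust sliding-mode differentiator (Levant type), for comparison only:
#   z0' = v,   v  = -lam * |z0 - f|**0.5 * sign(z0 - f) + z1
#   z1' = -alpha * sign(z0 - f)
# z1 tracks f'(t) for signals whose second derivative is bounded (here |f''| <= 4).
lam, alpha, dt = 6.0, 8.0, 1e-4
z0, z1 = 0.0, 0.0

for step in range(int(5.0 / dt)):
    t = step * dt
    f = np.sin(2.0 * t)                    # signal to differentiate (f' = 2*cos(2t))
    e = z0 - f
    v = -lam * np.sqrt(abs(e)) * np.sign(e) + z1
    z0 += v * dt
    z1 += -alpha * np.sign(e) * dt

print(f"estimated derivative: {z1:.3f}, true derivative: {2.0*np.cos(2.0*t):.3f}")
```

The gain alpha must dominate the bound on the signal's second derivative for the estimate to converge, which mirrors the role the adaptive mechanism plays above: it removes the need to know such bounds in advance.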
8

Design of Adaptive Block Backstepping Controllers for Perturbed Nonlinear Systems with Input Nonlinearities

Chien, Chia-Wei 01 February 2012 (has links)
Based on the Lyapunov stability theorem, a design methodology for an adaptive block backstepping control scheme is proposed in this thesis for a class of multi-input perturbed nonlinear systems with input nonlinearities to solve regulation problems. A fuzzy control method is utilized to estimate the unknown inverse input functions in order to facilitate the design of the proposed control scheme, so that the sector condition need not be satisfied. According to the number of blocks m in the plant to be controlled, m−1 virtual input controllers are designed from the first block to the (m−1)th block. Then the proposed robust controller is designed from the last block. Adaptive mechanisms are also employed in the virtual input controllers as well as in the robust controller, so that the least upper bounds of the perturbations and of the estimation errors of the inverse input functions are not required. The resultant control system is able to achieve asymptotic stability. Finally, a numerical example and a practical example are given to demonstrate the feasibility of the proposed control scheme.
9

Stochastic Approximation Algorithms with Set-valued Dynamics: Theory and Applications

Ramaswamy, Arunselvan January 2016 (has links) (PDF)
Stochastic approximation algorithms encompass a class of iterative schemes that converge to a sought value through a series of successive approximations. Such algorithms converge even when the observations are erroneous. Errors in observations may arise due to the stochastic nature of the problem at hand or due to extraneous noise. In other words, stochastic approximation algorithms are self-correcting schemes, in that the errors are wiped out in the limit and the algorithms still converge to the sought values. The first stochastic approximation algorithm was developed by Robbins and Monro in 1951 to solve the root-finding problem. In 1977 Ljung showed that the asymptotic behavior of a stochastic approximation algorithm can be studied by associating a deterministic ODE, called the associated ODE, and studying its asymptotic behavior instead. This is commonly referred to as the ODE method. In 1996 Benaïm [1], and Benaïm and Hirsch [2], used the dynamical systems approach to develop a framework for analyzing generalized stochastic approximation algorithms, given by the following recursion:

x_{n+1} = x_n + a(n)[h(x_n) + M_{n+1}],   (1)

where x_n ∈ R^d for all n, h: R^d → R^d is Lipschitz continuous, {a(n)}_{n≥0} is the given step-size sequence, and {M_{n+1}}_{n≥0} is the martingale difference noise. The assumptions of [1] later became the 'standard assumptions for convergence'. One bottleneck in deploying this framework is the requirement of stability (almost sure boundedness) of the iterates. In 1999 Borkar and Meyn developed a unified set of assumptions that guaranteed both stability and convergence of stochastic approximations. However, the aforementioned frameworks did not account for scenarios with set-valued mean fields. In 2005 Benaïm, Hofbauer and Sorin [3] showed that the dynamical systems approach to stochastic approximations can be extended to scenarios with set-valued mean fields. Again, stability of the iterates was assumed. Note that stochastic approximation algorithms with set-valued mean fields are also called stochastic recursive inclusions (SRIs).

The Borkar-Meyn theorem for SRIs [10]: As stated earlier, in many applications stability of the iterates is a hard assumption to verify. In Chapter 2 of the thesis, we present an extension of the original theorem of Borkar and Meyn to include SRIs. Specifically, we present two different (yet related) easily verifiable sets of assumptions for both stability and convergence of SRIs. An SRI is given by the following recursion in R^d:

x_{n+1} = x_n + a(n)[y_n + M_{n+1}],   (2)

where y_n ∈ H(x_n) for all n, and H: R^d → {subsets of R^d} is a given Marchaud map. As a corollary to one of our main results, a natural generalization of the original Borkar and Meyn theorem is seen to follow. We also present two applications of our framework. First, we use our framework to provide a solution to the 'approximate drift problem'. This problem can be stated as follows. When an experimenter runs a traditional stochastic approximation algorithm such as (1), the exact value of the drift h cannot be accurately calculated at every stage. In other words, the recursion run by the experimenter is given by (2), where y_n is an approximation of h(x_n) at stage n. A natural question arises: do the errors due to approximations accumulate and wreak havoc with the long-term behavior (convergence) of the algorithm?
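For concreteness, recursion (1) can be run on a toy root-finding problem. The drift h, the step-size schedule, and the noise model below are illustrative assumptions, not anything taken from the thesis.

```python
import numpy as np

# Robbins-Monro style stochastic approximation, recursion (1), on a toy problem:
# find the root of h(x) = -(x - 2) using only noisy evaluations h(x_n) + M_{n+1}.
rng = np.random.default_rng(0)

def h(x):
    return -(x - 2.0)                      # Lipschitz drift with root at x* = 2

x = 10.0
for n in range(1, 20001):
    a_n = 1.0 / n                          # step sizes: sum a(n) = inf, sum a(n)^2 < inf
    noise = rng.normal(0.0, 1.0)           # martingale difference noise M_{n+1}
    x = x + a_n * (h(x) + noise)           # x_{n+1} = x_n + a(n)[h(x_n) + M_{n+1}]

print(f"iterate after 20000 steps: {x:.4f} (root of h is 2.0)")
```

Replacing h(x) in the update with any point y_n drawn from a set-valued map H(x_n) gives exactly the SRI form (2) analyzed in Chapter 2.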
Using our framework, we show the following: suppose a stochastic approximation algorithm without errors can be guaranteed to be stable; then its 'approximate version' with errors is also stable, provided the errors are bounded at every stage. For the second application, we use our framework to relax the stability assumptions involved in the original Borkar-Meyn theorem, hence making the framework more applicable. It may be noted that the contents of Chapter 2 are based on [10].

Analysis of gradient descent methods with non-diminishing, bounded errors [9]: Let us consider a continuously differentiable function f. Suppose we are interested in finding a minimizer of f; then a gradient descent (GD) scheme may be employed to find a local minimum. Such a scheme is given by the following recursion in R^d:

x_{n+1} = x_n - a(n) ∇f(x_n).   (3)

GD is an important implementation tool for many machine learning algorithms, such as the backpropagation algorithm to train neural networks. For the sake of convenience, experimenters often employ gradient estimators such as the Kiefer-Wolfowitz estimator, simultaneous perturbation stochastic approximation, etc. These estimators provide an estimate of the gradient ∇f(x_n) at stage n. Since these estimators only provide an approximation of the true gradient, the experimenter is essentially running the recursion given by (2), where y_n is a 'gradient estimate' at stage n. Such gradient methods with errors have been previously studied by Bertsekas and Tsitsiklis [5]. However, the assumptions involved are rather restrictive and hard to verify. In particular, the gradient errors are required to vanish asymptotically at a prescribed rate. This may not hold true in many scenarios. In Chapter 3 of the thesis, the results of [5] are extended to GD with bounded, non-diminishing errors, given by the following recursion in R^d:

x_{n+1} = x_n - a(n)[∇f(x_n) + e(n)],   (4)

where ||e(n)|| ≤ ε for some fixed ε > 0. As stated earlier, previous literature required ||e(n)|| → 0 as n → ∞ at a 'prescribed rate'. Sufficient conditions are presented for both stability and convergence of (4). In other words, the conditions presented in Chapter 3 ensure that the errors 'do not accumulate' and wreak havoc with the stability or convergence of GD. Further, we show that (4) converges to a small neighborhood of the minimum set, which in turn depends on the error bound ε. To the best of our knowledge, this is the first time that GD with bounded non-diminishing errors has been analyzed. As an application, we use our framework to present a simplified implementation of simultaneous perturbation stochastic approximation (SPSA), a popular gradient descent method introduced by Spall [13]. Traditional convergence analysis of SPSA involves assumptions that 'couple' the 'sensitivity parameters' of SPSA and the step-sizes. These assumptions restrict the choice of step-sizes available to the experimenter. In the context of machine learning, the learning rate may be adversely affected. We present an implementation of SPSA using 'constant sensitivity parameters', thereby 'decoupling' the step-sizes and sensitivity parameters. Further, we show that SPSA with constant sensitivity parameters can be analyzed using our framework. Finally, we present experimental results to support our theory. It may be noted that the contents of Chapter 3 are based on [9].
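A minimal SPSA-style iteration with a constant sensitivity parameter, i.e., a recursion of type (4) whose gradient error is bounded but non-diminishing, might look as follows. The objective, step sizes, and the value of the sensitivity parameter are illustrative assumptions, not the experimental setup of Chapter 3.

```python
import numpy as np

# Gradient descent driven by an SPSA-style estimate with a constant sensitivity
# parameter c; the estimation error stays bounded but does not vanish (sketch only).
rng = np.random.default_rng(1)

def f(x):
    return float(np.sum((x - 1.0) ** 2))   # toy objective, minimizer at (1, ..., 1)

d, c = 5, 0.1                              # dimension and constant sensitivity parameter
x = np.zeros(d)
for n in range(1, 5001):
    a_n = 1.0 / n
    delta = rng.choice([-1.0, 1.0], size=d)                     # Rademacher perturbation
    g_hat = (f(x + c * delta) - f(x - c * delta)) / (2.0 * c) * (1.0 / delta)
    x = x - a_n * g_hat                    # x_{n+1} = x_n - a(n)[grad f(x_n) + e(n)]

print(f"distance to minimizer: {np.linalg.norm(x - 1.0):.4f}")
```

Because c is held constant rather than decayed in step with a(n), the iterate settles into a small neighborhood of the minimizer, which is exactly the kind of behavior the bounded-error analysis of Chapter 3 characterizes.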
Stochastic recursive inclusions with two timescales [12]: There are many scenarios wherein the traditional single-timescale framework cannot be used to analyze the algorithm at hand. Consider, for example, the adaptive heuristic critic approach to reinforcement learning, which requires a stationary value iteration (for a fixed policy) to be executed between two policy iterations. To analyze such schemes Borkar [6] introduced the two-timescale framework, along with a set of sufficient conditions which guarantee their convergence. Perkins and Leslie [8] extended the framework of Borkar to include set-valued mean fields. However, the assumptions involved were still very restrictive and not easily verifiable. In Chapter 4 of the thesis, we present a generalization of the aforementioned frameworks. The framework presented is more general when compared to the frameworks of [6] and [8], and the assumptions involved are easily verifiable. An SRI with two timescales is given by the following coupled iteration:

x_{n+1} = x_n + a(n)[u_n + M^1_{n+1}],   (5)
y_{n+1} = y_n + b(n)[v_n + M^2_{n+1}],   (6)

where x_n ∈ R^d and y_n ∈ R^k for all n ≥ 0; u_n ∈ h(x_n, y_n) and v_n ∈ g(x_n, y_n) for all n ≥ 0, where h: R^d × R^k → {subsets of R^d} and g: R^d × R^k → {subsets of R^k} are two given Marchaud maps; {a(n)}_{n≥0} and {b(n)}_{n≥0} are the step-size sequences satisfying b(n)/a(n) → 0 as n → ∞; and {M^1_{n+1}}_{n≥0} and {M^2_{n+1}}_{n≥0} constitute the martingale noise terms. Our main contribution is in the weakening of the key assumption that 'couples' the behavior of the x and y iterates. As an application of our framework we analyze the two-timescale algorithm which solves the 'constrained Lagrangian dual optimization problem'. The problem can be stated as follows: given two functions f: R^d → R and g: R^d → R^k, we want to minimize f(x) subject to the condition that g(x) ≤ 0. This problem can be stated in the following primal form:

inf_{x ∈ R^d} sup_{λ ∈ R^k, λ ≥ 0} [ f(x) + λ^T g(x) ].   (7)

Under strong duality, solving the above equation is equivalent to solving its dual:

sup_{λ ∈ R^k, λ ≥ 0} inf_{x ∈ R^d} [ f(x) + λ^T g(x) ].   (8)

The corresponding two-timescale algorithm to solve the dual is given by:

x_{n+1} = x_n − a(n)[ ∇_x ( f(x_n) + λ_n^T g(x_n) ) + M^2_{n+1} ],   (9)
λ_{n+1} = λ_n + b(n)[ ∇_λ ( f(x_n) + λ_n^T g(x_n) ) + M^1_{n+1} ].

We use our framework to show that (9) converges to a solution of the dual given by (8). Further, as a consequence of our framework, the class of objective and constraint functions for which (9) can be analyzed is greatly enlarged. It may be noted that the contents of Chapter 4 are based on [12].
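The primal-dual iteration (9) can be sketched on a toy constrained problem; the objective, constraint, step-size schedules, and the omission of the noise terms below are illustrative assumptions.

```python
import numpy as np

# Two-timescale primal-dual iteration of type (9) on a toy problem (sketch only):
#   minimize f(x) = x^2   subject to   g(x) = 1 - x <= 0   (solution x* = 1).
# x moves on the fast timescale a(n), the multiplier on the slow timescale b(n),
# with b(n)/a(n) -> 0; the martingale noise terms are omitted for brevity.
x, lam = 0.0, 0.0
for n in range(1, 200001):
    a_n = 1.0 / n ** 0.6                   # fast step size
    b_n = 1.0 / n                          # slow step size, b(n)/a(n) -> 0
    grad_x = 2.0 * x - lam                 # d/dx [ f(x) + lam * g(x) ], g(x) = 1 - x
    x = x - a_n * grad_x                   # fast (primal) descent
    lam = max(0.0, lam + b_n * (1.0 - x))  # slow (dual) ascent, projected onto lam >= 0

print(f"x = {x:.4f}, lambda = {lam:.4f} (expected x* = 1, lambda* = 2)")
```

The fast iterate effectively tracks the minimizer of the Lagrangian for the current multiplier, while the slow multiplier update sees a nearly converged inner problem; that separation is what the coupled analysis of (5)-(6) formalizes.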
Stochastic approximation driven by 'controlled Markov' processes and temporal difference learning [11]: In the field of reinforcement learning, one encounters stochastic approximation algorithms that are driven by Markov processes. The groundwork for analyzing the long-term behavior of such algorithms was laid by Benveniste et al. [4]. Borkar [7] extended the results of [4] to include algorithms driven by 'controlled Markov' processes, i.e., algorithms where the 'state process' is in turn driven by a time-varying 'control' process. Another important extension was that multiple stationary distributions were allowed; see [7] for details. The convergence analysis of [7] assumed that the iterates were stable. In reinforcement learning applications, stability is a hard assumption to verify. Hence, the stability assumption poses a bottleneck when deploying the aforementioned framework for the analysis of reinforcement learning algorithms. In Chapter 5 of the thesis we present sufficient conditions for both stability and convergence of stochastic approximations driven by 'controlled Markov' processes. As an application of our framework, sufficient conditions for stability of the temporal difference (TD) learning algorithm, an important policy-evaluation method, are presented that are compatible with existing conditions for convergence. The conditions are weakened two-fold in that (a) the Markov process is no longer required to evolve in a finite state space and (b) the state process is not required to be ergodic under a given stationary policy. It may be noted that the contents of Chapter 5 are based on [11].
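A standard tabular TD(0) policy-evaluation update, the kind of Markov-driven recursion whose stability Chapter 5 addresses, can be sketched as follows. The Markov chain, rewards, discount factor, and step-size rule are illustrative assumptions, not the general (possibly infinite-state, non-ergodic) setting treated in the thesis.

```python
import numpy as np

# Tabular TD(0) policy evaluation on a toy 3-state Markov chain (sketch only):
#   V(s) <- V(s) + a(n) * [ r(s) + gamma * V(s') - V(s) ]
rng = np.random.default_rng(2)
P = np.array([[0.5, 0.5, 0.0],             # transition matrix under the fixed policy
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])
r = np.array([1.0, 0.0, -1.0])             # reward received in each state
gamma = 0.9
V = np.zeros(3)
counts = np.zeros(3)

s = 0
for _ in range(200000):
    s_next = rng.choice(3, p=P[s])
    counts[s] += 1
    a_n = 1.0 / counts[s]                  # per-state diminishing step size
    V[s] += a_n * (r[s] + gamma * V[s_next] - V[s])   # TD(0) update
    s = s_next

print("estimated values:", np.round(V, 3))
print("exact values    :", np.round(np.linalg.solve(np.eye(3) - gamma * P, r), 3))
```

Here the driving state process is a fixed ergodic finite chain, so stability is easy; conditions (a) and (b) above are precisely what let the analysis go beyond this simple setting.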
10

Regelungstechnische Konzepte zur Integration alternativer Erzeugungsanlagen in lokale Energieversorgungsnetze unter besonderer Berücksichtigung der Systemstabilität

La Seta, Piergiovanni 22 November 2007 (has links) (PDF)
In future electric power systems, decentralized energy supply, frequently based on renewable sources, will play a major role. The growing presence of decentralized generation at different voltage levels of the electrical grid requires new concepts for controlling the electric power supply system. In particular, there is a tendency to operate even small network areas autonomously, i.e., independently of the interconnected grid. In this context, stability and control aspects must be taken into account ever more carefully. The present investigation concentrates on the assessment and improvement of the stability of wind turbines (Windkraftanlagen, WKA). The goal of this work is the integration of wind turbines for decentralized energy supply into local power networks, with particular consideration of system stability. To this end, the stability of a wind turbine must be analyzed both qualitatively and quantitatively in order to identify the factors that contribute to improving system stability. The resulting design of control strategies for improved dynamic and transient behavior is validated theoretically and through numerical simulations.
