In the study of multi-agent systems (MASs), cooperative control is one of the most fundamental issues. Because it covers a broad spectrum of applications across many industrial areas, there is strong motivation to design cooperative control protocols for different system and network setups.
Motivated by this fact, in this thesis we focus on consensus protocol design via model predictive control (MPC) under two different scenarios: (1) general constrained linear MASs with bounded additive disturbance; (2) linear MASs with input constraints operating over distributed communication networks.
In Chapter 2, a tube-based robust MPC consensus protocol for constrained linear MASs is proposed. For undisturbed and unconstrained linear MASs, results on designing a centralized linear consensus protocol are first developed via a suboptimal linear quadratic approach. To evaluate the control performance of the consensus protocol, we use an infinite-horizon linear quadratic objective function to penalize the disagreement among agents and the size of the control inputs. Because this performance function is non-convex, an optimal controller gain is difficult or even impossible to find, and a suboptimal consensus protocol is therefore derived. In the presence of disturbance, the original MAS may no longer retain properties such as stability and cooperative performance; to address this, a tube-based robust MPC framework is introduced. Since the predicted states and the actual states are not necessarily the same when disturbance is present, the original constraints in the nominal prediction are tightened to achieve robust constraint satisfaction. Moreover, the corresponding tightened constraint sets can be determined offline, requiring no extra iterative online computation during implementation.
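As a rough illustration of the constraint-tightening step described above, the following Python sketch over-approximates the effect of a bounded additive disturbance on box state constraints and shrinks them accordingly. All matrices, bounds, and the truncation horizon are hypothetical placeholders, and the interval-style over-approximation stands in for the robust invariant set construction used in the thesis.

import numpy as np

# Hypothetical system x+ = A x + B u + w with ancillary feedback u = K x,
# so the error between actual and nominal state evolves as e+ = (A + B K) e + w.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
K = np.array([[-1.2, -1.1]])       # placeholder stabilizing gain
A_K = A + B @ K

w_max = np.array([0.02, 0.02])     # element-wise disturbance bound |w_i| <= w_max_i
x_max = np.array([1.0, 1.0])       # original box state constraint |x_i| <= x_max_i

# Offline: accumulate a conservative bound on how far the actual state can drift
# from the nominal prediction, margin >= sum_k |(A_K)^k| w_max (truncated).
N_terms = 50
margin = np.zeros(2)
M = np.eye(2)
for _ in range(N_terms):
    margin += np.abs(M) @ w_max
    M = A_K @ M

# Tightened constraint used in the nominal prediction: |x_nominal_i| <= x_max_i - margin_i.
x_max_tight = x_max - margin
print("tightened state bounds:", x_max_tight)

In a full tube-based design the margin would come from a robust positively invariant set for the error dynamics rather than this truncated sum, but the point carried over from the chapter is the same: the tightening is computed once, offline, so no extra iterative computation is needed online.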
In Chapter 3, a novel distributed MPC-based consensus protocol is proposed for general linear MASs with input constraints. For the linear MAS without constraints, a pre-stabilizing distributed linear consensus protocol is developed via an inverse optimality approach, such that the corresponding closed-loop system is asymptotically stable with respect to a consensus set. Implementing this pre-stabilizing controller in a distributed digital setting is, however, not possible, since it requires every local decision maker to continuously and simultaneously access the states of its neighbors when updating its control input. To relax these requirements, assumed neighboring states, rather than the actual states of the neighbors, are used. In the proposed distributed MPC scheme, each local controller solves an optimization over a group of control variables to generate its control input. Moreover, an additional state constraint is imposed to bound the deviation between the actual and the assumed state. In this way, consistency is enforced between an agent's intended behavior and what its neighbors believe it will do. We then show that, thanks to the bounded state deviation in prediction, the closed-loop system converges to a neighborhood of the consensus set.
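To make the assumed-state mechanism concrete, the sketch below writes one agent's local MPC problem with the cvxpy modeling library, assuming simple placeholder dynamics, a single neighbor, and a hypothetical bound kappa on the deviation from the trajectory the agent previously announced; the actual cost, terminal ingredients, and constraint structure in the thesis may differ.

import numpy as np
import cvxpy as cp

# Hypothetical agent dynamics x+ = A x + B u with box input constraint |u| <= u_max.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
n, m, N = 2, 1, 10           # state dimension, input dimension, prediction horizon
u_max = 0.5
kappa = 0.05                 # bound on deviation from the assumed (previously broadcast) trajectory

x0 = np.array([0.8, 0.0])                    # current local state
x_assumed_self = np.tile(x0, (N + 1, 1)).T   # trajectory this agent announced last step (placeholder)
x_assumed_nbr = np.zeros((n, N + 1))         # assumed trajectory of one neighbor (placeholder)

x = cp.Variable((n, N + 1))
u = cp.Variable((m, N))

cost = 0
constraints = [x[:, 0] == x0]
for t in range(N):
    # Penalize disagreement with the neighbor's assumed state and the control effort.
    cost += cp.sum_squares(x[:, t] - x_assumed_nbr[:, t]) + 0.1 * cp.sum_squares(u[:, t])
    constraints += [
        x[:, t + 1] == A @ x[:, t] + B @ u[:, t],                  # nominal prediction model
        cp.abs(u[:, t]) <= u_max,                                  # input constraint
        cp.norm(x[:, t] - x_assumed_self[:, t], "inf") <= kappa,   # consistency constraint
    ]

cp.Problem(cp.Minimize(cost), constraints).solve()
print("first control move:", u.value[:, 0])

The inf-norm constraint is what keeps the agent's newly optimized trajectory close to what its neighbors assume about it; it is this bounded deviation in prediction that the chapter leverages to show convergence to a neighborhood of the consensus set.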
In Chapter 4, conclusions are drawn and some research topics for future exploration are presented.
Identifier | oai:union.ndltd.org:uvic.ca/oai:dspace.library.uvic.ca:1828/11683 |
Date | 16 April 2020 |
Creators | Li, Zhuo |
Contributors | Shi, Yang |
Source Sets | University of Victoria |
Language | English |
Detected Language | English |
Type | Thesis |
Format | application/pdf |
Rights | Available to the World Wide Web |