A cooperative multi-agent system is a collection of interacting agents deployed in a mission space, where each agent controls its local state so that the fleet of agents collectively optimizes a common global objective. While optimization problems associated with multi-agent systems seek a fixed set of globally optimal agent states, control problems aim to obtain a set of globally optimal agent controls. The non-convexities inherent in these problems give rise to multiple local optima. This dissertation explores systematic techniques that can be deployed to either escape or avoid poor local optima in search of provably better (still local) optima.
First, for multi-agent optimization problems with iterative gradient-based solutions, a distributed approach to escaping local optima is proposed based on the concept of boosting functions. These functions systematically transform the gradient components at a local optimum into a set of boosted non-zero gradient components, making the approach more effective than methods that randomly perturb the gradient components. A novel variable step-size adjustment scheme is also proposed to establish the convergence of this distributed boosting process. The developed boosting concepts are successfully applied to the class of coverage problems.
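The general control flow behind this boosting idea can be sketched as follows. This is an illustrative outline only: `grad`, `boost`, the diminishing step-size rule, and the tolerances are hypothetical placeholders, not the thesis' actual formulation.

```python
import numpy as np

def boosted_gradient_ascent(grad, boost, x0, steps=2000, step0=0.1, tol=1e-6):
    """Hypothetical sketch of the boosting idea: when ordinary gradient
    ascent stalls at a local optimum, the vanishing gradient is replaced
    by a systematically 'boosted' non-zero direction (problem-specific),
    rather than a random perturbation. The diminishing step size stands
    in for the variable step-size scheme that ensures convergence."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        g = np.asarray(grad(x), dtype=float)
        if np.linalg.norm(g) < tol:
            g = np.asarray(boost(x), dtype=float)   # boosted surrogate gradient
            if np.linalg.norm(g) < tol:
                break                               # no ascent direction left
        x = x + (step0 / np.sqrt(k)) * g            # diminishing step size
    return x
```

In a coverage-style application, for instance, the `boost` callable might encode problem structure that pushes an agent toward poorly covered regions, rather than a random direction.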
Second, as a means of avoiding convergence to poor local optima in multi-agent optimization, the use of greedy algorithms for generating effective initial conditions is explored. Such greedy methods are computationally cheap and can often exploit submodularity properties of the problem to provide performance-bound guarantees for the obtained solutions. For the class of submodular maximization problems, two new performance bounds are proposed, and their effectiveness is illustrated using the class of coverage problems.
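For context, the baseline greedy scheme and its classical a-priori guarantee can be sketched as follows; the function names and the toy coverage example are illustrative assumptions, and the tighter bounds proposed in the thesis are not reproduced here.

```python
def greedy_submodular_max(f, ground_set, k):
    """Standard greedy for maximizing a monotone submodular set function f
    subject to |S| <= k. It carries the classical a-priori guarantee
    f(S_greedy) >= (1 - 1/e) * f(S_opt); the thesis develops tighter,
    computable bounds that are not shown in this sketch."""
    S = set()
    for _ in range(k):
        gains = {e: f(S | {e}) - f(S) for e in ground_set - S}
        if not gains:
            break
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        S.add(best)
    return S

# Toy coverage-style example: f(S) counts the points covered by sensors in S.
if __name__ == "__main__":
    covers = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
    f = lambda S: len(set().union(*(covers[i] for i in S)))
    print(greedy_submodular_max(f, set(covers), k=2))  # covers 3 of the 4 points
```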
Third, a class of multi-agent control problems termed Persistent Monitoring on Networks (PMN) is considered, where a team of agents traverses a set of nodes (targets) interconnected according to a network topology with the aim of minimizing a measure of the overall node state. For this class of problems, a gradient-based parametric control solution developed in prior work relies heavily on the initial selection of its parameters, which often leads to poor local optima. To overcome this initialization challenge, the PMN system's asymptotic behavior is analyzed, and an off-line greedy algorithm is proposed to systematically generate an effective set of initial parameters.
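A minimal sketch of this two-stage use of the greedy output is given below; `greedy_init` and `objective_grad` are hypothetical placeholders for the thesis' off-line greedy routine and the gradient of the PMN cost, respectively.

```python
import numpy as np

def greedy_then_gradient(objective_grad, greedy_init, theta_dim, iters=500, step=0.01):
    """Hypothetical two-stage scheme: an off-line greedy routine supplies
    the initial parameter vector (instead of an arbitrary guess), which a
    standard gradient descent then refines locally."""
    theta = np.asarray(greedy_init(theta_dim), dtype=float)  # systematic initialization
    for _ in range(iters):
        theta = theta - step * objective_grad(theta)         # local refinement of the cost
    return theta
```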
Finally, for the same class of PMN problems, a computationally efficient, distributed, on-line Event-Driven Receding Horizon Control (RHC) solution is proposed as an alternative. This RHC solution is parameter-free, as it automatically optimizes its planning horizon length, and gradient-free, as it uses explicitly derived solutions to each RHC problem invoked at each agent upon each event of interest. Hence, unlike gradient-based parametric control solutions, the proposed RHC solution does not force the agents to converge to one particular behavior that is likely to be a poor local optimum; instead, it keeps the agents actively searching for the optimal behavior.
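The event-driven structure of such a controller can be outlined as follows. This skeleton only conveys the idea of re-planning upon events of interest rather than at fixed time steps, with `horizon_solver` standing in for the explicitly derived per-event solutions (and the automatic horizon selection) developed in the thesis.

```python
import heapq

def event_driven_rhc(agents, horizon_solver, events, t_end):
    """Hypothetical skeleton of an event-driven receding horizon controller:
    each agent re-solves a local planning problem only when an event of
    interest occurs (e.g., arrival at a target), rather than on a fixed
    time grid."""
    queue = list(events)        # event queue of (time, agent_id, event_type)
    heapq.heapify(queue)
    plans = {a: None for a in agents}
    while queue:
        t, agent, ev = heapq.heappop(queue)
        if t > t_end:
            break
        # Solve the local RHC problem for this agent at this event; the
        # solver returns the updated local plan and the next event it
        # triggers (or None if no further events are generated).
        plans[agent], next_event = horizon_solver(agent, ev, t)
        if next_event is not None:
            heapq.heappush(queue, next_event)
    return plans
```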
In each of these four parts of the thesis, an interactive simulation platform is developed (and made available online) to generate extensive numerical examples that highlight the respective contributions relative to the state of the art.
Identifier | oai:union.ndltd.org:bu.edu/oai:open.bu.edu:2144/42603
Date | 15 May 2021
Creators | Welikala, Shirantha
Contributors | Cassandras, Christos G.
Source Sets | Boston University
Language | en_US
Detected Language | English
Type | Thesis/Dissertation
Rights | Attribution 4.0 International, http://creativecommons.org/licenses/by/4.0/