This report describes a method to control the density distribution of a large number of autonomous agents. The approach exploits the fact that, with a large number of agents in the system, the time evolution of the probabilistic density distribution of agents can be described as a Markov chain. The main contribution of this report is the synthesis of a Markov matrix that guides the multi-agent system density to a desired steady-state density distribution, in a probabilistic sense, while satisfying motion and safety constraints. In addition, an adaptive density control method based on real-time density feedback is introduced to synthesize a time-varying Markov matrix, which yields better convergence to the desired density distribution. Finally, a decentralized density computation method is described; it guarantees that all agents obtain a common, best density estimate within a finite number of communication updates, with an explicit bound on that number.
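As an illustration of the density-evolution idea, the sketch below builds a column-stochastic Markov matrix whose stationary distribution equals a desired density, using the standard Metropolis-Hastings construction on a motion graph (this is an assumption for illustration only; the thesis synthesizes the matrix differently, with explicit motion and safety constraints that this sketch omits), then propagates an initially uniform density forward.

```python
import numpy as np

def metropolis_markov_matrix(v, adjacency):
    """Return a column-stochastic matrix M with stationary density v,
    i.e. M @ v = v, allowing transitions only along graph edges.

    Illustrative Metropolis-Hastings construction, not the synthesis
    method of the report (which also enforces safety constraints)."""
    n = len(v)
    # maximum number of neighbors (self-loops excluded from adjacency)
    d_max = adjacency.sum(axis=1).max()
    M = np.zeros((n, n))
    for j in range(n):
        for i in range(n):
            if i != j and adjacency[i, j]:
                # propose a neighbor uniformly, accept with min(1, v_i/v_j);
                # detailed balance M[i,j]*v[j] == M[j,i]*v[i] then holds
                M[i, j] = (1.0 / d_max) * min(1.0, v[i] / v[j])
    # leftover probability stays in the current bin (self-loop)
    M[np.arange(n), np.arange(n)] = 1.0 - M.sum(axis=0)
    return M

# Hypothetical example: 4 bins on a line graph, desired density v
v = np.array([0.1, 0.2, 0.3, 0.4])
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
M = metropolis_markov_matrix(v, A)

# Propagate an initially uniform density: x_{k+1} = M x_k
x = np.full(4, 0.25)
for _ in range(200):
    x = M @ x
print(np.round(x, 3))  # approaches the desired density v
```

Because the chain is irreducible (the graph is connected) and aperiodic (self-loops exist), the propagated density converges to the desired steady-state distribution regardless of the initial condition.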
Identifier | oai:union.ndltd.org:UTEXAS/oai:repositories.lib.utexas.edu:2152/26210 |
Date | 01 October 2014 |
Creators | Demir, Nazlı |
Source Sets | University of Texas |
Language | English |
Detected Language | English |
Type | Thesis |
Format | application/pdf |