1

Mobilized ad-hoc networks: A reinforcement learning approach

Chang, Yu-Han; Ho, Tracey; Kaelbling, Leslie Pack. 04 December 2003
Research in mobile ad-hoc networks has focused on situations in which nodes have no control over their movements. We investigate an important but overlooked domain in which nodes do have control over their movements. Reinforcement learning methods can be used to control both packet routing decisions and node mobility, dramatically improving the connectivity of the network. We first motivate the problem by presenting theoretical bounds for the connectivity improvement of partially mobile networks and then present superior empirical results under a variety of different scenarios in which the mobile nodes in our ad-hoc network are embedded with adaptive routing policies and learned movement policies.
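The routing half of the idea above can be pictured with a minimal Q-routing-style update (in the spirit of Boyan and Littman's Q-routing, not the authors' exact algorithm): each node keeps an estimate of the delivery time to each destination via each neighbor and moves that estimate toward the observed link delay plus the neighbor's own best estimate. The three-node topology and delay values below are invented for the example.

```python
# Minimal Q-routing-style sketch. Q[node][dest][neighbor] estimates the
# delivery time to `dest` if `node` forwards via `neighbor`.
# Illustrative only -- not the algorithm from the thesis above.

def q_routing_update(Q, node, dest, neighbor, link_delay, alpha=0.5):
    """Move node's estimate toward (link delay + neighbor's best estimate)."""
    best_from_neighbor = min(Q[neighbor][dest].values())
    target = link_delay + best_from_neighbor
    Q[node][dest][neighbor] += alpha * (target - Q[node][dest][neighbor])
    return Q[node][dest][neighbor]

# Toy topology: A -> B -> C, routing toward destination C.
Q = {
    "A": {"C": {"B": 5.0}},   # A's (stale) estimate of reaching C via B
    "B": {"C": {"C": 1.0}},   # B can reach C directly in ~1 time unit
}
estimate = q_routing_update(Q, "A", "C", "B", link_delay=1.0)
# A's estimate moves from 5.0 halfway toward 1.0 + 1.0 = 2.0, i.e. to 3.5.
```

Node mobility would add a second learned decision (where to move) on top of this forwarding decision; the thesis learns both jointly.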
3

Learning Successful Strategies in Repeated General-sum Games

Crandall, Jacob W. 21 December 2005
Many environments in which an agent can use reinforcement learning techniques to learn profitable strategies are affected by other learning agents. These situations can be modeled as general-sum games. When playing repeated general-sum games with other learning agents, the goal of a self-interested learning agent is to maximize its own payoffs over time. Traditional reinforcement learning algorithms learn myopic strategies in these games. As a result, they learn strategies that produce undesirable results in many games. In this dissertation, we develop and analyze algorithms that learn non-myopic strategies when playing many important infinitely repeated general-sum games. We show that, in many of these games, these algorithms outperform existing multiagent learning algorithms. We derive performance guarantees for these algorithms (for certain learning parameters) and show that these guarantees become stronger and apply to larger classes of games as more information is observed and used by the agents. We establish these results through empirical studies and mathematical proofs.
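The gap between myopic and non-myopic strategies can be made concrete in the repeated prisoner's dilemma with the standard payoffs T=5, R=3, P=1, S=0 (an illustrative assumption, not taken from the dissertation): a myopic best response always defects, but against a grim-trigger opponent the discounted value of cooperating exceeds the value of defecting whenever the discount factor is large enough.

```python
# Discounted payoffs in the infinitely repeated prisoner's dilemma,
# with standard payoffs T=5, R=3, P=1, S=0 (illustrative assumption).
# Against a grim-trigger opponent, the one-shot (myopic) best response
# is to defect, but the repeated-game value of cooperating is higher
# whenever the discount factor gamma exceeds 1/2.

def value_of_cooperating(gamma):
    # Mutual cooperation forever: R = 3 every round.
    return 3 / (1 - gamma)

def value_of_defecting(gamma):
    # Defect now (T = 5), then mutual defection (P = 1) forever.
    return 5 + gamma * 1 / (1 - gamma)

# A patient agent (gamma = 0.9) should cooperate...
patient_coop, patient_defect = value_of_cooperating(0.9), value_of_defecting(0.9)
# ...while an impatient one (gamma = 0.3) defects, as a myopic learner would.
impatient_coop, impatient_defect = value_of_cooperating(0.3), value_of_defecting(0.3)
```

A traditional reinforcement learner that ignores the future interaction effectively behaves like the gamma-small case in every game, which is why it converges to undesirable outcomes here.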
4

Deep Reinforcement Learning For Distributed Fog Network Probing

Guan, Xiaoding. 01 September 2020
The sixth-generation (6G) of wireless communication systems will significantly rely on fog/edge network architectures for service provisioning. To satisfy stringent quality of service requirements using dynamically available resources at the edge, new network access schemes are needed. In this paper, we consider a cognitive dynamic edge/fog network where primary users (PUs) may temporarily share their resources and act as fog nodes for secondary users (SUs). We develop strategies for distributed dynamic fog probing so SUs can find out available connections to access the fog nodes. To handle the large-state space of the connectivity availability that includes availability of channels, computing resources, and fog nodes, and the partial observability of the states, we design a novel distributed Deep Q-learning Fog Probing (DQFP) algorithm. Our goal is to develop multi-user strategies for accessing fog nodes in a distributed manner without any centralized scheduling or message passing. By using cooperative and competitive utility functions, we analyze the impact of the multi-user dynamics on the connectivity availability and establish design principles for our DQFP algorithm.
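A drastically simplified stand-in for the probing problem above: a secondary user probes fog nodes, maintains an incremental estimate of each node's availability, and then exploits the best one. The real DQFP algorithm uses a deep Q-network over a large, partially observable state; the tabular estimator and the availability probabilities below are invented for illustration.

```python
import random

# Toy fog-probing sketch (tabular simplification of the DQFP setting):
# a secondary user round-robin probes three fog nodes, keeps a running
# estimate of each node's availability, then exploits the best estimate.
# The availability probabilities are invented for this example.

random.seed(0)
availability = [0.2, 0.8, 0.5]   # true (hidden) per-node availability
Q = [0.0, 0.0, 0.0]              # running availability estimates
counts = [0, 0, 0]

for step in range(1500):         # exploration phase: probe each node ~500 times
    node = step % 3
    reward = 1.0 if random.random() < availability[node] else 0.0
    counts[node] += 1
    Q[node] += (reward - Q[node]) / counts[node]   # incremental mean

best_node = max(range(3), key=lambda n: Q[n])      # exploit afterwards
```

The multi-user, partially observable version studied in the thesis is much harder: each SU's probing changes what the others observe, which is what motivates the deep Q-learning formulation and the cooperative/competitive utility analysis.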
5

Dynamic Structure Adaptation for Communities of Learning Machines

LeJeune, Kennan Clark. 23 May 2022
No description available.
6

Scaling Multi-Agent Learning in Complex Environments

Zhang, Chongjie. 01 September 2011
Cooperative multi-agent systems (MAS) are finding applications in a wide variety of domains, including sensor networks, robotics, distributed control, collaborative decision support systems, and data mining. A cooperative MAS consists of a group of autonomous agents that interact with one another in order to optimize a global performance measure. A central challenge in cooperative MAS research is to design distributed coordination policies. Designing optimal distributed coordination policies offline is usually not feasible for large-scale complex multi-agent systems, where tens to thousands of agents are involved, communication bandwidth between agents is limited and subject to delay, and each agent has only a limited partial view of the whole system. This infeasibility is due either to the prohibitive cost of building an accurate decision model, or to a dynamically evolving environment, or to intractable computational complexity. This thesis develops a multi-agent reinforcement learning paradigm that allows agents to effectively learn and adapt coordination policies in complex cooperative domains without explicitly building complete decision models. With multi-agent reinforcement learning (MARL), agents explore the environment through trial and error, adapt their behaviors to the dynamics of the uncertain and evolving environment, and improve their performance through experience. To achieve scalability and ensure global performance, the MARL paradigm developed in this thesis restricts the learning of each agent to information locally observed or received from local interactions with a limited number of agents (i.e., neighbors) in the system, and exploits non-local interaction information to coordinate the agents' learning processes.
This thesis develops new MARL algorithms for agents to learn effectively with limited observations in multi-agent settings, and introduces a low-overhead supervisory control framework that collects and integrates non-local information into the agents' learning processes to coordinate their learning. More specifically, the contributions of the completed aspects of this thesis are as follows:
Multi-Agent Learning with Policy Prediction: This thesis introduces the concept of policy prediction and augments the basic gradient-based learning algorithm to achieve two properties: best-response learning and convergence. The convergence property of multi-agent learning with policy prediction is proven for a class of static games under the assumption of full observability.
MARL Algorithm with Limited Observability: This thesis develops PGA-APP, a practical multi-agent learning algorithm that extends Q-learning to learn stochastic policies. PGA-APP combines the policy gradient technique with the idea of policy prediction. It allows an agent to learn effectively with limited observability in complex domains in the presence of other learning agents. Empirical results demonstrate that PGA-APP outperforms state-of-the-art MARL techniques in benchmark games.
MARL Application in Cloud Computing: This thesis illustrates how MARL can be applied to optimizing online distributed resource allocation in cloud computing. Empirical results show that the MARL approach performs reasonably well compared to an optimal solution, and better than a centralized myopic allocation approach in some cases.
A General Paradigm for Coordinating MARL: This thesis presents a multi-level supervisory control framework to coordinate and guide the agents' learning processes. This framework exploits non-local information and introduces a more global view to coordinate the learning of individual agents without incurring significant overhead or exploding their policy spaces. Empirical results demonstrate that this coordination significantly improves the speed, quality, and likelihood of MARL convergence in large-scale, complex cooperative multi-agent systems.
An Agent Interaction Model: This thesis proposes a new general agent interaction model. The model formalizes a type of interaction among agents, called joint-event-driven interactions, and defines a measure for capturing the strength of such interactions. Formal analysis reveals the relationship between agent interactions and the performance of individual agents and of the whole system.
Self-Organization for a Nearly Decomposable Hierarchy: This thesis develops a distributed self-organization approach, based on the agent interaction model, that dynamically forms a nearly decomposable hierarchy for large-scale multi-agent systems. This self-organization approach is integrated into the supervisory control framework to automatically evolve supervisory organizations that better coordinate MARL during the learning process. Empirical results show that dynamically evolving supervisory organizations can perform better than static ones.
Automating Coordination for Multi-Agent Learning: We tailor our supervision framework for coordinating MARL in ND-POMDPs. By exploiting structured interaction in ND-POMDPs, this tailored approach distributes the learning of the global joint policy among supervisors and employs DCOP techniques to automatically coordinate distributed learning to ensure global learning performance. We prove that this approach can learn a globally optimal policy for ND-POMDPs with a property called groupwise observability.
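The policy-prediction idea can be sketched in a two-player matching-pennies game: each gradient ascender updates against a short-horizon prediction of the opponent's next policy rather than the opponent's current one, which turns the usual cycling into convergence toward the mixed equilibrium. This is a toy gradient version for illustration only; PGA-APP itself derives the gradient from learned Q-values, and the step sizes below are assumptions.

```python
# Toy gradient ascent with policy prediction in matching pennies.
# p = P(row plays heads), q = P(column plays heads).
# Row's expected payoff is V(p, q) = 4pq - 2p - 2q + 1 (row wants to match),
# so dV/dp = 4q - 2, and for the column player d(-V)/dq = 2 - 4p.
# Each player ascends its gradient evaluated at a *predicted* opponent
# policy; with prediction the joint policy spirals in to the mixed Nash
# equilibrium (0.5, 0.5) instead of cycling around it.

def grad_p(q):            # row's payoff gradient w.r.t. p
    return 4 * q - 2

def grad_q(p):            # column's payoff gradient w.r.t. q
    return 2 - 4 * p

def clip01(x):            # keep probabilities valid
    return min(1.0, max(0.0, x))

eta, lookahead = 0.01, 0.5    # step size and prediction length (assumptions)
p, q = 0.9, 0.2               # arbitrary starting policies
for _ in range(500):
    q_pred = q + lookahead * grad_q(p)   # predict opponent's next policy
    p_pred = p + lookahead * grad_p(q)
    p, q = clip01(p + eta * grad_p(q_pred)), clip01(q + eta * grad_q(p_pred))
```

Without the prediction terms the same update rule orbits the equilibrium instead of approaching it, which is the instability the policy-prediction contribution addresses.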
7

Playing is believing: the role of beliefs in multi-agent learning

Chang, Yu-Han; Kaelbling, Leslie P. 01 1900
We propose a new classification for multi-agent learning algorithms, with each league of players characterized by both their possible strategies and possible beliefs. Using this classification, we review the optimality of existing algorithms and discuss some insights that can be gained. We propose an incremental improvement to the existing algorithms that seems to achieve average payoffs that are at least the Nash equilibrium payoffs in the long run against fair opponents. / Singapore-MIT Alliance (SMA)
8

Multi-Agent Reinforcement Learning Approaches for Distributed Job-Shop Scheduling Problems

Gabel, Thomas. 10 August 2009
Decentralized decision-making is an active research topic in artificial intelligence. In a distributed system, a number of individually acting agents coexist. If they strive to accomplish a common goal, the establishment of coordinated cooperation between the agents is of utmost importance. With this in mind, our focus is on multi-agent reinforcement learning (RL) methods, which allow for automatically acquiring cooperative policies based solely on a specification of the desired joint behavior of the whole system. The decentralization of the control and observation of the system among independent agents, however, has a significant impact on problem complexity. Therefore, we address the intricacy of learning and acting in multi-agent systems by two complementary approaches. First, we identify a subclass of general decentralized decision-making problems that features regularities in the way the agents interact with one another. We show that the complexity of optimally solving a problem instance from this class is provably lower than that of solving a general one. Although a lower complexity class may be entered by sticking to certain subclasses of general multi-agent problems, the computational complexity may still be so high that optimally solving them is infeasible. Hence, our second goal is to develop techniques capable of quickly obtaining approximate solutions in the vicinity of the optimum. To this end, we develop and utilize various model-free reinforcement learning approaches. Many real-world applications are well suited to being formulated in terms of spatially or functionally distributed entities, and job-shop scheduling represents one such application. We interpret job-shop scheduling problems as distributed sequential decision-making problems, employ the multi-agent RL algorithms we propose for solving such problems, and evaluate the performance of our learning approaches on various established scheduling benchmark problems.
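The sequential-decision view of scheduling can be pictured with a one-machine toy: whenever the machine is free, a dispatching agent picks which waiting job to run next. Here the "policy" is the fixed shortest-processing-time (SPT) rule standing in for the learned dispatching policies of the thesis; the job lengths are invented for the example.

```python
# One-machine toy of the sequential-decision view of job-shop scheduling:
# whenever the machine is free, an agent chooses which waiting job to run.
# The classic shortest-processing-time (SPT) heuristic plays the role of
# the dispatching policy here; the thesis instead *learns* such policies
# with multi-agent RL. Processing times are invented for this example.

def total_flow_time(order, processing_time):
    """Sum of job completion times when jobs run in the given order."""
    clock, total = 0, 0
    for job in order:
        clock += processing_time[job]
        total += clock
    return total

processing_time = {"j1": 3, "j2": 1, "j3": 2}
fifo_order = ["j1", "j2", "j3"]                        # arrival order
spt_order = sorted(processing_time, key=processing_time.get)

fifo_cost = total_flow_time(fifo_order, processing_time)   # 3 + 4 + 6 = 13
spt_cost = total_flow_time(spt_order, processing_time)     # 1 + 3 + 6 = 10
```

In the full job-shop setting each machine runs such an agent concurrently, and the agents' choices interact, which is exactly the coordination problem the multi-agent RL algorithms above are designed for.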
9

Dynamic opponent modelling in two-player games

Mealing, Richard Andrew. January 2015
This thesis investigates decision-making in two-player imperfect-information games against opponents whose actions can affect our rewards, and whose strategies may be based on memories of interaction, may be changing, or both. The focus is on modelling these dynamic opponents and using the models to learn high-reward strategies. The main contributions of this work are:
1. An approach to learn high-reward strategies in small simultaneous-move games against these opponents. This is done by using a model of the opponent learnt from sequence prediction, with (possibly discounted) rewards learnt from reinforcement learning, to look ahead using explicit tree search. Empirical results show that this gains higher average rewards per game than state-of-the-art reinforcement learning agents in three simultaneous-move games. They also show that several sequence prediction methods model these opponents effectively, supporting the idea of importing methods from areas such as data compression and string matching.
2. An online expectation-maximisation algorithm that infers an agent's hidden information based on its behaviour in imperfect-information games.
3. An approach to learn high-reward strategies in medium-size sequential-move poker games against these opponents. This is done by using a model of the opponent learnt from sequence prediction, which needs the opponent's hidden information (inferred by the online expectation-maximisation algorithm), to train a state-of-the-art no-regret learning algorithm by simulating games between the algorithm and the model. Empirical results show that this improves the no-regret learning algorithm's rewards when playing against popular and state-of-the-art algorithms in two simplified poker games.
4. A demonstration that several change detection methods can effectively model changing categorical distributions, with experimental results comparing their accuracies to empirical distributions. These results also show that their models can be used to outperform state-of-the-art reinforcement learning agents in two simultaneous-move games, supporting the idea of modelling changing opponent strategies with change detection methods.
5. Experimental results on self-play convergence, showing that the empirical distributions of play of sequence prediction and change detection methods converge to mixed-strategy Nash equilibria, and that they converge faster than fictitious play, and, in the case of change detection, in more games.
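The fictitious-play baseline mentioned above can be sketched in matching pennies (a standard illustration, not the thesis's experimental setup): each player best-responds to the opponent's empirical action frequencies, and although per-round play cycles, the empirical distributions approach the mixed Nash equilibrium (0.5, 0.5).

```python
# Minimal fictitious play in matching pennies (actions: 0 = heads, 1 = tails).
# Row wants to match, column wants to mismatch. Each round, each player
# best-responds to the opponent's empirical action frequencies; ties are
# broken toward action 0. The *empirical* distributions of play converge
# to the mixed Nash equilibrium (0.5, 0.5), even though per-round play cycles.

row_counts, col_counts = [0, 0], [0, 0]   # actions observed so far
rounds = 10_000
for _ in range(rounds):
    # Row matches the column action it has seen more often.
    row_action = 0 if col_counts[0] >= col_counts[1] else 1
    # Column mismatches the row action it has seen more often.
    col_action = 1 if row_counts[0] >= row_counts[1] else 0
    row_counts[row_action] += 1
    col_counts[col_action] += 1

row_freq = row_counts[0] / rounds
col_freq = col_counts[0] / rounds
```

The thesis's comparison is against exactly this kind of empirical-distribution convergence: sequence prediction and change detection methods reach the equilibrium frequencies in fewer rounds than this baseline.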
10

On iterated learning for task-oriented dialogue

Singhal, Soumye. 01 1900
In task-oriented dialogue, pretraining on a human corpus followed by finetuning in a simulator through self-play suffers from a phenomenon called language drift: the syntactic and semantic properties of the learned language deteriorate because the agents focus only on solving the task. Inspired by the iterated learning framework in cognitive science (Kirby and Griffiths, 2014), we propose a generic approach to counter language drift called Seeded Iterated Learning (SIL). This work was published as (Lu et al., 2020b) and is presented in Chapter 2. To emulate the transmission of language between generations, a pretrained student agent is iteratively refined by imitating data sampled from a newly trained teacher agent. At each generation, the teacher is created by copying the student agent and is then finetuned to maximize task completion. 
We further introduce Supervised Seeded Iterated Learning (SSIL) in Chapter 3, work published as (Lu et al., 2020a). SSIL builds upon SIL by combining it with another popular method, Supervised SelfPlay (S2P) (Gupta et al., 2019). SSIL mitigates the problems of both S2P and SIL, namely late-stage training collapse and low language diversity. We evaluate our methods in a toy setting, the Lewis Game, and then scale up to a translation game with natural language. In both settings, we highlight the efficacy of our methods compared to the baselines. Chapter 1 covers the core concepts required for understanding the papers presented in Chapters 2 and 3. We describe the specific problem of task-oriented dialogue, including current approaches and the challenges they face, particularly language drift, and we give an overview of the iterated learning framework. Some sections of Chapter 1 are borrowed from the papers for coherence and ease of understanding. Chapter 2 comprises the work published as (Lu et al., 2020b), Chapter 3 comprises the work published as (Lu et al., 2020a), and Chapter 4 concludes.
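The SIL generational loop (pretrain, copy student to teacher, finetune the teacher interactively, re-imitate) can be caricatured with a one-number toy model: a scalar in [0, 1] stands for language quality, interactive finetuning multiplies it by a drift factor, and each new generation's student blends its pretrained quality with imitation of the teacher. The drift factor and blend weight are invented purely to show the shape of the loop, not measurements from the papers.

```python
# One-number caricature of Seeded Iterated Learning (SIL). A scalar in [0, 1]
# stands for "language quality". Interactive finetuning alone lets quality
# drift downward each generation; SIL re-seeds each generation from the
# pretrained model and imitates the latest teacher, which arrests the drift.
# The drift factor and blend weight are invented for illustration only.

DRIFT = 0.9        # quality multiplier from one round of interactive finetuning
PRETRAIN_Q = 1.0   # quality of the pretrained (human-corpus) model
BLEND = 0.5        # weight on pretraining vs. imitation of the teacher

def plain_finetuning(generations):
    q = PRETRAIN_Q
    for _ in range(generations):
        q *= DRIFT                       # language drifts every generation
    return q

def seeded_iterated_learning(generations):
    student = PRETRAIN_Q
    for _ in range(generations):
        teacher = student * DRIFT        # copy student, finetune on the task
        student = BLEND * PRETRAIN_Q + (1 - BLEND) * teacher  # fresh student imitates
    return student

plain_q = plain_finetuning(20)           # quality collapses toward zero
sil_q = seeded_iterated_learning(20)     # quality stabilizes near a fixed point
```

In this caricature plain finetuning decays geometrically while SIL settles at the fixed point of the blend-then-drift map, mirroring (very loosely) how re-seeding each generation counters drift.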
