About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Metadata is collected from universities around the world; if you manage a university, consortium, or country archive and would like it added, details can be found on the NDLTD website.
1

Reinforcement Learning in Keepaway Framework for RoboCup Simulation League

Li, Wei January 2011
This thesis applies reinforcement learning to robot soccer and demonstrates its effectiveness in the RoboCup domain. The first part briefly introduces the background of reinforcement learning, surveys previous work, and identifies the difficulties in implementing it. The second part presents the basic concepts of reinforcement learning, including its three fundamental elements (state, action, and reward) and three classical approaches (dynamic programming, Monte Carlo methods, and temporal-difference learning). The keepaway framework is then explained in more detail and combined with reinforcement learning. The Sarsa algorithm is implemented with two function approximators, an artificial neural network and tile coding, and is evaluated in simulation. The results show that it significantly improves the performance of the soccer robots.
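For context, the Sarsa update with tile-coding function approximation that the abstract mentions can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the tiling layout, the 2-D state, and the keepaway action set are assumed values.

```python
import numpy as np

# Minimal Sarsa(0) with linear tile-coding function approximation.
# The tiling layout, 2-D state, and keepaway action set below are
# illustrative assumptions, not the configuration used in the thesis.

NUM_TILINGS = 8            # overlapping tilings, each slightly offset
TILES_PER_DIM = 10         # resolution of each tiling
NUM_ACTIONS = 3            # e.g. keepaway macro-actions: hold, pass-1, pass-2
STATE_DIM = 2              # assume a 2-D state of normalized features in [0, 1]

ALPHA = 0.1 / NUM_TILINGS  # step size, spread across the active tiles
GAMMA = 1.0                # episodic task
EPSILON = 0.05             # exploration rate

tiles_per_tiling = TILES_PER_DIM ** STATE_DIM
weights = np.zeros((NUM_ACTIONS, NUM_TILINGS * tiles_per_tiling))

def active_tiles(state):
    """One active tile index per tiling for a state in [0, 1]^2."""
    idxs = []
    for t in range(NUM_TILINGS):
        offset = t / (NUM_TILINGS * TILES_PER_DIM)  # displace each tiling
        coords = [min(int((s + offset) * TILES_PER_DIM), TILES_PER_DIM - 1)
                  for s in state]
        idxs.append(t * tiles_per_tiling + coords[0] * TILES_PER_DIM + coords[1])
    return idxs

def q_value(state, action):
    return weights[action, active_tiles(state)].sum()

def choose_action(state, rng):
    """Epsilon-greedy action selection over the approximate Q-values."""
    if rng.random() < EPSILON:
        return int(rng.integers(NUM_ACTIONS))
    return int(np.argmax([q_value(state, a) for a in range(NUM_ACTIONS)]))

def sarsa_update(s, a, reward, s_next, a_next, done):
    """On-policy TD update applied only to the tiles active in s."""
    target = reward if done else reward + GAMMA * q_value(s_next, a_next)
    weights[a, active_tiles(s)] += ALPHA * (target - q_value(s, a))
```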
2

Machine Learning for Traffic Control of Unmanned Mining Machines: Using the Q-learning and SARSA algorithms

Gustafsson, Robin, Fröjdendahl, Lucas January 2019
Manual configuration of rules for unmanned mining machine traffic control can be time-consuming and therefore expensive. This paper presents a machine learning approach to automatically configuring traffic-control rules in mines with autonomous mining machines, using Q-learning and SARSA. The results show that automation might cut the time taken to configure traffic rules from 1–2 weeks to at most approximately 6 hours, which would decrease the cost of deployment. Tests show that, in the worst case, the developed solution ran continuously for 24 hours with at least 82% accuracy, compared to the 100% accuracy of the manual configuration. The conclusion is that machine learning can plausibly be used for the automatic configuration of traffic rules, but further work on raising the accuracy to 100% is needed before it can replace manual configuration. It remains to be examined whether this conclusion holds in more complex environments with larger layouts and more machines.
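The two algorithms named in the title differ only in the bootstrap target of their update rule. A minimal tabular sketch, with state/action counts and parameters chosen purely for illustration:

```python
import numpy as np

# Tabular Q-learning and SARSA updates side by side. State/action counts
# and parameters are assumptions for the sketch, not the thesis's setup.

N_STATES, N_ACTIONS = 100, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1
rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))

def epsilon_greedy(state):
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[state]))

def q_learning_update(s, a, r, s_next):
    # Off-policy: bootstraps from the *greedy* action in the next state.
    Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])

def sarsa_update(s, a, r, s_next, a_next):
    # On-policy: bootstraps from the action actually taken next.
    Q[s, a] += ALPHA * (r + GAMMA * Q[s_next, a_next] - Q[s, a])
```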
3

Game-independent AI agents for playing Atari 2600 console games

Naddaf, Yavar 06 1900
This research focuses on developing AI agents that play arbitrary Atari 2600 console games without any game-specific assumptions or prior knowledge. Two main approaches are considered: reinforcement-learning-based methods and search-based methods. The RL-based methods use feature vectors generated from the game screen, as well as the console RAM, to learn to play a given game. The search-based methods use the emulator to simulate the consequences of actions into the future, aiming to play as well as possible while exploring only a very small fraction of the state space. To ensure the generic nature of the methods, all agents are designed and tuned using four specific games. Once development and parameter selection are complete, the performance of the agents is evaluated on a set of 50 randomly selected games. Significant learning is reported for the RL-based methods on most games, and some instances of human-level performance are achieved by the search-based methods.
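The search-based approach described here treats the emulator as a forward model. A minimal sketch of that idea follows; the `Emulator` interface (`save_state`, `load_state`, `act`) is a hypothetical stand-in, not the actual emulator API used in the research.

```python
import random

# Rollout search that uses the emulator as a forward model. The Emulator
# interface (save_state, load_state, act) is a hypothetical stand-in for
# the actual emulator API used in the research.

def rollout_search(emulator, actions, n_rollouts=30, horizon=50):
    """Return the action whose random rollouts accumulate the most reward."""
    best_action, best_value = None, float("-inf")
    for first_action in actions:
        total = 0.0
        for _ in range(n_rollouts):
            snapshot = emulator.save_state()       # remember where we are
            total += emulator.act(first_action)    # try the candidate action
            for _ in range(horizon - 1):           # then continue randomly
                total += emulator.act(random.choice(actions))
            emulator.load_state(snapshot)          # rewind for the next rollout
        value = total / n_rollouts
        if value > best_value:
            best_action, best_value = first_action, value
    return best_action
```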
4

Reinforcement Learning for Active Length Control and Hysteresis Characterization of Shape Memory Alloys

Kirkpatrick, Kenton C. 16 January 2010
Shape Memory Alloy actuators can be used for morphing, or shape change, by controlling their temperature, which is effectively done by applying a voltage difference across their length. Control of these actuators requires determining the relationship between voltage and strain so that an input-output map can be developed. In this research, a computer simulation uses a hyperbolic tangent curve to simulate the hysteresis behavior of a virtual Shape Memory Alloy wire in temperature-strain space, and a Reinforcement Learning algorithm called Sarsa learns a near-optimal control policy and maps the hysteretic region. The algorithm developed in simulation is then applied to an experimental apparatus in which a Shape Memory Alloy wire is characterized in temperature-strain space. The algorithm is then modified so that the learning is done in voltage-strain space, allowing a control policy to be learned that provides a direct input-output mapping of voltage to position for a real wire. This research was successful in achieving its objectives. In the simulation phase, the Reinforcement Learning algorithm proved capable of controlling a virtual Shape Memory Alloy wire by determining an accurate input-output map of temperature to strain. The virtual model was also shown to be accurate for characterizing Shape Memory Alloy hysteresis, validated by comparison to the commonly used modified Preisach model. The validated algorithm was successfully applied to an experimental apparatus, in which both major and minor hysteresis loops were learned in temperature-strain space. Finally, the modified algorithm was able to learn the control policy in voltage-strain space, achieving all learned goal states within a tolerance of ±0.5% strain, or ±0.65 mm. This policy can reach any learned goal from any initial strain state. This research has validated that Reinforcement Learning can determine a control policy for Shape Memory Alloy crystal phase transformations, and it opens the door to the development of length-controllable Shape Memory Alloy actuators.
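The hyperbolic-tangent hysteresis model mentioned in the abstract can be sketched as follows; the transformation temperatures and the maximum recoverable strain are assumed values for illustration, not those measured in the thesis.

```python
import numpy as np

# One hyperbolic-tangent branch per transformation direction in
# temperature-strain space. The transformation temperatures and the
# maximum recoverable strain are assumed values, not measured ones.

EPS_MAX = 0.05            # maximum recoverable strain (assumed 5%)
A_S, A_F = 68.0, 78.0     # austenite start/finish temperatures, deg C (assumed)
M_S, M_F = 72.0, 62.0     # martensite start/finish temperatures, deg C (assumed)

def strain_heating(T):
    """Heating branch: strain recovers toward zero above A_f."""
    center, width = (A_S + A_F) / 2.0, (A_F - A_S) / 4.0
    return EPS_MAX * (1.0 - np.tanh((T - center) / width)) / 2.0

def strain_cooling(T):
    """Cooling branch: strain grows back toward EPS_MAX below M_f."""
    center, width = (M_S + M_F) / 2.0, (M_S - M_F) / 4.0
    return EPS_MAX * (1.0 - np.tanh((T - center) / width)) / 2.0

T = np.linspace(40.0, 100.0, 200)
loop_width = strain_heating(T) - strain_cooling(T)  # gap between the branches
```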
5

Generating adaptive companion behaviors using reinforcement learning in games

Sharifi, AmirAli 11 1900
Non-Player Character (NPC) behaviors in today's computer games are mostly generated from manually written scripts. The high cost of manually creating complex behaviors for each NPC, so that it exhibits intelligence in response to every situation in the game, results in NPCs with repetitive and artificial-looking behaviors. The goal of this research is to enable NPCs in computer games to exhibit natural and human-like behaviors in non-combat situations. The quality of these behaviors affects the game experience, especially in story-based games, which rely heavily on player-NPC interactions. Reinforcement Learning has been used in this research, for BioWare Corp.'s Neverwinter Nights, to learn natural-looking behaviors for companion NPCs. The proposed method enables NPCs to rapidly learn reasonable behaviors and adapt to changes in the game environment. This research also provides a learning architecture that divides NPC behavior into sub-behaviors and sub-tasks called decision domains.
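The decomposition idea, one small learner per decision domain with its own action set, can be sketched as follows; the domain names, actions, and the toy `Learner` class are hypothetical, not the thesis's actual architecture.

```python
from dataclasses import dataclass, field

# Decomposing companion behaviour into "decision domains", each with its
# own small learner and action set. The domain names, actions, and the
# toy Learner class are hypothetical, not the thesis's actual design.

@dataclass
class Learner:
    """Tiny action-value learner: one running estimate per action."""
    actions: list
    values: dict = field(default_factory=dict)

    def best_action(self):
        return max(self.actions, key=lambda a: self.values.get(a, 0.0))

    def update(self, action, reward, step=0.1):
        old = self.values.get(action, 0.0)
        self.values[action] = old + step * (reward - old)

# One learner per sub-behaviour keeps each learning problem small and fast.
domains = {
    "follow":   Learner(["stay_close", "keep_distance", "scout_ahead"]),
    "dialogue": Learner(["agree", "object", "stay_silent"]),
    "trade":    Learner(["offer_item", "ask_gold", "decline"]),
}

def act(domain_name):
    """Pick the currently best action within the active decision domain."""
    return domains[domain_name].best_action()
```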
6

Applying Agent Modeling to Behaviour Patterns of Characters in Story-Based Games

Zhao, Richard 11 1900
Most story-based games today have manually scripted non-player characters (NPCs), and the scripts are usually simple and repetitive since it is time-consuming for game developers to script each character individually. ScriptEase, a publicly available, author-oriented developer tool, attempts to solve this problem by generating script code from high-level design patterns for BioWare Corp.'s role-playing game Neverwinter Nights. The ALeRT algorithm uses reinforcement learning (RL) to automatically generate NPC behaviours that change over time as the NPCs learn from the successes or failures of their own actions. This thesis aims to provide a new learning mechanism to game agents so they are capable of adapting to new behaviours based on the actions of other agents. The new on-line RL algorithm, ALeRT-AM, which includes an agent-modelling mechanism, is applied in a series of combat experiments in Neverwinter Nights and integrated into ScriptEase to produce adaptive behaviour patterns for NPCs.
7

Reinforcement learning applied to combat in real-time strategy video games

Botelho Neto, Gutenberg Pessoa 28 March 2014
Electronic games and, in particular, real-time strategy (RTS) games are increasingly seen as viable and important fields for artificial intelligence research because of characteristics they commonly exhibit, such as complex environments that are usually dynamic and contain multiple agents. In commercial RTS games, the computer's behavior is mostly designed with simple ad hoc, static techniques that require manual definition of actions and leave the agent unable to adapt to the various situations it may encounter. This approach, besides being lengthy and error-prone, makes the game relatively predictable after some time, allowing the human player to eventually discover the strategy used by the computer and develop an optimal way of countering it. Using machine learning techniques, specifically reinforcement learning, is one way to avoid this predictability: the computer evaluates the situations that occur during games, learns from them, and improves its behavior over time, becoming able to choose the best action autonomously and dynamically when needed. This work proposes a model for applying SARSA, a reinforcement learning technique, to combat situations in RTS games, with the goal of making the computer perform better in this area, one of the most fundamental for achieving victory in an RTS game. In tests run across a variety of game situations, the agent applying the proposed model, facing the game's default AI opponent, improved its performance in all of them, developing knowledge about the best actions to choose for the various possible game states and using this knowledge efficiently to obtain better results in later games.
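One way to picture the kind of modeling the abstract describes is a coarse discretization of combat features into SARSA states; the features, bucket counts, and action list below are hypothetical, not the modeling actually proposed in the thesis.

```python
# Coarse discretization of a combat situation into a SARSA state index.
# The features, bucket counts, and action list are hypothetical, not the
# modeling actually proposed in the thesis.

ACTIONS = ["attack_weakest", "attack_nearest", "focus_fire", "retreat"]

def bucket(ratio, n=4):
    """Clamp a value in [0, 1] into one of n coarse bins."""
    return min(int(ratio * n), n - 1)

def combat_state(own_units, enemy_units, own_hp_ratio, enemy_hp_ratio):
    """Map raw combat features to a small discrete state id (0..79)."""
    unit_adv = max(-2, min(2, own_units - enemy_units)) + 2  # 0..4
    state = unit_adv
    state = state * 4 + bucket(own_hp_ratio)
    state = state * 4 + bucket(enemy_hp_ratio)
    return state  # 5 * 4 * 4 = 80 states, small enough for a Q-table
```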
