1 |
ChicuxBot : genetic algorithm configured behavior network multi-agent for Quake II / ChicuxBot – Sistema Multi Agente de Rede de Comportamento Configurado por Algoritmo Genético para Quake II. Alegretti, Francisco José Prates, January 2006.
This work describes the implementation of a multi-agent system using Behavior Networks configured by Genetic Algorithms. The system uses the computer game Quake II as the simulated environment for the agents. Behavior Networks are used as the decision-making mechanism, and a Genetic Algorithm is used to configure the parameters of the Behavior Network. Each agent of the system is an independent program that connects to the game server to perform tasks and to exchange genetic material in order to evolve. The results obtained indicate a dynamically configured multi-agent system that can evolve and adapt appropriately as the game progresses.
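As a rough illustration of the kind of setup this abstract describes, the sketch below shows a genetic algorithm evolving a vector of behavior-network parameters. The parameter count, the elitist selection scheme, and the idea of using the in-game score as the fitness signal are illustrative assumptions, not details taken from the thesis.

```python
# Hypothetical sketch: a genetic algorithm tuning a behavior network's numeric
# parameters (e.g. activation thresholds, goal weights). The fitness callable
# is assumed to run one agent in the game and return its score.
import random

PARAM_COUNT = 8  # assumed size of the parameter vector


def random_genome():
    return [random.uniform(0.0, 1.0) for _ in range(PARAM_COUNT)]


def crossover(a, b):
    cut = random.randrange(1, PARAM_COUNT)          # single-point crossover
    return a[:cut] + b[cut:]


def mutate(genome, rate=0.1):
    return [g + random.gauss(0.0, 0.05) if random.random() < rate else g
            for g in genome]


def evolve(population, fitness, generations=50):
    """Evolve genomes; fitness(genome) -> score obtained in one game session."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        elite = ranked[: max(2, len(ranked) // 2)]  # keep the better half
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(len(population) - len(elite))]
        population = elite + children
    return max(population, key=fitness)


# Toy usage with a dummy fitness; in the described system each independent
# agent process would instead report its in-game score.
best = evolve([random_genome() for _ in range(20)], fitness=sum)
```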
|
2 |
Inlärning i Emotional Behavior Networks : Online Unsupervised Reinforcement Learning i kontinuerliga domäner / Learning in Emotional Behavior Networks : Online Unsupervised Reinforcement Learning in Continuous Domains. Wahlström, Jonathan; Djupfeldt, Oscar, January 2010.
The largest project at the AICG lab at Linköping University, Cognitive models for virtual characters, focuses on creating an agent architecture for intelligent virtual characters. The goal is to create an agent that acts naturally and gives a realistic user experience. The purpose of this thesis is to develop and implement, using an agile project methodology, a learning model that fits the existing agent architecture. The model developed can be seen as an online unsupervised reinforcement learning model that reinforces experiences through reward; it is based on Maes' model, where new effects are created depending on whether or not the agent is fulfilling its goals. The model is built on constant monitoring of the system. When an action is chosen it is saved in a short-term memory, which is continuously updated with current information about the environment and the agent's state. These memories are evaluated against user-defined classes that specify what the recorded values must satisfy for the action to count as successful. Once the last memory in the list is considered evaluated, it is saved in a long-term memory. This long-term memory continuously serves as the basis for how the agent's network is structured, and it is filtered according to where the agent is, how it feels and its current state. Our model is evaluated in a series of tests that measure the agent's ability to adapt and how repetitive its behavior is. In practice, an agent with learning acquires a dynamic network based on input from the user, but after a short period the network may look completely different, depending on the situations the agent has experienced and where it has been. An agent will have one network structure in the vicinity of food at location x and a completely different structure near an enemy at location y. If the agent enters a new situation where past experience does not favor it, it explores all possible actions it can take, thus creating new experiences. A comparison with an implementation without classification and learning indicates that the user needs to create fewer classes than the effects that would otherwise be needed to cover all possible combinations: K_S + K_B classes create effects for S × B state/behavior combinations, where K_S and K_B are the numbers of state classes and behavior classes, and S and B are the numbers of states and behaviors in the network. / Cognitive models for virtual characters
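The sketch below is one loose, hypothetical reading of the short-term/long-term memory loop described above; the class representation, the fixed memory size, and the evaluation timing are assumptions, not the thesis implementation.

```python
# Illustrative sketch (not the thesis code): chosen actions enter a short-term
# memory, are annotated with the current world/agent state, are evaluated
# against user-defined success classes, and evaluated entries are promoted to
# a long-term memory that is later filtered by location and emotional state.
from collections import deque


class Memory:
    def __init__(self, action, state):
        self.action = action
        self.state = dict(state)   # snapshot of environment + agent state
        self.reward = None


short_term = deque(maxlen=20)      # assumed bounded short-term memory
long_term = []


def on_action_selected(action, state):
    short_term.append(Memory(action, state))


def update(state, success_classes):
    """Called every tick: refresh snapshots, then evaluate the oldest memory."""
    for m in short_term:
        m.state.update(state)                            # keep memories current
    if short_term:
        oldest = short_term[0]
        passed = all(cls(oldest) for cls in success_classes)  # user-defined checks
        oldest.reward = 1.0 if passed else -1.0
        long_term.append(short_term.popleft())


def relevant_experience(location, mood):
    """Filter long-term memory by where the agent is and how it feels."""
    return [m for m in long_term
            if m.state.get("location") == location and m.state.get("mood") == mood]
```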
|
3 |
Designing autonomous agents for computer games with extended behavior networks : an investigation of agent performance, character modeling and action selection in unreal tournament / Construção de agentes autônomos para jogos de computador com redes de comportamentos estendidas: uma investigação de seleção de ações, performance de agentes e modelagem de personagens no jogo unreal tournament. Pinto, Hugo da Silva Corrêa, January 2005.
This work investigates the application of extended behavior networks to the computer game domain, using the game Unreal Tournament as the test bed. Extended Behavior Networks (EBNs) are a class of action selection architectures capable of selecting a good set of actions for complex agents situated in continuous and dynamic environments. They have been successfully applied to RoboCup, but never before used in computer games. PHISH-Nets, a behavior network model capable of selecting only a single action at a time, had been applied to character modeling with promising results; although extended behavior networks are applicable to a broader set of domains, they had not been used for character modeling before. We present how to design an agent with an extended behavior network, fuzzy sensors and behaviors based on augmented finite-state machines, and we investigate the quality and correctness of the action selection mechanism in a series of experiments. Performance is assessed by comparing the scores of the EBN-based agent against two other agents: a plain reactive agent identical to it except for the control mechanism, and a completely different agent, implemented by another group around finite-state machines and using its own sensors, effectors and behaviors. We investigate how EBNs fare in agent personality modeling through the design and analysis of five stereotypes in Unreal Tournament: Samurai, Veteran, Berserker, Novice and Coward. We discuss three ways to build character personas and situate our work among other approaches to personality design. We conclude that extended behavior networks are a good action selection architecture for the computer game domain and an interesting mechanism for building agents with simple personalities.
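For orientation, the following simplified sketch shows behavior-network-style action selection that combines fuzzy precondition truth with goal-driven activation. It deliberately omits the activation-spreading rules of real extended behavior networks, and all names, values and thresholds are illustrative assumptions rather than the thesis design.

```python
# Simplified, hypothetical sketch of behavior-network-style action selection:
# each behavior combines fuzzy precondition truth (executability) with the
# activation it receives from goals, and the agent runs the best-scoring
# behavior above a threshold. FSM-driven behaviors are reduced to callables.


class Behavior:
    def __init__(self, name, precondition, run):
        self.name = name
        self.precondition = precondition   # world -> fuzzy truth in [0, 1]
        self.run = run                     # callable driving the behavior's FSM

    def executability(self, world):
        return self.precondition(world)


def select_action(behaviors, goal_activation, world, theta=0.3):
    """goal_activation: behavior name -> activation received from the goals."""
    best, best_score = None, theta
    for b in behaviors:
        score = b.executability(world) * goal_activation.get(b.name, 0.0)
        if score > best_score:
            best, best_score = b, score
    return best


# Example usage with a fuzzy sensor expressed as a plain function.
def enemy_visible(world):
    return min(1.0, world.get("enemy_distance_inv", 0.0))


behaviors = [Behavior("attack", enemy_visible, run=lambda w: print("attacking"))]
choice = select_action(behaviors, {"attack": 0.8}, {"enemy_distance_inv": 0.9})
if choice:
    choice.run({})
```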
|
4 |
Design and Implementation of an Appraisal Module for Virtual Characters. Grundström, Petter, January 2012.
In the field of artificial intelligence, the production of believable emotions is vital for producing believable behavior in virtual agents. This is achieved through a process called affective appraisal, in which events and situations are appraised and emotions are produced accordingly. The Artificial Intelligence and Computer Graphics (AICG) lab at Linköping University has been developing an AI architecture for virtual agents. This architecture had an appraisal module in need of improvement, and improving it was the purpose of this M.Sc. thesis. Several approaches to affective appraisal are discussed and compared, and one approach, the OCC model, is chosen for implementation. This model is suitable for a real-time AI architecture as it is simple, easy to implement and can produce a wide range of emotions. The implementation of the OCC model is described in terms of how its different parts are incorporated into the previously existing AI architecture. Three extensions to the OCC model are also implemented to improve the results: emotional memories, the appraisal of unexpected events, and interaction between the produced emotions. Finally, the implementation is tested and the results of the tests are discussed. The implementation is found to produce sufficient results for the scope of the thesis and for the requirements of the AI architecture into which it is incorporated.
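A minimal sketch of OCC-style appraisal follows, assuming a flat goal list and simple additive intensities; the emotion set, intensity formulas and goal names are illustrative and do not reproduce the thesis implementation or its three extensions.

```python
# Illustrative OCC-style appraisal sketch (not the thesis code): an event is
# appraised for its desirability with respect to the agent's goals, producing
# joy/distress for actual consequences and hope/fear for anticipated ones,
# with intensities that decay over time.


class OCCAppraiser:
    def __init__(self, goals):
        self.goals = goals      # goal name -> importance in [0, 1]
        self.emotions = {}      # emotion name -> intensity in [0, 1]

    def appraise(self, goal, impact, prospective=False):
        """impact > 0 furthers the goal, impact < 0 hinders it."""
        desirability = impact * self.goals.get(goal, 0.0)
        if prospective:                  # appraisal of an anticipated event
            emotion = "hope" if desirability > 0 else "fear"
        else:                            # appraisal of an actual consequence
            emotion = "joy" if desirability > 0 else "distress"
        self.emotions[emotion] = min(
            1.0, self.emotions.get(emotion, 0.0) + abs(desirability))

    def decay(self, rate=0.05):
        """Let emotions fade between appraisals."""
        for name in list(self.emotions):
            self.emotions[name] = max(0.0, self.emotions[name] - rate)


appraiser = OCCAppraiser({"survive": 1.0, "win_round": 0.6})
appraiser.appraise("survive", impact=-0.7)   # taking damage hinders survival
print(appraiser.emotions)                    # {'distress': 0.7}
```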
|