1 |
Integrace Pogamutu s Defconem / Bridging Pogamut and Defcon. Píbil, Radek. January 2011 (has links)
Title: Bridging Pogamut and Defcon Author: Bc. Radek Píbil Department: Department of Software and Computer Science Education Supervisor of the master thesis: Mgr. Jakub Gemrot Abstract: In this thesis we discuss the support of the Pogamut AI framework for the PC game Defcon. Defcon is a multiplayer real-time strategy game that puts the player in control of one part of the world's sea force, air force and nuclear arsenal. We cover five main topics. The first concerns bridging Pogamut and Defcon. The second discusses the provided algorithms useful for agent programming in this kind of environment. The third describes the implementation of a pure Java agent. The fourth shows an implementation using the Jason MAS framework. The final topic evaluates the performance of the agents. Our main reason for bridging Pogamut is that, as gaming AI becomes more prominent in academia, more and more computer games allow programmers to implement their own AI. The Pogamut AI platform follows this trend by expanding into two new environments, StarCraft and Defcon, which introduce real-time strategy environments to Pogamut, whose origins are in first-person shooters. Keywords: Defcon, Artificial Intelligence, Real-Time Strategy Games, Pogamut
|
2 |
An adaptive AI for real-time strategy games. Dahlbom, Anders. January 2004 (has links)
<p>In real-time strategy (RTS) games, the human player faces tasks such as resource allocation, mission planning, and unit coordination. An Artificial Intelligence (AI) system that acts as an opponent against the human player needs to be quite powerful in order to create one cohesive strategy for victory. Even though the goal of an AI system in a computer game is not to defeat the human player, it still needs to act intelligently and look credible. It may also need to provide just enough difficulty that both novice and expert players appreciate the game. The behavior of computer-controlled opponents in today's RTS games has to a large extent been based on static algorithms and structures. Furthermore, the AI in RTS games performs worst at the strategic level, and many of its problems can be traced to this static nature. By introducing an adaptive AI at the strategic level, many of these problems could possibly be solved, the illusion of intelligence might be strengthened, and the entertainment value could perhaps be increased.</p><p>The aim of this dissertation has been to investigate how dynamic scripting, a technique for achieving adaptation in computer games, could be applied at the strategic level in an RTS game. The dynamic scripting technique proposed by Spronck et al. (2003) was originally intended for computer role-playing games (CRPGs), where it was used for online creation of scripts to control non-player characters (NPCs). The focus of this dissertation has been to investigate: (1) how the structure of dynamic scripting could be modified to fit the strategic level in an RTS game, (2) how the adaptation time could be lowered, and (3) how the performance of dynamic scripting could be throttled.</p><p>A new structure for applying dynamic scripting has been proposed: a goal-rule hierarchy, where goals are used as domain knowledge for selecting rules. 
A rule is seen as a strategy for achieving a goal, and a goal can in turn be realized by several different rules. The adaptation process operates on the probability of selecting a specific rule as the strategy for a specific goal. Rules can be realized by sub-goals, which creates a hierarchical system. Further, a rule can be coupled with preconditions which, if false, initiate goals with the purpose of fulfilling them. This introduces planning.</p><p>Results have shown that it can be more effective, with regard to adaptation time, re-adaptation time, and performance, to have equal punishment and reward factors, or higher punishments than rewards, than to have higher rewards than punishments. It has also been shown that both adaptation and re-adaptation times can be lowered effectively by increasing the learning rate or including the derivative.</p><p>Finally, this dissertation has shown that the performance of the AI can be throttled effectively by applying a fitness-mapping function. Results have shown that the learning rate and the maximum weight setting can also be used to vary the performance, but not down to negative performance levels.</p>
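The weight-adaptation loop at the heart of dynamic scripting can be illustrated with a short sketch. The class, constant values, and linear fitness scaling below are illustrative assumptions, not the thesis's actual tuning:

```python
import random

class DynamicScriptingRulebase:
    """Minimal sketch of dynamic scripting weight adaptation.
    Rule names, initial weights, and bounds are illustrative."""

    def __init__(self, rules, w_init=100, w_min=0, w_max=2000):
        self.weights = {r: w_init for r in rules}
        self.w_min, self.w_max = w_min, w_max

    def select_rule(self):
        # The probability of picking a rule is proportional to its weight.
        rules = list(self.weights)
        return random.choices(rules, weights=[self.weights[r] for r in rules])[0]

    def adapt(self, used, fitness, break_even=0.5, reward=100, punishment=100):
        # Rules used in the last game gain weight when fitness exceeds the
        # break-even point and lose weight otherwise; the thesis studies how
        # the reward/punishment ratio affects adaptation time.
        if fitness >= break_even:
            delta = reward * (fitness - break_even) / (1 - break_even)
        else:
            delta = -punishment * (break_even - fitness) / break_even
        for r in used:
            self.weights[r] = min(self.w_max,
                                  max(self.w_min, self.weights[r] + delta))
```

In this sketch, raising `punishment` relative to `reward` drives weights of losing rules down faster, which is one way the adaptation and re-adaptation times discussed above can be shortened.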
|
3 |
Improved Combat Tactics of AI Agents in Real-Time Strategy Games Using Qualitative Spatial Reasoning. Ívarsson, Óli. January 2005 (has links)
<p>Real-time strategy (RTS) games constitute one of the largest game genres today and have done so for the past decade. A central feature of real-time strategy games is opponent AI, which is arguably the “last frontier” of game development, because research focus has primarily been on other components, graphics in particular. This has led to AI research being largely ignored within the commercial game industry, but several methods have recently been suggested for improving the strategic ability of AI agents in real-time strategy games.</p><p>The aim of this project is to evaluate how a method called qualitative spatial reasoning can improve AI at the tactical level in a selected RTS game. An implementation of an AI agent that uses qualitative spatial reasoning has been obtained, and an evaluation of its performance in an example RTS game was monitored and analysed.</p><p>The study has shown that qualitative spatial reasoning affects the AI agent’s behaviour significantly and indicates that it can be used to deduce a rule base that increases the unpredictability and performance of the agent.</p>
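The core idea of qualitative spatial reasoning, mapping continuous positions onto a small vocabulary of relations that tactical rules can match on, might be sketched as follows. The distance bands and four-sector compass are illustrative assumptions, not the evaluated agent's actual calculus:

```python
import math

def qualitative_relation(agent, other, close=5.0, far=15.0):
    """Abstract a continuous spatial configuration into qualitative labels.
    Thresholds and label names are illustrative assumptions."""
    dx, dy = other[0] - agent[0], other[1] - agent[1]
    # Distance band: discretize range into three qualitative classes.
    dist = math.hypot(dx, dy)
    if dist <= close:
        band = "close"
    elif dist <= far:
        band = "medium"
    else:
        band = "far"
    # Direction: four 90-degree compass sectors centered on the axes.
    angle = math.degrees(math.atan2(dy, dx)) % 360
    direction = ("east", "north", "west", "south")[int(((angle + 45) % 360) // 90)]
    return band, direction
```

A tactical rule base can then be written against pairs like `("close", "north")` instead of raw coordinates, which is what makes the deduced rules compact and reusable across maps.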
|
4 |
Implementation of Asymmetric Potential Fields in Real Time Strategy Game. Mansur-Ul-Islam, Muhammad; Sajjad, Muhammad. January 2011 (has links)
In the eighties, the idea of using potential fields was first introduced in the field of robotics, where its purpose was to achieve natural movement. Many researchers have since built on this idea, and potential fields were also introduced into real-time strategy games for better movement of objects. In this thesis we work with asymmetric potential fields in a game environment. The purpose of our study was to analyze the effect of asymmetric potential fields on unit formation and movement, and to compare their performance with symmetric potential fields. Through a literature review, potential fields and their usage in RTS games were studied, and a methodology for implementing potential fields in an RTS game was identified. In the experimental part, asymmetric potential fields were implemented using the methodology proposed by Hagelbäck and Johansson, and applied to a StarCraft bot using BWAPI. An experiment was designed to test the asymmetric potential field bot on two maps of StarCraft: Brood War. On these two maps, the bot implemented with asymmetric potential fields and the bot implemented with symmetric potential fields each competed against four bots: three selected from a StarCraft competition and the game's built-in bot. The results of these competitions show that the asymmetric potential field bot performs better than the symmetric potential field bot, on both a single unit type and two unit types. 
This study shows that, with the help of asymmetric potential fields, interesting unit formations can be formed in real-time strategy games, which can give better results than symmetric potential fields.
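The potential field methodology can be illustrated with a minimal sketch: each candidate cell is scored by an attracting goal field plus repelling ally fields, and the unit greedily moves to the best-scoring neighbour. The specific asymmetry used here, scaling repulsion by alignment with the goal direction, is an illustrative assumption rather than the thesis's exact field shapes:

```python
import math

def potential(pos, goal, allies, d_opt=3.0):
    """Sketch of an asymmetric potential field for one unit.
    Weights and field shapes are illustrative, not the thesis's tuning.
    Allies repel more strongly when they lie toward the goal, which
    biases following units into a trailing formation."""
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    g = math.hypot(gx, gy)
    p = -g  # attraction: potential rises (toward 0) as the goal gets closer
    for ax, ay in allies:
        dx, dy = ax - pos[0], ay - pos[1]
        d = math.hypot(dx, dy)
        if d < 1e-9 or d >= d_opt:
            continue  # allies only repel inside the optimal distance
        # Asymmetry: scale repulsion by how well the direction to the
        # ally aligns with the direction to the goal.
        align = (dx * gx + dy * gy) / (d * g) if g > 1e-9 else 0.0
        weight = 1.0 + max(0.0, align)  # 1x behind or beside, up to 2x ahead
        p -= weight * (d_opt - d)
    return p

def step(pos, goal, allies):
    """Greedy move: pick the 8-neighbourhood cell (or stay) with the
    highest potential, as in Hagelbäck and Johansson's methodology."""
    moves = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    return max(((pos[0] + dx, pos[1] + dy) for dx, dy in moves),
               key=lambda c: potential(c, goal, allies))
```

Because an ally directly ahead repels more than one directly behind, units naturally spread out behind a leading unit instead of forming the symmetric ring a symmetric field produces.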
|
7 |
La stratégie comme processus cognitif dans le jeu vidéo StarCraft. Dor, Simon. 08 1900 (has links)
To respect copyright, the electronic version of this thesis has been stripped of its visual and audio-visual documents. The full version of the thesis was deposited with the Service de la gestion des documents et des archives of the Université de Montréal. / This thesis offers an analysis of the real-time strategy game StarCraft (Blizzard Entertainment, 1998). Its goal is to explore beyond the visible and audible parts of the game to elucidate the concept of strategy in play. Following a description of the game and its constraints, it demonstrates how strategy plays a major role within the skills needed to play. Then, our “heuristic circle of the strategic process” describes how strategy works as a cognitive process, and how it interacts with both the game states inferred by the player and his or her strategic plans. Finally, this model and its underlying concepts are supported by close analyses of StarCraft game sequences.
|
9 |
Zlepšování systému pro automatické hraní hry Starcraft II v prostředí PySC2 / Improving Bots Playing Starcraft II Game in PySC2 Environment. Krušina, Jan. January 2018 (has links)
The aim of this thesis is to create an automated system for playing the real-time strategy game StarCraft II. Learning from replays via supervised learning, combined with reinforcement learning techniques, is used to improve the bot's behavior. The proposed system should be capable of playing the whole game, utilizing the PySC2 framework for machine learning. The performance of the bot is evaluated against the built-in scripted AI in the game.
|