1 |
An adaptive AI for real-time strategy games. Dahlbom, Anders, January 2004 (has links)
In real-time strategy (RTS) games, the human player faces tasks such as resource allocation, mission planning, and unit coordination. An Artificial Intelligence (AI) system that acts as an opponent against the human player needs to be quite powerful in order to create one cohesive strategy for victory. Even though the goal for an AI system in a computer game is not to defeat the human player, it might still need to act intelligently and look credible. It might, however, also need to provide just enough difficulty so that both novice and expert players appreciate the game. The behavior of computer-controlled opponents in today's RTS games has to a large extent been based on static algorithms and structures. Furthermore, the AI in RTS games performs worst at the strategic level, and many of the problems can be traced to its static nature. By introducing an adaptive AI at the strategic level, many of these problems could possibly be solved, the illusion of intelligence might be strengthened, and the entertainment value could perhaps be increased.

The aim of this dissertation has been to investigate how dynamic scripting, a technique for achieving adaptation in computer games, could be applied at the strategic level in an RTS game. The dynamic scripting technique proposed by Spronck et al. (2003) was originally intended for computer role-playing games (CRPGs), where it was used for online creation of scripts to control non-player characters (NPCs). The focus of this dissertation has been to investigate: (1) how the structure of dynamic scripting could be modified to fit the strategic level in an RTS game, (2) how the adaptation time could be lowered, and (3) how the performance of dynamic scripting could be throttled.

A new structure for applying dynamic scripting has been proposed: a goal-rule hierarchy, where goals are used as domain knowledge for selecting rules. A rule is seen as a strategy for achieving a goal, and a goal can in turn be realized by several different rules. The adaptation process operates on the probability of selecting a specific rule as the strategy for a specific goal. Rules can be realized by sub-goals, which creates a hierarchical system. Further, a rule can be coupled with preconditions which, if false, initiate goals with the purpose of fulfilling them. This introduces planning.

Results have shown that it can be more effective, with regard to adaptation time, re-adaptation time, and performance, to have equal punishment and reward factors, or higher punishments than rewards, than to have higher rewards than punishments. It has also been shown that by increasing the learning rate, or including the derivative, both adaptation and re-adaptation times can effectively be lowered.

Finally, this dissertation has shown that by applying a fitness-mapping function, the performance of the AI can effectively be throttled. Results have shown that the learning rate and the maximum weight setting can also be used to vary the performance, but not to negative performance levels.
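As a rough illustration of the weight-based adaptation the dissertation builds on, the sketch below shows dynamic-scripting-style rule selection and weight updates for a single goal. The rule names, weight bounds, and update constants are assumptions for illustration only, not the dissertation's actual parameters.

```python
import random

# Candidate rules (strategies) for one goal; weights drive selection probability.
rules = {"rush": 100.0, "expand": 100.0, "turtle": 100.0}
MIN_W, MAX_W = 25.0, 400.0      # assumed weight clamps
REWARD = PUNISH = 30.0          # equal reward/punishment factors, as the results favour

def select_rule(rules):
    """Roulette-wheel selection proportional to rule weight."""
    pick = random.uniform(0.0, sum(rules.values()))
    acc = 0.0
    for name, weight in rules.items():
        acc += weight
        if pick <= acc:
            return name
    return name

def adapt(rules, chosen, fitness):
    """Reward the chosen rule when the outcome was good (fitness in [0, 1]), punish otherwise."""
    delta = REWARD * fitness if fitness >= 0.5 else -PUNISH * (1.0 - fitness)
    rules[chosen] = min(MAX_W, max(MIN_W, rules[chosen] + delta))

chosen = select_rule(rules)
# ... execute the chosen strategy in-game and measure the outcome's fitness ...
adapt(rules, chosen, fitness=0.8)
```

The same scheme can be nested: a rule may spawn sub-goals, each with its own weighted rule set, which gives the goal-rule hierarchy described above.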
|
2 |
Improving Computer Game Bots' behavior using Q-Learning. Patel, Purvag, 01 December 2009 (has links)
In modern computer video games, the quality of artificial characters plays a prominent role in the success of the game in the market. The aim of the intelligent techniques used in these games, termed game AI, is to provide interesting and challenging game play to the player. Being highly sophisticated, these games present game developers with similar kinds of requirements and challenges as those faced by the academic AI community. Game companies claim to use sophisticated game AI to model artificial characters such as computer game bots: intelligent, realistic AI agents. However, these bots work via simple routines pre-programmed to suit the game map, game rules, game type, and other parameters unique to each game. Mostly, illusory intelligent behaviors are programmed using simple conditional statements and are hard-coded into the bots' logic. Moreover, a game programmer has to spend considerable time configuring crisp inputs for these conditional statements. Therefore, we see a need for machine learning techniques to dynamically improve bots' behavior and save precious programmer man-hours. We selected Q-learning, a reinforcement learning technique, to evolve dynamic intelligent bots, as it is a simple, efficient, online learning algorithm. Machine learning techniques such as reinforcement learning are known to be intractable if they use a detailed model of the world, and they also require tuning of various parameters to give satisfactory performance. Therefore, for this research we chose to examine Q-learning for evolving a few basic behaviors, viz. learning to fight and planting the bomb, for computer game bots. Furthermore, we experimented with how bots could use knowledge learned from abstract models to evolve their behavior in a more detailed model of the world. Bots evolved using these techniques become more pragmatic, believable, and capable of showing human-like behavior. This gives the game a more realistic feel and provides game programmers with an efficient learning technique for programming these bots.
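For reference, the core of Q-learning is a single tabular update; the sketch below shows that update for a bot choosing combat actions. The states, actions, and constants are illustrative assumptions, not the thesis' actual state or action design.

```python
import random
from collections import defaultdict

ACTIONS = ["attack", "retreat", "plant_bomb", "idle"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # assumed learning rate, discount, exploration rate

Q = defaultdict(float)                   # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy selection over the learned Q-values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Q-learning backup: Q <- Q + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One step of experience: the bot took damage after attacking at close range.
s, s_next = ("low_health", "enemy_near"), ("low_health", "enemy_far")
a = choose_action(s)
update(s, a, reward=-1.0, next_state=s_next)
```

Learning first in an abstract, coarsely discretized state space and then reusing that knowledge in a finer one, as the thesis describes, would only change how the state tuples above are constructed.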
|
3 |
Mimicking human player strategies in fighting games using game artificial intelligence techniques. Saini, Simardeep S., January 2014 (has links)
Fighting videogames (also known as fighting games) are ever growing in popularity and accessibility. The isolated console experiences of 20th-century gaming have been replaced by online gaming services that allow gamers to play from almost anywhere in the world with one another. This gives rise to competitive gaming on a global scale, enabling gamers to experience fresh play styles and challenges by playing someone new. Fighting games can typically be played either as a single-player experience or against another human player, whether via a network or a traditional multiplayer experience. However, there are two issues with these approaches. First, the single-player offering in many fighting games is regarded as being simplistic in design, making the moves made by the computer predictable. Second, while playing against other human players can be more varied and challenging, this may not always be achievable due to the logistics involved in setting up such a bout. Game Artificial Intelligence could provide a solution to both of these issues, allowing a human player's strategy to be learned and then mimicked by the AI fighter. In this thesis, game AI techniques have been researched to provide a means of mimicking human player strategies in strategic fighting games with multiple parameters. Various techniques and their current usages are surveyed, informing the design of two separate solutions to this problem. The first solution relies solely on k-nearest-neighbour classification to identify which move should be executed based on the in-game parameters, resulting in decisions being made at the operational level and fed from the bottom up to the strategic level. The second solution utilises a number of existing Artificial Intelligence techniques, including data-driven finite state machines, hierarchical clustering, and k-nearest-neighbour classification, in an architecture that makes decisions at the strategic level and feeds them from the top down to the operational level, resulting in the execution of moves. This design is underpinned by a novel algorithm to aid the mimicking process, which is used to identify patterns and strategies within data collated during bouts between two human players. Both solutions are evaluated quantitatively and qualitatively. A conclusion summarising the findings, as well as future work, is provided. The conclusions highlight the fact that both solutions are proficient in mimicking human strategies, but each has its own strengths depending on the type of strategy played out by the human. More structured, methodical strategies are better mimicked by the data-driven finite state machine hybrid architecture, whereas the k-nearest-neighbour approach is better suited to tactical approaches, or even random button bashing that does not always conform to a pre-defined strategy.
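A toy version of the first solution's core idea, k-nearest-neighbour classification from in-game parameters to a move, might look as follows. The feature vector, the recorded moves, and the single-neighbour setting are invented for illustration and are not the thesis' feature set.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each row: parameters observed when the human chose a move,
# e.g. [own_health, opponent_health, distance_to_opponent].
X = np.array([
    [0.9, 0.8, 1.0],
    [0.9, 0.3, 0.2],
    [0.2, 0.7, 0.3],
    [0.5, 0.5, 2.5],
])
y = ["advance", "throw", "block", "projectile"]   # move the human executed

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X, y)

# At run time the AI fighter queries the classifier with the current game state
# and executes the move the observed human would most likely have chosen.
current_state = np.array([[0.3, 0.6, 0.4]])
print(model.predict(current_state))   # -> ['block'] for this toy data
```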
|
4 |
AI-controlled life in Role-playing games / AI-kontrollerat liv i rollspel. Jeppsson, Bertil, January 2008 (has links)
Will more realistic behaviour among non-playing characters (NPCs) in a role-playing game (RPG) improve the overall feeling of the game for the player? Would players notice the enhanced life of an NPC in a role-playing game, or is the time spent in cities and villages insufficient to notice any difference at all? There are plenty of best-selling RPGs with simplistic, repetitive NPC behaviour on the market. Does that mean that smarter NPCs are not necessary and that improving them would not benefit the players' impression of the game? Or would some of these well-recognised games get even better with a more evolved AI? These are some of the thoughts that created the initial spark of curiosity that inspired this article. Assuming that a more complex game AI for NPCs will improve the realism and feeling in a role-playing game, research into possible techniques for achieving this was carried out. The Smart Terrain technique was found most beneficial for the purpose of this research. It has been used successfully in the best-selling game The Sims and appeared to be a good choice for an NPC AI, given the flexibility and expandability it delivers. With a technique of great potential selected, a first version of an AI using it was implemented as a module for the commercial RPG Neverwinter Nights 2 (NWN2). With the implemented Smart Terrain AI at hand, twelve testers compared this AI with the one encountered in the original campaign of NWN2. As all participants in the test considered the new version of the AI more realistic than the original AI, the hypothesis was confirmed. The results gave a strong indication that using the Smart Terrain technique is a good choice for achieving higher realism among non-hostile NPCs in an RPG like NWN2.
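A minimal sketch of the Smart Terrain idea the thesis builds on: world objects advertise which needs they can satisfy, and each NPC picks the object whose advertisements best match its current needs. Class names, needs, and values are illustrative assumptions, not the thesis' NWN2 module.

```python
# Smart Terrain sketch: the objects, not the NPCs, carry the behavioural knowledge.
class SmartObject:
    def __init__(self, name, advertisements):
        self.name = name
        self.advertisements = advertisements   # need -> how well this object satisfies it

class Npc:
    def __init__(self, name, needs):
        self.name = name
        self.needs = needs                     # need -> current urgency in [0, 1]

    def pick_object(self, objects):
        """Score each object by how well its advertisements match the NPC's needs."""
        def score(obj):
            return sum(self.needs.get(need, 0.0) * value
                       for need, value in obj.advertisements.items())
        return max(objects, key=score)

village = [
    SmartObject("bakery stall", {"hunger": 0.9}),
    SmartObject("bench",        {"rest": 0.7}),
    SmartObject("well",         {"thirst": 0.8}),
]

villager = Npc("villager", {"hunger": 0.8, "rest": 0.2})
print(villager.pick_object(village).name)   # -> "bakery stall"
```

Adding new behaviours then only requires placing new advertising objects in the world, which is the flexibility and expandability the abstract refers to.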
|
5 |
Dynamic Strategy in Real-Time Strategy Games: with the use of finite-state machines. Svensson, Marcus, January 2015 (has links)
Developing real-time strategy game AI is a challenging task because an AI player has to deal with many different decisions and actions in an ever-changing, complex game world. Humans have little problem dealing with the complexity of the game genre, while it is a difficult obstacle for the computer to overcome. Adapting to the opponent's strategy is one of many things that players typically have to do during the course of a game in the real-time strategy genre. This report presents a solution to this problem based on finite-state machines and implements it with the help of the existing Starcraft: Broodwar AI Opprimobot. The extension is experimentally compared to the original implementation of Opprimobot. The comparison shows that both manage to achieve approximately the same win ratio against the built-in AI of Starcraft: Broodwar, but the modified version provides a way to model more complex strategies.
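A minimal sketch of a finite-state machine for strategy switching of the kind described here; the states, events, and transitions are invented for illustration and are not Opprimobot's actual design.

```python
# Finite-state machine sketch: the bot's overall strategy reacts to scouted events.
TRANSITIONS = {
    ("standard_build", "enemy_rushing"):   "defensive_build",
    ("standard_build", "enemy_expanding"): "aggressive_build",
    ("defensive_build", "rush_repelled"):  "aggressive_build",
    ("aggressive_build", "attack_failed"): "standard_build",
}

class StrategyFsm:
    def __init__(self, initial="standard_build"):
        self.state = initial

    def on_event(self, event):
        """Switch strategy if a transition is defined for (state, event); otherwise stay."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

fsm = StrategyFsm()
for event in ["enemy_rushing", "rush_repelled", "attack_failed"]:
    print(event, "->", fsm.on_event(event))
```

Each state would map to a concrete build order and unit composition in the bot; the table above only models when to change between them.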
|
6 |
Natively Implementing Deep Reinforcement Learning into a Game Engine. Kincer, Austin, 01 December 2021 (has links)
Artificial intelligence (AI) increases the immersion that players can have while playing games. Modern game engines, the middleware used to create games, implement simple AI behaviors that developers can use. Advanced AI behaviors must be implemented manually by game developers, which decreases the likelihood of developers using advanced AI due to the development overhead.
A custom game engine and a custom AI architecture that handles deep reinforcement learning were designed and implemented. Snake was created using the custom game engine to test the feasibility of natively implementing an AI architecture into a game engine. A snake agent was successfully trained using the AI architecture, but the learned behavior was suboptimal. Although the learned behavior was suboptimal, the AI architecture was successfully implemented into the custom game engine, since a behavior was successfully learned.
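The kind of native integration described here boils down to the engine's update loop feeding observations to an agent and applying its chosen action each tick. The sketch below illustrates that loop with a stand-in agent and game; the interfaces are assumptions for illustration, not the thesis' engine or its Snake implementation.

```python
import random

class RandomAgent:
    """Stand-in for the DRL agent; a trained policy would replace the random choice."""
    def __init__(self, actions):
        self.actions = actions

    def act(self, observation):
        return random.choice(self.actions)

    def observe(self, obs, action, reward, next_obs, done):
        pass   # a learner would store the transition and run an update step here

class TinyGame:
    """Stand-in for the engine-hosted game: the episode ends after a few ticks."""
    def reset(self):
        self.ticks = 0
        return self.ticks

    def step(self, action):
        self.ticks += 1
        reward = 1.0 if action == "forward" else 0.0
        done = self.ticks >= 10
        return self.ticks, reward, done   # (observation, reward, done)

def run_episode(game, agent):
    obs = game.reset()
    done = False
    while not done:                                   # one iteration per engine tick
        action = agent.act(obs)
        next_obs, reward, done = game.step(action)
        agent.observe(obs, action, reward, next_obs, done)
        obs = next_obs

run_episode(TinyGame(), RandomAgent(["forward", "left", "right"]))
```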
|
7 |
Game AI of StarCraft II based on Deep Reinforcement Learning. Junjie Luo (8786552), 30 April 2020 (has links)
The research problem of this article is a Game AI agent for StarCraft II based on Deep Reinforcement Learning (DRL). StarCraft II is currently viewed as the most challenging real-time strategy (RTS) game, and it is also the most popular game in which researchers are developing and improving AI agents. Building AI agents for StarCraft II can help machine learning researchers identify the weaknesses of DRL and improve this family of algorithms. In 2018, DeepMind and Blizzard developed the StarCraft II Learning Environment (PySC2) to enable researchers to advance the development of AI agents. DeepMind went on to develop AlphaStar, a DRL-based successor project to AlphaGo, while several laboratories have also published articles about StarCraft II AI agents. Most of them focus on AI agents for Terran and Zerg, two of the three races in StarCraft II. These AI agents show high-level performance compared with most StarCraft II players. However, their performance is far from defeating e-sport players, because game AI for StarCraft II has a large observation space and a large action space. Moreover, there is no publication on Protoss, the remaining race and the most complicated one for AI agents to handle (larger action space, larger observation space) due to its characteristics. Thus, the research question of this paper is whether a Protoss AI agent, developed with a DRL-based model, can defeat the high-level built-in cheating AI in a full-length game on a particular map. The population of this research design is the StarCraft II AI agents that researchers have built based on their DRL models, while the sample is the Protoss AI agent in this paper. The raw data comes from the game matches between the Protoss AI agent and built-in AI agents. PySC2 can capture features and numerical variables in each match to obtain the training data. The expected outcome is a DRL-based model that can train a Protoss AI agent to defeat high-level built-in AI agents, measured by win rate. The model includes the action space of Protoss, the observation space, and the realization of the DRL algorithms. Meanwhile, the model is built on PySC2 v2.0, which provides additional action functions. Due to the complexity and the unique characteristics of Protoss in StarCraft II, the model cannot be applied to other games or platforms. However, how the model trains a Protoss AI agent can expose the limitations of DRL and push DRL algorithms a little forward.
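For context, the canonical PySC2 scripted-agent skeleton looks roughly like the sketch below; the DRL model described in the abstract would replace the no-op decision in step(). This is a generic PySC2 example, not code from the paper.

```python
from pysc2.agents import base_agent
from pysc2.lib import actions

class ProtossAgent(base_agent.BaseAgent):
    """Minimal PySC2 agent; a learned policy would choose among action functions here."""

    def step(self, obs):
        super().step(obs)
        # obs.observation exposes the feature layers and numeric data PySC2 provides;
        # a DRL policy would map these observations to one of the available actions.
        return actions.FUNCTIONS.no_op()
```

Such an agent is typically launched against a built-in opponent with something like python -m pysc2.bin.agent --map Simple64 --agent mymodule.ProtossAgent (the module path is hypothetical, and the exact flags vary between PySC2 versions).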
|
8 |
Conditioning behavior styles of Reinforcement Learning policies. Mysore Sthaneshwar, Siddharth, 19 September 2023 (has links)
Reinforcement Learning (RL) algorithms may learn any of an arbitrary set of behaviors that satisfy a reward-based objective, and this lack of consistency can limit the reliability and practical utility of RL. By considering how RL policies are trained, aspects of the core optimization loop are identified that significantly impact what behaviors are learned and how. The work presented in this thesis develops frameworks for manipulating these aspects to define and train desirable behavior in practical and more user-friendly ways.
Smoothness of RL-based control was found to be a common issue among existing applications of RL to real-world control. Our initial work on REinforcement-based transferable Agents through Learning (RE+AL) demonstrates that, through principled reward engineering and training-environment tuning, it is possible to learn effective and smooth control. However, this would still be tedious to extend to new tasks. Conditioning for Action Policy Smoothness (CAPS) introduces simple regularization terms directly into the policy optimization and serves as a generalized solution for smooth control that is more easily extensible across tasks.
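A sketch of what smoothness regularization terms of the CAPS kind can look like: a temporal penalty on action changes between consecutive states and a spatial penalty on action changes under small state perturbations, added to the usual actor loss. The weights, noise scale, and wiring are illustrative assumptions rather than the thesis' exact formulation.

```python
import torch

def smoothness_regularizers(policy, states, next_states, sigma=0.05):
    """Temporal and spatial smoothness penalties on a deterministic action head.

    The temporal term penalizes action changes between consecutive states;
    the spatial term penalizes action changes under small perturbations of the state.
    """
    actions = policy(states)
    loss_temporal = (actions - policy(next_states)).norm(dim=-1).mean()
    perturbed = states + sigma * torch.randn_like(states)
    loss_spatial = (actions - policy(perturbed)).norm(dim=-1).mean()
    return loss_temporal, loss_spatial

# Assumed usage inside an actor update, with illustrative weights:
# l_t, l_s = smoothness_regularizers(actor, batch_states, batch_next_states)
# actor_loss = base_actor_loss + 1.0 * l_t + 0.5 * l_s
```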
Looking next at how neural network architectural choices impact policy learning, it was noted that the burden of complexity in learning and representation often fell disproportionately on the value function approximations learned during training. Building on this observation, Multi-Critic Actor Learning (MultiCriticAL) was developed for multi-task RL, drawing on the intuition that, if value functions estimating policy quality are difficult to learn, having a distinct function to evaluate each task would ease this representational burden. MultiCriticAL provides an effective tool for learning policies that can smoothly transition between multiple behavior styles and demonstrates superior performance over commonly used single-critic techniques, both in reward-based performance metrics and in data efficiency, even enabling learning in cases where baseline methods would otherwise fail.
When considering user-friendliness for non-expert practitioners, demonstrations of desirable behavior can often be easier to provide than fine-tuned heuristics, making imitation learning an attractive avenue of exploration for user-friendly tools in policy design. Where heuristic-based rewards can guide RL in general learning, imitation can be used to condition optimization for specific behaviors, though this requires balancing possibly conflicting RL and imitation policy optimization signals. We overcome this challenge by extending MultiCriticAL to learning behavior from demonstrations. The Split-Critic Imitation Learning (SCIL) framework allows specific behaviors to be defined in the parts of the state space where they matter, while policies learn any other compatible, generally useful behavior over the rest of the states using a more standard reward-based RL training loop. Inheriting the strengths of MultiCriticAL, SCIL is better able to separate and balance reinforcement- and imitation-based policy optimization signals and to handle both adequately where contemporary state-of-the-art imitation learning frameworks may fail, while enabling improved imitation performance and data efficiency.
|
9 |
Personality and Mood for Non-player Characters: A Method for Behavior Simulation in a Maze Environment. Paige, Noah L, 01 December 2020 (has links) (PDF)
When it comes to video games, immersion is key. All types of games aim to keep the player immersed in some form or another. A common aspect of the immersive world in most role-playing games -- though not exclusive to the genre -- is the non-playable character (NPC). At their best, NPCs play an integral role in the sense of immersion the player feels by behaving in a way that feels believable and fits within the world of the game. However, due to a lack of innovation in this area of video games, at their worst NPCs can jar the player out of the immersive state of flow with unnatural behavior.
In an effort towards making non-playable characters (NPCs) in games smarter, more believable, and more immersive, a method grounded in psychological theory for controlling the behavior of NPCs was developed. Like the behavior models in most modern games, our behavior model for NPCs traverses a behavior tree. A novel method was introduced that uses the five-factor model of personality (also known as the big-five personality traits) and the circumplex model of affect (a model of emotion) to inform the traversal of an NPC's behavior tree. This behavior model has two main beneficial outcomes. The first is emergent gameplay, resulting in unplanned, unpredictable experiences in games which feel closer to natural behavior, leading to an increase in immersion. This can also be used for complex storytelling by offering information about an NPC's personality for use in a game's narrative. Second, the model is able to provide the emotional status of an NPC in real time. This capability allows developers to programmatically display facial and body expressions, eschewing the current time-consuming approach of artist-choreographed animation. Finally, a maze simulation environment was constructed to test the results of our behavior model and procedural animation.
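A toy sketch of how personality traits might bias the traversal of a behavior tree, in the spirit of the model described above. The trait names follow the five-factor model, but the weighting scheme, trait affinities, and actions are illustrative assumptions, not the thesis' implementation.

```python
import random

# Five-factor personality profile for one NPC (values in [0, 1]).
personality = {"openness": 0.8, "conscientiousness": 0.3, "extraversion": 0.6,
               "agreeableness": 0.7, "neuroticism": 0.2}

# Candidate child nodes of a selector in the behavior tree, each with
# illustrative trait affinities that bias how likely the node is to be chosen.
candidates = [
    ("explore_new_corridor", {"openness": 1.0}),
    ("follow_known_route",   {"conscientiousness": 1.0}),
    ("greet_nearby_npc",     {"extraversion": 0.7, "agreeableness": 0.3}),
    ("avoid_nearby_npc",     {"neuroticism": 1.0}),
]

def pick_node(candidates, personality):
    """Weight each candidate by the NPC's matching traits, then sample."""
    weights = [sum(personality[trait] * w for trait, w in affinities.items())
               for _, affinities in candidates]
    return random.choices([name for name, _ in candidates], weights=weights)[0]

print(pick_node(candidates, personality))   # e.g. "explore_new_corridor"
```

Mood from the circumplex model could modulate the same weights at run time, which is how the real-time emotional status described above would feed back into behavior selection.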
The data collected from 100 iterations of our behavior model in the maze simulation environment shows that a correlation can be observed between traits and actions, demonstrating that emergent gameplay can be achieved by varying personality traits. Additionally, by incorporating a novel method for procedural animation based on real-time emotion data, a more realistic representation of human behavior is achieved.
|