1

Comparison of cumulative reward with one, two and three layered artificial neural network in a simple environment when using ml-agents

Björkberg, David January 2021 (has links)
Background. In machine learning you let the computer play a scenario, often millions of times. When the computer plays it receives feedback based on preset guidelines and adjusts its behaviour accordingly. The computer stores this learned behaviour in its artificial neural network (ANN). The ANN consists of an input layer, a set number of hidden layers and an output layer. The ANN calculates actions using weights between the nodes in each layer and modifies those weights when it receives feedback. ml-agents is Unity Technologies' implementation of machine learning. Objectives. ml-agents is a complex system with many different configurations, which means users need guidance on which configuration gives the best results. Our thesis aimed to answer how many hidden layers yield the best results by answering the research question "How many layers are required to make the network capable of capturing the complexities of the environment?". Methods. We used a prebuilt environment provided by Unity, in which the agent aims to keep a ball on its head for as long as possible. Training data was collected by TensorFlow, which provided graphs for each training session. We used these graphs to evaluate the training sessions and ran each session several times to get more consistent results. To evaluate the training sessions we looked primarily at the peak of their cumulative reward graph and secondarily at how fast they reached this peak. Results. We found that with just one hidden layer, the agent could only get roughly a fifth of the way to capturing the complexity of the environment. With two and three layers the agent was capable of capturing the complexity of the environment. The three-layered training sessions reached their cumulative reward peak 22 percent faster than the two-layered ones. Conclusions. We managed to answer our research question: the minimum number of hidden layers required to capture the complexity of the environment is two. However, with an additional layer the agent reached the same result faster, which is worth taking into consideration.
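As an illustration of the one/two/three-layer comparison above, a minimal PyTorch sketch (not from the thesis) of a policy network with a configurable number of hidden layers; the layer width of 128 and the helper name `build_mlp` are assumptions made for the example:

```python
import torch.nn as nn

def build_mlp(obs_size: int, action_size: int,
              num_hidden_layers: int, hidden_units: int = 128) -> nn.Sequential:
    """Fully connected network with a configurable number of hidden layers,
    mirroring the one-, two- and three-layer variants compared in the thesis."""
    layers = []
    in_features = obs_size
    for _ in range(num_hidden_layers):
        layers += [nn.Linear(in_features, hidden_units), nn.ReLU()]
        in_features = hidden_units
    layers.append(nn.Linear(in_features, action_size))  # output layer
    return nn.Sequential(*layers)

# Three candidate networks for the same (placeholder) observation/action sizes.
one_layer = build_mlp(obs_size=8, action_size=2, num_hidden_layers=1)
two_layers = build_mlp(obs_size=8, action_size=2, num_hidden_layers=2)
three_layers = build_mlp(obs_size=8, action_size=2, num_hidden_layers=3)
```

In ML-Agents itself the corresponding knobs are, in recent releases, the `num_layers` and `hidden_units` fields under `network_settings` in the trainer configuration file.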
2

Machine Learning Adversaries in Video Games : Using reinforcement learning in the Unity Engine to create compelling enemy characters

Nämerforslund, Tim January 2021 (has links)
As video games become more advanced, not only graphically but also as an art form, and offer a more immersive experience, the games can also be expected to offer a greater challenge to make the player even more engaged. Today's players are used to enemies whose behaviour is governed by clear patterns and rules, which act in a preprogrammed way depending on the situation and follow predictable patterns. This leads to a game experience where the goal becomes to figure out this pattern and find a way to outsmart or defeat it. But what if it were possible to create a new kind of enemy that responds and adapts depending on how the player behaves? One that adapts and comes up with its own strategies based on how the player plays, and actively tries to outsmart the player? Machine learning in games makes exactly this possible. With a machine learning model controlling the enemies and being trained against the players who face it, the enemies learn to meet the players in a dynamic way that adapts as the player plays the game. This study aims to examine the steps required to implement machine learning in the Unity engine and to investigate whether there is any perceived difference in the game experience between players who face enemies controlled by a machine learning model and players who face a more traditional type of enemy. Data is collected from the test players' play sessions and from their answers to a questionnaire, and the data is presented as graphs to give insight into whether the enemies were equally difficult to play against. The questionnaire answers are used to compare the players' game experiences and, from this, identify the differences between them. The small scale and simplicity of the game mean that the answers should not be affected by unknown and uncontrollable factors, which yields answers that give us insight into the differences between the two game experiences, where a preference for enemies controlled by machine learning models can be discerned, as they are perceived as more unpredictable and varied. / As video games become more complex and more immersive, not just graphically or as an artform, but also technically, it can be expected that games behave on a deeper level to challenge and immerse the player further. Today's gamers have gotten used to pattern-based enemies, moving between preprogrammed states with predictable patterns, which lends itself to a certain kind of gameplay where the goal is to figure out how to beat said pattern. But what if there could be more in terms of challenging the player on an interactive level? What if the enemies could learn and adapt, trying to outsmart the player just as much as the player tries to outsmart the enemies? This is where the field of machine learning enters the stage and opens up for an entirely new type of non-player character in video games: an enemy who uses a trained machine learning model to play against the player, and who can adapt and become better as more people play the game. This study aims to look at early steps to implement machine learning in video games, in this case in the Unity engine, and to look at the players' perception of said enemies compared to normal state-driven enemies. By letting voluntary test players play against the two kinds of enemies, data is gathered to compare the average performance of the players, after which the players answer a questionnaire. These answers are analysed to give an indication of preference in type of enemy.
Overall, the small scale of the game and the simplicity of the enemies give clear answers but also limit the potential complexity of the enemies and thus the players' enjoyment. However, this also enables us to discern a perceived difference in the players' experience, where a preference for machine-learning-controlled enemies is noticeable, as they behave less predictably and with more varied behaviour.
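For context on the "state-driven" baseline the study compares against, a minimal sketch (in Python rather than Unity C#, and not the thesis's implementation) of a pattern-based enemy moving between preprogrammed states:

```python
from enum import Enum, auto

class EnemyState(Enum):
    PATROL = auto()
    CHASE = auto()
    ATTACK = auto()

class StateDrivenEnemy:
    """A classic finite-state-machine enemy: behaviour is fully determined
    by hand-written transition rules, so its patterns are predictable."""
    def __init__(self, sight_range: float = 10.0, attack_range: float = 2.0):
        self.state = EnemyState.PATROL
        self.sight_range = sight_range
        self.attack_range = attack_range

    def update(self, distance_to_player: float) -> str:
        # Transition rules are fixed at design time.
        if distance_to_player <= self.attack_range:
            self.state = EnemyState.ATTACK
        elif distance_to_player <= self.sight_range:
            self.state = EnemyState.CHASE
        else:
            self.state = EnemyState.PATROL
        # Each state maps to a preprogrammed action.
        return {EnemyState.PATROL: "walk_route",
                EnemyState.CHASE: "move_towards_player",
                EnemyState.ATTACK: "attack_player"}[self.state]
```

The machine-learning-controlled enemy replaces these hand-written rules with a policy trained against players, which is what makes its behaviour adaptive rather than pattern-bound.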
3

An Evaluation of the Unity Machine Learning Agents Toolkit in Dense and Sparse Reward Video Game Environments

Hanski, Jari, Biçak, Kaan Baris January 2021 (has links)
In computer games, one use case for artificial intelligence is to create interesting problems for the player. To do this, new techniques such as reinforcement learning allow game developers to create artificial intelligence agents with human-like or superhuman abilities. The Unity ML-Agents toolkit is a plugin that gives game developers access to reinforcement learning algorithms without requiring expertise in machine learning. In this paper, we compare reinforcement learning methods and provide empirical training data from two different environments. First, we describe the chosen reinforcement learning methods and then explain the design of both training environments. We compared the benefits in both dense and sparse reward environments. The reinforcement learning methods were evaluated by comparing the training speed and cumulative rewards of the agents. The goal was to evaluate how much the combination of extrinsic and intrinsic rewards accelerated the training process in the sparse reward environment. We hope this study helps game developers utilize reinforcement learning more effectively, saving time during the training process by choosing the most fitting training method for their video game environment. The results show that when training reinforcement learning agents in sparse reward environments, the agents trained faster with the combination of extrinsic and intrinsic rewards, whereas an agent trained in a sparse reward environment with only extrinsic rewards failed to learn to complete the task.
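The extrinsic-plus-intrinsic combination evaluated above corresponds, in ML-Agents, to enabling a curiosity reward signal alongside the extrinsic one in the trainer configuration. A rough Python sketch of the underlying idea (not ML-Agents code; the curiosity module is simplified to a linear forward model whose prediction error becomes the bonus):

```python
import numpy as np

class CuriosityBonus:
    """Simplified curiosity-style intrinsic reward: the bonus is the error of a
    learned forward model that predicts the next observation from (obs, action).
    Transitions the agent cannot yet predict yield a larger bonus, which keeps it
    exploring even when the extrinsic (environment) reward is sparse."""
    def __init__(self, obs_size: int, action_size: int,
                 lr: float = 0.01, strength: float = 0.02):
        self.W = np.zeros((obs_size, obs_size + action_size))  # linear forward model
        self.lr = lr
        self.strength = strength

    def reward(self, obs: np.ndarray, action: np.ndarray, next_obs: np.ndarray) -> float:
        x = np.concatenate([obs, action])
        error = next_obs - self.W @ x
        # Online update of the forward model (gradient step on the squared error).
        self.W += self.lr * np.outer(error, x)
        return self.strength * float(np.mean(error ** 2))

# Reward seen by the learner at every step (weighting is illustrative):
# total_reward = extrinsic_reward + curiosity.reward(obs, action, next_obs)
```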
4

Future-proofing Video Game Agents with Reinforced Learning and Unity ML-Agents / Framtidssäkring av datorspelsagenter med förstärkningsinlärning och Unity ML-Agents

Andersson, Pontus January 2021 (has links)
In recent years, a number of simulation platforms have utilized video games as training grounds for designing and experimenting with different machine learning algorithms. One issue for many is that video games usually do not provide any source code. The Unity ML-Agents toolkit provides both example environments and state-of-the-art machine learning algorithms in an attempt to solve this. This has sparked curiosity in a local game company, which wished to investigate incorporating machine-learned agents into their game using the toolkit. As such, the goal was to produce high-performing, integrable agents capable of completing locomotive tasks. A pilot study was conducted which contributed insight into training functionality and aspects important to producing a robust behaviour model. With the use of Proximal Policy Optimization and different training configurations, several neural network models were produced and evaluated on existing and new data. Several of the produced models displayed promising results but did not achieve the defined success rate of 80%. With some additional testing it is believed that the desired result could be reached. Alternatively, other aspects of the toolkit, such as Soft Actor-Critic and Curriculum Learning, could be investigated. / In recent times, a handful of simulation platforms have used video games as training environments for designing and experimenting with different machine learning algorithms. A problem for many is that these games usually do not provide any source code. The Unity ML-Agents toolkit aims to solve this need by offering ready-made training environments together with the latest machine learning algorithms. This has raised interest at a local game company that wants to investigate the possibility of integrating machine-learned agents into one of their games. Consequently, the goal was formulated to create high-performing and integrable agents capable of performing locomotive tasks. A pilot study was conducted and provided useful information about training functionality and surrounding aspects of producing robust behaviour models. Using proximal policy optimization and different training configurations, neural network models were created and evaluated on existing and new data, respectively. Most models showed promising results, but none reached the specified performance target of 80%. The belief is that with further testing the desired result could have been reached. Going forward, it is also possible to investigate other learning techniques included in the ML-Agents toolkit.
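A minimal sketch (with assumed helper names, not the thesis's code) of how a produced behaviour model could be scored against the 80% success criterion mentioned above:

```python
from typing import Callable

def success_rate(run_episode: Callable[[], bool], episodes: int = 100) -> float:
    """Roll out the trained policy for a number of evaluation episodes and return
    the fraction of episodes in which the agent completed the locomotive task.
    `run_episode` is a placeholder for a single rollout returning True on success."""
    successes = sum(1 for _ in range(episodes) if run_episode())
    return successes / episodes

# Example acceptance check against the target used in the study:
# rate = success_rate(run_episode=my_rollout_fn)
# meets_target = rate >= 0.80
```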
5

Återskapa mänskligt beteende med artificiell intelligens i 2D top-down wave shooter spel / Recreate human behaviour with artificial intelligence in 2D top-down wave shooter game

Bjärehall, Johannes, Hallberg, Johan January 2020 (has links)
This work investigates human-like behaviour in behaviour trees and LSTM networks. A game was created and tested by participants in a study where they played together with each agent in random order to judge the agents' behaviour. The results of the study showed that, according to the participants, the behaviour tree was the more human-like variant regardless of the order in which the test subjects played with each agent. The problem with this result is most likely due to there not being enough time, and to insufficient CPU power, to develop the LSTM agent further. To improve on and continue this work, more time could be spent training the LSTM network and fine-tuning the behaviour tree. To improve the test, real multiplayer functionality should be implemented so that the agents can be compared against real human players.
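For readers unfamiliar with the first of the two techniques compared above, a minimal behaviour-tree sketch in Python (illustrative only, not the code used in the study):

```python
class Node:
    def tick(self, blackboard: dict) -> bool:
        raise NotImplementedError

class Condition(Node):
    def __init__(self, check):
        self.check = check
    def tick(self, blackboard):
        return self.check(blackboard)

class Action(Node):
    def __init__(self, act):
        self.act = act
    def tick(self, blackboard):
        self.act(blackboard)
        return True

class Sequence(Node):
    """Succeeds only if every child succeeds, evaluated in order."""
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        return all(child.tick(blackboard) for child in self.children)

class Selector(Node):
    """Succeeds on the first child that succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        return any(child.tick(blackboard) for child in self.children)

# A tiny wave-shooter agent: shoot if an enemy is in sight, otherwise move towards one.
agent = Selector(
    Sequence(Condition(lambda bb: bb.get("enemy_in_sight", False)),
             Action(lambda bb: bb.update(action="shoot"))),
    Action(lambda bb: bb.update(action="move_to_enemy")),
)
```

The LSTM agent in the study instead learns its action selection from recorded play, which is what the participants were asked to compare against this kind of hand-authored logic.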
6

Reinforcement Learning for Procedural Game Animation: Creating Uncanny Zombie Movements

Tayeh, Adrian, Almquist, Arvid January 2024 (has links)
This thesis explores the use of reinforcement learning within the Unity ML-Agents framework to simulate zombie-like movements in humanoid ragdolls. The generated locomotion aims to embrace the Uncanny Valley phenomenon, partly through the way it walks, but also through limb disablement. Additionally, the paper strives to test the effectiveness of reinforcement learning as a tool for generating adaptive locomotion. The research implements reward functions and addresses technical challenges, placing particular focus on adaptability through the limb disablement system. A user study comparing the reinforcement learning agent to Mixamo animations evaluates the effectiveness of simulating zombie-like movements, as well as whether the Uncanny Valley phenomenon was achieved. Results show that while the reinforcement learning agent may lack believability and uncanniness when compared to the Mixamo animation, it features a level of adaptability that is worth expanding upon. Given the inconclusive results, there is room for further research on the topic to achieve the Uncanny Valley effect and enhance zombie-like locomotion with reinforcement learning.
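A hedged sketch of the kind of per-step locomotion reward such a setup might use; the terms, weights and parameter names are assumptions for illustration, not the thesis's actual reward design:

```python
import numpy as np

def locomotion_reward(forward_velocity: float,
                      upright_dot: float,
                      joint_torques: np.ndarray,
                      disabled_limbs: np.ndarray) -> float:
    """Per-step reward for a ragdoll walker.
    forward_velocity: velocity along the target direction (m/s), rewarded.
    upright_dot:      dot product of torso up-vector and world up (1 = upright).
    joint_torques:    applied torques, penalised to discourage flailing.
    disabled_limbs:   boolean mask for the limb-disablement idea; in the
                      environment the actions on these joints would be ignored,
                      so their torques are excluded from the effort penalty."""
    effective_torques = np.where(disabled_limbs, 0.0, joint_torques)
    velocity_term = 1.0 * forward_velocity
    upright_term = 0.5 * upright_dot
    effort_penalty = 0.01 * float(np.sum(effective_torques ** 2))
    return velocity_term + upright_term - effort_penalty
```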
7

Deep Reinforcement Learning for Multi-Agent Path Planning in 2D Cost Map Environments : using Unity Machine Learning Agents toolkit

Persson, Hannes January 2024 (has links)
Multi-agent path planning is applied in a wide range of applications in robotics and autonomous vehicles, including aerial vehicles such as drones and other unmanned aerial vehicles (UAVs), to solve tasks in areas like surveillance, search and rescue, and transportation. With today's rapidly evolving technology in the fields of automation and artificial intelligence, multi-agent path planning is becoming increasingly relevant. The main problems encountered in multi-agent path planning are collision avoidance with other agents, obstacle evasion, and pathfinding from a starting point to an endpoint. In this project, the objectives were to create intelligent agents capable of navigating through two-dimensional eight-agent cost map environments to a static target, while avoiding collisions with other agents and simultaneously minimizing the path cost. Reinforcement learning was used, utilizing the Unity development platform and the open-source ML-Agents toolkit, which enables the development of intelligent agents with reinforcement learning inside Unity. Perlin noise was used to generate the cost maps. The reinforcement learning algorithm Proximal Policy Optimization was used to train the agents. The training was structured as a curriculum with two lessons: the first lesson was designed to teach the agents to reach the target without colliding with other agents or moving out of bounds, and the second lesson was designed to teach the agents to minimize the path cost. The project successfully achieved its objectives, which could be determined from visual inspection and by comparing the final model with a baseline model. The baseline model was trained only to reach the target while avoiding collisions, without minimizing the path cost. A comparison of the models showed that the final model outperformed the baseline model, achieving on average a 27.6% lower path cost. / Multi-agent path planning is used in a wide range of applications within robotics and autonomous vehicles, including aerial vehicles such as drones and other unmanned aerial vehicles (UAVs), to solve tasks in areas such as surveillance, search and rescue, and transportation. With today's rapidly developing technology in automation and artificial intelligence, multi-agent path planning is becoming increasingly relevant. The main problems encountered in multi-agent path planning are collisions with other agents, obstacle avoidance, and pathfinding from a start point to an end point. In this project, the goals were to create intelligent agents that can navigate through two-dimensional eight-agent cost map environments to a static target while avoiding collisions with other agents and minimizing the path cost. Reinforcement learning was used via the Unity development platform and Unity's open-source ML-Agents toolkit, which enables the development of intelligent agents with reinforcement learning inside Unity. Perlin noise was used to generate the cost maps. The reinforcement learning algorithm Proximal Policy Optimization was used to train the agents. The training was structured as a curriculum with two lessons: the first was designed to teach the agents to reach the target without colliding with other agents or moving out of bounds, and the second was designed to teach the agents to minimize the path cost. The project successfully achieved its goals, which could be determined by visual inspection and by comparing the final model with a baseline model. The baseline model was trained only to reach the target and avoid collisions, without minimizing the path cost. A comparison of the models showed that the final model outperformed the baseline model, achieving on average a 27.6% lower path cost.
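As a rough illustration of the cost-map setup described above, a small Python sketch; a smoothed random field stands in for the Perlin noise generator used in the thesis, and all names and sizes are assumptions:

```python
import numpy as np

def make_cost_map(size: int = 64, smoothing_passes: int = 4, seed: int = 0) -> np.ndarray:
    """Generate a 2D cost map in [0, 1]. The thesis uses Perlin noise; here a
    repeatedly neighbour-averaged random field is used as a simple stand-in."""
    rng = np.random.default_rng(seed)
    cost = rng.random((size, size))
    for _ in range(smoothing_passes):
        # Average each cell with its 4 neighbours (wrap-around for simplicity).
        cost = (cost
                + np.roll(cost, 1, axis=0) + np.roll(cost, -1, axis=0)
                + np.roll(cost, 1, axis=1) + np.roll(cost, -1, axis=1)) / 5.0
    cost -= cost.min()
    return cost / cost.max()

def path_cost(cost_map: np.ndarray, path: list[tuple[int, int]]) -> float:
    """Sum of map costs along a path of (row, col) cells; this is the quantity the
    second curriculum lesson trains the agents to minimize."""
    return float(sum(cost_map[r, c] for r, c in path))

# Example: score a placeholder path, as one would for the final vs. baseline comparison.
cost_map = make_cost_map()
diagonal_path = [(i, i) for i in range(64)]
print(path_cost(cost_map, diagonal_path))
```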
