11

Game-independent AI agents for playing Atari 2600 console games

Naddaf, Yavar (has links)
This research focuses on developing AI agents that play arbitrary Atari 2600 console games without any game-specific assumptions or prior knowledge. Two main approaches are considered: reinforcement learning (RL) based methods and search-based methods. The RL-based methods use feature vectors generated from the game screen as well as the console RAM to learn to play a given game. The search-based methods use the emulator to simulate the consequences of actions into the future, aiming to play as well as possible while exploring only a very small fraction of the state space. To ensure the generic nature of our methods, all agents are designed and tuned using four specific games. Once development and parameter selection are complete, the performance of the agents is evaluated on a set of 50 randomly selected games. Significant learning is reported for the RL-based methods on most games. Additionally, some instances of human-level performance are achieved by the search-based methods.
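The search-based approach described above, simulating futures on the emulator while exploring only a tiny fraction of the state space, can be sketched as a simple rollout search. The `clone`/`step`/`legal_actions` interface below is an assumption for illustration, not the thesis' actual emulator API:

```python
import random

def rollout_value(env, action, depth=10, rollouts=5):
    """Estimate an action's value by simulating random futures on a cloned emulator."""
    total = 0.0
    for _ in range(rollouts):
        sim = env.clone()          # assumed: the emulator supports cheap state cloning
        reward = sim.step(action)  # assumed: step() applies an action and returns the score delta
        for _ in range(depth - 1):
            reward += sim.step(random.choice(sim.legal_actions()))
        total += reward
    return total / rollouts

def select_action(env):
    """Greedy one-ply search over simulated rollouts."""
    return max(env.legal_actions(), key=lambda a: rollout_value(env, a))
```

Because each candidate action is judged only by sampled rollouts, the agent never enumerates the full state space; depth and rollout count trade off quality against the per-frame time budget.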
12

Game-independent AI agents for playing Atari 2600 console games

Naddaf, Yavar Unknown Date
No description available.
13

Umělá inteligence pro počítačovou hru Children of the Galaxy / Artificial Intelligence for Children of the Galaxy Computer Game

Šmejkal, Pavel January 2018 (has links)
Even though artificial intelligence (AI) agents are now able to solve many classical games, in the field of computer strategy games the AI opponents still leave much to be desired. In this work we tackle the problem of combat in strategy video games by adapting existing search approaches: Portfolio greedy search (PGS) and Monte-Carlo tree search (MCTS). We also introduce an improved version of MCTS called MCTS considering hit points (MCTS_HP). These methods are evaluated in the context of the recently released 4X strategy game Children of the Galaxy. We implement a combat simulator for the game and a benchmarking framework in which various AI approaches can be compared. We show that for small to medium combat scenarios, MCTS methods are superior to PGS. In all scenarios MCTS_HP is equal to or better than regular MCTS due to its better search guidance. In smaller scenarios, MCTS_HP with only a 100-millisecond time limit outperforms regular MCTS with a 2-second time limit. By combining fast greedy search for large combats with the more precise MCTS_HP for smaller scenarios, a universal AI player can be created.
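The abstract does not give the exact MCTS_HP formulation; below is a generic sketch of how a hit-point-aware evaluation could plug into standard UCT child selection. The node and unit fields (`visits`, `value`, `hp`, `my_units`, `enemy_units`) are illustrative assumptions, not the thesis' data model:

```python
import math

def uct_select(node, c=1.4):
    """Pick the child maximizing the UCT score: mean value plus exploration bonus."""
    return max(
        node.children,
        key=lambda ch: ch.value / ch.visits
        + c * math.sqrt(math.log(node.visits) / ch.visits),
    )

def hp_score(state):
    """Hit-point-based combat evaluation in the spirit of MCTS_HP (illustrative,
    not the thesis' exact formula): remaining friendly HP minus remaining enemy HP."""
    return sum(u.hp for u in state.my_units) - sum(u.hp for u in state.enemy_units)
```

Using `hp_score` instead of a plain win/loss outcome gives the search a denser signal about partial combat progress, which is one plausible reading of the "better search guidance" claimed for MCTS_HP.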
14

RESOURCE CONSTRAINT COOPERATIVE GAME WITH MONTE CARLO TREE SEARCH

Cheng, Chee Chian 01 August 2016 (has links)
A hybrid methodology combining game theory and Monte Carlo Tree Search was developed and tested on various case studies of the nurse scheduling problem. It was shown to form Pareto-dominant solution fronts, to find feasible solutions that were optimal, and to find feasible partial solutions in over-constrained problems. A performance comparison with the Genetic Algorithm on the Resident Physician Scheduling problem showed that the hybrid methodology produced better-quality solutions than the state-of-the-art approach.
15

Umělá inteligence pro hru Quoridor / Artificial Intelligence for Quoridor Board Game

Brenner, Matyáš January 2015 (has links)
The aim of this work is to design an Artificial Intelligence for Sector 66, a board game based on Quoridor in which players can use spells and fields with special effects. The Artificial Intelligence is based on Monte Carlo Tree Search and supports 2 to 4 players. It can cope with the high branching factor of the Quoridor/Sector 66 game and can also handle unknown elements represented by user-defined plug-ins. The game and the Artificial Intelligence have been developed using the .NET Framework, XNA and C#.
16

Domain independent enhancements to Monte Carlo tree search for eurogames

Bergh, Peter January 2020 (has links)
The Monte Carlo tree search algorithm (MCTS) has proven successful when applied to combinatorial games, a term for sequential games with perfect information. As the focus of MCTS research has tended to lean towards combinatorial games, general MCTS strategies for other types of board games are hard to find. On another front, board games under the name of "eurogames" have become increasingly popular in the last decade. These games introduce yet another set of challenges for game-playing agents on top of what combinatorial games already offer. Since its initial conception, a large number of enhancements to the MCTS algorithm have been proposed. Seeing that eurogames share much of the same game mechanics with each other, MCTS enhancements proving effective for one game could potentially be aimed at eurogames in general. In this paper, alterations to the expansion phase, the playout phase and the backpropagation phase are made to the standard MCTS algorithm for agents playing the game of Carcassonne. To detect how enhancements are affected by chance events, both a deterministic and a stochastic version of the game are examined. It can be concluded that a reward policy relying solely on in-game score outperforms the conventional wins-against-losses policy. Concerning playouts, the Early Playout Termination enhancement only yields better results when the number of MCTS iterations is somewhat restricted. Lastly, delayed node expansion is shown to be preferable to conventional node expansion. None of the enhancements showed increasing or declining performance with regard to chance events. Additional experiments on other eurogames are needed to reaffirm these findings. Moreover, subsequent studies introducing modifications to the examined enhancements are proposed as a measure to further increase agent performance.
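Two of the playout-related ideas in the abstract, a score-based reward and Early Playout Termination, can be sketched together as below. The `is_terminal`/`legal_moves`/`apply`/`score` state interface is an assumption for illustration, not the thesis' actual Carcassonne implementation:

```python
import random

def playout(state, max_moves=20, eval_fn=None):
    """Random playout with Early Playout Termination: stop after max_moves
    and score the position heuristically instead of playing to the end."""
    moves = 0
    while not state.is_terminal() and moves < max_moves:
        state = state.apply(random.choice(state.legal_moves()))
        moves += 1
    # Score-based reward (in-game points) rather than a win/loss indicator,
    # matching the reward policy the thesis found to work best.
    return eval_fn(state) if eval_fn else state.score()
```

Truncating playouts saves simulation time per iteration; per the findings above, that trade-off only pays off when the total iteration budget is restricted.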
17

Modelling Large Protein Complexes

Chim, Ho Yeung January 2023 (has links)
AlphaFold [Jumper et al., 2021, Evans et al., 2022] is a deep learning-based method that can accurately predict the structure of single- and multiple-chain proteins. However, its accuracy decreases with an increasing number of chains, and GPU memory limits the size of protein complexes that can be predicted. Recently, Elofsson's group introduced a Monte Carlo tree search method, MoLPC, that can predict the structure of large complexes from predictions of sub-components [Bryant et al., 2022b]. However, MoLPC cannot adjust for errors in the sub-component predictions and requires knowledge of the correct protein stoichiometry. Large protein complexes are responsible for many essential cellular processes, such as mRNA splicing [Will and Lührmann, 2011], protein degradation [Tanaka, 2009], and protein folding [Ditzel et al., 1998]. However, structural knowledge of many large protein complexes is still lacking. Only a fraction of the eukaryotic core complexes in CORUM [Giurgiu et al., 2019] have homologous structures covering all chains in the PDB, indicating a significant gap in our structural understanding of protein complexes. AlphaFold-Multimer [Evans et al., 2022] is the only deep learning method that can predict the structure of more than two protein chains; trained on proteins of up to 20 chains, it can predict complexes of up to a few thousand residues, where memory limitations come into play. Another approach, taken by MoLPC, is to predict the structures of sub-components of large complexes and assemble them. It has been shown that large complexes can be assembled from dimers manually [Burke et al., 2021] or with Monte Carlo tree search [Bryant et al., 2022b]. One limitation of the previous MoLPC approach is its inability to account for errors in sub-component prediction: small errors in each sub-component can propagate to a significant error when building the entire complex, leading to MoLPC's failure.
To overcome this challenge, the Monte Carlo Tree Search algorithm in MoLPC2 is enhanced to assemble protein complexes while simultaneously predicting their stoichiometry. Using MoLPC2, we accurately predicted the structures of 50 out of 175 non-redundant protein complexes (TM-score >0.8), while MoLPC only predicted 30. It should be noted that improvements introduced in AlphaFold version 2.3 enable the prediction of larger complexes; when the stoichiometry is known, it can accurately predict the structures of 74 complexes. Our findings suggest that assembling symmetrical complexes from sub-components yields higher accuracy, while assembling asymmetrical complexes remains challenging.
18

Parallel Go on CUDA with Monte Carlo Tree Search

Zhou, Jun 11 October 2013 (has links)
No description available.
19

Mastering the Game of Gomoku Without Human Knowledge

Wang, Yuan 01 June 2018 (has links) (PDF)
Gomoku, also called Five in a Row, is one of the earliest checkerboard games invented by humans, and it has brought players countless hours of pleasure. Human players have developed many skills for playing it, and researchers have encoded these skills into computers so that a computer knows how to play Gomoku. However, such a computer merely follows the pre-entered skills; it does not know how to develop skills by itself. Inspired by Google's AlphaGo Zero, in this thesis we combine Monte Carlo Tree Search, deep neural networks, and reinforcement learning to propose a system that trains machine Gomoku players without prior human skills. These are self-evolving players given no prior knowledge; they develop their own skills from scratch. We ran this system for a month and a half, during which 150 different players were generated; the later a player was generated, the stronger its abilities. During training, beginning with zero knowledge, the players developed a row-based bottom-up strategy, followed by a column-based bottom-up strategy, and finally a more flexible and intelligible strategy with a preference for the surrounding squares. Although even the latest players are not strong enough to be regarded as strong AI agents, they show the ability to learn from previous games. This thesis therefore demonstrates that it is possible for a machine Gomoku player to evolve by itself without human knowledge. These players are on the right track; with continued training, they would become better Gomoku players.
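AlphaGo Zero-style systems like the one described above typically guide tree search with a network prior via the PUCT selection rule. The formula below is the generic version used in such systems, not necessarily the thesis' exact variant, and the child fields (`value`, `visits`) are illustrative assumptions:

```python
import math

def puct_score(parent_visits, child, prior, c_puct=1.0):
    """AlphaZero-style PUCT: mean value estimate plus a prior-weighted
    exploration bonus that decays as the child accumulates visits."""
    q = child.value / child.visits if child.visits else 0.0
    u = c_puct * prior * math.sqrt(parent_visits) / (1 + child.visits)
    return q + u
```

During self-play, moves with high network priors get explored first; as visit counts grow, the empirical value term `q` dominates, which is how the learned policy and the search correct each other over successive generations of players.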
20

Acceleration of CFD and Data Analysis Using Graphics Processors

Khajeh Saeed, Ali 01 February 2012 (has links)
Graphics processing units (GPUs) function well as high-performance computing devices for scientific computing. Their non-standard processor architecture and high memory bandwidth allow GPUs to provide some of the best performance in terms of FLOPS per dollar. Recently these capabilities became accessible for general-purpose computation through the CUDA programming environment on NVIDIA GPUs and the ATI Stream Computing environment on ATI GPUs. Many applications in computational science are constrained by memory access speeds and can be accelerated significantly by using GPUs as the compute engine, giving a personal desktop computer processing capacity that competes with supercomputers. GPUs represent an energy-efficient architecture for high-performance computing in flow simulations and many other fields. This document reviews the graphics processing unit and its features and limitations.
