81

Some effects of schedules of vicarious reinforcement and rate of responding by a model upon extinction in an observer

Borden, Betty Louise, 1948- January 1972 (has links)
No description available.
82

Resistance to extinction as a function of type of acquisition experience

Lee, Penny Elizabeth, 1941- January 1966 (has links)
No description available.
83

Process assessment: an examination of the acquisition and retention of sight word vocabulary through reinforcement procedures

Green, Leslie Marion, 1951- January 1975 (has links)
No description available.
84

Effects of instructions and vicarious reinforcement schedules on extinction resistance and response rate

Ziesat, Harold Anthony, 1951- January 1973 (has links)
No description available.
85

A learned helplessness model of delay of reinforcement

Gill, Sheila McVeigh January 1980 (has links)
No description available.
86

The effects of within-session manipulation of reinforcer magnitude on schedule-induced polydipsia

Pasquali, Paula E. January 1976 (has links)
No description available.
87

Gustatory and post-ingestional aspects of reinforcement

Messier, Claude. January 1982 (has links)
No description available.
88

Electrical self-stimulation, a conventional reinforcer

Beninger, Richard J. January 1974 (has links)
No description available.
89

Second-order schedule performance: the role of brief stimuli and the effects of imipramine

Bradford, Linda DiAnne (has links)
No description available.
90

Reinforcement learning in commercial computer games

Coggan, Melanie. January 2008 (has links)
The goal of this thesis is to explore the use of reinforcement learning (RL) in commercial computer games. Although RL has been applied with success to many types of board games and non-game simulated environments, there has been little work applying RL to the most popular genres of games: first-person shooters, role-playing games, and real-time strategy games. In this thesis we use a first-person shooter environment to create computer players, or bots, that learn to play the game using reinforcement learning techniques.

We have created three experimental bots: ChaserBot, ItemBot and HybridBot. The first two bots each focus on a different aspect of the first-person shooter genre and learn using basic RL. ChaserBot learns to chase down and shoot an enemy player. ItemBot, on the other hand, learns to pick up the items (weapons, ammunition, armor) scattered on the ground for players to improve their arsenal. Both of these bots become reasonably proficient at their assigned task. Our goal for the third bot, HybridBot, was to create a bot that both chases and shoots an enemy player and goes after the items in the environment. Unlike the two previous bots, which have only primitive actions available (strafing right or left, moving forward or backward, etc.), HybridBot uses options. At any state, it may choose either the player-chasing option or the item-gathering option. These options' internal policies are determined by the data learned by ChaserBot and ItemBot. HybridBot uses reinforcement learning to learn which option to pick in a given state.

Each bot learns to perform its given task. We compare the three bots' ability to gather items, and ChaserBot's and HybridBot's ability to chase their opponent. HybridBot's results are of particular interest, as it outperforms ItemBot at picking up items by a large margin. However, none of our experiments yielded bots that are competitive with human players. We discuss the reasons for this and suggest improvements for future work that could lead to competitive reinforcement learning bots.
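
The decision HybridBot makes, choosing which of two fixed sub-policies to invoke in the current state, is an instance of learning over temporally extended options. The sketch below is only an illustration of that general idea and is not code from the thesis: the Option wrapper, the env.reset/env.step interface, the assumption of hashable states, and all hyperparameters are assumptions introduced here.

```python
import random
from collections import defaultdict

class Option:
    """Wraps a fixed low-level policy (e.g., one learned by a ChaserBot- or ItemBot-style agent)."""
    def __init__(self, policy, termination):
        self.policy = policy            # maps state -> primitive action
        self.termination = termination  # maps state -> True when the option should stop

def run_option(env, state, option, gamma):
    """Run an option to termination; return its discounted reward, the resulting state,
    the number of primitive steps taken, and whether the episode ended."""
    total, discount, steps, done = 0.0, 1.0, 0, False
    while not done and not option.termination(state):
        state, reward, done = env.step(option.policy(state))  # assumed env interface
        total += discount * reward
        discount *= gamma
        steps += 1
    return total, state, steps, done

def smdp_q_learning(env, options, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Learn which option to invoke in each state (states assumed hashable)."""
    Q = defaultdict(lambda: [0.0] * len(options))  # Q[state][option index]
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy choice among options
            if random.random() < epsilon:
                o = random.randrange(len(options))
            else:
                o = max(range(len(options)), key=lambda i: Q[state][i])
            reward, next_state, k, done = run_option(env, state, options[o], gamma)
            # SMDP-style update: the bootstrap term is discounted by gamma**k,
            # where k is the number of primitive steps the option lasted
            target = reward + (0.0 if done else (gamma ** k) * max(Q[next_state]))
            Q[state][o] += alpha * (target - Q[state][o])
            state = next_state
    return Q
```

In this formulation, the two options would wrap the policies learned separately by the chasing and item-gathering bots, and the higher-level learner only chooses between them, which matches the abstract's description of how HybridBot selects an option in each state.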
