171

Model-based active learning in hierarchical policies

Cora, Vlad M. 05 1900
Hierarchical task decompositions play an essential role in the design of complex simulation and decision systems, such as those that arise in video games. Game designers find it natural to adopt a divide-and-conquer philosophy when specifying hierarchical policies, where decision modules can be constructed somewhat independently. Choosing the parameters of these modules manually is typically a lengthy and tedious process. The hierarchical reinforcement learning (HRL) field has produced elegant ways of decomposing policies and value functions using semi-Markov decision processes, but demonstrations in larger nonlinear systems with discrete and continuous variables are still lacking. To narrow this gap between industrial practice and academic ideas, we address the problem of designing efficient algorithms to facilitate the deployment of HRL ideas in more realistic settings. In particular, we propose Bayesian active learning methods that learn the relevant aspects of either policies or value functions by focusing on the most informative parts of the parameter and state spaces, respectively. To demonstrate the scalability of our solution, we have applied it to The Open Racing Car Simulator (TORCS), a 3D game engine that implements complex vehicle dynamics. The environment is a large topological map roughly based on downtown Vancouver, British Columbia. Higher-level abstract tasks are also learned in this process using a model-based extension of the MAXQ algorithm. Our solution demonstrates how HRL can be scaled to large applications with complex, discrete and continuous nonlinear dynamics.
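The abstract names the ingredients (Bayesian active learning over policy parameters, with costly simulator rollouts) without spelling out the procedure, so the following is only a generic sketch of the idea: a Gaussian-process surrogate scores candidate parameter settings, and each expensive rollout is spent where an optimistic estimate of performance is highest. The `evaluate_policy` stand-in, the RBF kernel, and the UCB acquisition rule are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.3):
    """Squared-exponential kernel between two sets of parameter vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(X, y, Xq, noise=1e-4):
    """Gaussian-process posterior mean and std at query points Xq."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xq)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def evaluate_policy(theta):
    """Hypothetical stand-in for one costly simulator rollout (e.g. a lap in TORCS)."""
    return -np.sum((theta - 0.6) ** 2)  # toy objective with optimum at 0.6

rng = np.random.default_rng(0)
X = rng.uniform(size=(5, 2))                       # parameter settings tried so far
y = np.array([evaluate_policy(t) for t in X])

for _ in range(20):
    cand = rng.uniform(size=(256, 2))              # candidate policy parameters
    mu, sd = gp_posterior(X, (y - y.mean()) / (y.std() + 1e-9), cand)
    theta = cand[np.argmax(mu + 2.0 * sd)]         # optimistic (UCB) query point
    X = np.vstack([X, theta])
    y = np.append(y, evaluate_policy(theta))

print("best parameters found:", X[np.argmax(y)])
```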
172

A service-oriented approach to topology formation and resource discovery in wireless ad-hoc networks

Gonzalez Valenzuela, Sergio 05 1900
The past few years have witnessed a significant evolution in mobile computing and communications, in which new trends and applications have transformed the traditional role of computer networks into that of distributed service providers. In this thesis we explore an alternative way to form wireless ad-hoc networks whose topologies can be customized as required by the users' software applications. In particular, we investigate the applicability of mobile code to networks created by devices equipped with Bluetooth technology. Computer simulation results suggest that our proposed approach can accomplish this task effectively, while matching the level of efficiency seen in other salient proposals in this area. This thesis also addresses the issue of service discovery in mobile ad-hoc networks. We propose the use of a directory whose network location varies in an attempt to reduce the traffic overhead driven by users' hosts looking for service information. We refer to this scheme as the Service Directory Placement Algorithm, or SDPA. We formulate the directory relocation problem as a Markov Decision Process that we solve by Q-learning. Performance evaluations through computer simulations reveal bandwidth overhead reductions between 40% and 48% compared with a basic broadcast flooding approach for networks whose hosts move at pedestrian speeds. We then extend our approach and introduce a multi-directory service discovery system called the Service Directory Placement Protocol, or SDPP. Our findings reveal bandwidth overhead reductions typically ranging from 15% to 75% in networks comprising slow-moving hosts with restricted memory availability. In the fourth and final part of this work, we present the design foundations and architecture of a middleware system called WISEMAN (WIreless Sensors Employing Mobile Agents), which we employ for dispatching and processing mobile programs in Wireless Sensor Networks (WSNs). Our system enables the dynamic creation of semantic relationships between network nodes that cooperate to provide an aggregate service. We discuss the advantages of our approach and, in particular, how WISEMAN facilitates the realization of service-oriented tasks in WSNs.
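The abstract casts directory relocation as an MDP solved with Q-learning; a minimal tabular sketch of that formulation follows. The state, action, and reward definitions here (directory host, migration target, negative lookup overhead) are assumptions for illustration rather than SDPA's actual design.

```python
import random
from collections import defaultdict

HOSTS = range(8)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = defaultdict(float)  # Q[(state, action)]

def overhead(directory_host, demand):
    """Hypothetical per-step cost: lookup traffic grows with hop distance."""
    return sum(abs(h - directory_host) * d for h, d in enumerate(demand))

def step(state):
    demand = [random.randint(0, 3) for _ in HOSTS]        # stand-in for lookups
    if random.random() < EPSILON:
        action = random.choice(list(HOSTS))               # explore a placement
    else:
        action = max(HOSTS, key=lambda a: Q[(state, a)])  # exploit best known
    reward = -overhead(action, demand)
    best_next = max(Q[(action, a)] for a in HOSTS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    return action

state = 0
for _ in range(5000):
    state = step(state)
print("learned placement from host 0:", max(HOSTS, key=lambda a: Q[(0, a)]))
```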
173

RELPH: A Computational Model for Human Decision Making

Mohammadi Sepahvand, Nazanin January 2013
The updating process, which consists of building mental models and adapting them to changes in the environment, is impaired in neglect patients. A simple rock-paper-scissors experiment was conducted in our lab to examine updating impairments in neglect patients. The results demonstrate a significant difference between the performance of healthy and brain-damaged participants: while healthy controls had no difficulty learning the computer's strategy, right-brain-damaged patients failed to learn it. A computational modeling approach is employed to help us better understand the reason behind this difference and thus learn more about the updating process in healthy people and its impairment in right-brain-damaged patients. More broadly, we hope to learn about the nature of the updating process in general. We also hope that knowing what must be changed in the model to "brain-damage" it can shed light on the updating deficit in right-brain-damaged patients. To this end, I adapted a pattern-detection method named "ELPH" into a reinforcement-learning model of human decision making called "RELPH". This model is capable of capturing the behavior of both healthy and right-brain-damaged participants in our task, according to our defined measures. This thesis is thus an effort to discuss the possible differences among these groups using this computational model.
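ELPH-style pattern detection can be sketched briefly: every short suffix of the opponent's recent history is a hypothesis about the next move, hypotheses accumulate counts, and low-entropy (reliable) ones drive prediction. The sketch below, assuming a rock-paper-scissors opponent and an illustrative window size and reliability threshold, shows the predictor only; RELPH's coupling of this machinery with reinforcement learning is not reproduced here.

```python
import random
from collections import defaultdict, Counter
from math import log2

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}
WINDOW = 3                    # assumed short-term-memory length
hyps = defaultdict(Counter)   # hypothesis: recent-history tuple -> next-move counts

def entropy(counter):
    n = sum(counter.values())
    return -sum(c / n * log2(c / n) for c in counter.values())

def predict(history):
    """Predict the opponent's next move via the lowest-entropy reliable hypothesis."""
    best_move, best_h = None, float("inf")
    for k in range(1, min(WINDOW, len(history)) + 1):
        ctx = tuple(history[-k:])
        if sum(hyps[ctx].values()) >= 3 and entropy(hyps[ctx]) < best_h:
            best_h, best_move = entropy(hyps[ctx]), hyps[ctx].most_common(1)[0][0]
    return best_move

history = []
for t in range(300):
    opp = MOVES[t % 3]                            # opponent plays a fixed cycle
    guess = predict(history)
    my_move = BEATS[guess] if guess else random.choice(MOVES)
    for k in range(1, min(WINDOW, len(history)) + 1):
        hyps[tuple(history[-k:])][opp] += 1       # credit every matching hypothesis
    history.append(opp)

print("predicted next opponent move:", predict(history))
```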
174

Reinforcement Learning and Simulation-Based Search in Computer Go

Silver, David Unknown Date
No description available.
175

Dynamic Tuning of PI-Controllers based on Model-free Reinforcement Learning Methods

Abbasi Brujeni, Lena Unknown Date
No description available.
176

Online Learning for Linearly Parametrized Control Problems

Abbasi-Yadkori, Yasin Unknown Date
No description available.
177

Complying with norms : a neurocomputational exploration

Colombo, Matteo January 2012
The subject matter of this thesis can be summarized by a triplet of questions and answers; showing what these questions and answers mean is, in essence, the goal of my project. The triplet goes like this:

Q: How can we make progress in our understanding of social norms and norm compliance?
A: Adopting a neurocomputational framework is one effective way to make progress in our understanding of social norms and norm compliance.

Q: What could the neurocomputational mechanism of social norm compliance be?
A: The mechanism of norm compliance probably consists of Bayesian reinforcement-learning algorithms implemented by activity in certain neural populations.

Q: What could information about this mechanism tell us about social norms and social norm compliance?
A: Information about this mechanism tells us that (a1) social norms are uncertainty-minimizing devices, and (a2) social norm compliance is one trick that agents employ to interact co-adaptively and smoothly in their social environment.

Most of the existing treatments of norms and norm compliance (e.g. Bicchieri 2006; Binmore 1993; Elster 1989; Gintis 2010; Lewis 1969; Pettit 1990; Sugden 1986; Ullmann-Margalit 1977) consist in what Cristina Bicchieri (2006) refers to as "rational reconstructions." A rational reconstruction of the concept of social norm "specifies in which sense one may say that norms are rational, or compliance with a norm is rational" (Ibid., pp. 10-11). What sets my project apart from these treatments is that it aims, first and foremost, at providing a description of some core aspects of the mechanism of norm compliance. The single most original idea put forth in my project is to bring an alternative explanatory framework to bear on social norm compliance: the framework of computational cognitive neuroscience. The chapters of this thesis describe some ways in which central issues concerning social norms can be fruitfully addressed within a neurocomputational framework. In order to qualify and articulate the triplet above, my strategy consists firstly in laying down the beginnings of a model of the mechanism of norm-compliance behaviour, and then zooming in on specific aspects of the model. Such a model, the chapters of this thesis argue, explains apparently important features of the psychology and neuroscience of norm compliance, and helps us to understand the nature of the social norms we live by.
178

A study of learning models for analyzing prisoners' dilemma game data

賴宜祥, Lai, Yi Hsiang Unknown Date
How people choose strategies in a finitely repeated prisoners' dilemma game is a central question in game theory, and game learning theories aim to predict which strategies players will choose. The objective of this study is to find a proper learning model for prisoners' dilemma game data collected at National Chengchi University. The data consist of three experiments with different game settings and matching rules; all participants were undergraduate students. Four learning models are considered: the Reinforcement Learning model, the Belief Learning model, the Experience-Weighted Attraction (EWA) learning model, and a proposed Extended Reinforcement Learning model modified from the reinforcement learning model. The data analysis is divided into two parts: training (in-sample) and testing (out-sample). Although it adds one more parameter, the proposed model performs slightly better than the original reinforcement learning model in both training and testing, and all fitted models predict better than guessing decisions with equal probability.
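For readers unfamiliar with the baseline, a minimal sketch of a classic (Roth-Erev style) reinforcement learning model for the repeated prisoners' dilemma follows: choice probabilities are proportional to accumulated attractions, which decay over time and are reinforced by realized payoffs. The payoff matrix and forgetting rate are illustrative assumptions, and the thesis's extended model (with its additional parameter) is not reproduced here.

```python
import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
PHI = 0.9  # recency / forgetting parameter (assumed value)

def choose(attraction):
    """Choice probabilities proportional to accumulated attractions."""
    total = attraction["C"] + attraction["D"]
    return "C" if random.random() < attraction["C"] / total else "D"

def update(attraction, action, payoff):
    for a in attraction:
        attraction[a] *= PHI            # decay all attractions
    attraction[action] += payoff        # reinforce the chosen action

A1 = {"C": 1.0, "D": 1.0}  # initial attractions, player 1
A2 = {"C": 1.0, "D": 1.0}
for _ in range(200):
    a1, a2 = choose(A1), choose(A2)
    p1, p2 = PAYOFF[(a1, a2)]
    update(A1, a1, p1)
    update(A2, a2, p2)

print("player 1 attractions after 200 rounds:", A1)
```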