  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

Aprendizado por reforço em ambientes não-estacionários

Silva, Bruno Castro da January 2007 (has links)
In this work we present RL-CD (Reinforcement Learning with Context Detection), a method developed to deal with the problem of reinforcement learning (RL) in non-stationary environments. Although existing RL methods can often overcome non-stationarity, they do so at the cost of having to relearn policies that had already been computed, which implies a loss of performance during the readaptation periods. The proposed method is based on a general mechanism through which one among several partial models and policies is created, updated and selected. The partial models of the environment are built incrementally according to the system's ability to make effective predictions. This measure of effectiveness is determined by computing a global quality for each model, reflecting the total adjustment needed to make that model consistent with the actual experience. After presenting the theoretical foundations underlying RL-CD and its equations, we propose and discuss a set of experiments that demonstrate its efficiency, both with respect to classical RL strategies and in comparison with algorithms specifically designed to deal with non-stationary scenarios. RL-CD is compared with well-established reinforcement learning methods and also with multi-model RL strategies. The results obtained suggest that RL-CD constitutes an efficient approach for dealing with a subclass of non-stationary environments, namely those whose dynamics are correctly represented by a finite set of stationary Markov models. Finally, we present a theoretical analysis of one of RL-CD's most important parameters, made possible by the empirical approximation of probability distributions via Monte Carlo methods. This analysis allows the ideal values of that parameter to be computed, making its tuning independent of the specific application under study. / In this work we introduce RL-CD (Reinforcement Learning with Context Detection), a novel method for solving reinforcement learning (RL) problems in non-stationary environments. In face of non-stationary scenarios, standard RL methods need to continually readapt themselves to the changing dynamics of the environment. This causes a performance drop during the readjustment phase and implies the need for relearning policies even for dynamics which have already been experienced. RL-CD overcomes these problems by implementing a mechanism for creating, updating and selecting one among several partial models of the environment. The partial models are incrementally built according to the system's capability of making predictions regarding a given sequence of observations. First, we present the motivations and the theoretical basis needed to develop the conceptual framework of RL-CD. Afterwards, we propose, formalize and show the efficiency of RL-CD both in a simple non-stationary environment and in noisy scenarios. We show that RL-CD performs better than two standard reinforcement learning algorithms and that it has advantages over methods specifically designed to cope with non-stationarity. Finally, we present the theoretical examination of one of RL-CD's most important parameters, made possible by means of the analysis of probability distributions obtained via Monte Carlo methods. This analysis makes it possible for us to calculate the optimum values for this parameter, so that its adjustment can be performed independently of the scenario being studied.
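The abstract describes the context-detection mechanism only at a high level. As a rough, non-authoritative illustration of that kind of loop, the Python sketch below keeps a pool of tabular partial models, scores each by an exponentially weighted measure of how well it predicts observed transitions, and spawns a new model when no existing one fits; the class names and the parameters `rho` and `quality_threshold` are hypothetical and are not taken from the thesis.

```python
import numpy as np

class PartialModel:
    """One partial model of the environment: smoothed transition counts plus a quality score."""
    def __init__(self, n_states, n_actions):
        self.counts = np.ones((n_states, n_actions, n_states))  # Laplace-smoothed counts
        self.quality = 0.0

    def predict(self, s, a):
        # Predicted next-state distribution under this model
        return self.counts[s, a] / self.counts[s, a].sum()

    def update(self, s, a, s_next):
        self.counts[s, a, s_next] += 1.0

def step_context_detection(models, s, a, s_next, n_states, n_actions,
                           rho=0.9, quality_threshold=-2.0):
    """Score every model by how well it predicted the observed transition,
    pick the best one, and spawn a fresh model if none explains the data."""
    for m in models:
        log_lik = np.log(m.predict(s, a)[s_next] + 1e-12)    # accuracy on this transition
        m.quality = rho * m.quality + (1.0 - rho) * log_lik  # exponentially weighted quality
    best = max(models, key=lambda m: m.quality)
    if best.quality < quality_threshold:                     # no existing context fits
        best = PartialModel(n_states, n_actions)
        models.append(best)
    best.update(s, a, s_next)
    return best                                              # active context for this step

# Usage: start with one model and feed it observed transitions.
models = [PartialModel(n_states=5, n_actions=2)]
active = step_context_detection(models, s=0, a=1, s_next=3, n_states=5, n_actions=2)
```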
193

Design, evaluation and comparison of evolution and reinforcement learning models

Mclean, Clinton Brett January 2002 (has links)
This work presents the design, evaluation and comparison of evolution and reinforcement learning models, in isolation and combined in Darwinian and Lamarckian frameworks, with a particular emphasis placed on their adaptive nature in response to environments that become increasingly unstable. Our ultimate objective is to determine whether hybrid models of evolution and learning can demonstrate adaptive qualities beyond those of such models when applied in isolation. This work demonstrates the limitations of evolution, reinforcement learning and Lamarckian models in dealing with increasingly unstable environments, while noting the effective adaptive capacity of a Darwinian model to assimilate increasing levels of instability. This is shown to be a result of the Darwinian evolution model's ability to separate learning at two levels: the population's experience of the environment over the course of many generations, and the individual's experience of the environment over the course of its lifetime. Thus, knowledge relating to the general characteristics of the environment over many generations can be maintained in the population's genotypes, with phenotype (reinforcement) learning being used to adapt a particular agent to the particular characteristics of its environment. Lamarckian evolution, though, is shown to demonstrate adaptive characteristics that are highly effective in stable environments. Selection and reproduction combined with reinforcement learning create a model that can exploit useful knowledge produced by reinforcements, as opposed to random mutations, to accelerate the search process. As a result, the influence of individual learning on the population's evolution is shown to be more successful when applied in the more direct Lamarckian form. Based on our results demonstrating the success of Lamarckian strategies in stable environments and Darwinian strategies in unstable environments, hybrid Darwinian/Lamarckian models are created with a view to combining the advantages of both forms of evolution to produce a superior adaptive capability. Our investigation demonstrates that such hybrid models can effectively combine the adaptive advantages of both Darwinian and Lamarckian evolution to provide a more effective capability of adapting to a range of conditions, from stable to unstable, appropriately adjusting the required degree of inheritance in response to the requirements of the environment.
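Assuming a generic evolutionary loop (the abstract does not give one), the sketch below isolates the single design choice the comparison turns on: in the Lamarckian regime the phenotype learned during an agent's lifetime is written back into the inherited genotype, while in the Darwinian regime only the original genes are passed on and lifetime learning affects fitness alone. The fitness function and the learning step are placeholders, not the thesis's task.

```python
import random

def lifetime_learning(weights):
    """Placeholder for reinforcement learning during an agent's lifetime:
    returns (improved_weights, fitness). Details are scenario-specific."""
    improved = [w + random.gauss(0.0, 0.01) for w in weights]  # stand-in for RL updates
    fitness = -sum(w * w for w in improved)                    # stand-in fitness measure
    return improved, fitness

def evolve(population, lamarckian, elite_frac=0.2, sigma=0.05):
    """One generation. The only difference between the two regimes is whether
    the phenotype learned during the lifetime is written back to the genotype."""
    evaluated = []
    for genotype in population:
        learned, fitness = lifetime_learning(genotype)
        # Lamarckian: inherit what was learned; Darwinian: inherit the original genes.
        evaluated.append((fitness, learned if lamarckian else genotype))
    evaluated.sort(key=lambda t: t[0], reverse=True)
    elite = [g for _, g in evaluated[:max(1, int(elite_frac * len(population)))]]
    # Reproduce with mutation from the elite.
    return [[w + random.gauss(0.0, sigma) for w in random.choice(elite)]
            for _ in range(len(population))]

population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]
for generation in range(10):
    population = evolve(population, lamarckian=False)  # flip to True for Lamarckian inheritance
```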
194

Model-based active learning in hierarchical policies

Cora, Vlad M. 05 1900 (has links)
Hierarchical task decompositions play an essential role in the design of complex simulation and decision systems, such as the ones that arise in video games. Game designers find it very natural to adopt a divide-and-conquer philosophy of specifying hierarchical policies, where decision modules can be constructed somewhat independently. The process of choosing the parameters of these modules manually is typically lengthy and tedious. The hierarchical reinforcement learning (HRL) field has produced elegant ways of decomposing policies and value functions using semi-Markov decision processes. However, there is still a lack of demonstrations in larger nonlinear systems with discrete and continuous variables. To narrow this gap between industrial practices and academic ideas, we address the problem of designing efficient algorithms to facilitate the deployment of HRL ideas in more realistic settings. In particular, we propose Bayesian active learning methods to learn the relevant aspects of either policies or value functions by focusing on the most relevant parts of the parameter and state spaces respectively. To demonstrate the scalability of our solution, we have applied it to The Open Racing Car Simulator (TORCS), a 3D game engine that implements complex vehicle dynamics. The environment is a large topological map roughly based on downtown Vancouver, British Columbia. Higher level abstract tasks are also learned in this process using a model-based extension of the MAXQ algorithm. Our solution demonstrates how HRL can be scaled to large applications with complex, discrete and continuous non-linear dynamics. / Science, Faculty of / Computer Science, Department of / Graduate
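For readers unfamiliar with MAXQ, the sketch below shows only the standard MAXQ value decomposition that the thesis extends: the value of invoking a child subtask equals the child's own value plus a learned completion term for the parent. It does not reproduce the thesis's model-based extension or its Bayesian active learning machinery, and every task name and number in it is illustrative.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Subtask:
    name: str
    is_primitive: bool = False
    children: List["Subtask"] = field(default_factory=list)

def q_value(node, s, child, V, C):
    # MAXQ decomposition: Q(node, s, child) = V(child, s) + C(node, s, child)
    return v_value(child, s, V, C) + C.get((node.name, s, child.name), 0.0)

def v_value(node, s, V, C):
    # A primitive action carries its own expected reward; a composite takes its best child.
    if node.is_primitive:
        return V.get((node.name, s), 0.0)
    return max(q_value(node, s, a, V, C) for a in node.children)

# Tiny illustrative hierarchy; the task names are hypothetical, not taken from the thesis.
steer = Subtask("steer", is_primitive=True)
accelerate = Subtask("accelerate", is_primitive=True)
follow_road = Subtask("follow_road", children=[steer, accelerate])
root = Subtask("navigate", children=[follow_road])

V = {("steer", "s0"): 0.1, ("accelerate", "s0"): 0.3}   # learned value estimates (made up)
C = {("follow_road", "s0", "accelerate"): 1.2}          # learned completion values (made up)
print(v_value(root, "s0", V, C))                        # recursively evaluates the hierarchy
```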
195

A service-oriented approach to topology formation and resource discovery in wireless ad-hoc networks

Gonzalez Valenzuela, Sergio 05 1900 (has links)
The past few years have witnessed a significant evolution in mobile computing and communications, in which new trends and applications have transformed the traditional role of computer networks into that of distributed service providers. In this thesis we explore an alternative way to form wireless ad-hoc networks whose topologies can be customized as required by the users' software applications. In particular, we investigate the applicability of mobile codes to networks created by devices equipped with Bluetooth technology. Computer simulation results suggest that our proposed approach can achieve this task effectively, while matching the level of efficiency seen in other salient proposals in this area. This thesis also addresses the issue of service discovery in mobile ad-hoc networks. We propose the use of a directory whose network location varies in an attempt to reduce the traffic overhead driven by users' hosts looking for service information. We refer to this scheme as the Service Directory Placement Algorithm, or SDPA. We formulate the directory relocation problem as a Markov Decision Process that is solved by using Q-learning. Performance evaluations through computer simulations reveal bandwidth overhead reductions that range between 40% and 48% when compared with a basic broadcast flooding approach for networks comprising hosts moving at pedestrian speeds. We then extend our proposed approach and introduce a multi-directory service discovery system called the Service Directory Placement Protocol, or SDPP. Our findings reveal bandwidth overhead reductions typically ranging from 15% to 75% in networks comprising slow-moving hosts with restricted memory availability. In the fourth and final part of this work, we present the design foundations and architecture of a middleware system called WISEMAN – WIreless Sensors Employing Mobile Agents. We employ WISEMAN for dispatching and processing mobile programs in Wireless Sensor Networks (WSNs). Our proposed system enables the dynamic creation of semantic relationships between network nodes that cooperate to provide an aggregate service. We present discussions on the advantages of our proposed approach and, in particular, how WISEMAN facilitates the realization of service-oriented tasks in WSNs. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
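The abstract states that directory relocation is formulated as a Markov Decision Process solved with Q-learning but does not give the state or reward definitions, so the sketch below is only a generic tabular Q-learning skeleton under assumed simplifications: the state is the host currently holding the directory and the reward penalizes discovery traffic.

```python
import random
from collections import defaultdict

# Hypothetical simplification of the directory-placement MDP: the state is the host
# currently holding the service directory, the actions are "keep" or "migrate" to a
# neighbour, and the (negative) reward reflects service-discovery traffic overhead.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate
Q = defaultdict(float)

def choose_action(state, actions):
    """Epsilon-greedy action selection over the current Q estimates."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, next_actions):
    """Standard one-step Q-learning backup."""
    best_next = max(Q[(next_state, a)] for a in next_actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Example step: host 2 holds the directory and decides whether to hand it to host 5.
action = choose_action(state=2, actions=["keep", "migrate_to_5"])
next_state = 5 if action == "migrate_to_5" else 2
q_update(state=2, action=action, reward=-0.3, next_state=next_state,
         next_actions=["keep", "migrate_to_neighbour"])
```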
196

Recommender System using Reinforcement Learning

January 2020 (has links)
Currently, recommender systems are used extensively to find the right audience with the "right" content over various platforms. Recommendations generated by these systems aim to offer relevant items to users. Different approaches have been suggested to solve this problem, mainly by using the rating history of the user or by identifying the preferences of similar users. Most existing recommendation systems are formulated in an identical fashion, where a model is trained to capture the underlying preferences of users over different kinds of items. Once it is deployed, the model suggests personalized recommendations precisely, and it is assumed that the preferences of users are perfectly reflected by the historical data. However, such user data might be limited in practice, and the characteristics of users may constantly evolve during their interaction with recommendation systems. Moreover, most of these recommender systems suffer from the cold-start problem, where insufficient data for new users or products results in reduced recommendation quality. In the current study, we have built a recommender system to recommend movies to users. A biclustering algorithm is used to cluster the users and movies simultaneously at the beginning to generate explainable recommendations, and these biclusters are used to form a gridworld where Q-learning is used to learn the policy to traverse through the grid. The reward function uses the Jaccard Index, which is a measure of common users between two biclusters. Demographic details of new users are used to generate recommendations that also address the cold-start problem. Lastly, the implemented algorithm is evaluated on a real-world dataset against widely used recommendation algorithms, including its performance in the cold-start cases. / Dissertation/Thesis / Masters Thesis Computer Science 2020
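The reward described in the abstract, the Jaccard Index of common users between two biclusters, is simple enough to show directly; in the sketch below the representation of a bicluster as user and movie sets is an assumption made for illustration.

```python
def jaccard_index(users_a, users_b):
    """Jaccard index between the user sets of two biclusters."""
    a, b = set(users_a), set(users_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

def reward(current_bicluster, next_bicluster):
    # In the gridworld over biclusters, moving to a neighbouring bicluster is
    # rewarded by the proportion of users the two biclusters share.
    return jaccard_index(current_bicluster["users"], next_bicluster["users"])

# Hypothetical biclusters (user and movie ids are illustrative):
b1 = {"users": {1, 2, 3, 4}, "movies": {"m1", "m2"}}
b2 = {"users": {3, 4, 5}, "movies": {"m2", "m3"}}
print(reward(b1, b2))  # 2 shared users out of 5 total -> 0.4
```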
197

Approaches for Efficient Autonomous Exploration using Deep Reinforcement Learning

Thomas Molnar (8735079) 24 April 2020 (has links)
For autonomous exploration of complex and unknown environments, existing Deep Reinforcement Learning (Deep RL) approaches struggle to generalize from computer simulations to real world instances. Deep RL methods typically exhibit low sample efficiency, requiring a large amount of data to develop an optimal policy function for governing an agent's behavior. RL agents expect well-shaped and frequent rewards to receive feedback for updating policies. Yet in real world instances, rewards and feedback tend to be infrequent and sparse.

For sparse reward environments, an intrinsic reward generator can be utilized to facilitate progression towards an optimal policy function. The proposed Augmented Curiosity Modules (ACMs) extend the Intrinsic Curiosity Module (ICM) by Pathak et al. These modules utilize depth image and optical flow predictions with intrinsic rewards to improve sample efficiency. Additionally, the proposed Capsules Exploration Module (Caps-EM) pairs a Capsule Network, rather than a Convolutional Neural Network, architecture with an A2C algorithm. This provides a more compact architecture without need for intrinsic rewards, which the ICM and ACMs require. Tested using ViZDoom for experimentation in visually rich and sparse feature scenarios, both the Depth-Augmented Curiosity Module (D-ACM) and Caps-EM improve autonomous exploration performance and sample efficiency over the ICM. The Caps-EM is superior, using 44% and 83% fewer trainable network parameters than the ICM and D-ACM, respectively. On average across all “My Way Home” scenarios, the Caps-EM converges to a policy function with 1141% and 437% time improvements over the ICM and D-ACM, respectively.
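As a minimal sketch of the intrinsic-reward idea shared by the ICM and the proposed ACMs, the code below lets a small forward model predict the next state's features and turns the prediction error into a curiosity bonus. The architecture, feature sizes and the scaling factor `eta` are illustrative, and the ACMs' depth-image and optical-flow predictions are not reproduced here.

```python
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Minimal forward dynamics model in a learned feature space, in the spirit of
    the ICM of Pathak et al.; sizes and architecture here are illustrative only."""
    def __init__(self, feat_dim=32, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + n_actions, 64), nn.ReLU(), nn.Linear(64, feat_dim))

    def forward(self, phi_s, action_onehot):
        # Predict the next state's features from the current features and the action.
        return self.net(torch.cat([phi_s, action_onehot], dim=-1))

def intrinsic_reward(forward_model, phi_s, action_onehot, phi_s_next, eta=0.01):
    # Curiosity bonus: scaled squared prediction error of the next state's features.
    pred = forward_model(phi_s, action_onehot)
    return eta * 0.5 * (pred - phi_s_next).pow(2).sum(dim=-1)

# Usage on random stand-in features and a one-hot action:
phi_s, phi_next = torch.randn(1, 32), torch.randn(1, 32)
action = torch.eye(4)[[2]]
bonus = intrinsic_reward(ForwardModel(), phi_s, action, phi_next)
```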
198

Using Concurrent Schedules of Reinforcement to Decrease Behavior

Palmer, Ashlyn 12 1900 (has links)
We manipulated delay and magnitude of reinforcers in two concurrent schedules of reinforcement to decrease a prevalent behavior while increasing another behavior already in the participant's repertoire. The first experiment manipulated delay, implementing a five second delay between the behavior and delivery of reinforcement for a behavior targeted for decrease while no delay was implemented after the behavior targeted for increase. The second experiment manipulated magnitude, providing one piece of food for the behavior targeted for decrease while two pieces of food were provided for the behavior targeted for increase. The experiments used an ABAB reversal design. Results suggest that behavior can be decreased without the use of extinction when contingencies favor the desirable behavior.
199

Cooperative Vehicular Communications for High Throughput Applications / 大容量車載アプリケーションに向けた車車間協調通信

Taya, Akihiro 24 September 2019 (has links)
Kyoto University / 0048 / New system, doctoral program / Doctor of Informatics / Degree No. Kō 22099 / Jōhaku No. 709 / 新制||情||122 (University Library) / Department of Communications and Computer Engineering, Graduate School of Informatics, Kyoto University / Examiners: Prof. Masahiro Morikura, Prof. Hiroshi Harada, Prof. Ken Umeno / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
200

Transfer of reinforcement learning for a robotic skill

Gómez Rosal, Dulce Adriana January 2018 (has links)
In this work, we develop the transfer learning (TL) of reinforcement learning (RL) for the robotic skill of throwing a ball into a basket, from a computer-simulated environment to a real-world implementation. Whereas learning of the same skill has previously been explored using a Programming by Demonstration approach directly on the real-world robot, in our work the model-based RL algorithm PILCO was employed as an alternative: it provides the robot with no previous knowledge or hints (the robot begins learning from a tabula rasa state), it learns directly in the simulated environment, and, as part of its procedure, it models the dynamics of the inflatable plastic ball used to perform the task. The robotic skill is represented as a Markov Decision Process, the robotic arm is a Kuka LWR4+, RL is enabled by PILCO, and TL is achieved through policy adjustments. Two learned policies were transferred, and although the results show that no exhaustive policy adjustments are required, large gaps remain between the simulated and the real environment in terms of the ball and robot dynamics. The contributions of this thesis include: a novel TL-of-RL framework for teaching the basketball skill to the Kuka robotic arm; the development of a pythonised version of PILCO; robust and extendable ROS packages for policy learning and adjustment in a simulated or real robot; a tracking-vision package with a Kinect camera; and an Orocos package for a position controller in the robotic arm.
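The abstract says transfer is achieved "through policy adjustments" without detailing them, so the sketch below is only one hedged reading of that idea: wrap the simulation-learned policy in a per-dimension affine correction fitted from a few paired simulated and real rollouts. The form of the correction is an assumption made for illustration, not the thesis's actual procedure.

```python
import numpy as np

def adjusted_policy(sim_policy, scale, offset):
    """Wrap a policy learned in simulation with a simple affine correction.
    Only an illustration of the general idea of a 'policy adjustment'."""
    def policy(state):
        return scale * sim_policy(state) + offset
    return policy

def fit_adjustment(sim_actions, real_actions):
    """Least-squares fit of a per-dimension scale and offset from paired rollouts
    (rows are timesteps, columns are action dimensions)."""
    sim, real = np.asarray(sim_actions), np.asarray(real_actions)
    scale, offset = np.ones(sim.shape[1]), np.zeros(sim.shape[1])
    for d in range(sim.shape[1]):
        A = np.stack([sim[:, d], np.ones(len(sim))], axis=1)
        scale[d], offset[d] = np.linalg.lstsq(A, real[:, d], rcond=None)[0]
    return scale, offset
```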
