281
MODEL-FREE ALGORITHMS FOR CONSTRAINED REINFORCEMENT LEARNING IN DISCOUNTED AND AVERAGE REWARD SETTINGS. Qinbo Bai (19804362), 07 October 2024
<p dir="ltr">Reinforcement learning (RL), which aims to train an agent to maximize its accumulated reward through time, has attracted much attention in recent years. Mathematically, RL is modeled as a Markov Decision Process, where the agent interacts with the environment step by step. In practice, RL has been applied to autonomous driving, robotics, recommendation systems, and financial management. Although RL has been greatly studied in the literature, most proposed algorithms are model-based, which requires estimating the transition kernel. To this end, we begin to study the sample efficient model-free algorithms under different settings.</p><p dir="ltr">Firstly, we propose a conservative stochastic primal-dual algorithm in the infinite horizon discounted reward setting. The proposed algorithm converts the original problem from policy space to the occupancy measure space, which makes the non-convex problem linear. Then, we advocate the use of a randomized primal-dual approach to achieve O(\eps^-2) sample complexity, which matches the lower bound.</p><p dir="ltr">However, when it comes to the infinite horizon average reward setting, the problem becomes more challenging since the environment interaction never ends and can’t be reset, which makes reward samples not independent anymore. To solve this, we design an epoch-based policy-gradient algorithm. In each epoch, the whole trajectory is divided into multiple sub-trajectories with an interval between each two of them. Such intervals are long enough so that the reward samples are asymptotically independent. By controlling the length of trajectory and intervals, we obtain a good gradient estimator and prove the proposed algorithm achieves O(T^3/4) regret bound.</p>
282
Remembering how to walk - Using Active Dendrite Networks to Drive Physical Animations / Att minnas att gå - användning av Active Dendrite Nätverk för att driva fysiska animeringar. Henriksson, Klas, January 2023
Creating embodied agents capable of performing a wide range of tasks in different types of environments has been a longstanding challenge in deep reinforcement learning. A novel network architecture introduced in 2021, the Active Dendrite Network [A. Iyer et al., “Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments”], designed to create sparse subnetworks for different tasks, showed promising performance on the Meta-World [T. Yu et al., “Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning”] multi-tasking benchmark. This thesis further explores the performance of this architecture in a multi-tasking environment focused on physical animations and locomotion. Specifically, we implement and compare the architecture to the commonly used Multi-Layer Perceptron (MLP) architecture on a multi-task reinforcement learning problem in a video-game setting: training a hexapedal agent on a set of locomotion tasks involving moving at different speeds, turning, and standing still. The evaluation focused on two areas: (1) assessing the average overall performance of the Active Dendrite Network relative to the MLP on a set of locomotion scenarios featuring our behaviour sets and environments; (2) assessing the relative impact Active Dendrite Networks have on transfer learning between related tasks by comparing their performance on novel behaviours shortly after training on a related behaviour. Our findings suggest that the Active Dendrite Network can make better use of limited network capacity than the MLP: it outperformed the MLP by ∼18% on our benchmark when capacity was limited. When both networks have sufficient capacity, however, there is little difference between the two. We further find that Active Dendrite Networks have transfer-learning capabilities very similar to the MLP's in our benchmarks.
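For orientation, a minimal NumPy sketch of one active-dendrite layer following the description in Iyer et al.; the default 5% sparsity and all shapes are illustrative assumptions:

```python
import numpy as np

def active_dendrite_layer(x, context, W, b, segments, sparsity=0.05):
    """One hidden layer with active dendrites.

    x:        (n_in,) input activations
    context:  (n_ctx,) task/context vector
    W, b:     (n_out, n_in) and (n_out,) feed-forward parameters
    segments: (n_out, n_seg, n_ctx) dendritic segment weights
    """
    feedforward = W @ x + b
    # Each unit is gated by its most strongly responding dendritic
    # segment, so the context selects a sparse subnetwork per task.
    seg_response = segments @ context                 # (n_out, n_seg)
    gate = 1.0 / (1.0 + np.exp(-seg_response.max(axis=1)))
    h = feedforward * gate
    # k-winners-take-all keeps the layer's activity sparse.
    k = max(1, int(sparsity * h.size))
    threshold = np.partition(h, -k)[-k]
    return np.where(h >= threshold, h, 0.0)
```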
283
Towards Novelty-Resilient AI: Learning in the Open World. Trevor A Bonjour (18423153), 22 April 2024
<p dir="ltr">Current artificial intelligence (AI) systems are proficient at tasks in a closed-world setting where the rules are often rigid. However, in real-world applications, the environment is usually open and dynamic. In this work, we investigate the effects of such dynamic environments on AI systems and develop ways to mitigate those effects. Central to our exploration is the concept of \textit{novelties}. Novelties encompass structural changes, unanticipated events, and environmental shifts that can confound traditional AI systems. We categorize novelties based on their representation, anticipation, and impact on agents, laying the groundwork for systematic detection and adaptation strategies. We explore novelties in the context of stochastic games. Decision-making in stochastic games exercises many aspects of the same reasoning capabilities needed by AI agents acting in the real world. A multi-agent stochastic game allows for infinitely many ways to introduce novelty. We propose an extension of the deep reinforcement learning (DRL) paradigm to develop agents that can detect and adapt to novelties in these environments. To address the sample efficiency challenge in DRL, we introduce a hybrid approach that combines fixed-policy methods with traditional DRL techniques, offering enhanced performance in complex decision-making tasks. We present a novel method for detecting anticipated novelties in multi-agent games, leveraging information theory to discern patterns indicative of collusion among players. Finally, we introduce DABLER, a pioneering deep reinforcement learning architecture that dynamically adapts to changing environmental conditions through broad learning approaches and environment recognition. Our findings underscore the importance of developing AI systems equipped to navigate the uncertainties of the open world, offering promising pathways for advancing AI research and application in real-world settings.</p>
284
Revisiting user simulation in dialogue systems: do we still need them? Will imitation play the role of simulation? / Revisiter la simulation d'utilisateurs dans les systèmes de dialogue parlé : est-elle encore nécessaire ? : est-ce que l'imitation peut jouer le rôle de la simulation ? Chandramohan, Senthilkumar, 25 September 2012
Recent advancements in the area of spoken language processing and the wide acceptance of portable devices have attracted significant interest in spoken dialogue systems. These conversational systems are man-machine interfaces which use natural language (speech) as the medium of interaction. In order to conduct dialogues, computers must have the ability to decide when and what information has to be exchanged with the users. The dialogue management module is responsible for making these decisions so that the intended task (such as ticket booking or appointment scheduling) can be achieved. Thus learning a good strategy for dialogue management is a critical task. In recent years, reinforcement learning-based dialogue management optimization has evolved to be the state of the art. A majority of the algorithms used for this purpose need vast amounts of training data. However, data generation in the dialogue domain is an expensive and time-consuming process. In order to cope with this, and also to evaluate the learnt dialogue strategies, user modelling in dialogue systems was introduced. These models simulate real users in order to generate synthetic data. Being computational models, they introduce some degree of modelling error. In spite of this, system designers are forced to employ user models due to the data requirements of conventional reinforcement learning algorithms. Part of the work presented here shows that, with a judicious choice of algorithms, optimal dialogue strategies can be learnt from limited amounts of training data compared to conventional algorithms. As a consequence, user models are no longer required for the purpose of optimization, yet they continue to provide a fast and easy means for quantifying the quality of dialogue strategies. Since existing methods for user modelling are relatively less realistic compared to real user behaviours, the focus is shifted towards user modelling by means of inverse reinforcement learning. Using experimental results, the proposed method's ability to learn computational models with real-user-like qualities is showcased as part of this work.
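As a toy illustration of the inverse reinforcement learning idea, a linear-reward sketch under our own assumptions (not the thesis's actual method):

```python
import numpy as np

def irl_reward_weights(expert_features, candidate_features, n_iters=50, lr=0.1):
    """Toy linear inverse RL: find reward weights w under which the real
    users' expected feature counts outscore those of candidate policies.

    expert_features:    (d,) mean feature counts from real-user dialogues
    candidate_features: (k, d) mean feature counts of k candidate policies
    """
    w = np.zeros(expert_features.shape[0])
    for _ in range(n_iters):
        # Candidate that currently looks best under w ...
        rival = candidate_features[np.argmax(candidate_features @ w)]
        # ... is pushed below the expert by a margin-style update.
        w += lr * (expert_features - rival)
        w /= max(np.linalg.norm(w), 1e-8)
    return w
```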
285
Reinforcement learning and reward estimation for dialogue policy optimisation. Su, Pei-Hao, January 2018
Modelling dialogue management as a reinforcement learning task enables a system to learn to act optimally by maximising a reward function. This reward function is designed to induce the system behaviour required for goal-oriented applications, which usually means fulfilling the user’s goal as efficiently as possible. However, in real-world spoken dialogue systems, the reward is hard to measure, because the goal of the conversation is often known only to the user. Certainly, the system can ask the user whether the goal has been satisfied, but this can be intrusive. Furthermore, in practice, the reliability of the user’s response has been found to be highly variable. In addition, due to the sparsity of the reward signal and the large search space, reinforcement learning-based dialogue policy optimisation is often slow. This thesis presents several approaches to address these problems. To better evaluate a dialogue for policy optimisation, two methods are proposed. First, a recurrent neural network-based predictor, pre-trained from off-line data, is proposed to estimate task success during subsequent on-line dialogue policy learning, avoiding noisy user ratings and the problem of not knowing the user’s goal. Second, an on-line learning framework is described in which a dialogue policy is jointly trained alongside a reward function modelled as a Gaussian process with active learning. This mitigates the noisiness of user ratings and minimises user intrusion. It is shown that both off-line and on-line methods achieve practical policy learning in real-world applications, while the latter provides a more general joint learning system directly from users. To enhance the policy learning speed, the use of reward shaping is explored and shown to be effective and complementary to the core policy learning algorithm. Furthermore, as deep reinforcement learning methods have the potential to scale to very large tasks, this thesis also investigates their application to dialogue systems. Two sample-efficient algorithms, trust region actor-critic with experience replay (TRACER) and episodic natural actor-critic with experience replay (eNACER), are introduced. In addition, a corpus of demonstration data is utilised to pre-train the models prior to on-line reinforcement learning to handle the cold start problem. Combining these two methods, a practical approach is demonstrated to effectively learn deep reinforcement learning-based dialogue policies in a task-oriented information-seeking domain. Overall, this thesis provides solutions which allow truly on-line and continuous policy learning in spoken dialogue systems.
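Reward shaping of the kind explored here is commonly realized as potential-based shaping; a minimal sketch, where the choice of potential function phi is an assumption left open:

```python
def shaped_reward(reward, phi_s, phi_s_next, gamma=0.99, done=False):
    """Potential-based reward shaping (Ng et al., 1999).

    Adds F(s, s') = gamma * phi(s') - phi(s) to the environment reward.
    Because F telescopes along any trajectory, the optimal policy is
    unchanged; the denser signal only speeds up learning.
    """
    phi_next = 0.0 if done else phi_s_next  # terminal potential pinned to 0
    return reward + gamma * phi_next - phi_s
```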
286
Simulated Fixed-Wing Aircraft Attitude Control using Reinforcement Learning Methods. David Jona Richter (11820452), 20 December 2021
Autonomous transportation is a research field that has gained huge interest in recent years, with autonomous electric or hydrogen cars coming ever closer to everyday use. Cars are not the only subject of autonomy research, though: the field of aviation is also being explored for fully autonomous flight. One very important aspect of making autonomous flight a reality is attitude control, the control of roll, pitch, and sometimes yaw. Traditional approaches to automated attitude control use PID (proportional-integral-derivative) controllers, which rely on hand-tuned parameters. In this work, however, the use of reinforcement learning algorithms for attitude control is explored. With the surge of ever more powerful artificial neural networks, which have proven to be broadly applicable function approximators, deep reinforcement learning also becomes an intriguing option.

A software toolkit is developed and used to allow multiple flight simulators to train agents with reinforcement learning as well as deep reinforcement learning. Experiments are run using different hyperparameters, algorithms, state representations, and reward functions to explore possible options for autonomous attitude control using reinforcement learning.
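As one illustrative assumption of what such a reward function might look like (the thesis compares several designs; this exact form is not taken from it):

```python
import math

def attitude_reward(roll, pitch, target_roll, target_pitch):
    """One plausible attitude-control reward: negative absolute angular
    error (radians), maximized by holding the commanded roll and pitch.
    """
    error = abs(roll - target_roll) + abs(pitch - target_pitch)
    return -error / math.pi  # keep the per-step reward roughly in [-2, 0]
```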
287
Policy-based Reinforcement learning control for window opening and closing in an office building. Kaisaravalli Bhojraj, Gokul; Markonda, Yeswanth Surya Achyut, January 2020
Indoor comfort in an office building can be strongly influenced by the occupant's window opening and closing behavior. If not properly managed, this behavior affects not only comfort but also energy consumption. Occupant behavior is hard to predict and control with conventional methods. For a system to be called smart, it must learn user behavior, which provides valuable information to the controller. To control the window efficiently, this thesis proposes reinforcement learning (RL), which can learn user behavior while maintaining an optimal indoor climate. The model-free nature of RL allows an intelligent control system to be developed more simply than with conventional techniques. The data in our thesis is taken from an office building in Beijing. Value-based reinforcement learning has previously been applied to window control; in this thesis we apply policy-based RL (the REINFORCE algorithm) and compare it with a value-based method (Q-learning), both to determine which suits the task better and to explore how each behaves. We find that policy-based RL provides a good trade-off between maintaining an optimal indoor temperature and learning the occupant's behavior, which is important for a system to be called smart.
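A minimal sketch of the REINFORCE update named above (the policy parameterization and the encoding of the indoor-climate state are left abstract):

```python
import numpy as np

def reinforce_update(theta, episode, alpha=0.01, gamma=0.99):
    """One REINFORCE (Monte Carlo policy gradient) update.

    theta:   (d,) policy parameters
    episode: list of (grad_log_pi, reward) pairs for one episode, where
             grad_log_pi is the (d,) gradient of log pi_theta(a_t | s_t)
    """
    returns, G = [], 0.0
    for _, r in reversed(episode):      # discounted return-to-go
        G = r + gamma * G
        returns.append(G)
    returns.reverse()
    for (grad_log_pi, _), G_t in zip(episode, returns):
        theta = theta + alpha * G_t * grad_log_pi
    return theta
```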
288
Optimizing Power Consumption, Resource Utilization, and Performance for Manycore Architectures using Reinforcement Learning. Fettes, Quintin, 23 May 2022
No description available.
289
Evaluating behaviour tree integration in the option critic framework in Starcraft 2 mini-games with training restricted by consumer level hardware. Lundberg, Fredrik, January 2022
This thesis investigates the performance of the option critic (OC) framework combined with behaviour trees (BTs) in Starcraft 2 mini-games when training time is constrained to a time frame set by consumer-level hardware. We test two such combination models, BTs as macro actions (OCBT) and BTs as options (OCBToptions), and measure their performance relative to the plain OC model through an ablation study. The tests were conducted in two of the mini-games, build marines (BM) and defeat zerglings and banelings (DZAB), and a set of metrics was collected, including game score. We find that BTs improve performance in the BM mini-game using both OCBT and OCBToptions, whereas in DZAB the models performed equally. Additionally, the results indicate that the improvement in BM scores does not stem solely from the complexity of the BTs, but from the OC model learning to use the BTs effectively and learning beneficial options in relation to the BT options. Thus, we conclude that BTs can improve performance when training time is limited by consumer-level hardware.
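A minimal sketch of how a BT can be wrapped as an option, under our own assumptions about the interfaces (the abstract does not give the implementation details):

```python
class BTOption:
    """A behaviour tree wrapped as an option. The intra-option policy is
    the fixed BT tick; the option-critic learns only the termination
    probability and the policy over options.
    """

    def __init__(self, tree, termination_fn):
        self.tree = tree                      # callable: state -> action
        self.termination_fn = termination_fn  # callable: state -> prob in [0, 1]

    def act(self, state):
        return self.tree(state)               # delegate action choice to the BT

    def should_terminate(self, state, rng):
        return rng.random() < self.termination_fn(state)
```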
290
Safety-Oriented Task Offloading for Human-Robot Collaboration: A Learning-Based Approach / Säkerhetsorienterad Uppgiftsavlastning för Människa-robotkollaboration: Ett Inlärningsbaserat Tillvägagångssätt. Ruggeri, Franco, January 2021
In Human-Robot Collaboration scenarios, safety must be ensured by a risk management process that requires the execution of computationally expensive perception models (e.g., based on computer vision) in real time. However, robots usually have constrained hardware resources that hinder timely responses, resulting in unsafe operations. Although Multi-access Edge Computing allows robots to offload complex tasks to servers on the network edge to meet real-time requirements, this might not always be possible due to dynamic changes in the network that can cause congestion or failures. This work proposes a safety-based task offloading strategy to address this problem. The goal is to use edge resources intelligently to reduce delays in the risk management process and consequently enhance safety. More specifically, depending on safety and network metrics, a Reinforcement Learning (RL) solution is implemented to decide whether a less accurate model should run locally on the robot or a more complex one should run remotely on the network edge. A third possibility is to reuse the previous output after verifying its temporal coherence. Experiments are performed in a simulated warehouse scenario where humans and robots interact closely. Results show that the proposed RL solution outperforms the baselines in several aspects. First, the edge is used only when network performance is good, reducing the number of failures (by up to 47%). Second, the latency is adapted to the safety requirements (risk × latency reduced by up to 48%), avoiding unnecessary network congestion in safe situations and letting other robots in hazardous situations use the edge. Overall, the latency of the risk management process is greatly reduced (by up to 68%), which positively affects safety (time in the safe zone increased by up to 3.1%).
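A minimal sketch of the offloading decision as tabular Q-learning (the state discretization, the reward design, and the use of Q-learning specifically are our own illustrative assumptions):

```python
import random
from collections import defaultdict

ACTIONS = ("local_model", "edge_model", "reuse_previous")

class OffloadAgent:
    """Tabular Q-learning over a discretized (network quality, risk level)
    state, choosing where the perception model runs; the reward should
    trade off risk against end-to-end latency, e.g. -risk * latency.
    """

    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)           # (state, action) -> value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        if random.random() < self.eps:        # epsilon-greedy exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error
```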