11 |
Neuro-Symbolic Distillation of Reinforcement Learning Agents. Abir, Farhan Fuad, 01 January 2024.
In the past decade, reinforcement learning (RL) has achieved breakthroughs across various domains, from surpassing human performance in strategy games to enhancing the training of large language models (LLMs) with human feedback. However, RL has yet to gain widespread adoption in mission-critical fields such as healthcare and autonomous vehicles. This is primarily attributed to the inherent lack of trust, explainability, and generalizability of the neural networks in deep reinforcement learning (DRL) agents. While neural DRL agents leverage the power of neural networks to solve specific tasks robustly and efficiently, this often comes at the cost of explainability and generalizability. In contrast, purely symbolic agents maintain explainability and trust but often underperform on high-dimensional data. In this work, we develop a method to distill explainable and trustworthy agents using neuro-symbolic AI. Neuro-symbolic distillation combines the strengths of symbolic reasoning and neural networks, creating a hybrid framework that leverages the structured knowledge representation of symbolic systems alongside the learning capabilities of neural networks. The key steps of neuro-symbolic distillation involve training traditional DRL agents, then extracting, selecting, and distilling their learned policies into symbolic forms using symbolic regression and tree-based models. These symbolic representations are then employed in place of the neural agents to make interpretable decisions with comparable accuracy. The approach is validated through experiments on Lunar Lander and Pong, demonstrating that symbolic representations can effectively replace neural agents while enhancing transparency and trustworthiness. Our findings suggest that this approach mitigates the black-box nature of neural networks, providing a pathway toward more transparent and trustworthy AI systems. The implications of this research are significant for fields requiring both high performance and explainability, such as autonomous systems, healthcare, and financial modeling.
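The distillation pipeline described above (train a neural DRL agent, collect its decisions, fit an interpretable surrogate) can be illustrated with a minimal sketch. It assumes a Stable-Baselines3 DQN teacher on Lunar Lander, the classic Gym step API, and an illustrative tree depth and sample budget; it is not the thesis's actual implementation.

```python
# Minimal sketch of neural-to-symbolic policy distillation (assumptions noted above).
import gym
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from stable_baselines3 import DQN

env = gym.make("LunarLander-v2")
teacher = DQN("MlpPolicy", env, verbose=0).learn(total_timesteps=100_000)  # neural DRL teacher

# Collect (state, action) pairs from the teacher's rollouts (classic 4-tuple Gym step API assumed)
states, actions = [], []
obs = env.reset()
for _ in range(20_000):
    act, _ = teacher.predict(obs, deterministic=True)
    states.append(obs)
    actions.append(int(act))
    obs, _, done, _ = env.step(act)
    if done:
        obs = env.reset()

# Distill the neural policy into an interpretable tree-based surrogate
student = DecisionTreeClassifier(max_depth=6).fit(np.array(states), np.array(actions))
print(export_text(student))  # human-readable symbolic decision rules
```

The printed tree is the kind of symbolic artifact that can then replace the neural agent at decision time.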
|
12 |
Reinforcement Learning for Hydrobatic AUVs / Reinforcement learning för Hydrobatiska AUV. Woźniak, Grzegorz, January 2022.
This master thesis focuses on developing a Reinforcement Learning (RL) controller that can successfully perform hydrobatic maneuvers on an Autonomous Underwater Vehicle (AUV). The work also aims to analyze the robustness of the RL controller and to provide a comparison between RL algorithms and Proportional Integral Derivative (PID) control. Training of the algorithms is initially conducted in a NumPy simulation in Python. We show how to model the Equations of Motion (EOM) of the AUV and how to use this model to train the RL controllers. We use the Stable-Baselines3 RL framework and create a training environment with OpenAI Gym. The Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm offers good performance in the simulation. The following maneuvers are studied: trim control, waypoint following, and an inverted pendulum. We test the maneuvers both in the NumPy simulation and in the Stonefish simulator. We also test the robustness of the RL trim controller by simulating noise in the state feedback. Lastly, we run the RL trim controller on real AUV hardware called SAM. We show that the RL algorithm trained in the NumPy simulator can achieve performance similar to the PID controller in the Stonefish simulator. We generate a policy that can perform the trim control and the inverted pendulum maneuver in the NumPy simulation. We show that we can generate a robust policy that executes other types of maneuvers by providing a parameterized cost function to the RL algorithm. We discuss the results of every maneuver performed with the SAM AUV and the advantages and disadvantages of this control method applied to underwater robotics. We conclude that RL can be used to create policies that perform hydrobatic maneuvers. This data-driven approach can be applied in the future to more complex problems in underwater robotics.
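As an illustration of the training setup described above, here is a minimal sketch of training a TD3 trim controller with Stable-Baselines3. The environment ID "SamTrimEnv-v0" is a hypothetical stand-in for the thesis's NumPy EOM environment, and all hyperparameters are placeholders rather than values from the thesis.

```python
# Minimal TD3 training sketch (hypothetical environment ID; illustrative hyperparameters).
import gym
import numpy as np
from stable_baselines3 import TD3
from stable_baselines3.common.noise import NormalActionNoise

env = gym.make("SamTrimEnv-v0")  # hypothetical NumPy-based EOM environment for SAM trim control
n_actions = env.action_space.shape[-1]
action_noise = NormalActionNoise(mean=np.zeros(n_actions), sigma=0.1 * np.ones(n_actions))

model = TD3("MlpPolicy", env, action_noise=action_noise, verbose=1)
model.learn(total_timesteps=200_000)   # train against the simulated dynamics
model.save("td3_trim_controller")      # policy can later be evaluated in Stonefish or on hardware
```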
|
13 |
Machine Learning-Based Instruction Scheduling for a DSP Architecture Compiler: Instruction Scheduling using Deep Reinforcement Learning and Graph Convolutional Networks / Maskininlärningsbaserad schemaläggning av instruktioner för en DSP-arkitekturkompilator: Schemaläggning av instruktioner med Deep Reinforcement Learning och grafkonvolutionella nätverk. Alava Peña, Lucas, January 2023.
Instruction scheduling is a back-end compiler optimisation technique that can provide significant performance gains. It refers to ordering instructions so as to reduce latency for processors with instruction-level parallelism. At present, typical compilers use heuristics to perform instruction scheduling and to solve other related NP-complete problems. This thesis presents a machine learning-based approach that challenges heuristic methods on performance. A novel reinforcement learning (RL) based model for the instruction scheduling problem is developed, including the modelling of processor features such as forwarding and resource utilisation, and the treatment of the action space. An efficient optimal scheduler is presented for use with a reward function based on optimal schedule length; however, it is not used in the final results, as a heuristic-based reward function was deemed sufficient and faster to compute. Furthermore, an RL agent that interacts with the model of the problem is presented, using three different types of graph neural networks for the state processing: graph convolutional networks, graph attention networks, and graph attention based on the work of Lee et al. A simple two-layer neural network is also used to generate embeddings for the resource utilisation stages. The proposed solution is validated against the modelled environment, and favourable but not significant improvements were found compared to the most common heuristic method. Furthermore, it was found that embeddings relating to resource utilisation were very important for the explained variance of the RL models. Additionally, a trained model was tested in an actual compiler; however, no informative results were found, likely due to register allocation or other compiler stages that occur after instruction scheduling. Future work should include improving the scalability of the proposed solution.
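To make the state-processing idea concrete, here is a minimal sketch of a graph-convolutional policy that scores ready instructions in a dependence DAG. It is a hand-rolled illustration in plain PyTorch with assumed node features, layer sizes, and a dense normalized adjacency matrix; it does not reproduce the thesis's networks, the resource-utilisation embeddings, or the Lee et al. attention variant.

```python
# Illustrative GCN-based scheduling policy (assumptions noted above).
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution step: mix each instruction's features with its neighbours'."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj: (n, n) normalized adjacency (with self-loops) of the instruction dependence DAG
        return torch.relu(self.lin(adj @ x))

class SchedulerPolicy(nn.Module):
    """Scores each ready instruction; the RL agent picks the highest-scoring one."""
    def __init__(self, feat_dim, hidden_dim=64):
        super().__init__()
        self.gcn1 = GCNLayer(feat_dim, hidden_dim)
        self.gcn2 = GCNLayer(hidden_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, node_feats, adj, ready_mask):
        h = self.gcn2(self.gcn1(node_feats, adj), adj)
        logits = self.score(h).squeeze(-1)
        logits = logits.masked_fill(~ready_mask, float("-inf"))  # only ready instructions are legal actions
        return torch.softmax(logits, dim=-1)
```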
|
14 |
Deep reinforcement learning approach to portfolio management / Deep reinforcement learning metod för portföljförvaltning. Jama, Fuaad, January 2023.
This thesis evaluates a Deep Reinforcement Learning (DRL) approach to portfolio management on the Swedish stock market. The idea is to construct a portfolio that is adjusted daily using the DRL algorithm Proximal Policy Optimization (PPO) with a multilayer perceptron neural network. The input to the neural network is historical data in the form of open, high, and low prices. The portfolio is evaluated by its performance against the OMX Stockholm 30 index (OMXS30). Furthermore, three different approaches to optimization are studied, using three different reward functions: Sharpe ratio, cumulative reward (daily return), and Value-at-Risk reward (daily return with a Value-at-Risk penalty). The historical data used is from the period 2010-01-01 to 2015-12-31, and the DRL approach is then tested on two different time periods that represent different market conditions: 2016-01-01 to 2018-12-31 and 2019-01-01 to 2021-12-31. The results show that in the first test period all three methods (corresponding to the three different reward functions) outperform the OMXS30 benchmark in returns and Sharpe ratio, while in the second test period none of the methods outperform the OMXS30 index.
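The three reward formulations can be sketched as follows. This is an illustrative interpretation only, assuming a trailing window of daily portfolio returns; the VaR confidence level and the penalty weight are placeholders rather than values from the thesis.

```python
# Illustrative sketch of the three reward functions (placeholder parameters).
import numpy as np

def daily_return_reward(returns):
    # cumulative-reward variant: reward at each step is simply that day's portfolio return
    return returns[-1]

def sharpe_reward(returns, eps=1e-8):
    # Sharpe-ratio variant computed over the trailing window of returns
    return returns.mean() / (returns.std() + eps)

def var_penalised_reward(returns, alpha=0.05, penalty=1.0):
    # daily return minus a penalty proportional to the (historical) Value at Risk
    var = -np.quantile(returns, alpha)      # loss threshold exceeded with probability alpha
    return returns[-1] - penalty * max(var, 0.0)
```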
|
15 |
Using Deep Reinforcement Learning For Adaptive Traffic Control in Four-Way Intersections. Jörneskog, Gustav; Kandelan, Josef, January 2019.
The consequences of traffic congestion include increased travel time, fuel consumption, and crash rates. Studies suggest that most traffic delays are due to nonrecurring traffic congestion. Adaptive traffic control using real-time data is effective in dealing with nonrecurring traffic congestion. Many adaptive traffic control algorithms used today are deterministic and prone to human error and limitation. Reinforcement learning allows an optimal traffic control policy to be developed in an unsupervised manner. We have implemented a reinforcement learning algorithm that requires only the number of vehicles and the mean speed of each incoming road to streamline traffic in a four-way intersection. The reinforcement learning algorithm is evaluated against a deterministic algorithm and a fixed-time control schedule. Furthermore, we tested whether reinforcement learning can be trained to prioritize emergency vehicles while maintaining good traffic flow. The reinforcement learning algorithm obtains a lower average time in the system than the deterministic algorithm in eight out of nine experiments, and a lower average time in the system than the fixed-time schedule in all experiments. At best, the reinforcement learning algorithm performs 13% better than the deterministic algorithm and 39% better than the fixed-time schedule. Moreover, the reinforcement learning algorithm could prioritize emergency vehicles while maintaining good traffic flow.
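To illustrate the kind of observation the controller needs, here is a minimal sketch of a state encoding (vehicle counts and mean speeds per incoming road) and a small Q-network that maps it to signal-phase actions. PyTorch, the two-action phase scheme, and all layer sizes are assumptions for illustration; the study's own network and action space are not reproduced.

```python
# Illustrative state encoding and Q-network for a four-way intersection (assumed sizes).
import torch
import torch.nn as nn

def encode_state(vehicle_counts, mean_speeds):
    # state = [count_N, count_E, count_S, count_W, speed_N, speed_E, speed_S, speed_W]
    return torch.tensor(vehicle_counts + mean_speeds, dtype=torch.float32)

q_net = nn.Sequential(            # maps the 8-dim state to a Q-value per signal-phase action
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),             # assumed actions: keep current phase / switch phase
)

state = encode_state([4, 1, 7, 2], [8.3, 11.0, 5.2, 9.7])
action = q_net(state).argmax().item()   # greedy action from the learned Q-values
```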
|
16 |
Deep Reinforcement Learning for Intelligent Road Maintenance in Small Island Developing States Vulnerable to Climate Change: Using Artificial Intelligence to Adapt Communities to Climate Change. Elvira, Boman, January 2018.
The consequences of climate change are already noticeable in small island developing states. Road networks are crucial for a functioning society and are particularly vulnerable to extreme weather, floods, landslides, and other effects of climate change. Road systems in small island developing states are therefore in special need of climate adaptation efforts. Climate adaptation of road systems also has to be cost-efficient, since these small island states have limited economic resources. Recent advances in deep reinforcement learning, a subfield of artificial intelligence, have shown that intelligent agents can achieve superhuman performance at a number of tasks, setting hopes high for possible future applications of the algorithms. To investigate whether deep reinforcement learning is suitable for climate adaptation of road maintenance systems, a simulator has been set up, together with three deep reinforcement learning agents and two non-intelligent agents for performance comparison. The results of the project indicate that deep reinforcement learning is suitable for use in intelligent road maintenance systems for climate adaptation in small island developing states.
|
17 |
Extension on Adaptive MAC Protocol for Space Communications. Li, Max Hongming, 06 December 2018.
This work devises a novel approach for mitigating the effects of catastrophic forgetting in deep reinforcement learning-based cognitive radio engine implementations employed in space communication applications. Previous implementations of cognitive radio space communication systems utilized a moving window-based online learning method, which discards part of its understanding of the environment each time the window is moved; this act of discarding is called catastrophic forgetting. This work investigated ways to control the forgetting process more systematically, both through a recursive training technique that implements forgetting in a controlled manner and through an ensemble learning technique in which each member of the ensemble represents the engine's understanding over a certain period of time. Both techniques were integrated into a cognitive radio engine proof-of-concept and delivered to the SDR platform on the International Space Station. The results were then compared to those from the original proof-of-concept. In this comparison, the ensemble learning technique showed promise when performance was compared across training techniques in different communication channel contexts.
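The ensemble idea, one member per time window so that older knowledge ages out gradually instead of being discarded outright, can be sketched as follows. Scikit-learn regressors, the member cap, and the recency weighting are stand-in assumptions for illustration only; the flight engine's actual learner and features are not modeled here.

```python
# Illustrative windowed-ensemble learner (stand-in assumptions noted above).
import numpy as np
from sklearn.neural_network import MLPRegressor

class WindowedEnsemble:
    """Each member captures the engine's understanding over one time window."""
    def __init__(self, max_members=5):
        self.members, self.max_members = [], max_members

    def train_on_window(self, X, y):
        member = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
        self.members.append(member)
        if len(self.members) > self.max_members:   # oldest knowledge ages out gradually
            self.members.pop(0)

    def predict(self, X):
        # recency-weighted average: newer members count more, older ones still contribute
        weights = np.arange(1, len(self.members) + 1)
        preds = np.stack([m.predict(X) for m in self.members])
        return (weights[:, None] * preds).sum(axis=0) / weights.sum()
```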
|
18 |
Zero-Knowledge Agent Trained for the Game of Risk. Bethdavid, Simon, January 2020.
Recent developments in deep reinforcement learning applied to abstract strategy games such as Go, chess, and Hex have sparked interest within military planning. This Master thesis explores whether an algorithm similar to Expert Iteration and AlphaZero can be applied to wargames. The studied wargame is Risk, a turn-based multiplayer game played on a simplified political map of the world. The algorithms consist of an expert, in the form of a Monte Carlo tree search algorithm, and an apprentice, implemented as a neural network. The neural network is trained by imitation learning to mimic expert decisions generated from self-play reinforcement learning. The apprentice is then used as a heuristic in subsequent tree searches. The results demonstrated that a Monte Carlo tree search algorithm could, to some degree, be employed on a strategy game such as Risk, dominating a randomly playing agent. The neural network, fed with a state representation in the form of a vector, had difficulty learning expert decisions and could not beat a randomly playing agent. This led to a halt in the expert/apprentice learning process. However, possible solutions are provided as future work.
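The expert/apprentice loop described above amounts to repeatedly fitting the network to the expert's move distributions. Below is a minimal sketch of that imitation step in PyTorch, with random tensors standing in for the MCTS self-play data; the state and action dimensions, network shape, and loss are illustrative assumptions, not the thesis's implementation.

```python
# Illustrative apprentice imitation-learning step (random stand-in data; assumed dimensions).
import torch
import torch.nn as nn

STATE_DIM, NUM_MOVES = 512, 128

apprentice = nn.Sequential(                 # maps a vector state to move log-probabilities
    nn.Linear(STATE_DIM, 256), nn.ReLU(),
    nn.Linear(256, NUM_MOVES), nn.LogSoftmax(dim=-1),
)
optimizer = torch.optim.Adam(apprentice.parameters(), lr=1e-3)
loss_fn = nn.KLDivLoss(reduction="batchmean")  # match the expert's move distribution

def expert_self_play(batch_size=64):
    # Placeholder for MCTS self-play: returns states and the expert's
    # visit-count move distributions for those states (random here).
    states = torch.randn(batch_size, STATE_DIM)
    policies = torch.softmax(torch.randn(batch_size, NUM_MOVES), dim=-1)
    return states, policies

for iteration in range(100):
    states, expert_policies = expert_self_play()          # 1. expert generates data
    loss = loss_fn(apprentice(states), expert_policies)   # 2. apprentice imitates the expert
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```

In the full loop the updated apprentice would then guide the next round of tree searches.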
|
19 |
A Study on Resolution and Retrieval of Implicit Entity References in Microblogs / マイクロブログにおける暗黙的な実体参照の解決および検索に関する研究. Lu, Jun-Li, 23 March 2020.
Kyoto University / 0048 / New degree system, doctoral program / Doctor of Informatics / Kou No. 22580 / Johaku No. 717 / 新制||情||123 (University Library) / Kyoto University, Graduate School of Informatics, Department of Social Informatics / (Chief examiner) Professor Masatoshi Yoshikawa, Professor Sadao Kurohashi, Professor Keishi Tajima, Professor Katsumi Tanaka (Professor Emeritus, Kyoto University) / Eligible under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
|
20 |
Deep Reinforcement Learning For Distributed Fog Network Probing. Guan, Xiaoding, 01 September 2020.
The sixth generation (6G) of wireless communication systems will rely significantly on fog/edge network architectures for service provisioning. To satisfy stringent quality-of-service requirements using dynamically available resources at the edge, new network access schemes are needed. In this paper, we consider a cognitive dynamic edge/fog network where primary users (PUs) may temporarily share their resources and act as fog nodes for secondary users (SUs). We develop strategies for distributed dynamic fog probing so that SUs can discover available connections to the fog nodes. To handle the large state space of connectivity availability, which includes the availability of channels, computing resources, and fog nodes, as well as the partial observability of the states, we design a novel distributed Deep Q-learning Fog Probing (DQFP) algorithm. Our goal is to develop multi-user strategies for accessing fog nodes in a distributed manner without any centralized scheduling or message passing. By using cooperative and competitive utility functions, we analyze the impact of the multi-user dynamics on connectivity availability and establish design principles for our DQFP algorithm.
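As a rough illustration of one SU's deep Q-learning probing agent, the following sketch shows epsilon-greedy probe selection and a replay-based Q-learning update. PyTorch, the observation and action sizes, and the flat observation of recent probe outcomes are assumptions for illustration; the DQFP algorithm's actual state design, utility functions, and multi-user coordination are not reproduced.

```python
# Illustrative single-SU deep Q-learning probing agent (assumed sizes and state design).
import random
from collections import deque
import torch
import torch.nn as nn

OBS_DIM, NUM_ACTIONS = 32, 8        # assumed observation size and number of probing choices

q_net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, NUM_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)       # experience replay over the partially observed state

def select_probe(obs, epsilon=0.1):
    # epsilon-greedy choice of which fog node / channel to probe next
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    return q_net(obs).argmax().item()

def train_step(batch_size=64, gamma=0.99):
    if len(replay) < batch_size:
        return
    obs, act, rew, nxt = map(torch.stack, zip(*random.sample(replay, batch_size)))
    target = rew + gamma * q_net(nxt).max(dim=1).values.detach()        # bootstrapped target
    pred = q_net(obs).gather(1, act.long().unsqueeze(1)).squeeze(1)     # Q-value of taken action
    loss = nn.functional.mse_loss(pred, target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```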
|