61 |
Deep Reinforcement Learning for Open Multiagent System. Zhu, Tianxing, 20 September 2022 (has links)
No description available.
|
62 |
A multi-layered defence strategy against DDoS attacks in SDN/NFV-based 5G mobile networks. Sheibani, Morteza; Konur, Savas; Awan, Irfan; Qureshi, Amna, 16 August 2024 (has links)
Software-defined networking (SDN) and network functions virtualisation (NFV) are crucial technologies for integration in the fifth generation of cellular networks (5G). However, they also pose new security challenges, and intrusion detection systems (IDSs) for 5G networks are a timely research subject. Current IDSs suffer from several limitations, resulting in wasted resources and unaddressed security threats. This work proposes a new three-layered solution that includes forwarding and data transport, management and control, and virtualisation layers, emphasising distributed controllers in the management and control layer. The proposed solution uses entropy detection to classify arriving packets as normal or suspicious and then forwards the suspicious packets to a centralised controller for further processing using a self-organising map (SOM). A dynamic OpenFlow switch relocation method based on deep reinforcement learning is introduced to address the unbalanced burden among controllers and the static allocation of OpenFlow switches. The proposed system is analysed using a Markov decision process, and a Double Deep Q-Network (DDQN) is used to train the system. The experimental results demonstrate the effectiveness of the proposed approach in mitigating DDoS attacks, efficiently balancing controller workloads, and reducing the duration of the balancing process in 5G networks.
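As an illustration of the first detection stage described in this abstract, the sketch below (not the authors' code; the window size and threshold are assumed values) flags a window of packets as suspicious when the Shannon entropy of its destination addresses collapses, which is the typical signature of a volumetric DDoS flood; such windows would then be handed to the SOM stage.

```python
# Illustrative sketch of an entropy-based first-stage check over a sliding
# window of packets. A DDoS flood concentrates traffic on few destination
# addresses, which lowers the Shannon entropy of the window; windows below
# a threshold would be forwarded to the centralised controller / SOM stage.
from collections import Counter
import math

WINDOW = 100          # packets per window (assumed value)
THRESHOLD = 1.5       # entropy threshold in bits (assumed value)

def window_entropy(dst_ips):
    """Shannon entropy (bits) of the destination-IP distribution."""
    counts = Counter(dst_ips)
    total = len(dst_ips)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def classify_window(dst_ips):
    """Return 'suspicious' if traffic is abnormally concentrated."""
    return "suspicious" if window_entropy(dst_ips) < THRESHOLD else "normal"

# Toy usage: benign traffic spread over many hosts vs. a flood on one victim.
benign = [f"10.0.0.{i % 50}" for i in range(WINDOW)]
flood = ["10.0.0.7"] * 95 + [f"10.0.0.{i}" for i in range(5)]
print(classify_window(benign), classify_window(flood))
```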
|
63 |
Cooperative Payload Transportation by UAVs: A Model-Based Deep Reinforcement Learning (MBDRL) Application. Khursheed, Shahwar Atiq, 20 August 2024 (has links)
We propose a Model-Based Deep Reinforcement Learning (MBDRL) framework for collaborative payload transportation using Unmanned Aerial Vehicles (UAVs) in Search and Rescue (SAR) missions, enabling heavier payload conveyance while maintaining vehicle agility.
Our approach extends the single-drone application to a novel multi-drone one, using the Probabilistic Ensembles with Trajectory Sampling (PETS) algorithm to model the unknown stochastic system dynamics and uncertainty. We use the Multi-Agent Reinforcement Learning (MARL) framework via a centralized controller in a leader-follower configuration. The agents utilize the approximated transition function in a Model Predictive Controller (MPC) configured to maximize the reward function for waypoint navigation, while a position-based formation controller ensures stable flight of these physically linked UAVs. We also developed an Unreal Engine (UE) simulation connected to an offboard planner and controller via a Robot Operating System (ROS) framework that is transferable to real robots. This work achieves stable waypoint navigation in a stochastic environment with sample efficiency comparable to that seen in single-UAV work.
This work has been funded by the National Science Foundation (NSF) under Award No. 2046770. / Master of Science / We apply the Model-Based Deep Reinforcement Learning (MBDRL) framework to the novel application of a UAV team transporting a suspended payload during Search and Rescue missions.
Collaborating UAVs can transport heavier payloads while staying agile, reducing the need for human involvement. We use the Probabilistic Ensembles with Trajectory Sampling (PETS) algorithm to model uncertainties and build on the previously used single UAV-payload system. By utilizing the Multi-Agent Reinforcement Learning (MARL) framework via a centralized controller, our UAVs learn to transport the payload to a desired position while maintaining stable flight through effective cooperation. We also develop a simulation in Unreal Engine (UE) connected to a controller using a Robot Operating System (ROS) architecture, which can be transferred to real robots. Our method achieves stable navigation in unpredictable environments while maintaining the sample efficiency observed in single-UAV scenarios.
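To make the PETS-plus-MPC planning step concrete, here is a minimal, hypothetical sketch (toy point-mass dynamics stand in for the learned probabilistic ensemble; the horizon, candidate count, and waypoint reward are assumptions, not the thesis configuration): candidate action sequences are scored by propagating them through every ensemble member, and the controller executes only the first action of the best-scoring sequence.

```python
# Minimal sketch of a PETS-style MPC planning loop (assumed details).
import numpy as np

def make_ensemble(n_models=5, noise=0.05, seed=0):
    rng = np.random.default_rng(seed)
    def member(k):
        def model(state, action):
            # toy point-mass dynamics; per-member bias and noise stand in for
            # the epistemic and aleatoric uncertainty of a learned ensemble
            return state + 0.1 * action + noise * rng.standard_normal(2) + 0.01 * k
        return model
    return [member(k) for k in range(n_models)]

def reward(state, goal):
    return -np.linalg.norm(state - goal)            # waypoint-tracking reward

def mpc_action(state, goal, ensemble, horizon=10, candidates=100, seed=1):
    rng = np.random.default_rng(seed)
    seqs = rng.uniform(-1.0, 1.0, size=(candidates, horizon, 2))
    returns = np.zeros(candidates)
    for i, seq in enumerate(seqs):                  # trajectory sampling
        for model in ensemble:                      # propagate through every member
            s = state.copy()
            for a in seq:
                s = model(s, a)
                returns[i] += reward(s, goal)
    return seqs[np.argmax(returns)][0]              # execute only the first action

state, goal = np.zeros(2), np.array([1.0, 0.5])
ensemble = make_ensemble()
for _ in range(15):                                 # re-plan at every step
    state = ensemble[0](state, mpc_action(state, goal, ensemble))
print("final distance to waypoint:", float(np.linalg.norm(state - goal)))
```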
|
64 |
Robust longitudinal velocity control for advanced vehicles: A deep reinforcement learning approach. Islam, Fahmida, 13 August 2024 (has links) (PDF)
Longitudinal velocity control, or adaptive cruise control (ACC), is a common advanced driving feature aimed at assisting the driver and reducing fatigue. It maintains the velocity of a vehicle and ensures a safe distance from the preceding vehicle. Many controllers for ACC are available, such as Proportional-Integral-Derivative (PID) control and Model Predictive Control (MPC). However, conventional models have some limitations as they are designed for simplified driving scenarios. Artificial intelligence (AI) and machine learning (ML) have made robust navigation and decision-making possible in complex environments. Recent approaches, such as reinforcement learning (RL), have demonstrated remarkable performance in terms of faster processing and effective navigation through unknown environments. This dissertation explores an RL approach, deep deterministic policy gradient (DDPG), for longitudinal velocity control. The baseline DDPG model has been modified in two different ways. In the first method, an attention mechanism has been applied to the neural network (NN) of the DDPG model. Integrating the attention mechanism into the DDPG model reduces the focus on less important features and enhances overall model effectiveness. In the second method, the inputs of the actor and critic networks of DDPG are replaced with the outputs of a self-supervised network. The self-supervised learning process allows the model to accurately predict future states from current states and actions. A custom reward function has been designed for the RL algorithm considering overall safety, efficiency, and comfort. The proposed models have been trained with human car-following data and evaluated on multiple datasets, including publicly available data, simulated data, and sensor data collected from real-world environments. The analyses demonstrate that the new architectures can maintain strong robustness across various datasets and outperform the current state-of-the-art models.
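The custom reward mentioned above is not spelled out in the abstract, so the sketch below is only a plausible form of it (the weights, desired time headway, and penalty shapes are invented for illustration): it combines a safety term on time headway, an efficiency term on speed tracking, and a comfort term on jerk.

```python
# Hedged sketch of a car-following reward combining safety, efficiency and
# comfort. All numeric choices are assumptions, not the dissertation's values.
def acc_reward(gap_m, ego_speed_mps, lead_speed_mps, accel_mps2, prev_accel_mps2,
               desired_headway_s=1.5, target_speed_mps=30.0,
               w_safety=1.0, w_efficiency=0.3, w_comfort=0.1):
    headway = gap_m / max(ego_speed_mps, 0.1)              # time headway in seconds
    safety = -abs(headway - desired_headway_s)             # deviation from desired headway
    efficiency = -abs(min(target_speed_mps, lead_speed_mps) - ego_speed_mps) / target_speed_mps
    comfort = -abs(accel_mps2 - prev_accel_mps2)            # jerk proxy
    if gap_m < 2.0:                                         # hard collision-risk penalty
        safety -= 10.0
    return w_safety * safety + w_efficiency * efficiency + w_comfort * comfort

# Example: tailgating at high speed scores worse than tracking the lead
# vehicle at the desired headway.
print(acc_reward(gap_m=10.0, ego_speed_mps=30.0, lead_speed_mps=28.0,
                 accel_mps2=0.5, prev_accel_mps2=0.0))
print(acc_reward(gap_m=42.0, ego_speed_mps=28.0, lead_speed_mps=28.0,
                 accel_mps2=0.0, prev_accel_mps2=0.0))
```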
|
65 |
Learning sensori-motor mappings using little knowledge: application to manipulation robotics / Apprentissage de couplages sensori-moteur en utilisant très peu d'informations : application à la robotique de manipulation. De La Bourdonnaye, François, 18 December 2018 (has links)
La thèse consiste en l'apprentissage d'une tâche complexe de robotique de manipulation en utilisant très peu d'aprioris. Plus précisément, la tâche apprise consiste à atteindre un objet avec un robot série. L'objectif est de réaliser cet apprentissage sans paramètres de calibrage des caméras, modèles géométriques directs, descripteurs faits à la main ou démonstrations d'expert. L'apprentissage par renforcement profond est une classe d'algorithmes particulièrement intéressante dans cette optique. En effet, l'apprentissage par renforcement permet d'apprendre une compétence sensori-motrice en se passant de modèles dynamiques. Par ailleurs, l'apprentissage profond permet de se passer de descripteurs faits à la main pour la représentation d'état. Cependant, spécifier les objectifs sans supervision humaine est un défi important. Certaines solutions consistent à utiliser des signaux de récompense informatifs ou des démonstrations d'experts pour guider le robot vers les solutions. D'autres consistent à décomposer l'apprentissage. Par exemple, l'apprentissage "petit à petit" ou "du simple au compliqué" peut être utilisé. Cependant, cette stratégie nécessite la connaissance de l'objectif en termes d'état. Une autre solution est de décomposer une tâche complexe en plusieurs tâches plus simples. Néanmoins, cela n'implique pas l'absence de supervision pour les sous-tâches mentionnées. D'autres approches utilisant plusieurs robots en parallèle peuvent également être utilisées mais nécessitent du matériel coûteux. Pour notre approche, nous nous inspirons du comportement des êtres humains. Ces derniers regardent généralement l'objet avant de le manipuler. Ainsi, nous décomposons la tâche d'atteinte en 3 sous-tâches. La première tâche consiste à apprendre à fixer un objet avec un système de deux caméras pour le localiser dans l'espace. Cette tâche est apprise avec de l'apprentissage par renforcement profond et un signal de récompense faiblement supervisé. Pour la tâche suivante, deux compétences sont apprises en parallèle : la fixation d'effecteur et une fonction de coordination main-oeil. Comme la précédente tâche, un algorithme d'apprentissage par renforcement profond est utilisé avec un signal de récompense faiblement supervisé. Le but de cette tâche est d'être capable de localiser l'effecteur du robot à partir des coordonnées articulaires. La dernière tâche utilise les compétences apprises lors des deux précédentes étapes pour apprendre au robot à atteindre un objet. Cet apprentissage utilise les mêmes aprioris que pour les tâches précédentes. En plus de la tâche d'atteinte, un prédicteur d'atteignabilité d'objet est appris. La principale contribution de ces travaux est l'apprentissage d'une tâche de robotique complexe en n'utilisant que très peu de supervision. / The thesis is focused on learning a complex manipulation robotics task using little knowledge. More precisely, the concerned task consists in reaching an object with a serial arm, and the objective is to learn it without camera calibration parameters, forward kinematics, handcrafted features, or expert demonstrations. Deep reinforcement learning algorithms are well suited to this objective. Indeed, reinforcement learning makes it possible to learn sensori-motor mappings without dynamics models. Besides, deep learning makes it possible to dispense with handcrafted features for the state space representation. However, it is difficult to specify the objectives of the learned task without requiring human supervision.
Some solutions rely on expert demonstrations or shaping rewards to guide robots towards their objective. The latter are generally computed using forward kinematics and handcrafted visual modules. Another class of solutions consists in decomposing the complex task. Learning from easy missions can be used, but this requires the knowledge of a goal state. Decomposing the whole complex task into simpler sub-tasks can also be utilized (hierarchical learning) but does not necessarily imply a lack of human supervision. Alternate approaches which use several agents in parallel to increase the probability of success can be used but are costly. In our approach, we decompose the whole reaching task into three simpler sub-tasks while taking inspiration from the human behavior. Indeed, humans first look at an object before reaching it. The first learned task is an object fixation task which is aimed at localizing the object in the 3D space. This is learned using deep reinforcement learning and a weakly supervised reward function. The second task consists in jointly learning end-effector binocular fixations and a hand-eye coordination function. This is also learned using a similar set-up and is aimed at localizing the end-effector in the 3D space. The third task uses the two previously learned skills to learn to reach an object and has the same requirements as the two prior tasks: it requires hardly any supervision. In addition, without using additional priors, an object reachability predictor is learned in parallel. The main contribution of this thesis is the learning of a complex robotic task with weak supervision.
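For the first sub-task, a weakly supervised binocular-fixation reward could look like the following sketch (the thesis's exact reward is not reproduced here; the image size, tolerance, and shaping coefficients are assumptions): the robot is rewarded when the detected object lies near the image centre of both cameras, which requires neither camera calibration nor forward kinematics.

```python
# Illustrative weakly supervised fixation reward (assumed form): sparse bonus
# when the object is centred in both camera views, shaped by the pixel error.
import numpy as np

def fixation_reward(obj_px_left, obj_px_right, image_size=(64, 64), tol=3.0):
    centre = np.array(image_size) / 2.0
    err_left = np.linalg.norm(np.asarray(obj_px_left) - centre)
    err_right = np.linalg.norm(np.asarray(obj_px_right) - centre)
    fixated = (err_left < tol) and (err_right < tol)
    return (1.0 if fixated else 0.0) - 0.01 * (err_left + err_right)

print(fixation_reward((33, 31), (32, 33)))   # nearly centred in both views
print(fixation_reward((10, 50), (55, 12)))   # object far from both centres
```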
|
66 |
Knowledge reuse for deep reinforcement learning. / Reutilização do conhecimento para aprendizado por reforço profundo. Glatt, Ruben, 12 June 2019 (has links)
With the rise of Deep Learning, the field of Artificial Intelligence (AI) research has entered a new era. Together with an increasing amount of data and vastly improved computing capabilities, Machine Learning builds the backbone of AI, providing many of the tools and algorithms that drive development and applications. While we have already achieved many successes in the fields of image recognition, language processing, recommendation engines, robotics, and autonomous systems, most progress was achieved when the algorithms were focused on learning only a single task, with little regard to effort and reusability. Since learning a new task from scratch often involves an expensive learning process, in this work, we are considering the use of previously acquired knowledge to speed up the learning of a new task. For that, we investigated the application of Transfer Learning methods for Deep Reinforcement Learning (DRL) agents and propose a novel framework for knowledge preservation and reuse. We show that knowledge transfer can make a big difference if the source knowledge is chosen carefully in a systematic approach. To get to this point, we provide an overview of the existing literature on methods that realize knowledge transfer for DRL, a topic which has begun to appear frequently in the relevant literature only in the last two years. We then formulate the Case-based Reasoning methodology, which describes a framework for knowledge reuse in general terms, in Reinforcement Learning terminology to facilitate the adaptation and communication between the respective communities. Building on this framework, we propose Deep Case-based Policy Inference (DECAF) and demonstrate in an experimental evaluation the usefulness of our approach for sequential task learning with knowledge preservation and reuse. Our results highlight the benefits of knowledge transfer while also raising awareness of the challenges that come with it. We consider the work in this area as an important step towards more stable general learning agents that are capable of dealing with the most complex tasks, which would be a key achievement towards Artificial General Intelligence. / Com a evolução da Aprendizagem Profunda (Deep Learning), o campo da Inteligência Artificial (IA) entrou em uma nova era. Juntamente com uma quantidade crescente de dados e recursos computacionais cada vez mais aprimorados, o Aprendizado de Máquina estabelece a base para a IA moderna, fornecendo muitas das ferramentas e algoritmos que impulsionam seu desenvolvimento e aplicações. Apesar dos muitos sucessos nas áreas de reconhecimento de imagem, processamento de linguagem natural, sistemas de recomendação, robótica e sistemas autônomos, a maioria dos avanços foram feitos focando no aprendizado de apenas uma única tarefa, sem muita atenção aos esforços dispendidos e reusabilidade da solução. Como o aprendizado de uma nova tarefa geralmente envolve um processo de aprendizado dispendioso, neste trabalho, estamos considerando o reúso de conhecimento para acelerar o aprendizado de uma nova tarefa. Para tanto, investigamos a aplicação dos métodos de Transferência de Aprendizado (Transfer Learning) para agentes de Aprendizado por Reforço profundo (Deep Reinforcement Learning - DRL) e propomos um novo arcabouço para preservação e reutilização de conhecimento. Mostramos que a transferência de conhecimento pode fazer uma grande diferença no aprendizado se a origem do conhecimento for escolhida cuidadosa e sistematicamente.
Para chegar a este ponto, nós fornecemos uma visão geral da literatura existente de métodos que realizam a transferência de conhecimento para DRL, um campo que tem despontado com frequência na literatura relevante apenas nos últimos dois anos. Em seguida, formulamos a metodologia Raciocínio baseado em Casos (Case-based Reasoning), que descreve uma estrutura para reutilização do conhecimento em termos gerais, na terminologia de Aprendizado por Reforço, para facilitar a adaptação e a comunicação entre as respectivas comunidades. Com base nessa metodologia, propomos Deep Case-based Policy Inference (DECAF) e demonstramos, em uma avaliação experimental, a utilidade de nossa proposta para a aprendizagem sequencial de tarefas, com preservação e reutilização do conhecimento. Nossos resultados destacam os benefícios da transferência de conhecimento e, ao mesmo tempo, conscientizam sobre os desafios que a acompanham. Consideramos o trabalho nesta área como um passo importante para agentes de aprendizagem mais estáveis, capazes de lidar com as tarefas mais complexas, o que seria um passo fundamental para a Inteligência Geral Artificial.
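A minimal sketch of the case-based reuse idea follows; it is not the DECAF implementation itself (the class and function names, the Euclidean similarity over task descriptors, and the probabilistic reuse of the retrieved policy are assumptions for illustration): stored policies are treated as cases, the closest case to the new task is retrieved, and its policy biases action selection while the new policy is still being learned.

```python
# Hypothetical sketch of case-based policy retrieval and probabilistic reuse.
import numpy as np

class CaseLibrary:
    def __init__(self):
        self.cases = []                       # list of (task_descriptor, policy_fn)

    def retain(self, descriptor, policy_fn):
        self.cases.append((np.asarray(descriptor, dtype=float), policy_fn))

    def retrieve(self, descriptor):
        """Return the stored policy whose task descriptor is closest."""
        d = np.asarray(descriptor, dtype=float)
        dists = [np.linalg.norm(d - c_desc) for c_desc, _ in self.cases]
        return self.cases[int(np.argmin(dists))][1]

def reuse_action(state, new_policy, source_policy, reuse_prob=0.5,
                 rng=np.random.default_rng(0)):
    """With some probability follow the retrieved source policy, otherwise
    the policy currently being learned (probabilistic policy reuse)."""
    return source_policy(state) if rng.random() < reuse_prob else new_policy(state)

# Toy usage with two stored "policies" on a 1-D state.
library = CaseLibrary()
library.retain([0.0, 1.0], lambda s: 0)      # case A prefers action 0
library.retain([1.0, 0.0], lambda s: 1)      # case B prefers action 1
source = library.retrieve([0.9, 0.1])        # the new task resembles case B
print(reuse_action(state=0.0, new_policy=lambda s: 0, source_policy=source))
```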
|
67 |
Reinforcement learning and reward estimation for dialogue policy optimisation. Su, Pei-Hao, January 2018 (has links)
Modelling dialogue management as a reinforcement learning task enables a system to learn to act optimally by maximising a reward function. This reward function is designed to induce the system behaviour required for goal-oriented applications, which usually means fulfilling the user’s goal as efficiently as possible. However, in real-world spoken dialogue systems, the reward is hard to measure, because the goal of the conversation is often known only to the user. Certainly, the system can ask the user if the goal has been satisfied, but this can be intrusive. Furthermore, in practice, the reliability of the user’s response has been found to be highly variable. In addition, due to the sparsity of the reward signal and the large search space, reinforcement learning-based dialogue policy optimisation is often slow. This thesis presents several approaches to address these problems. To better evaluate a dialogue for policy optimisation, two methods are proposed. First, a recurrent neural network-based predictor pre-trained from off-line data is proposed to estimate task success during subsequent on-line dialogue policy learning to avoid noisy user ratings and problems related to not knowing the user’s goal. Second, an on-line learning framework is described where a dialogue policy is jointly trained alongside a reward function modelled as a Gaussian process with active learning. This mitigates the noisiness of user ratings and minimises user intrusion. It is shown that both off-line and on-line methods achieve practical policy learning in real-world applications, while the latter provides a more general joint learning system directly from users. To enhance the policy learning speed, the use of reward shaping is explored and shown to be effective and complementary to the core policy learning algorithm. Furthermore, as deep reinforcement learning methods have the potential to scale to very large tasks, this thesis also investigates the application to dialogue systems. Two sample-efficient algorithms, trust region actor-critic with experience replay (TRACER) and episodic natural actor-critic with experience replay (eNACER), are introduced. In addition, a corpus of demonstration data is utilised to pre-train the models prior to on-line reinforcement learning to handle the cold start problem. Combining these two methods, a practical approach is demonstrated to effectively learn deep reinforcement learning-based dialogue policies in a task-oriented information seeking domain. Overall, this thesis provides solutions which allow truly on-line and continuous policy learning in spoken dialogue systems.
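As a small illustration of the reward-shaping idea mentioned above (the potential function is invented, not the thesis's): potential-based shaping adds gamma * phi(s') - phi(s) to the environment reward, densifying feedback without changing the optimal policy, for instance by rewarding progress in confirming the user's constraints.

```python
# Sketch of potential-based reward shaping for a slot-filling dialogue
# (the progress-based potential is an assumed example).
GAMMA = 0.99

def potential(num_slots_confirmed, total_slots=4):
    return num_slots_confirmed / total_slots      # assumed progress-based potential

def shaped_reward(env_reward, slots_before, slots_after, gamma=GAMMA):
    # shaped reward = r + gamma * phi(s') - phi(s); leaves optimal policy unchanged
    return env_reward + gamma * potential(slots_after) - potential(slots_before)

# The sparse task-success reward normally arrives only at the final turn;
# shaping rewards intermediate progress (confirming a second slot here).
print(shaped_reward(env_reward=0.0, slots_before=1, slots_after=2))
print(shaped_reward(env_reward=20.0, slots_before=4, slots_after=4))   # final turn
```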
|
68 |
Quoting behaviour of a market-maker under different exchange fee structures. Kiseľ, Rastislav, January 2018 (has links)
During the last few years, market micro-structure research has been active in analysing the dependence of market efficiency on different market characteristics. Make-take fees are one of those topics, as they might modify the incentives for participating agents, e.g. broker-dealers or market-makers. In this thesis, we propose a Hawkes process-based model that captures statistical differences arising from different fee regimes, and we estimate the differences on limit order book data. We then use these estimates in an attempt to measure the execution quality from the perspective of a market-maker. We appropriate existing theoretical market frameworks; however, for the purpose of finding optimal market-making policies we apply a novel method of deep reinforcement learning. Our results suggest, firstly, that maker-taker exchanges provide better liquidity to the markets, and secondly, that deep reinforcement learning methods may be successfully applied to the domain of optimal market-making. JEL Classification: C32, C45, C61, C63. Keywords: make-take fees, Hawkes process, limit order book, market-making, deep reinforcement learning. Author's e-mail: kiselrastislav@gmail.com. Supervisor's e-mail: barunik@fsv.cuni.cz.
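For readers unfamiliar with the modelling tool, the sketch below simulates a univariate Hawkes process with an exponential kernel via Ogata's thinning algorithm; the parameters are illustrative, not estimates from the thesis, and in the thesis setting the events would be order-book events whose self-excitation differs across fee regimes.

```python
# Univariate Hawkes process with exponential kernel, simulated by thinning.
import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta):
    """lambda(t) = mu + sum over past events of alpha * exp(-beta * (t - t_i))."""
    past = events[events < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

def simulate_hawkes(mu=0.5, alpha=0.8, beta=1.5, horizon=50.0, seed=0):
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < horizon:
        # upper bound on the intensity until the next event (intensity decays)
        lam_bar = hawkes_intensity(t, np.array(events), mu, alpha, beta) + alpha
        t += rng.exponential(1.0 / lam_bar)                     # candidate arrival
        accept_prob = hawkes_intensity(t, np.array(events), mu, alpha, beta) / lam_bar
        if rng.random() < accept_prob and t < horizon:          # thinning step
            events.append(t)
    return np.array(events)

arrivals = simulate_hawkes()
print(len(arrivals), "self-exciting events over the horizon")
```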
|
69 |
Deep Reinforcement Learning for the Optimization of Combining Raster Images in Forest Planning. Wen, Yangyang, January 2021 (has links)
Raster images represent the treatment options of how the forest will be cut. Economic benefits from cutting the forest will be generated after the treatment is selected and executed. Existing raster images contain many clusters of small size, which is the principal cause of overhead. If we can fully explore the relationship among the raster images and combine the old data sets according to the optimization algorithm to generate a new raster image, then this result will surpass the existing raster images and create higher economic benefits. The question of this project is whether we can create a dynamic model that treats the pixel being updated as an agent selecting options for an empty raster image in response to neighborhood environmental and landscape parameters. This project explores whether it is realistic to use deep reinforcement learning to generate new and superior raster images. Finally, this project aims to explore the feasibility, usefulness, and effectiveness of deep reinforcement learning algorithms in optimizing existing treatment options. The problem was modeled as a Markov decision process, in which the pixel to be updated was an agent of the empty raster image, which would determine the choice of the treatment option for the current empty pixel. This project used a Deep Q-learning neural network model to calculate the Q-values. The temporal difference reinforcement learning algorithm was applied to predict future rewards and to update model parameters. After the modeling was completed, a model usefulness experiment was set up to test the usefulness of the model. Then a parameter correlation experiment was set up to test the correlation between the parameters and the benefit of the model. Finally, the trained model was used to generate a larger raster image to test its effectiveness.
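The thesis itself uses a Deep Q-network; the toy sketch below replaces it with a tabular Q-function to show the temporal-difference update at the heart of the approach (the state encoding, treatment options, and economic-benefit reward are invented for illustration): the agent fills an empty raster pixel by pixel, the state summarises the already-filled neighbourhood, and the action is the treatment option.

```python
# Toy temporal-difference Q-learning over pixel treatment options (assumed
# state encoding and reward; a DQN would replace the table with a network).
import random
from collections import defaultdict

ACTIONS = [0, 1, 2]                  # hypothetical treatment options
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
Q = defaultdict(lambda: [0.0] * len(ACTIONS))

def neighbourhood_state(raster, i):
    """Coarse state: treatments already assigned to the previous two pixels."""
    return tuple(raster[max(0, i - 2):i])

def benefit(action, state):
    # stand-in for the economic benefit of a treatment given its neighbours;
    # here, matching the previous pixel (larger contiguous blocks) pays more
    return 1.0 if state and action == state[-1] else 0.2

def epsilon_greedy(state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[state][a])

random.seed(0)
for episode in range(500):
    raster = []
    for i in range(20):                                   # fill a 20-pixel strip
        s = neighbourhood_state(raster, i)
        a = epsilon_greedy(s)
        r = benefit(a, s)
        raster.append(a)
        s_next = neighbourhood_state(raster, i + 1)
        # temporal-difference update toward r + gamma * max_a' Q(s', a')
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s_next]) - Q[s][a])

# The learned greedy policy keeps extending the contiguous block of option 0.
print("greedy option after two pixels of option 0:",
      max(ACTIONS, key=lambda a: Q[(0, 0)][a]))
```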
|
70 |
Simulated Fixed-Wing Aircraft Attitude Control using Reinforcement Learning Methods. David Jona Richter (11820452), 20 December 2021 (has links)
Autonomous transportation is a research field that has gained huge interest in recent years, with autonomous electric or hydrogen cars coming ever closer to seeing everyday use. Cars are not the only subject of autonomy research, though; the field of aviation is also being explored for fully autonomous flight. One very important aspect of making autonomous flight a reality is attitude control, the control of roll, pitch, and sometimes yaw. Traditional approaches for automated attitude control use PID (proportional-integral-derivative) controllers, which rely on hand-tuned parameters to fulfill the task. In this work, however, the use of Reinforcement Learning algorithms for attitude control will be explored. With the surge of ever more powerful artificial neural networks, which have proven to be universally usable function approximators, Deep Reinforcement Learning also becomes an intriguing option.
A software toolkit will be developed and used to allow for the use of multiple flight simulators to train agents with Reinforcement Learning as well as Deep Reinforcement Learning. Experiments will be run using different hyperparameters, algorithms, state representations, and reward functions to explore possible options for autonomous attitude control using Reinforcement Learning.
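As a concrete example of the reward-function design space mentioned above, here is one plausible attitude-tracking reward (the weights and error shaping are assumptions, not taken from the thesis): it penalises roll and pitch error relative to the commanded attitude plus a small control-effort term.

```python
# Hedged example of an attitude-control reward (angles in radians).
import math

def attitude_reward(roll, pitch, roll_cmd, pitch_cmd, aileron, elevator,
                    w_att=1.0, w_ctrl=0.05):
    # wrap angle differences into [-pi, pi] before taking the magnitude
    roll_err = abs(math.remainder(roll - roll_cmd, 2 * math.pi))
    pitch_err = abs(math.remainder(pitch - pitch_cmd, 2 * math.pi))
    attitude_term = -(roll_err + pitch_err)                # tracking error
    effort_term = -(aileron ** 2 + elevator ** 2)          # control-effort penalty
    return w_att * attitude_term + w_ctrl * effort_term

# Tracking the commanded attitude closely scores higher than a large error.
print(attitude_reward(0.02, -0.01, 0.0, 0.0, aileron=0.1, elevator=0.05))
print(attitude_reward(0.60, 0.30, 0.0, 0.0, aileron=0.8, elevator=0.6))
```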
|