1

LEARNING OBJECTIVE FUNCTIONS FOR AUTONOMOUS SYSTEMS

Zihao Liang (18966976), 03 July 2024
In recent years, advances in robotics and computing power have enabled robots to master complex tasks. Nevertheless, merely executing tasks is not sufficient for robots: to achieve higher autonomy, learning the objective function is crucial. Autonomous systems can eliminate the need for explicit programming by learning the control objective and deriving their control policy from observations of task demonstrations. Hence, there is a need for methods that let robots learn the desired objective functions. In this thesis, we address several challenges in objective learning for autonomous systems, enhancing the applicability of our method in real-world scenarios. The ultimate goal of the thesis is a universal objective learning approach that addresses a range of existing challenges in the field while emphasizing data efficiency and robustness. Building on this intuition, we present a framework that allows autonomous systems to address a variety of objective learning tasks in real time, even in the presence of noisy data. Beyond objective learning, the framework can also handle a variety of other learning and control tasks.

The first part of this thesis concentrates on objective learning methods, specifically inverse optimal control (IOC). Within this domain, we make three contributions that address three existing challenges in IOC: 1) learning from minimal data, 2) learning without prior knowledge of system dynamics, and 3) learning from system outputs.

The second part of this thesis develops a unified IOC framework that addresses all the challenges mentioned above. It introduces a new paradigm for autonomous systems, referred to as Online Control-Informed Learning, which tackles a variety of learning and control tasks online with data efficiency and robustness to noisy data. By integrating optimal control theory, online state estimation techniques, and machine learning methods, the proposed paradigm offers an online learning framework capable of tackling a diverse array of learning and control tasks, including online imitation learning, online system identification, and policy tuning on the fly, with efficient use of data and computational resources while ensuring robust performance.
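To make the inverse-optimal-control setting concrete, here is a minimal, self-contained sketch of the generic bi-level IOC idea on a toy scalar LQR problem: recover the hidden cost-weight ratio whose optimal trajectory best reproduces a demonstration. The dynamics, weights, and grid-search procedure are illustrative assumptions for exposition, not the data-efficient methods developed in the thesis.

```python
# Illustrative bi-level inverse optimal control on a toy 1-D linear system.
# This is a generic sketch, NOT the thesis method: recover the cost-weight
# ratio q/r whose optimal trajectory best matches a demonstration.
import numpy as np

A, B, T = 1.0, 0.5, 20          # scalar dynamics x_{t+1} = A x_t + B u_t

def lqr_gains(q, r):
    """Finite-horizon LQR gains via backward Riccati recursion (scalar case)."""
    P, K = q, np.zeros(T)
    for t in reversed(range(T)):
        K[t] = (B * P * A) / (r + B * P * B)
        P = q + A * P * (A - B * K[t])
    return K

def rollout(K, x0=1.0):
    """Simulate the closed-loop system under time-varying gains K."""
    xs = [x0]
    for t in range(T):
        u = -K[t] * xs[-1]
        xs.append(A * xs[-1] + B * u)
    return np.array(xs)

# "Demonstration" produced by an expert with hidden weights q* = 4, r* = 1.
demo = rollout(lqr_gains(4.0, 1.0))

# IOC as weight search: pick the q/r ratio whose optimal rollout matches best.
candidates = np.linspace(0.5, 8.0, 76)
errors = [np.linalg.norm(rollout(lqr_gains(q, 1.0)) - demo) for q in candidates]
print("recovered q/r ratio:", candidates[int(np.argmin(errors))])  # ~4.0
```

The grid search stands in for the outer optimization of a bi-level IOC formulation; the thesis contributions concern doing this kind of recovery with minimal data, unknown dynamics, or output-only measurements.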
2

Control-Induced Learning for Autonomous Robots

Wanxin Jin (11013834), 23 July 2021
The recent progress of machine learning, driven by pervasive data and increasing computational power, has shown its potential to achieve higher robot autonomy. Yet, by focusing on generic models and data-driven paradigms while ignoring the inherent structures of control systems and tasks, existing machine learning methods typically suffer from data and computation inefficiency, hindering their deployment onto general real-world robots. In this thesis, we claim that the efficiency of autonomous robot learning can be boosted by two strategies. One is to incorporate the structures of optimal control theory into control-objective learning; this leads to a series of control-induced learning methods that enjoy the complementary benefits of machine learning for higher algorithm autonomy and of control theory for higher algorithm efficiency. The other is to integrate necessary human guidance into task and control-objective learning, leading to a series of paradigms for robot learning with minimal human guidance on the loop.

The first part of this thesis focuses on control-induced learning, where we make two contributions. The first is a set of new methods for inverse optimal control, which address three existing challenges in control-objective learning: learning from minimal data, learning time-varying objective functions, and learning under distributed settings. The second is a Pontryagin Differentiable Programming methodology, which bridges the concepts of optimal control theory, deep learning, and backpropagation, and provides a unified end-to-end learning framework for solving a broad range of learning and control tasks, including inverse reinforcement learning, neural ODEs, system identification, model-based reinforcement learning, and motion planning, with data- and computation-efficient performance.

The second part of this thesis focuses on paradigms for robot learning with necessary human guidance on the loop, where we make two contributions. The first is an approach for learning from sparse demonstrations, which allows a robot to learn its control objective function from only a few human-specified waypoints given in the observation (task) space. The second is an approach for learning from a human's directional corrections, which enables a robot to incrementally learn its control objective, with guaranteed learning convergence, from directional correction feedback while it is acting.
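As a toy illustration of the sparse-demonstrations idea (again on an assumed scalar LQR system, a stand-in rather than the thesis algorithm), one can select the cost weight whose optimal rollout passes closest to a handful of user-specified waypoints, with no full demonstration required:

```python
# Illustrative "learning from sparse waypoints": pick the cost weight whose
# optimal trajectory best visits a few user-given (time, state) waypoints.
# Toy sketch only; the waypoints, dynamics, and search are assumptions.
import numpy as np

A, B, T = 1.0, 0.5, 20

def lqr_gains(q, r=1.0):
    """Finite-horizon LQR gains via backward Riccati recursion (scalar case)."""
    P, K = q, np.zeros(T)
    for t in reversed(range(T)):
        K[t] = (B * P * A) / (r + B * P * B)
        P = q + A * P * (A - B * K[t])
    return K

def rollout(K, x0=1.0):
    xs = [x0]
    for _ in range(T):
        xs.append(A * xs[-1] - B * K[len(xs) - 1] * xs[-1])
    return np.array(xs)

# Sparse guidance: only "be near 0.4 at t=5 and near 0.1 at t=12" is given.
waypoints = {5: 0.4, 12: 0.1}

def waypoint_error(q):
    xs = rollout(lqr_gains(q))
    return sum((xs[t] - x_ref) ** 2 for t, x_ref in waypoints.items())

candidates = np.linspace(0.5, 8.0, 76)
best_q = min(candidates, key=waypoint_error)
print("weight that best explains the waypoints:", best_q)
```

The point of the sketch is only that a handful of observation-space waypoints can already constrain the objective; the thesis develops principled versions of this idea, plus incremental learning from directional corrections with convergence guarantees.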
