1. Multisensor integration for a robot / Purohit, Madhavi. January 1989
Thesis (M.S.)--Ohio University, June 1989.
2. The development of hierarchical knowledge in robot systems / Hart, Stephen W. 01 January 2009
This dissertation investigates two complementary ideas from the literature on machine learning and robotics, embodiment and intrinsic motivation, and combines them into a unified framework for skill learning and knowledge acquisition. "Embodied" systems make use of structure derived directly from sensory and motor configurations for learning behavior. Intrinsically motivated systems learn by searching for native, hedonic value through interaction with the world. Psychological theories of intrinsic motivation suggest that there exist internal drives favoring open-ended cognitive development and exploration. I argue that intrinsically motivated, embodied systems can learn generalizable skills, acquire control knowledge, and form an epistemological understanding of the world in terms of behavioral affordances. I propose that the development of behavior results from the assembly of an agent's sensory and motor resources into state and action spaces that can be explored autonomously. I introduce an intrinsic reward function that can lead to the open-ended learning of hierarchical behavior. This behavior is factored into declarative "recipes" for patterned activity and common-sense procedural strategies for implementing them in a variety of run-time contexts. These skills form a categorical basis for the robot to interpret and model its world in terms of the behavior it affords. Experiments conducted on a bimanual robot illustrate a progression of cumulative manipulation behavior addressing manual and visual skills. Such long-term accumulation of skill by a single robot is a novel contribution that has yet to be demonstrated in the literature.
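The abstract leaves the form of the intrinsic reward unspecified. As a hedged illustration only (a generic competence-progress reward, not the dissertation's actual function), intrinsic value is often operationalized as improvement in the agent's ability to predict the outcomes of its own actions:

```python
# Hypothetical sketch, NOT the reward function from the dissertation:
# a generic competence-progress intrinsic reward, where the agent is
# rewarded for a decrease in its outcome-prediction error over time.
import numpy as np

class CompetenceProgressReward:
    def __init__(self, window=10):
        self.errors = []          # recent outcome-prediction errors
        self.window = window

    def update(self, predicted_outcome, observed_outcome):
        self.errors.append(float(np.linalg.norm(
            np.asarray(predicted_outcome) - np.asarray(observed_outcome))))

    def reward(self):
        # Reward = drop in mean prediction error between the older and
        # newer halves of a sliding window (i.e., learning progress).
        if len(self.errors) < 2 * self.window:
            return 0.0
        old = np.mean(self.errors[-2 * self.window:-self.window])
        new = np.mean(self.errors[-self.window:])
        return max(0.0, old - new)
```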
3. WPCA: The Wreath Product Cognitive Architecture / Joshi, Anshul. 11 February 2017
We propose a representation that combines action and perception signals: instead of a purely geometric representation of the perceptual data, we include the motor actions (e.g., aiming a camera at an object) that generate the particular shape. This generative perception-action representation builds on Leyton's cognitive representation based on wreath products. The wreath product is a special kind of group that captures information through symmetries in the sensorimotor data. The key insight is the bundling of actuation and perception data together in order to capture the cognitive structure of interactions with the world. This involves developing algorithms and methods: (1) to perform symmetry detection and parsing, (2) to represent and characterize uncertainties in the data and representations, and (3) to provide an overall cognitive architecture for a robot agent. We demonstrate these functions in 2D text classification, as well as on 3D data, on a real robot operating according to a well-defined experimental protocol for benchmarking indoor navigation, along with capabilities for multirobot communication and knowledge sharing. A cognitive architecture called the Wreath Product Cognitive Architecture is developed to support this approach.
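For readers without the algebra background, a brief reminder may help. These are standard definitions, and the square example follows Leyton's usual illustration; the thesis's own constructions may differ in detail:

```latex
% Background only: the wreath product of a group G by a permutation
% group H acting on n points is the semidirect product of n copies of
% G with H permuting the copies (requires amsmath):
\[
  G \wr H \;=\; \underbrace{(G \times \cdots \times G)}_{n \text{ copies}}
  \rtimes H, \qquad H \le S_n .
\]
% Example in Leyton's style: a square can be generated as one side
% (a one-dimensional translation group) copied around by the
% four-fold rotation group:
\[
  \text{square} \;\cong\; \mathbb{R} \wr \mathbb{Z}_4 .
\]
```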
4. Supportive Behaviors for Human-Robot Teaming / Hayes, Bradley. 17 September 2016
While robotics has made considerable strides toward more robust and adaptive manipulation, perception, and planning, robots in the near future are unlikely to be as dexterous, competent, and versatile as human workers. Rather than try to create fully autonomous systems that accomplish tasks independently, a more practical approach is to construct robots that work alongside people. This allows human and robot workers to concentrate on the tasks for which they are each best suited, while providing the capability to assist each other during tasks that one worker cannot complete independently in a safe or maximally proficient manner. Advances in human-robot teaming have the potential to extend applications of autonomous robots well beyond their current, limited roles in factory automation settings. Much of modern robotics remains inapplicable in domains where tasks are too complex, beyond modern hardware limitations, too sensitive for non-human completion, or too flexible for static automation practices. In these situations, human-robot teaming can be leveraged to improve the efficiency, quality of life, and safety of human partners.

In this thesis, I describe algorithms for creating collaborative robots that provide assistance when useful, remove dull or undesirable responsibilities when possible, and assist with dangerous tasks when feasible. In doing so, I present a novel method for autonomously constructing hierarchical task networks that factor complex tasks in ways that make them approachable by modern planning and coordination algorithms. Within these complex cooperative tasks, I focus on facilitating collaboration between a lead worker and a robotic assistant in a shared space, defining and investigating a class of actions I term supportive behaviors: actions that serve to reduce the cognitive or kinematic complexity of tasks for teammates. The majority of contributions in this work center on discovering, learning, and executing these types of behaviors in multi-agent domains with asymmetric authority. I examine supportive behavior learning and execution from the perspective of task and motion planning, as well as that of learning directly from interactions with humans. These algorithms give a collaborative robot the capability to anticipate the needs of a human teammate and proactively offer help as needed or desired. This work enables the creation of robots that provide tools just-in-time, robots that alter workspaces to make efficient task orderings more obvious and more feasible, and robots that recognize when a user is delayed in a complex task and offer assistance.

Combining these algorithms provides a basis for a robot with both a capacity for rich task comprehension and a theory of mind about its collaborators, enabling such a robot to leverage the knowledge it acquires to transition among the roles of learner, able assistant, and informative instructor during interactions with teammates.
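To make the hierarchical-task-network idea concrete, here is a minimal sketch of recursive task decomposition; all names are illustrative assumptions, not the thesis's autonomously constructed networks:

```python
# Illustrative sketch of a hierarchical task network (HTN) node; the
# names and structure are assumptions, not the thesis's implementation.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Task:
    name: str
    subtasks: List["Task"] = field(default_factory=list)  # empty => primitive
    execute: Callable[[], None] = lambda: None            # primitive action

def flatten(task: Task) -> List[Task]:
    """Depth-first expansion of a task network into primitive actions."""
    if not task.subtasks:
        return [task]
    plan: List[Task] = []
    for sub in task.subtasks:
        plan.extend(flatten(sub))
    return plan

# Usage: a hypothetical 'prepare workpiece' task factored into
# primitives that a planner could allocate between a human lead and a
# robot assistant.
prepare = Task("prepare_workpiece",
               subtasks=[Task("fetch_tool"), Task("clamp_part")])
print([t.name for t in flatten(prepare)])  # ['fetch_tool', 'clamp_part']
```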
5. Interactive Learning for Sequential Decisions and Predictions / Ross, Stephane. 11 December 2013
Sequential prediction problems arise commonly in many areas of robotics and information processing: e.g., predicting a sequence of actions over time to achieve a goal in a control task, interpreting an image through a sequence of local image patch classifications, or translating speech to text through an iterative decoding procedure.

Learning predictors that can reliably perform such sequential tasks is challenging. Specifically, as predictions influence future inputs in the sequence, the data-generation process and the executed predictor are inextricably intertwined. This can lead to a significant mismatch between the distribution of examples observed during training (induced by the predictor used to generate training instances) and during test executions (induced by the learned predictor). As a result, naively applying standard supervised learning methods, which assume independently and identically distributed training and test examples, often leads to poor test performance and compounding errors: inaccurate predictions lead to untrained situations where more errors are inevitable.

This thesis proposes general iterative learning procedures that leverage interactions between the learner and teacher to provably learn good predictors for sequential prediction tasks. Through repeated interactions, our approaches can efficiently learn predictors that are robust to their own errors and predict accurately during test executions. Our main approach uses existing no-regret online learning methods to provide strong generalization guarantees on test performance.

We demonstrate how to apply our main approach in various sequential prediction settings: imitation learning, model-free reinforcement learning, system identification, structured prediction, and submodular list prediction. Its efficiency and wide applicability are exhibited over a large variety of challenging learning tasks, ranging from learning video game playing agents from human players and learning accurate dynamic models of a simulated helicopter for controller synthesis, to learning predictors for scene understanding in computer vision, news recommendation, and document summarization. We also demonstrate the applicability of our technique on a real robot, using pilot demonstrations to train an autonomous quadrotor to avoid trees seen through its onboard (monocular) camera when flying at low altitude in natural forest environments.

Our results throughout show that, unlike typical supervised learning tasks where examples of good behavior are sufficient to learn good predictors, interaction is a fundamental part of learning in sequential tasks. We show formally that some level of interaction is necessary: without it, no learning algorithm can guarantee good performance in general.
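The best-known of these no-regret interactive procedures is Ross's DAgger (Dataset Aggregation). A minimal sketch of its loop follows, with a generic scikit-learn classifier and a stubbed environment interface standing in for a concrete task (both are assumptions for illustration):

```python
# Sketch of a DAgger-style interactive imitation learning loop. The
# 'env' interface (reset() -> state, step(a) -> (state, done)) and the
# choice of classifier are assumptions; the aggregation is the point.
import numpy as np
from sklearn.linear_model import LogisticRegression

def dagger(env, expert_action, iterations=10, horizon=100):
    states, actions = [], []          # aggregated dataset D
    policy = None
    for _ in range(iterations):
        s = env.reset()
        for _ in range(horizon):
            # Query the expert in every state the current policy visits,
            # so the policy's own mistakes generate corrective data.
            states.append(s)
            actions.append(expert_action(s))
            if policy is None:
                a = expert_action(s)  # first pass: follow the expert
            else:
                a = policy.predict(np.asarray(s).reshape(1, -1))[0]
            s, done = env.step(a)
            if done:
                break
        # Retrain on ALL data gathered so far; running a no-regret
        # learner on the aggregate yields the generalization guarantees.
        policy = LogisticRegression(max_iter=1000).fit(
            np.asarray(states), np.asarray(actions))
    return policy
```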
6. Aspects of behavior design for learning by demonstration / Olenderski, Adam P. January 2007
Thesis (M.S.)--University of Nevada, Reno, 2007. Includes bibliographical references (leaves 88-90).
7. Machine Learning Applications to Robot Control / Abdul-hadi, Omar. 11 September 2018
Control of robot manipulators can be greatly improved with velocity and torque feedforward control. However, the effectiveness of feedforward control relies heavily on the accuracy of the model. In this study, kinematic and dynamic analyses are performed on a six-axis arm, a Delta2 robot, and a Delta3 robot. Velocity feedforward terms are computed in the traditional way, from the kinematic velocity solution, but a neural network is used to model the torque feedforward equations. For each mechanism, we first solve the forward and inverse kinematics transformations and then derive a dynamic model. Rather than identifying the parameters of the dynamic model in the traditional way, the model is used to infer dependencies between the input and output variables for neural-network torque estimation. The network is trained with joint positions, velocities, and accelerations as inputs and joint torques as outputs; after training, it estimates the feedforward torque effort. Additionally, we investigate the use of neural networks for deriving the inverse kinematics solution of a six-axis arm. Although the neural network demonstrated an outstanding ability to model complex mathematical equations, the inverse kinematics solution was not accurate enough for practical use.
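A hedged sketch of the torque-feedforward idea, with a small off-the-shelf MLP and placeholder data standing in for the study's actual architecture and measurements:

```python
# Hedged sketch: a small MLP mapping (q, qdot, qddot) -> joint torques
# as a learned feedforward model; the data, shapes, and regressor
# choice are assumptions, not the study's actual setup.
import numpy as np
from sklearn.neural_network import MLPRegressor

n_joints = 6
# X: [positions | velocities | accelerations], y: measured joint torques
X = np.random.randn(5000, 3 * n_joints)   # placeholder training data
y = np.random.randn(5000, n_joints)       # placeholder torque labels

model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500)
model.fit(X, y)

def feedforward_torque(q, qdot, qddot):
    """Estimate feedforward torque for one setpoint along a trajectory."""
    x = np.concatenate([q, qdot, qddot]).reshape(1, -1)
    return model.predict(x)[0]
```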
8. Learning to Learn with Gradients / Finn, Chelsea B. 21 November 2018
Humans have a remarkable ability to learn new concepts from only a few examples and to quickly adapt to unforeseen circumstances. To do so, they build on prior experience and prepare for the ability to adapt, allowing the combination of previous observations with small amounts of new evidence for fast learning. In most machine learning systems, however, there are distinct train and test phases: training consists of updating the model using data, and at test time the model is deployed as a rigid decision-making engine. In this thesis, we discuss gradient-based algorithms for learning to learn, or meta-learning, which aim to endow machines with flexibility akin to that of humans. Instead of deploying a fixed, non-adaptable system, these meta-learning techniques explicitly train for the ability to adapt quickly, so that at test time they can learn rapidly when faced with new scenarios.

To study the problem of learning to learn, we first develop a clear and formal definition of the meta-learning problem, its terminology, and desirable properties of meta-learning algorithms. Building on these foundations, we present a class of model-agnostic meta-learning methods that embed gradient-based optimization into the learner. Unlike prior approaches to learning to learn, this class of methods focuses on acquiring a transferable representation rather than a good learning rule. As a result, these methods inherit a number of desirable properties from using a fixed optimization as the learning rule, while still maintaining full expressivity, since the learned representations can control the update rule.

We show how these methods can be extended to applications in motor control by combining elements of meta-learning with techniques for deep model-based reinforcement learning, imitation learning, and inverse reinforcement learning. By doing so, we build simulated agents that can adapt in dynamic environments, enable real robots to learn to manipulate new objects by watching a video of a human, and allow humans to convey goals to robots with only a few images. Finally, we conclude by discussing open questions and future directions in meta-learning, aiming to identify the key shortcomings and limiting assumptions of our existing approaches.
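The central algorithm of this line of work, model-agnostic meta-learning (MAML), can be sketched on a toy problem. The sketch below uses the first-order approximation so plain NumPy suffices; the full method also differentiates through the inner update:

```python
# Hedged sketch of MAML on toy 1-D linear regression tasks, using the
# first-order approximation (FOMAML). Task family and learning rates
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    slope = rng.uniform(-2, 2)                  # each task: y = slope * x
    x = rng.uniform(-1, 1, size=20)
    return x, slope * x

def grad(w, x, y):
    return np.mean(2 * (w * x - y) * x)         # d/dw of squared error

w = 0.0                      # meta-parameter: the shared initialization
alpha, beta = 0.1, 0.01      # inner and outer learning rates
for step in range(2000):
    x, y = sample_task()
    w_task = w - alpha * grad(w, x, y)          # inner loop: one adaptation step
    # Outer loop: move the initialization so that post-adaptation loss
    # is low (first-order: gradient evaluated at the adapted weights).
    w -= beta * grad(w_task, x, y)
```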
9. Automatic Snooker-Playing Robot with Speech Recognition Using Deep Learning / Bhagat, Kunj H. 13 December 2018
Research on recognition tasks such as image and speech recognition is rapidly shifting focus from statistical methods to neural networks. In this study, we combine speech recognition with computer vision to allow a robot to play snooker entirely by itself. The color of the ball to be pocketed is provided as audio input through a device such as a microphone, and the system recognizes the color using a trained deep learning network. The system then directs the camera to locate the ball of the identified color on the snooker table using computer vision and predicts the best shot to pocket the target ball using an algorithm. The accuracy of the whole activity depends on the quality of the trained deep learning model.
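A hedged sketch of the described pipeline; every function name below is an assumption standing in for the paper's trained models and robot interface:

```python
# Illustrative pipeline sketch only; the components are hypothetical
# stand-ins for the paper's trained networks and hardware drivers.
def play_voice_commanded_shot(audio, speech_model, detect_ball, plan_shot, robot):
    color = speech_model.predict(audio)   # deep net: audio -> ball color
    ball = detect_ball(color)             # computer vision: locate ball
    shot = plan_shot(ball)                # shot-selection algorithm
    robot.execute(shot)                   # actuate the cue mechanism
```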
10. Interactive perception of articulated objects for autonomous manipulation / Katz, Dov. 01 January 2011
This thesis develops robotic skills for manipulating novel articulated objects. The degrees of freedom of an articulated object describe the relationship among its rigid bodies and are often relevant to the object's intended function. Examples of everyday articulated objects include scissors, pliers, doors, door handles, books, and drawers. Autonomous manipulation of articulated objects is therefore a prerequisite for many robotic applications in our everyday environments.

Already today, robots perform complex manipulation tasks with impressive accuracy and speed in controlled environments such as factory floors. An important characteristic of these environments is that they can be engineered to reduce or even eliminate perception. In contrast, in unstructured environments such as our homes and offices, perception is typically much more challenging; indeed, manipulation in these environments remains largely unsolved. We therefore assume that to enable autonomous manipulation in everyday environments, robots must be able to acquire information about the objects they encounter, making as few assumptions about the environment as possible.

Acquiring information about the world from sensor data is a challenging problem. Because so much could be measured about the environment, considering all of it is impractical at current computational speeds. Instead, we propose to leverage our understanding of the task to determine the relevant information: in our case, the object's shape and kinematic structure. Perceiving even this task-specific information is challenging, because understanding the object's degrees of freedom requires observing relative motion between its rigid bodies, and such motion is not guaranteed to appear in the sensor stream.

The main contribution of this thesis is the design and implementation of a robotic system capable of perceiving and manipulating articulated objects. The system relies on Interactive Perception, an approach that exploits the synergies that arise when crossing the boundary between action and perception; the emphasis of perception shifts from object appearance to object function. To enable the perception and manipulation of articulated objects, this thesis develops algorithms for perceiving the kinematic structure and shape of objects. The resulting perceptual capabilities are used within a relational reinforcement learning framework, enabling a robot to obtain general domain knowledge for manipulation. This composition enables our robot to reliably and efficiently manipulate novel articulated objects. To verify the effectiveness of the proposed system, simulated and real-world experiments were conducted with a variety of everyday objects.
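One simplified way to realize the kinematic-structure step (a sketch, not the thesis's actual graph-based algorithm): after the robot pushes an object, cluster tracked image features by pairwise rigidity, so that features whose mutual distances stay constant belong to the same rigid body, and pairs whose distances change suggest a joint between bodies:

```python
# Hedged sketch of the interactive-perception idea: group feature
# trajectories into rigid bodies via a union-find over pairwise
# rigidity; the tolerance and data layout are assumptions.
import numpy as np

def rigid_body_clusters(tracks, tol=5.0):
    """tracks: (n_features, n_frames, 2) pixel trajectories."""
    n = tracks.shape[0]
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(tracks[i] - tracks[j], axis=1)
            if d.max() - d.min() < tol:     # ~constant distance => same body
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```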