31. Coordination of Multiple Dynamic Programming Policies for Control of Bipedal Walking
Whitman, Eric C., 01 September 2013
Walking is a core task for humanoid robots. Most existing walking controllers fall into one of two categories. Controllers in the first category plan ahead and walk precisely; they can place the feet in desired locations to avoid obstacles but react poorly to unexpected disturbances. Controllers in the second category are more reactive; they can respond to unexpected disturbances but cannot place the feet in specific locations. In this thesis, we present a walking controller that has many of the strengths of each category: it can place the feet to avoid obstacles as well as respond successfully to unexpected disturbances.
Dynamic programming is a powerful algorithm that generates policies for a large region of state space, but the “Curse of Dimensionality” limits it to low-dimensional state spaces. We extend dynamic programming to higher dimensions by introducing a framework for optimally coordinating multiple low-dimensional policies to form a policy for a single higher-dimensional system. This framework applies to a class of systems, which we call Instantaneously Coupled Systems, where the full dynamics can be broken into multiple subsystems that interact only at specific instants. The subsystems are augmented by coordination variables, then solved individually. The augmented systems can then be coordinated optimally by using the value functions to manage tradeoffs of the coordination variables.
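As an illustration of the coordination step, here is a minimal Python sketch (not the thesis's implementation): two subsystems have precomputed value functions tabulated over their own state and a shared coordination variable, and the coordinator picks the value of that variable minimizing the summed cost-to-go. All names and shapes are assumptions for the example.

```python
import numpy as np

def coordinate(V1, V2, s1_idx, s2_idx, c_grid):
    """Pick the coordination variable (e.g., the time of the next
    footstep) that is optimal for the combined system. V1 and V2 are
    hypothetical value-function tables of shape (n_states, n_c),
    indexed by each subsystem's current state and the candidate
    coordination value; because the subsystems interact only through
    the coordination variable, their costs-to-go simply add."""
    total = V1[s1_idx, :] + V2[s2_idx, :]
    best = int(np.argmin(total))
    return c_grid[best], total[best]
```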
We apply this framework to walking on both the Sarcos hydraulic humanoid robot and a simulation of it. We use the framework to control the linear inverted pendulum model, a commonly used simple model of walking. We then use inverse dynamics to generate joint torques based on the desired simple model behavior, which are then applied directly to either the simulation or the Sarcos robot. We discuss the differences between the hardware and the simulation as well as the controller modifications necessary to cope with them, including higher order policies and the inclusion of inverse kinematics.
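For reference, the simple model mentioned above has well-known dynamics: in the linear inverted pendulum model (LIPM), the center of mass (CoM) is held at constant height and its horizontal acceleration is proportional to its offset from the center of pressure. A small illustrative integrator (parameter values are assumptions):

```python
G, Z0 = 9.81, 0.85  # gravity [m/s^2]; assumed constant CoM height [m]

def lipm_step(x, xd, p, dt):
    """One Euler step of the LIPM: xdd = (g / z0) * (x - p),
    where x is the CoM position and p the center of pressure."""
    xdd = (G / Z0) * (x - p)
    return x + xd * dt, xd + xdd * dt
```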
Our controller produces stable walking at up to 1.05 m/s in simulation and at up to 0.22 m/s on the Sarcos robot. We also demonstrate the robustness of this method to disturbances with experiments including pushes (both impulsive and continuous), trips, ground elevation changes, slopes, regions where it is prohibited from stepping, and other obstacles.
32. Inference Machines: Parsing Scenes via Iterated Predictions
Munoz, Daniel, 06 June 2013
Extracting a rich representation of an environment from visual sensor readings can benefit many tasks in robotics, e.g., path planning, mapping, and object manipulation. While important progress has been made, it remains a difficult problem to effectively parse entire scenes, i.e., to recognize semantic objects, man-made structures, and landforms. This process requires not only recognizing individual entities but also understanding the contextual relations among them.
The prevalent approach to encode such relationships is to use a joint probabilistic or energy-based model which enables one to naturally write down these interactions. Unfortunately, performing exact inference over these expressive models is often intractable and instead we can only approximate the solutions. While there exists a set of sophisticated approximate inference techniques to choose from, the combination of learning and approximate inference for these expressive models is still poorly understood in theory and limited in practice. Furthermore, using approximate inference on any learned model often leads to suboptimal predictions due to the inherent approximations.
As we ultimately care about predicting the correct labeling of a scene, and not necessarily learning a joint model of the data, this work proposes to instead view the approximate inference process as a modular procedure that is directly trained in order to produce a correct labeling of the scene. Inspired by early hierarchical models in the computer vision literature for scene parsing, the proposed inference procedure is structured to incorporate both feature descriptors and contextual cues computed at multiple resolutions within the scene. We demonstrate that this inference machine framework for parsing scenes via iterated predictions offers the best of both worlds: state-of-the-art classification accuracy and computational efficiency when processing images and/or unorganized 3-D point clouds. Additionally, we address critical problems that arise in practice when parsing scenes on board real-world systems: integrating data from multiple sensor modalities and efficiently processing data that is continuously streaming from the sensors.
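A minimal sketch of the iterated-prediction idea (a simplification of the thesis's procedure, using an off-the-shelf classifier rather than its actual learner): each stage is trained on raw descriptors concatenated with contextual features built from the previous stage's predictions over each point's neighbors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_inference_machine(X, y, neighbors, n_classes, n_stages=3):
    """X: (n, d) feature descriptors; neighbors[i]: non-empty index
    array of the spatial neighbors of point i. Predictions start
    uniform and are refined stage by stage via contextual features."""
    probs = np.full((len(X), n_classes), 1.0 / n_classes)
    stages = []
    for _ in range(n_stages):
        # Context feature: mean predicted class distribution of neighbors.
        context = np.stack([probs[nb].mean(axis=0) for nb in neighbors])
        Xc = np.hstack([X, context])
        clf = LogisticRegression(max_iter=200).fit(Xc, y)
        probs = clf.predict_proba(Xc)  # feed forward to the next stage
        stages.append(clf)
    return stages
```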
33. Representation, Planning, and Learning of Dynamic Ad Hoc Robot Teams
Liemhetcharat, Somchaya, 01 August 2013
Forming an effective multi-robot team to perform a task is a key problem in many domains. The performance of a multi-robot team depends on the robots the team is composed of, where each robot has different capabilities. Team performance has previously been modeled as the sum of single-robot capabilities, and these capabilities are assumed to be known.
Is team performance just the sum of single-robot capabilities? This thesis is motivated by instances where agents perform differently depending on their teammates, i.e., there is synergy in the team. For example, in human sports, a well-trained team performs better than an all-stars team composed of top players from around the world. This thesis introduces a novel model of team synergy — the Synergy Graph model — where the performance of a team depends on each robot’s individual capabilities and a task-based relationship among them.
Robots are capable of learning to collaborate and improving team performance over time, and this thesis explores how such robots are represented in the Synergy Graph Model. This thesis contributes a novel algorithm that allocates training instances for the robots to improve, so as to form an effective multi-robot team.
The goal of team formation is the optimal selection of a subset of robots to perform the task, and this thesis contributes team formation algorithms that use a Synergy Graph to form an effective multi-robot team with high performance. In particular, the performance of a team is modeled with a Normal distribution to represent the nondeterminism of the robots’ actions in a dynamic world, and this thesis introduces the concept of a δ-optimal team that trades off risk versus reward. Further, robots may fail from time to time, and this thesis considers the formation of a robust multi-robot team that attains high performance even if failures occur. This thesis considers ad hoc teams, where the robots of the team have not collaborated together, and so their capabilities and synergy are initially unknown.
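One plausible reading of the δ-optimal criterion, as a hedged sketch (the performance model and the exact definition are simplified here): score each candidate team by the (1 - δ)-quantile of its Normal performance distribution, so that higher δ is more risk-averse.

```python
from itertools import combinations
from scipy.stats import norm

def delta_optimal_team(robots, k, perf_model, delta=0.9):
    """perf_model(team) -> (mean, std), assumed to come from a learned
    Synergy Graph. Returns the k-robot team whose performance exceeds
    the returned value with probability at least delta."""
    best_team, best_val = None, float("-inf")
    for team in combinations(robots, k):
        mu, sigma = perf_model(team)
        # Value achieved with probability >= delta (risk-adjusted score).
        val = norm.ppf(1.0 - delta, loc=mu, scale=sigma)
        if val > best_val:
            best_team, best_val = team, val
    return best_team, best_val
```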
This thesis contributes a novel learning algorithm that uses observations of team performance to learn a Synergy Graph that models the capabilities and synergy of the team. Further, new robots may become available, and this thesis introduces an algorithm that iteratively updates a Synergy Graph with new robots.
34. Soft Inflatable Robots for Safe Physical Human Interaction
Sanan, Siddharth, 01 August 2013
Robots that can operate in human environments in a safe and robust manner would be of great benefit to society, due to their immense potential for providing assistance to humans. However, robots have seen limited application outside of the industrial setting in environments such as homes and hospitals.
We believe a very important factor preventing the crossover of robotic technology from the factory to the house is the issue of safety. The safety issue is usually bypassed in the industrial setting by separating human and robot workspaces. Such a solution is clearly infeasible for robots that provide assistance to humans. This thesis aims to develop intrinsically safe robots that are suitable for providing assistance to humans. We believe intrinsic safety is important in physical human-robot interaction because unintended interactions will occur between humans and robots due to: (a) sharing of workspace, (b) hardware failure (computer crashes, actuator failures), (c) limitations on perception, and (d) limitations on cognition. When such unintended interactions are very fast (collisions), they are beyond the bandwidth limits of practical controllers, and only the intrinsic safety characteristics of the system govern the interaction forces that occur. The effects of such interactions with traditional robots could range from persistent discomfort to bone fractures to even more serious injuries. Therefore, robots that serve in the application domain of human assistance should be able to function with a high tolerance for unintended interactions. This calls for a new design paradigm where operational safety is the primary concern and task accuracy/precision, though important, are secondary.
In this thesis, we address this new design paradigm by developing robots that have a soft inflatable structure, i.e., inflatable robots. Inflatable robots can improve intrinsic safety characteristics by being extremely lightweight and by including surface compliance (due to the compressibility of air) as well as distributed structural compliance (due to the lower Young’s modulus of the materials used) in the structure. This results in a lower effective inertia during collisions, which implies a lower impact force between the inflatable robot and the human. Inflatable robots can essentially be manufactured like clothes and can therefore also potentially lower the cost of robots to an extent where personal robots can be an affordable reality.
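The inertia argument can be made concrete with a textbook contact model (our illustration, not the thesis's analysis): for an undamped spring contact, the kinetic energy 0.5·m·v² is stored as 0.5·k·x², so the peak force is v·sqrt(k·m); reducing either the effective inertia or the surface stiffness reduces the peak force.

```python
import math

def peak_impact_force(m_eff, k_contact, v):
    """Peak force of an undamped spring contact at impact speed v."""
    return v * math.sqrt(k_contact * m_eff)

# Illustrative (assumed) numbers: a rigid 10 kg link with a stiff surface
# vs. a 0.5 kg inflatable link with a compliant skin, both at 1 m/s:
# peak_impact_force(10.0, 1e4, 1.0) ≈ 316 N
# peak_impact_force(0.5, 1e3, 1.0) ≈ 22 N
```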
In this thesis, we present a number of inflatable robot prototypes to address challenges in the area of design and control of such systems. Specific areas addressed are: structural and joint design, payload capacity, pneumatic actuation, state estimation and control. The CMU inflatable arm is used in tasks like wiping and feeding a human to successfully demonstrate the use of inflatable robots for tasks involving close physical human interaction.
35. Rapid prototyping of robotic systems
Smuda, William James, January 2007
Dissertation (Ph.D. in Software Engineering)--Naval Postgraduate School, June 2007. / Dissertation Advisor(s): Mikhail Auguston. "June 2007." Title from title page of PDF document (viewed on: Mar 21, 2008). Includes bibliographical references (p. 221-226).
36. A robotics testbed: the design & implementation with applications
Riggs, Travis Alan, January 2006
Thesis (M.Eng.)--University of Louisville, 2006. / Title and description from thesis home page (viewed Dec. 22, 2006). Department of Electrical Engineering. Vita. "December 2006." Includes bibliographical references (p. 173-174).
37. Rapid prototyping of robotic systems
Smuda, William James, January 2007
Dissertation (Ph.D. in Software Engineering)--Naval Postgraduate School, June 2007. / Dissertation Advisor(s): Mikhail Auguston. "June 2007." Includes bibliographical references (p. 221-226). Also available in print.
38. Using Simplified Models and Limited-Horizon Planning to React to Time-Critical Problems
Lurz, Joshua Paul, 20 December 2018
A longstanding goal of robotics is to use robots to perform tasks that require physically interacting with humans. These tasks often require robots to physically manipulate humans, for example in the task of guiding an elderly person and preventing them from falling. This particular task is of significant importance due to the prevalence of falls and the growing need for elderly care as the elderly cohort expands in many developed countries. At present, robots have very limited capabilities to support these types of tasks. Current planning approaches are challenged by the intrinsic features of these problems: the control policies of the dynamic agent are unknown, the state information is incomplete, and a rapid reaction time is required.

This thesis describes an approach to solving these challenges by using simplified models of the dynamic agents and environments that are reasonably accurate over brief time frames. It couples these models with limited-horizon planning. My approach allows for rapid updates of execution plans, which are required due to the short time horizons over which the plans are accurate. This dissertation validates my approach using a series of tasks that require robots to interact with dynamic agents, including a simulation of catching a falling human.
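The strategy described above follows a receding-horizon pattern; a schematic sketch (the callables and the horizon value are placeholders, not the thesis's interfaces):

```python
def limited_horizon_loop(observe, fit_simple_model, plan, act, horizon=0.5):
    """Repeatedly refresh a simplified model that is assumed accurate
    only over a short window, plan against it over that window,
    execute the first action, and replan from the new observation."""
    while True:
        state = observe()
        model = fit_simple_model(state)      # valid only near `state`
        actions = plan(model, state, horizon)
        act(actions[0])                      # rapid reaction, then replan
```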
39. A control architecture and human interface for agile, reconfigurable micro aerial vehicle formations
Zhou, Dingjiang, 10 March 2017
This thesis considers the problem of controlling a group of micro aerial vehicles for agile, cooperative or distributed maneuvering. We first introduce the background and motivation for micro aerial vehicles, especially the popular multi-rotor platform. Then we discuss the dynamics of quadrotor helicopters. A quadrotor is a multi-rotor aerial vehicle with a special property called differential flatness, which simplifies trajectory planning: instead of planning a trajectory in a 12-dimensional state space and 4-dimensional input space, we only need to plan in the 4-dimensional flat output space, while the 12-dimensional state and 4-dimensional input can be recovered through a mapping called the endogenous transformation.
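Part of this endogenous transformation can be sketched directly from the well-known quadrotor flatness map: the flat-output acceleration and yaw determine the collective thrust and desired attitude (a minimal version; the full mapping also recovers angular rates and inputs from higher derivatives).

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, 9.81])

def flat_to_thrust_attitude(acc, yaw, mass=1.0):
    """Recover collective thrust and desired rotation matrix from the
    flat-output acceleration and yaw (mass is an assumed parameter)."""
    f = mass * (acc + GRAVITY)        # required force in the world frame
    thrust = np.linalg.norm(f)
    z_b = f / thrust                  # desired body z-axis
    x_c = np.array([np.cos(yaw), np.sin(yaw), 0.0])
    y_b = np.cross(z_b, x_c)
    y_b /= np.linalg.norm(y_b)
    x_b = np.cross(y_b, z_b)
    return thrust, np.column_stack([x_b, y_b, z_b])
```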
We propose a series of approaches to achieve agile maneuvering of a dynamic quadrotor formation: from controlling a single quadrotor in an artificial vector field, to controlling a group of quadrotors in a Virtual Rigid Body (VRB) framework, to balancing human control against autonomous collision avoidance, and finally to fast on-line distributed collision avoidance with Buffered Voronoi Cells (BVCs).
In the vector field method, we generate velocity, acceleration, jerk and snap fields, depending on the tasks, or the positions of obstacles, such that a single quadrotor can easily find its required state and input from the endogenous transformation in order to track the artificial vector field.
Next, with a Virtual Rigid Body framework, we let a group of quadrotors follow a single control command while also keeping a required formation, or even reconfigure from one formation to another. The Virtual Rigid Body framework decouples the trajectory planning problem into two sub-problems.
Then we consider the problem of collision avoidance for the quadrotor formation while it is teleoperated by a single human operator. The collision avoidance autonomy, based on the vector field methods for a single quadrotor, is an assistive portion of the formation controller, so that the human operator can focus on high-level tasks while the low-level collision avoidance task is handled automatically.
We also consider the full autonomy problem of quadrotor formations reconfiguring from one formation to another by developing a fast, on-line distributed collision avoidance algorithm using Buffered Voronoi Cells (BVCs). Our BVC-based collision avoidance algorithm requires only sensed relative positions, rather than relative positions and velocities, while its computational complexity is comparable to other methods such as velocity obstacles.
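A hedged sketch of the BVC constraint (a simplified heuristic, not the thesis's full algorithm): each neighbor contributes a Voronoi halfplane retracted by the safety radius, and the robot's goal point is clipped into the resulting cell using only relative positions.

```python
import numpy as np

def clip_goal_to_bvc(p_i, neighbor_positions, goal, r_safe):
    """Sequentially project the goal onto each violated buffered
    halfplane (an approximation of the exact projection onto the
    intersection of halfplanes)."""
    g = np.asarray(goal, dtype=float).copy()
    for p_j in neighbor_positions:
        a = p_j - p_i
        a = a / np.linalg.norm(a)             # outward normal toward j
        b = a @ (p_i + p_j) / 2.0 - r_safe    # buffered cell boundary
        if a @ g > b:
            g -= (a @ g - b) * a              # project onto the boundary
    return g
```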
Finally, we introduce our experimental quadrotor platform, built from a PixHawk flight controller and an Odroid-XU4 single-board computer. The hardware and software architecture of this multiple-quadrotor platform is described in detail so that our platform can easily be adopted and extended for different purposes.
Concluding remarks and a discussion of future work are also given in this thesis.
40. Annotation Scaffolds for Robotics
Frank Bolton, Pablo, 05 September 2018
Having a human in the control loop of a robot plays an important role in today's robotics applications. Whether for teleoperation, interactive processing, or as a learning resource for automation, human-robot interaction is in need of well-designed interfaces that allow the human-in-the-loop to be as effective as possible with the least effort.
One general framework for human-in-the-loop interaction with robots is annotation, which refers to the inclusion of supplementary information to a dataset or a robot's perceptual stream that, when properly interpreted, produces valuable semantic information that is difficult for algorithms to infer directly and is also available for repeated use in the future. We focus on annotating 3D vision, a popular and rich means for robot perception in which robots use depth sensors to perceive the environment for recognition, navigation, and scene understanding.
Annotation of 3D-scanned environments has been shown to be successful in employing humans-in-the-loop to improve a robot’s extraction of meaningful structure from the visual stream. By relying on human cognition, these semi-autonomous systems may utilize hints, expressed through the annotated cues, as informed suggestions that reduce the complexity of a task and help focus the context of a given situation. These annotations may be used immediately as hints for operation or stored for later use and analysis.
In this work, we present a new scheme for constructing and storing annotation cues, called Point Cloud Scaffolds. Point Cloud Scaffolds are designed to allow fast and precise specification of object shape and manipulation constraints. In addition, we present the Point Cloud Prototyper, a simple annotation tool designed for constructing Point Cloud Scaffolds and studying how best to design annotation capabilities for three classic tasks in robotics: object reconstruction, Pick-and-Place, and articulated-object manipulation.
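As an illustration only, a hypothetical minimal record for such a scaffold might look as follows (field names are invented for the example, not the thesis's actual schema):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class PointCloudScaffold:
    anchors: np.ndarray                 # (n, 3) user-placed points on the cloud
    shape: str                          # e.g., "cylinder" or "box" primitive
    shape_params: dict = field(default_factory=dict)  # fitted dimensions/pose
    constraints: list = field(default_factory=list)   # e.g., grasp or motion axes
```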
We present evidence that this approach is precise and simple enough even for novice users to master quickly. The annotation paradigm is well suited for three critical task types and compares well to other similar techniques developed in the field of annotation for robotics. Point Cloud Scaffolds are versatile tools that show promise as a shared-control counterpart to continuous teleoperation, interactive scene analysis and navigation, and the construction of rich repositories of annotations for complex robotic tasks.