1. Humanoid Robot Friction Estimation in Multi-Contact Scenarios. Ridgewell, Cameron Patrick. 18 August 2017.
This paper presents an online approach for friction approximation, used in concert with whole-body control on humanoid robots. The approach allows humanoid robots with ankle-mounted force-torque sensors to extrapolate information about the friction constraints at the hands during multi-contact poses, without adding hardware to the platform. This is achieved by using disturbance detection to monitor active forces at a single external point and deriving the available friction force at that contact point in accordance with Coulomb's law of friction. First, the rigid-body dynamics and the required compliant humanoid model optimization are established, which allow friction constraints to be incorporated. These friction constraints are then informed by monitoring external forces, whose tangential components serve as an indicator of slip. In practice, the robot, running operational multi-contact whole-body control, is navigated to the desired contact surface, and a normal-force-only contact is initiated. Using an iterative coefficient estimation based on the achieved system forces, the robot tests the boundaries of its operable force range by inducing slip. Slip detection forms the basis for coefficient estimation, which allows the robot to better understand its environment and apply appropriate forces at its contact points. This approach was implemented on a simple three-link model to verify expected performance, and then on both the simulated model of Virginia Tech's ESCHER robot and the physical ESCHER platform. The proposed approach achieved estimation of slip parameters, with accuracy depending largely on the measurement duration, the actual friction coefficient, and the available contact force.
Though the performance of the proposed approach depends on a number of variables, it provides an operational parameter for the robot's whole-body controller, allowing expansion of the support region without risking multi-contact slip.

/ Master of Science /

This paper presents an approach for humanoid robots to use their hands to approximate the friction parameters of contact surfaces without prior knowledge of those parameters. The estimation runs as part of the robot's control system, integrated into its balancing and locomotion software, so the robot can determine these parameters without ceasing operation. The proposed approach relies on the force sensors typically embedded in the ankles of bipedal robots as its sole force input, so no additional hardware needs to be added to the robot to employ this functionality. Once placed in contact, the robot approximates the forces at its hand with these sensors and uses those approximate values as the basis for estimating the static friction coefficient of the system, in accordance with Coulomb's law of friction. The robot's onboard controller uses this information, together with prior knowledge of the robot model's range of motion, to ensure that it does not overestimate the available force that may be applied at the contact point. In practice, the robot with this functionality is navigated to the desired contact surface, and a hand contact that does not risk slip is initiated. Using an iterative coefficient estimation based on the achieved system forces, the robot tests the boundaries of its operable force range by inducing slip. Slip detection forms the basis for coefficient estimation, which allows the robot to better understand its environment and apply appropriate forces at its contact points.
This approach was implemented on a simple three-link robot model to verify expected performance, and then on both the simulated model of Virginia Tech's ESCHER robot and the physical ESCHER platform. The proposed approach achieved estimation of slip parameters, with accuracy depending largely on the measurement duration, the actual friction coefficient, and the available contact force. Though the performance of the proposed approach depends on a number of variables, it provides an operational parameter for the robot's whole-body controller, allowing expansion of the support region without risking multi-contact slip.
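The core of the iterative estimation can be sketched in a few lines of Python. This is an illustrative reconstruction, not the thesis implementation: the force values, the slip-event indices, and the averaging rule are all assumed for the example. Under Coulomb's law of friction, slip begins when the tangential force reaches μ times the normal force, so each detected slip event yields one estimate μ ≈ F_t / F_n.

```python
def estimate_friction_coefficient(samples, slip_events):
    """Estimate the static friction coefficient from force samples.

    samples: list of (tangential_force, normal_force) pairs measured
             while the robot ramps up tangential force at the contact.
    slip_events: indices into `samples` where slip was detected.

    Under Coulomb's law, slip begins when F_t >= mu * F_n, so each
    slip event gives an estimate mu ~= F_t / F_n; we average them.
    """
    if not slip_events:
        return None  # no slip observed yet; mu is only lower-bounded
    estimates = [samples[i][0] / samples[i][1] for i in slip_events]
    return sum(estimates) / len(estimates)


# Hypothetical measurements: normal force held at 40 N while the
# tangential force is ramped up; slip detected at samples 3 and 4.
forces = [(5.0, 40.0), (10.0, 40.0), (15.0, 40.0), (20.0, 40.0), (24.0, 40.0)]
mu = estimate_friction_coefficient(forces, slip_events=[3, 4])
print(round(mu, 2))  # -> 0.55
```

In the actual system this estimate would then bound the tangential force commanded by the whole-body controller at the hand contact.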
2. Multi-Objective Control for Physical and Cognitive Human-Exoskeleton Interaction. Beiter, Benjamin Christopher. 9 May 2024.
Powered exoskeletons have the potential to revolutionize the labor workplace across many disciplines, from manufacturing to agriculture. However, many barriers to adoption and widespread implementation remain. One major research gap is the lack of a control framework that cooperates well with the user. Closing this gap requires first understanding the physical and cognitive interaction between the user and exoskeleton, and then designing a controller that addresses this interaction in a way that provides both physical assistance toward completing a task and a decrease in the cognitive demand of operating the device. This work demonstrates that multi-objective, optimization-based control can combine autonomous robot control and human-input-driven control in a single controller. A parameter called 'acceptance' is added to the weights of the cost functions to allow an automatic trade-off in control priority between the user's and robot's objectives. This is paired with an update function that allows the exoskeleton's control objectives to track the user's objectives over time. The result is a cooperative powered-exoskeleton controller that is responsive to user input, dynamically adjusting control autonomy so the user can act to complete a task, the controller can learn the control objective, and the effort required to complete the task can then be offloaded to the autonomous controller. This reduction in effort is physical assistance directed at the task, and should also reduce the cognitive load the user experiences while completing it.
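The acceptance-weighted trade-off can be illustrated with a minimal sketch, assuming simple quadratic task costs. The function names, gains, and thresholds below are invented for the example and are not the dissertation's implementation.

```python
def blended_cost(x, human_goal, robot_goal, acceptance):
    """Acceptance-weighted sum of two quadratic task costs.

    acceptance in [0, 1]: 0 means the human objective dominates,
    1 means the robot's autonomous objective dominates. This mirrors
    the idea of scaling cost weights in an optimization-based
    controller; it is a sketch, not the thesis controller.
    """
    human_term = sum((xi - gi) ** 2 for xi, gi in zip(x, human_goal))
    robot_term = sum((xi - gi) ** 2 for xi, gi in zip(x, robot_goal))
    return (1.0 - acceptance) * human_term + acceptance * robot_term


def update_acceptance(acceptance, tracking_error, gain=0.1, tolerance=0.05):
    """Raise acceptance while the robot objective tracks the user's
    (small error); lower it when the user deviates. Gains are assumed."""
    if tracking_error < tolerance:
        return min(1.0, acceptance + gain)
    return max(0.0, acceptance - gain)


# When the robot's learned objective matches the user's goal exactly,
# the blended cost is the same at any acceptance level:
goal = [0.3, 0.1]
c0 = blended_cost([0.0, 0.0], goal, goal, acceptance=0.0)
c1 = blended_cost([0.0, 0.0], goal, goal, acceptance=1.0)
print(c0 == c1)  # -> True
```

The update rule is what lets autonomy rise as the robot's objective converges to the user's, and fall again when the user reacts to a disturbance.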
To test the hypothesis that high task assistance lowers the user's cognitive load, a study was designed and conducted to measure the effect of the shared-autonomy controller on the user's experience operating the robot. The user operates the robot under zero-, full-, and shared-autonomy control cases. Physical workload, measured through the force exerted to complete the task, and cognitive workload, measured through pupil dilation, are evaluated; the results show significantly that high-assistance operation can lower the cognitive load a user experiences alongside the physical assistance provided. Automatic adjustment of autonomy enables this assistance while keeping the user responsive to changing objectives and disturbances. The controller does not remove all mental effort from operation, but it shows that high acceptance does lead to less mental effort.
When implementing this control beyond the simple reaching task used in the study, however, the controller must both track the user's desired objective and converge to a high-assistance state to produce the reduction in cognitive load. To achieve this behavior, a method is first presented to design and enforce Lyapunov stability conditions for individual tasks within a multi-objective controller. Then, under an assumption on the form of the input the user provides to accomplish their intended task, it is shown that the exoskeleton can stably track an acceptance-weighted combination of the user's and robot's desired objectives. This guarantee of following the proper trajectory at corresponding autonomy levels yields tracking accuracy on a simulated objective comparable to the base shared-autonomy approach, but with a much higher acceptance level, indicating a better match between the user's and exoskeleton's control objectives, as well as a greater decrease in cognitive load. This process of enforcing stability conditions to shape human-exoskeleton system behavior is shown to be applicable to more tasks, and is in preparation for validation with further user studies.

/ Doctor of Philosophy /

Powered exoskeletons are robots that can be worn by users to physically aid them in accomplishing tasks. These robots differ in scale, from single-joint devices like powered ankle supports or lower-back braces for lifting, to large, multi-joint devices with a broad range of capabilities and potential applications. Multi-joint exoskeletons have been used in many applications, such as medical rehabilitation robots and labor-assisting devices for enhancing strength and avoiding injury. Broader use and adoption in industry could have a great positive impact on the experience of workers performing heavy-labor tasks. There are still barriers to widespread adoption, however.
When closely interacting with machinery like a powered exoskeleton, workers want guarantees of safety, trust, and cooperation that current exoskeletons have not been able to provide. In fact, studies have shown that industrial devices capable of providing significant assistive force when accomplishing a task also tend to impart additional, uncomfortable disturbance forces on the user. For example, a lower-body exoskeleton meant to help in lifting tasks might make the simple act of walking more difficult, both physically and mentally. There is a need for exoskeletons that are intuitively cooperative and can provide both physical assistance toward completing a task and cognitive assistance that makes coordinating with the human user easier.
In this dissertation we examine the control problem of powered exoskeletons. Many past powered-exoskeleton controllers are direct, scripted controllers with exact objectives, or tie actions only to human input. To go beyond this, we leverage "multi-objective control", originally designed for humanoid robots, which can drive the robot to accomplish multiple goals at the same time. This approach is the base on which a more complex controller can be created.
We first show that multi-objective control can be used to achieve human-desired actions and autonomous robot control tasks at the same time, with a parameter to trade off which actor, the human or the robot, has control priority at a given time. This framework allows the human to instruct the robot in tasks to accomplish, after which the robot can fully mimic the user, offloading the physical effort required to accomplish the task. It is proposed that this offloading of effort will also lower the cognitive load the user is under when actively commanding the exoskeleton. To test this hypothesis, a user study is conducted in which human operators work with an upper-body powered exoskeleton to complete a simple reaching task. This study shows that, on average, the more assistance the exoskeleton provides to the user, the lower their mental demand. Additionally, when responding to new challenges or sudden disturbances, the robot can easily cooperate, balancing its own autonomy with the user's so the user can respond as needed to their changing environment, then resume active assistance once the change is resolved. Finally, to guarantee that the exoskeleton responds quickly and accurately to the user's intentions, a new strategy is derived to update the robot's internal objectives to match the user's goals. This strategy is based on the assumption that the exoskeleton knows what type of task the user is trying to complete. If this holds, the exoskeleton can estimate the user's objectives from the actions they take and ensure assistance toward completing the task. This control design is validated in simulation, and follow-up studies are in preparation to evaluate the user experience of this improved strategy.
3. Dynamic Locomotion and Whole-Body Control for Compliant Humanoids. Hopkins, Michael Anthony. 26 January 2015.
With the ability to navigate natural and man-made environments and utilize standard human tools, humanoid robots have the potential to transform emergency response and disaster relief applications by serving as first responders in hazardous scenarios. Such applications will require major advances in humanoid control, enabling robots to traverse difficult, cluttered terrain with both speed and stability. To advance the state of the art, this dissertation presents a complete dynamic locomotion and whole-body control framework for compliant (torque-controlled) humanoids. We develop low-level, mid-level, and high-level controllers to enable low-impedance balancing and walking on compliant and uneven terrain.
For low-level control, we present a cascaded joint impedance controller for series elastic humanoids with parallel actuation. A distributed controller architecture is implemented using a dual-axis motor controller that computes desired actuator forces and motor currents using simple models of the joint mechanisms and series elastic actuators. An inner-loop force controller is developed using feedforward and PID control with a model-based disturbance observer, enabling naturally compliant behaviors with low joint impedance.
For mid-level control, we implement an optimization-based whole-body control strategy assuming a rigid body model of the robot. Joint torque setpoints are computed using an efficient quadratic program (QP) given desired joint accelerations, spatial accelerations, and momentum rates of change. Constraints on the centroidal dynamics, contact forces, and joint limits ensure admissibility of the optimized setpoints. Using this approach, we develop compliant standing and stepping behaviors based on simple feedback controllers.
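A heavily simplified stand-in for such a controller can be written with ordinary least squares. The real controller solves a constrained QP over accelerations, contact wrenches, and torques; this sketch, with assumed dynamics quantities, only maps desired task-space accelerations to joint torques through the rigid-body dynamics and clips them to box limits in place of proper constraints.

```python
import numpy as np

def whole_body_torques(M, h, J, a_des, tau_limit):
    """Least-squares stand-in for an optimization-based whole-body controller.

    Solves for joint accelerations qdd that best realize the desired
    task-space accelerations a_des = J @ qdd, then maps them to torques
    through the rigid-body dynamics tau = M @ qdd + h. Clipping torques
    is a crude substitute for the QP's torque constraints, used here
    only for illustration.
    """
    qdd, *_ = np.linalg.lstsq(J, a_des, rcond=None)
    tau = M @ qdd + h
    return np.clip(tau, -tau_limit, tau_limit)

# Toy 2-DOF example with assumed dynamics quantities.
M = np.array([[2.0, 0.0], [0.0, 1.0]])   # joint-space mass matrix
h = np.array([0.5, -0.2])                # gravity/Coriolis terms
J = np.eye(2)                            # task Jacobian (identity for demo)
tau = whole_body_torques(M, h, J, a_des=np.array([1.0, 2.0]), tau_limit=5.0)
print(tau)  # -> [2.5 1.8]
```

The QP formulation replaces the unconstrained solve with hard constraints on centroidal dynamics, friction-cone-limited contact forces, and joint limits, which is what keeps the optimized setpoints admissible.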
For high-level control, we present a dynamic planning and control approach for humanoid locomotion using a novel time-varying extension of the Divergent Component of Motion (DCM). By varying the natural frequency of the DCM, we are able to achieve generic vertical center of mass (CoM) trajectories during walking. Complementary reverse-time integration and model predictive control (MPC) strategies are proposed to generate dynamically feasible DCM plans over a multi-step preview window, supporting locomotion on uneven terrain.
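For the special case of a constant natural frequency and a constant Virtual Repellent Point, the DCM dynamics admit a closed-form reverse-time solution, which the following sketch illustrates. The step parameters are assumed for the example; the actual planner handles the time-varying frequency and multi-step preview windows.

```python
import math

def dcm_reverse_time(xi_end, vrp, omega, duration):
    """Closed-form reverse-time DCM trajectory for a single step.

    With constant natural frequency omega and Virtual Repellent Point
    vrp, the DCM dynamics xi_dot = omega * (xi - vrp) integrate to
    xi(t) = vrp + exp(omega * (t - T)) * (xi_T - vrp). Integrating
    backward from the desired end-of-step DCM xi_end yields a
    dynamically consistent reference. 1-D quantities for brevity.
    """
    def xi(t):
        return vrp + math.exp(omega * (t - duration)) * (xi_end - vrp)
    return xi

# One step: omega = sqrt(g / z_com) for an assumed 0.9 m CoM height.
omega = math.sqrt(9.81 / 0.9)
xi = dcm_reverse_time(xi_end=0.25, vrp=0.2, omega=omega, duration=0.6)
print(round(xi(0.6), 3))  # boundary condition xi(T) == xi_end -> 0.25
```

Allowing omega to vary in time, as the dissertation proposes, is what admits generic vertical CoM trajectories; the constant-omega solution above is the textbook base case.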
The proposed approach is validated through experimental results obtained using THOR, a 34 degree of freedom (DOF) series elastic humanoid. Rough terrain locomotion is demonstrated in simulation, and compliant locomotion and push recovery are demonstrated in hardware. We discuss practical considerations that led to a successful implementation on the THOR hardware platform and conclude with an application of the presented control framework for humanoid firefighting onboard the ex-USS Shadwell, a decommissioned Navy ship.

/ Ph. D. /
4. Exploitation of Force Feedback for the Estimation and Control of Walking Robots (original French title: Exploitation du Retour de Force pour l'Estimation et le Contrôle des Robots Marcheurs). Flayols, Thomas. 12 October 2018.
In this thesis, we are interested in the control of walking robots. Controlling these systems, which are naturally unstable, nonlinear, non-convex, high-dimensional, and dependent on contacts, is a major challenge in mobile robotics. Classical approaches formulate a control pipeline as a cascade of sub-problems such as perception, planning, whole-body control, and joint-level servoing. The contributions reported here all aim to introduce state feedback at the whole-body control stage or at the planning stage. Specifically, a first technical contribution is the formulation and experimental comparison of two estimators of the robot's floating base. A second contribution is the implementation of an inverse dynamics controller for torque control of the HRP-2 robot. A variant of this controller is also formulated and tested in simulation to stabilize a robot in flexible contact with its environment. Finally, a walking pattern generator based on model predictive control, coupled with a whole-body controller, is presented.
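Floating-base estimators of this kind typically fuse high-rate inertial data with kinematic (leg-odometry) measurements. The sketch below is a generic one-dimensional complementary filter, not either of the thesis's estimators; the signals, time step, and blending gain are all assumed for illustration.

```python
def complementary_base_estimate(odom_positions, imu_velocities, dt, alpha=0.98):
    """Blend leg-odometry positions with integrated IMU velocity.

    A minimal complementary-filter sketch: the IMU prediction is
    high-passed (weight alpha) and the kinematic odometry is
    low-passed (weight 1 - alpha), suppressing IMU drift while
    smoothing odometry noise. All signals are 1-D for brevity.
    """
    estimate = odom_positions[0]
    history = [estimate]
    for odom, vel in zip(odom_positions[1:], imu_velocities):
        predicted = estimate + vel * dt              # IMU propagation
        estimate = alpha * predicted + (1 - alpha) * odom
        history.append(estimate)
    return history

# Constant 1 m/s motion reported consistently by both sensors.
odom = [0.0, 0.01, 0.02, 0.03]
vels = [1.0, 1.0, 1.0]
est = complementary_base_estimate(odom, vels, dt=0.01)
print(est[-1])
```

When both sources agree, as here, the estimate tracks them exactly; the filter's value shows when the IMU drifts or the odometry jumps at contact changes.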