1 |
Reactive control and coordination of redundant robotic systems. Wang, Yuquan, January 2016
Redundant robotic systems, in terms of manipulators with one or two arms, mobile manipulators, and multi-agent systems, have received an increasing amount of attention in recent years. In this thesis we describe several ways to improve robotic system performance by exploiting the redundancy.

As the robot workspace becomes increasingly dynamic, it is common to work with imperfect geometric models of the robots or their workspace. In order to control the robot in a robust way in the presence of geometric uncertainties, we propose to assess the stability of our controller with respect to a certain task by deriving bounds on the geometric uncertainties. Preliminary experimental results support the fact that stability is ensured if the proposed bounds on the geometric uncertainties are fulfilled.

As a non-contact measurement, computer vision can provide rich information for robot control. We introduce a two-step method that transforms the position-based visual servoing problem into a quadratic optimization problem with linear constraints. This method is optimal in terms of minimizing geodesic distance and allows us to integrate constraints, e.g. visibility constraints, in a natural way.

In the case of a single robot with redundant degrees of freedom, we can specify a family of complex robotic tasks using constraint-based programming (CBP). CBP allows us to represent robotic tasks with a set of equality and inequality constraints. Using these constraints we can formulate quadratic programming problems that exploit the redundancy of the robot and iteratively resolve the trade-off between the different constraints. For example, we could improve the velocity or force transmission ratios along a task-dependent direction using the priorities between different constraints in real time.

Using the reactiveness of CBP, we formulated and implemented a dual-arm pan-cleaning task. If we mount a dual-arm robot on a mobile base, we propose to use a virtual kinematic chain to specify the coordination between the mobile base and the two arms. Using the modularity of CBP, we can integrate mobility and dual-arm manipulation by adding coordination constraints to an optimization problem where dual-arm manipulation constraints are already specified. We also found that the reactiveness and modularity of the CBP approach are important in the context of teleoperation. Inspired by the 3D design community, we proposed a teleoperation interface control mode that is identical to the ones used to locally navigate the virtual viewpoint in most Computer Aided Design (CAD) software.

In the case of multiple robots, we combine ideas from multi-agent cooperative coverage control with problem formulations from the resource allocation field to create a distributed convergent approach to the resource positioning problem.
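As an illustration of the constraint-based programming idea described above, the following minimal sketch poses a single velocity-resolution step as a quadratic program with equality and inequality constraints. The Jacobian, limits, and the cvxpy formulation are illustrative assumptions, not the thesis's implementation.

```python
# A minimal sketch (assumptions, not the thesis code) of constraint-based
# programming (CBP) as a QP: task constraints become equalities/inequalities
# on joint velocities dq, and redundancy is resolved by minimizing ||dq||^2.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 7                                          # redundant arm: 7 joints, 6-DoF task
J = rng.standard_normal((6, n))                # task Jacobian (placeholder)
x_dot_des = np.array([0.05, 0, 0, 0, 0, 0])    # desired end-effector twist
dq_max = 0.5                                   # joint velocity limit [rad/s]

dq = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(dq))    # exploit redundancy: min-norm motion
constraints = [
    J @ dq == x_dot_des,                       # equality task constraint
    cp.abs(dq) <= dq_max,                      # inequality constraints (limits)
]
cp.Problem(objective, constraints).solve()
print("joint velocity command:", dq.value)
```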
|
2 |
Towards Supervisory Control for Remote Mobile Manipulation: Designing, Building, and Testing a Mobile Telemanipulation Test-Bed. Hernandez Herdocia, Alejandro, Unknown Date
No description available.
|
3 |
Autonome Instrumentierung von Altbergbau durch einen mobilen Manipulator [Autonomous Instrumentation of Abandoned Mines by a Mobile Manipulator]. Grehl, Steve, 27 October 2021
This thesis focuses on the design, development, and testing of an autonomous robot for instrumenting an underground mine. The exemplary use case is the autonomous placement of smart sensor stations by a robot arm. The robot is one of the first mobile manipulators intended for long-term underground deployment. The goal is to increase safety for miners by providing the mobile manipulator as a genuine alternative in hazardous situations. This demands autonomous and adaptive behavior from the robot at a level of complexity that mobile manipulators have so far achieved only in structured environments. Placing equipment in abandoned mine workings is a case in point: darkness, moisture, and narrow cross-sections make it highly challenging. The robot uses its anthropomorphic hand to place various objects, specifically the sensor boxes (SSBs) that this thesis proposes for instrumenting the mine. Crucially, placement happens autonomously: the robot decides entirely on its own where to place an object, which trajectory its arm follows, and which planning algorithm it uses. To this end, this dissertation designs a flexible placement routine: the mobile manipulator builds a collision model of the environment, searches for a suitable placement position, grasps a predefined object, and places it in the mine. Safety and robustness come first. Accordingly, the routine does not end once the object has been set down but performs an independent verification in which the mobile manipulator uses its sensors to compare the perceived object pose with the intended one. Deep-learning-based methods are employed here, allowing verification even in complete darkness. In a total of 60 experiments, placement succeeded in 97% of cases with centimeter-level accuracy. The evaluation is not limited to the underground mine but also covers experiments in structured and open environments. This breadth permits a qualitative discussion of aspects such as autonomy, safety, and the influence of the environment on the method. The result is the insight that the robot presented here closes the gap between underground robots and the mobile manipulators of industrial applications: it stands ready as an alternative in hazardous situations.

Table of Contents
List of Figures
List of Tables
List of Abbreviations
Notation
List of Variables
1. Introduction
1.1. Motivation
1.2. Research Question and Contribution of the Thesis
1.3. Structure of the Thesis
2. Research Trends in Mobile Manipulators and Underground Robots
2.1. Mobile Manipulators in Different Deployment Scenarios
2.2. Mobile Manipulator Competitions and Trends in the Research Field
2.3. Hardware and Software Components for Mobile Manipulators
2.4. Summary
3. Design of Julius, the Robot for Use in Abandoned Mines
3.1. Environmental Conditions in Underground Abandoned Mines
3.2. Physical Design of the Robot
3.3. Software Foundations for Autonomous Behavior
3.4. Summary
4. Design of the Autonomous Placement Routine for Julius
4.1. Planning Arm Motions
4.2. Refining the Environment Model
4.3. Identifying the Ground Surface
4.4. Computing the Placement Position
4.5. Grasping the SSB
4.6. Determining the Placement Rotation
4.7. Placing the SSB
4.8. Verifying the Placement Pose
4.9. Summary
5. Experimental Validation of Julius and the Placement Routine
5.1. Description of the Experiment and the General Conditions
5.2. Indoor Reference Experiments
5.3. Outdoor Experiments
5.4. Experiments in the Research and Teaching Mine
5.5. Discussion
5.6. Summary
6. Conclusion
6.1. Findings of this Thesis
6.2. Conclusions
6.3. Outlook
Bibliography
Appendix
A. Computing the Placement Rotation
B. Overview of Technical Data
C. Additional Figures
D. Measurement Data
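As a rough illustration of the placement routine outlined in the abstract and in Chapter 4, the following Python sketch strings the steps together as a sequential pipeline with a final verification. Every helper function is a hypothetical placeholder for Julius's actual perception and planning stack, not code from the thesis.

```python
# A minimal sketch (assumptions throughout) of the autonomous placement
# routine: build a collision model, pick a placement pose, grasp the sensor
# box (SSB), place it, and independently verify the resulting pose.
import numpy as np

def build_collision_model():          # 4.2: refine the environment model
    return {"octomap": "..."}

def find_placement_pose(model):       # 4.3/4.4: ground plane + placement search
    return np.array([1.0, 0.2, 0.0])  # intended SSB position (placeholder)

def grasp_ssb():                      # 4.5: grasp the predefined sensor box
    return True

def place_ssb(position):              # 4.6/4.7: choose rotation, set object down
    return True

def verify_pose(position, tol=0.05):  # 4.8: compare perceived vs. intended pose
    perceived = position + np.random.normal(0, 0.01, 3)  # stand-in for perception
    return np.linalg.norm(perceived - position) < tol

def placement_routine():
    model = build_collision_model()
    position = find_placement_pose(model)
    if not grasp_ssb() or not place_ssb(position):
        return False
    return verify_pose(position)      # routine succeeds only after verification

print("placement ok:", placement_routine())
```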
|
4 |
Mobile manipulation in unstructured environments with haptic sensing and compliant joints. Jain, Advait, 22 August 2012
We make two main contributions in this thesis. First, we present our approach to robot manipulation, which emphasizes the benefits of making contact with the world across all the surfaces of a manipulator with whole-arm tactile sensing and compliant actuation at the joints. In contrast, many current approaches to mobile manipulation assume most contact is a failure of the system, restrict contact to only occur at well modeled end effectors, and use stiff, precise control to avoid contact.
We develop a controller that enables robots with whole-arm tactile sensing and compliant actuation at the joints to reach to locations in high clutter while regulating contact forces. We assume that low contact forces are benign, and our controller does not place any penalty on contact forces below a threshold. Our controller only requires haptic sensing, handles multiple contacts across the surface of the manipulator, and does not need an explicit model of the environment prior to contact. It uses model predictive control with a time horizon of length one, and a linear quasi-static mechanical model that it constructs at each time step.
We show that our controller enables both real and simulated robots to reach goal locations in high clutter with low contact forces. While doing so, the robots bend, compress, slide, and pivot around objects. To enable experiments on real robots, we also developed an inexpensive, flexible, and stretchable tactile sensor and covered large surfaces of two robot arms with these sensors. With an informal experiment, we show that our controller and sensor have the potential to enable robots to manipulate in close proximity to, and in contact with, humans while keeping the contact forces low.
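The following sketch illustrates one step of the horizon-one model predictive controller under simplifying assumptions: a linear quasi-static model, rebuilt each time step, predicts how joint motion changes contact forces, and a quadratic program makes progress toward the goal while keeping predicted forces under the threshold. All numerical values and Jacobians are placeholders, not the thesis implementation.

```python
# A minimal sketch (assumed, simplified) of one horizon-one MPC step with a
# linear quasi-static contact model: delta_f = K_c * J_c * dq.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 7                                    # joints
J_ee = rng.standard_normal((3, n))       # end-effector Jacobian (placeholder)
J_c = rng.standard_normal((2, n))        # one row block per sensed contact
K_c = np.diag([300.0, 500.0])            # estimated contact stiffnesses [N/m]
f_now = np.array([2.0, 4.0])             # currently sensed contact forces [N]
f_max = 5.0                              # forces below this are unpenalized
x_err = np.array([0.10, 0.0, 0.02])      # end-effector error toward goal [m]

dq = cp.Variable(n)
f_pred = f_now + K_c @ (J_c @ dq)        # quasi-static force prediction
objective = cp.Minimize(cp.sum_squares(J_ee @ dq - x_err))  # progress to goal
constraints = [f_pred <= f_max,          # regulate predicted contact forces
               cp.norm(dq, "inf") <= 0.05]
cp.Problem(objective, constraints).solve()
print("commanded joint motion:", dq.value)
```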
Second, we present an approach to give robots common sense about everyday forces in the form of probabilistic data-driven object-centric models of haptic interactions. These models can be shared by different robots for improved manipulation performance. We use pulling open doors, an important task for service robots, as an example to demonstrate our approach.
Specifically, we capture and model the statistics of forces while pulling open doors and drawers. Using a portable custom force and motion capture system, we create a database of forces as human operators pull open doors and drawers in six homes and one office. We then build data-driven models of the expected forces while opening a mechanism, given knowledge of either its class (e.g., refrigerator) or the mechanism identity (e.g., a particular cabinet in Advait's kitchen). We demonstrate that these models can enable robots to detect anomalous conditions, such as a locked door or collisions between the door and the environment, faster and with lower excess force applied to the door compared to methods that do not use a database of forces.
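A minimal sketch of the object-centric haptic-model idea, under the assumption that the expected force is summarized by a per-angle mean and standard deviation; the recorded trials below are synthetic placeholders rather than data from the actual database.

```python
# A sketch (assumed, simplified): estimate expected pull force vs. door angle
# from recorded trials, then flag an anomaly (e.g., locked door, collision)
# when the measured force exceeds the mean by k standard deviations.
import numpy as np

np.random.seed(0)
angles = np.linspace(0, 60, 61)                     # door angle [deg]
trials = 5.0 + np.random.normal(0, 0.5, (20, 61))   # recorded pull forces [N]
mean_f, std_f = trials.mean(axis=0), trials.std(axis=0)

def is_anomalous(angle_deg, measured_force, k=3.0):
    """True if the measured force is far above the expected force."""
    i = int(np.clip(round(angle_deg), 0, 60))
    return measured_force > mean_f[i] + k * std_f[i]

print(is_anomalous(10.0, 5.2))    # nominal pull  -> False
print(is_anomalous(10.0, 25.0))   # locked door   -> True
```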
|
5 |
Autonomous Mobility and Manipulation of a 9-DoF WMRA. Pence, William Garrett, 01 January 2011
The wheelchair-mounted robotic arm (WMRA) is a 9-degree of freedom (DoF) assistive system that consists of a 2-DoF modified commercial power wheelchair and a custom 7-DoF robotic arm. Kinematics and control methodology for the 9-DoF system that combine mobility and manipulation have been previously developed and implemented. This combined control allows the wheelchair and robotic arm to follow a single trajectory based on weighted optimizations. However, for the execution of activities of daily living (ADL) in the real-world environment, modified control techniques have been implemented.
In order to execute macro ADL tasks, such as a "go to and pick up" task, this work has implemented several control algorithms on the WMRA system. Visual servoing based on template matching and feature extraction allows the mobile platform to approach the desired goal object. Feature extraction based on scale-invariant feature transform (SIFT) gives the system object detection capabilities to recommend actions to the user and to orient the arm to grasp the goal object using visual servoing. Finally, a collision avoidance system is implemented to detect and avoid obstacles when the wheelchair platform is moving towards the goal object. These implementations allow the WMRA system to operate autonomously from the beginning of the task where the user selects the goal object, all the way to the end of the task where the task has been fully completed.
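One common way to realize the combined, weighted control of base and arm described in this entry is a weighted least-squares inverse of the full 9-DoF Jacobian, sketched below. The weights, Jacobian, and twist are illustrative assumptions rather than the WMRA's actual optimization.

```python
# A minimal sketch (assumption, not the WMRA code) of combined mobility and
# manipulation: a single end-effector trajectory is distributed over the
# 9 DoF (2 base + 7 arm) via a weighted pseudoinverse, where W trades base
# motion against arm motion.
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((6, 9))             # combined base+arm Jacobian
W = np.diag([10.0, 10.0] + [1.0] * 7)       # penalize base motion more than arm
x_dot = np.array([0.1, 0, 0, 0, 0, 0])      # desired end-effector twist

W_inv = np.linalg.inv(W)
dq = W_inv @ J.T @ np.linalg.solve(J @ W_inv @ J.T, x_dot)  # weighted pinv
print("base+arm velocity command:", dq)
```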
|
6 |
Multistage Localization for High Precision Mobile Manipulation Tasks. Mobley, Christopher James, 03 March 2017
This paper will present a multistage localization approach for an autonomous industrial mobile manipulator (AIMM). This approach allows tasks with an operational scope outside the range of the robot's manipulator to be completed without having to recalibrate the position of the end-effector each time the robot's mobile base moves to another position. This is achieved by localizing the AIMM within its area of operation (AO) using adaptive Monte Carlo localization (AMCL), which relies on the fused odometry and sensor messages published by the robot, as well as a 2-D map of the AO, which is generated using an optimization-based smoothing simultaneous localization and mapping (SLAM) technique. The robot navigates to a predefined start location in the map incorporating obstacle avoidance through the use of a technique called trajectory rollout. Once there, the robot uses its RGB-D sensor to localize an augmented reality (AR) tag in the map frame. Once localized, the identity and the 3-D position and orientation, collectively known as pose, of the tag are used to generate a list of initial feature points and their locations based on a priori knowledge. After the end-effector moves to the approximate location of a feature point provided by the AR tag localization, the feature point's location, as well as the end-effector's pose, are refined to within a user-specified tolerance through the use of a control loop, which utilizes images from a calibrated machine vision camera and a laser pointer, simulating stereo vision, to localize the feature point in 3-D space using computer vision techniques and basic geometry. This approach was implemented on two different ROS-enabled robots, the Clearpath Robotics' Husky and the Fetch Robotics' Fetch, in order to show the utility of the multistage localization approach in executing two tasks which are prevalent in both manufacturing and construction: drilling and sealant application. The proposed approach was able to achieve an average accuracy of ± 1 mm in these operations, verifying its efficacy for tasks which have a larger operational scope than that of the range of the AIMM's manipulator and its robustness to general applications in manufacturing. / Master of Science / This paper will present a multistage localization approach for an autonomous industrial mobile manipulator (AIMM). This approach allows for tasks with an operational scope outside the range of the robot's manipulator to be completed without having to recalibrate the position of the end-effector each time the robot's mobile base moves to another position. This is achieved by first localizing the AIMM within its area of operation (AO) using a probabilistic state estimator. The robot navigates to a predefined start location in the map incorporating obstacle avoidance through the use of a technique called trajectory rollout, which samples the space of feasible controls, generates trajectories through forward simulation, and chooses the simulated trajectory that minimizes a cost function. Once there, the robot uses a depth camera to localize an augmented reality (AR) tag in the map frame. Once localized, the identity and the 3-D position and orientation, collectively known as pose, of the tag are used to generate a list of initial feature points and their locations based on a priori knowledge of the operation, which was associated with the AR tag's identity.
After the end-effector moves to the approximate location of a feature point provided by the AR tag localization, the feature point's location, as well as the end-effector's pose, are refined to within a user-specified tolerance through the use of a control loop. This approach was implemented on two different ROS-enabled robots, the Clearpath Robotics' Husky and the Fetch Robotics' Fetch, in order to show the utility of the multistage localization approach in executing two tasks which are prevalent in both manufacturing and construction: drilling and sealant application. The proposed approach was able to achieve an average accuracy of ± 1 mm in these operations, verifying its efficacy for tasks which have a larger operational scope than that of the range of the AIMM's manipulator and its robustness to general applications in manufacturing.
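The final refinement stage can be pictured as a simple closed-loop correction, sketched below under the assumption of a proportional update; measure_feature_offset() is a hypothetical stand-in for the camera-and-laser vision pipeline.

```python
# A minimal sketch (assumed, not the thesis code) of the refinement loop:
# repeatedly measure the feature point and nudge the end-effector until the
# residual is inside the user-specified tolerance (e.g., 1 mm).
import numpy as np

def measure_feature_offset(ee_pose):
    """Stand-in: offset from end-effector to feature point, in meters."""
    target = np.array([0.50, 0.20, 0.30])
    return target - ee_pose

def refine(ee_pose, tol=0.001, gain=0.8, max_iters=50):
    """Closed-loop refinement to within tol meters of the feature point."""
    for _ in range(max_iters):
        offset = measure_feature_offset(ee_pose)
        if np.linalg.norm(offset) < tol:
            return ee_pose                   # within tolerance: done
        ee_pose = ee_pose + gain * offset    # proportional correction step
    raise RuntimeError("did not converge within tolerance")

print(refine(np.array([0.45, 0.18, 0.33])))
```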
|
7 |
Mechatronics of holonomic mobile base for compliant manipulation. Gupta, Somudro, 08 February 2012
In order to operate safely and naturally in human-centered environments, robots need to respond compliantly to force and contact interactions. While advanced robotic torsos and arms have been built that successfully achieve this, a somewhat neglected research area is the construction of compliant wheeled mobile bases. This thesis describes the mechatronics behind Trikey, a holonomic wheeled mobile base employing torque sensing at each of its three omni wheels so that it can detect and respond gracefully to force interactions. Trikey's mechanical design, kinematic and dynamic models, and control architecture are described, as well as simple experiments demonstrating compliant control. Trikey is designed to support a force-controlled humanoid upper body, and eventually, the two will be controlled together using whole-body control algorithms that utilize the external and internal dynamics of the entire system.
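For context, the standard kinematic model of a three-omni-wheel holonomic base such as Trikey maps a desired body twist to wheel speeds; the geometry values below are assumed for illustration, not Trikey's actual dimensions.

```python
# Standard omni-wheel kinematics (assumed geometry): map a body twist
# (vx, vy, wz) to the three wheel speeds of a holonomic base whose wheels
# are spaced 120 degrees apart around the center.
import numpy as np

R = 0.20   # wheel contact radius from base center [m] (assumed)
r = 0.05   # wheel radius [m] (assumed)
angles = np.deg2rad([0.0, 120.0, 240.0])   # wheel placement angles

# Each row maps the body twist to one wheel's rolling speed.
G = np.array([[-np.sin(a), np.cos(a), R] for a in angles]) / r

def wheel_speeds(vx, vy, wz):
    """Wheel angular velocities [rad/s] for a body twist (m/s, m/s, rad/s)."""
    return G @ np.array([vx, vy, wz])

print(wheel_speeds(0.2, 0.0, 0.5))
```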
|
8 |
Constructing mobile manipulation behaviors using expert interfaces and autonomous robot learning. Nguyen, Hai Dai, 13 January 2014
With current state-of-the-art approaches, development of a single mobile manipulation capability can be a labor-intensive process that presents an impediment to the creation of general-purpose household robots. At the same time, we expect that involving a larger community of non-roboticists can accelerate the creation of novel behaviors. We introduce the use of a software authoring environment called ROS Commander (ROSCo), allowing end-users to create, refine, and reuse robot behaviors with complexity similar to those currently created by roboticists. Akin to Photoshop, which provides end-users with interfaces to advanced computer vision algorithms, our environment provides interfaces to mobile manipulation algorithmic building blocks that can be combined and configured to suit the demands of new tasks and their variations.
As our system can be more demanding of users than alternatives such as using kinesthetic guidance or learning from demonstration, we performed a user study with 11 able-bodied participants and one person with quadriplegia to determine whether computer literate non-roboticists will be able to learn to use our tool. In our study, all participants were able to successfully construct functional behaviors after being trained. Furthermore, participants were able to produce behaviors that demonstrated a variety of creative manipulation strategies, showing the power of enabling end-users to author robot behaviors.
Additionally, we introduce how autonomous robot learning, where the robot captures its own training data, can complement human authoring of behaviors by freeing users from the repetitive task of capturing data for learning. By taking advantage of the robot's embodiment, our method creates classifiers that use visual appearance to predict the 3D locations on home mechanisms where user-constructed behaviors will succeed. With active learning, we show that such classifiers can be learned using a small number of examples. We also show that this learning system works with behaviors constructed by non-roboticists in our user study. As far as we know, this is the first instance of perception learning with behaviors not hand-crafted by roboticists.
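A generic sketch of the active-learning loop, assuming uncertainty sampling with a logistic-regression success classifier; the features, labels, and try_behavior() function are synthetic stand-ins for the robot's self-collected data, not the thesis pipeline.

```python
# A minimal sketch (assumed, generic uncertainty sampling): the robot
# repeatedly tries the behavior at the candidate location whose predicted
# outcome is most uncertain, labels it by success/failure, and retrains.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.uniform(-1, 1, (200, 16))       # visual features at candidate points
true_w = rng.normal(size=16)

def try_behavior(x):                          # stand-in for executing on the robot
    return int(x @ true_w > 0)

X, y, i = [], [], 0
while len(set(y)) < 2:                        # seed with one success, one failure
    X.append(X_pool[i]); y.append(try_behavior(X_pool[i])); i += 1
pool = list(range(i, len(X_pool)))            # unlabeled candidate indices
clf = LogisticRegression().fit(X, y)

for _ in range(20):                           # small self-collected label budget
    p = clf.predict_proba(X_pool[pool])[:, 1]
    k = pool.pop(int(np.argmin(np.abs(p - 0.5))))   # most uncertain candidate
    X.append(X_pool[k]); y.append(try_behavior(X_pool[k]))
    clf = LogisticRegression().fit(X, y)

print("pool accuracy:", clf.score(X_pool, [try_behavior(x) for x in X_pool]))
```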
|
9 |
Decoupled Controllers for Mobile Manipulation with Aerial Robots: Design, Implementation and Test. Zanella, Riccardo, January 2016
This work considers an aerial robot system composed of an Unmanned Aerial Vehicle (UAV) and a rigid manipulator, to be employed in mobile manipulation tasks. The strategy adopted for accomplishing aerial manipulation is a decomposition of this system into two decoupled subsystems: one concerning the center of mass of the aerial robot, and another concerning the manipulator's orientation. Two Lyapunov-based controllers are developed, using a backstepping procedure, for solving the trajectory tracking problems related to the two subsystems. In the controller design, three inputs are assumed available: a translational acceleration along a body direction of the UAV; the angular velocity vector of the UAV body; and, finally, a torque at the spherical, or revolute, joint connecting the UAV and the manipulator. The first two inputs are generated by the same controller in order to drive the center of mass along a desired trajectory, while a second controller drives, through the third input, the manipulator's orientation to track a desired orientation. Formal stability proofs are provided that guarantee asymptotic trajectory tracking. Finally, the proposed control strategy is experimentally tested and validated.
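To give a flavor of the backstepping construction, the sketch below tracks a trajectory with the center-of-mass subsystem simplified to a double integrator: a virtual velocity law stabilizes the position error, and the commanded acceleration stabilizes the residual velocity error. This is a simplified assumption for illustration, not the thesis's full UAV model.

```python
# A minimal backstepping sketch (assumed double-integrator CoM subsystem).
# With e = x - xd and z = v - (vd - k1*e), the control law below yields
# V = 0.5|e|^2 + 0.5|z|^2 with Vdot = -k1|e|^2 - k2|z|^2 (asymptotic tracking).
import numpy as np

k1, k2, dt = 2.0, 4.0, 0.01
x, v = np.zeros(3), np.zeros(3)               # CoM position and velocity

def reference(t):                              # desired trajectory, vel., accel.
    xd = np.array([np.sin(t), np.cos(t), 0.5 * t])
    vd = np.array([np.cos(t), -np.sin(t), 0.5])
    ad = np.array([-np.sin(t), -np.cos(t), 0.0])
    return xd, vd, ad

for i in range(3000):
    xd, vd, ad = reference(i * dt)
    e = x - xd                                 # position tracking error
    z = v - (vd - k1 * e)                      # deviation from virtual velocity law
    a = ad - k1 * (v - vd) - e - k2 * z        # commanded translational accel.
    x = x + v * dt                             # Euler-integrate the model
    v = v + a * dt

print("final position error:", np.linalg.norm(x - reference(3000 * dt)[0]))
```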
|
10 |
Optical Satellite/Component Tracking and Classification via Synthetic CNN Image Processing for Hardware-in-the-Loop Testing and Validation of Space Applications Using Free Flying Drone Platforms. Peterson, Marco Anthony, 21 April 2022
The proliferation of reusable space vehicles has fundamentally changed how we inject assets into orbit and beyond, increasing the reliability and frequency of launches and leading to the rapid development and adoption of new technologies in the aerospace sector, such as computer vision (CV), machine learning (ML), and distributed networking. All of these technologies are necessary to enable genuinely autonomous decision-making for space-borne platforms as our spacecraft travel further into the solar system and our mission sets become more ambitious, requiring true "human out of the loop" solutions for a wide range of engineering and operational problem sets. Deploying systems proficient at classifying, tracking, capturing, and ultimately manipulating orbital assets and components for maintenance and assembly in the persistently dynamic environment of space and on the surface of other celestial bodies, tasks commonly referred to as On-Orbit Servicing and In-Space Assembly, offers a unique automation potential. Given the inherent dangers of the manned spaceflight and extravehicular activity (EVA) methods currently employed to perform spacecraft construction and maintenance tasks, coupled with the current limitations of long-duration human flight outside of low Earth orbit, space robotics armed with generalized sensing and control machine learning architectures is a tremendous enabling technology. However, the large amounts of sensor data required to adequately train neural networks for these space-domain tasks are either limited or non-existent, requiring alternate means of data collection and generation. Additionally, the wide-scale tools and methodologies required for hardware-in-the-loop simulation, testing, and validation of these new technologies outside of multimillion-dollar facilities are largely in their developmental stages. This dissertation proposes a novel approach to simulating space-based computer vision sensing and robotic control using both physical and virtual reality testing environments. This methodology is designed to be both affordable and expandable, enabling hardware-in-the-loop simulation and validation of space systems at large scale across multiple institutions. While the specific computer vision models in this work are narrowly focused on solving imagery problems found on orbit, the approach can be expanded to any problem set that requires robust onboard computer vision, robotic manipulation, and free-flight capabilities. / Doctor of Philosophy / The real-world imagery of space assets and planetary surfaces required to train neural networks to autonomously identify, classify, and make decisions in these environments is limited, non-existent, or prohibitively expensive to obtain. This work leverages the power of the Unreal Engine, motion capture, and theater projection technologies, combined with robotics, computer vision, and machine learning, to recreate these worlds for the purpose of optical machine-learning testing and validation for space and other celestial applications. This dissertation also incorporates domain randomization methods to increase neural network performance for the above-mentioned applications.
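A generic sketch of the synthetic data-generation idea, assuming a domain-randomization loop over pose, lighting, and backdrop; render() and all scene parameters are hypothetical stand-ins for the dissertation's Unreal Engine pipeline.

```python
# A minimal sketch (assumed, generic domain randomization): each sample
# randomizes target pose, lighting, and background so a CNN trained on
# synthetic renders transfers better to real orbital imagery.
import random

def render(params):                  # stand-in for the game-engine renderer
    return {"image": "...", "label": params["target_pose"]}

def random_scene():
    return {
        "target_pose": [random.uniform(-1, 1) for _ in range(6)],  # pose label
        "sun_elevation_deg": random.uniform(-90, 90),              # lighting
        "earth_in_background": random.random() < 0.5,              # backdrop
        "sensor_noise_sigma": random.uniform(0.0, 0.02),           # camera model
    }

dataset = [render(random_scene()) for _ in range(10000)]
print(len(dataset), "synthetic labeled images")
```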
|