261
Optimal pose selection for the identification of geometric and elastostatic parameters of machining robots / Wu, Yier, 15 January 2014
The thesis deals with optimal pose selection for the geometric and elastostatic calibration of industrial robots employed in the machining of large parts. Particular attention is paid to the improvement of robot positioning accuracy after compensation of the geometric and elastostatic errors. To meet the industrial requirements of machining operations, a new approach to the design of calibration experiments for serial and quasi-serial industrial robots is proposed. This approach is based on a new industry-oriented performance measure that evaluates the quality of a calibration experiment plan via the manipulator positioning accuracy after error compensation, and takes into account the particularities of the prescribed manufacturing task by introducing manipulator test-poses. Contrary to previous works, the developed approach employs an enhanced partial pose measurement method, which uses only direct position measurements from an external device and allows us to avoid the non-homogeneity of the relevant identification equations. In order to consider the impact of the gravity compensator, which creates closed-loop chains, the conventional stiffness model is extended by including configuration-dependent elastostatic parameters, which are assumed to be constant for strictly serial robots. A corresponding methodology for calibration of the gravity-compensator models is also proposed. The advantages of the developed calibration techniques are validated via an experimental study dealing with the geometric and elastostatic calibration of a KUKA KR-270 industrial robot.
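The thesis contrasts its industry-oriented, test-pose-driven measure with the classical observability-based design of calibration experiments. Purely as a point of reference, a minimal sketch of that classical greedy, observability-index baseline might look as follows; the function hooks and the O1 index choice are illustrative assumptions, not the thesis's method.

```python
import numpy as np

def observability_index(jacobians):
    """O1 index: geometric mean of the singular values of the stacked
    identification Jacobian (larger = better-conditioned identification).
    Early in the greedy search the stacked Jacobian may be rank-deficient,
    so candidates can tie at zero; a real implementation would seed the
    selection with a random full-rank subset."""
    s = np.linalg.svd(np.vstack(jacobians), compute_uv=False)
    return s.prod() ** (1.0 / len(s))

def select_poses(candidates, jacobian_fn, n_select):
    """Greedy calibration-pose selection: repeatedly add the candidate pose
    whose identification Jacobian most improves the observability index.

    candidates: list of joint configurations to choose from
    jacobian_fn: maps a configuration to its identification Jacobian block
    """
    jac_blocks = [jacobian_fn(q) for q in candidates]
    chosen, chosen_jacs = [], []
    remaining = set(range(len(candidates)))
    for _ in range(n_select):
        best = max(remaining,
                   key=lambda i: observability_index(chosen_jacs + [jac_blocks[i]]))
        chosen.append(candidates[best])
        chosen_jacs.append(jac_blocks[best])
        remaining.remove(best)
    return chosen
```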
262
Visual object perception in unstructured environments / Choi, Changhyun, 12 January 2015
As robotic systems move from well-controlled settings to increasingly unstructured environments, they are required to operate in highly dynamic and cluttered scenarios. Finding an object, estimating its pose, and tracking its pose over time within such scenarios are challenging problems. Although various approaches have been developed to tackle these problems, the scope of objects addressed and the robustness of solutions remain limited. In this thesis, we target robust object perception using visual sensory information, spanning from the traditional monocular camera to the more recently introduced RGB-D sensor, in unstructured environments. Toward this goal, we address four critical challenges to robust 6-DOF object pose estimation and tracking that current state-of-the-art approaches have, as yet, failed to solve.
The first challenge is how to increase the scope of objects by allowing visual perception to handle both textured and textureless objects. A large number of 3D object models are widely available in online object model databases, and these object models provide significant prior information including geometric shapes and photometric appearances. We note that using both geometric and photometric attributes available from these models enables us to handle both textured and textureless objects. This thesis presents our efforts to broaden the spectrum of objects to be handled by combining geometric and photometric features.
The second challenge is how to dependably estimate and track the pose of an object despite background clutter. Difficulties in object perception rise with the degree of clutter: background clutter is likely to lead to false measurements, and false measurements tend to result in inaccurate pose estimates. To tackle significant background clutter, we present two multiple-pose-hypothesis frameworks: a particle filtering framework for tracking and a voting framework for pose estimation.
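As a rough illustration of the tracking side, the sketch below shows the generic sequential-importance-resampling loop that a multiple-hypothesis pose tracker builds on; the state parameterisation, noise model and function names are our assumptions, not Choi's implementation.

```python
import numpy as np

def particle_filter_step(particles, weights, motion_noise, likelihood_fn):
    """One generic sequential-importance-resampling step for 6-DOF pose
    tracking. particles: (N, 6) array of poses [x, y, z, roll, pitch, yaw];
    likelihood_fn scores how well a pose hypothesis explains the current image.
    """
    n = len(particles)
    # 1. Resample proportionally to the previous (normalized) weights.
    idx = np.random.choice(n, size=n, p=weights)
    particles = particles[idx]
    # 2. Diffuse: random-walk motion model (a constant-velocity model or
    #    odometry prior could be substituted here).
    particles = particles + np.random.normal(0.0, motion_noise, particles.shape)
    # 3. Weight each hypothesis by its image likelihood, then normalize.
    weights = np.array([likelihood_fn(p) for p in particles])
    weights = weights / weights.sum()
    return particles, weights

def estimate_pose(particles, weights):
    """Point estimate: weighted mean (adequate for small orientation spread;
    a real tracker would average rotations properly on SO(3))."""
    return weights @ particles
```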
Handling object discontinuities during tracking, such as severe occlusions, disappearances, and blurring, presents another important challenge. In an ideal scenario, a tracked object remains visible throughout the entirety of tracking. However, when an object is occluded by other objects or disappears from view due to the motion of the object or the camera, difficulties ensue. Because continuous tracking of an object is critical to robotic manipulation, we devise a method to measure tracking quality and re-initialize tracking as necessary.
The final challenge we address is performing these tasks within real-time constraints. Our particle filtering and voting frameworks, while time-consuming, are composed of repetitive, simple and independent computations. Inspired by that observation, we propose to run massively parallelized frameworks on a GPU for those robotic perception tasks which must operate within strict time constraints.
263
From Human to Robot Grasping / Romero, Javier, January 2011
Imagine that a robot fetched this thesis for you from a book shelf. How do you think the robot would have been programmed? One possibility is that experienced engineers had written low-level descriptions of all imaginable tasks, including grasping a small book from this particular shelf. A second option would be that the robot tried to learn how to grasp books from your shelf autonomously, resulting in hours of trial-and-error and several books on the floor.

In this thesis, we argue in favor of a third approach where you teach the robot how to grasp books from your shelf through grasping by demonstration. It is based on the idea of robots learning grasping actions by observing humans performing them. This imposes minimum requirements on the human teacher: no programming knowledge and, in this thesis, no need for special sensory devices. It also maximizes the number of sources from which the robot can learn: any video footage showing a task performed by a human could potentially be used in the learning process. And hopefully it reduces the number of books that end up on the floor.

This document explores the challenges involved in the creation of such a system. First, the robot should be able to understand what the teacher is doing with their hands. That is, it needs to estimate the pose of the teacher's hands by visually observing them, in the absence of markers or any other input devices which could interfere with the demonstration. Second, the robot should translate the human representation, acquired in terms of hand poses, to its own embodiment. Since the kinematics of the robot are potentially very different from the human one, defining a similarity measure applicable to very different bodies becomes a challenge. Third, the execution of the grasp should be continuously monitored to react to inaccuracies in the robot's perception or changes in the grasping scenario. While visual data can help correct the reaching movement towards the object, tactile data enables accurate adaptation of the grasp itself, thereby adjusting the robot's internal model of the scene to reality. Finally, acquiring compact models of human grasping actions can help in both perceiving human demonstrations more accurately and executing them in a more human-like manner. Moreover, modeling human grasps can provide us with insights about what makes an artificial hand design anthropomorphic, assisting the design of new robotic manipulators and hand prostheses.

All these modules try to solve particular subproblems of a grasping-by-demonstration system. We hope the research on these subproblems performed in this thesis will both bring us closer to our dream of a learning robot and contribute to the multiple research fields from which these subproblems come.
264
Visual homing for a car-like vehicle / Usher, Kane, January 2005
This thesis addresses the pose stabilization of a car-like vehicle using omnidirectional visual feedback. The presented method allows a vehicle to servo to a pre-learnt target pose based on feature bearing-angle and range discrepancies between the vehicle's current view of the environment and that seen at the learnt location. The best example of such a task is the use of visual feedback for autonomous parallel-parking of an automobile.

Much of the existing work in pose stabilization is highly theoretical in nature, with few examples of implementations on 'real' vehicles, let alone vehicles representative of those found in industry. The work in this thesis develops a suitable test platform and implements vision-based pose stabilization techniques. Many of the existing techniques were found to fail due to vehicle steering and velocity loop dynamics and, more significantly, steering input saturation. A technique which does cope with the characteristics of 'real' vehicles is to divide the task into predefined stages, essentially dividing the state space into sub-manifolds. For a car-like vehicle, the strategy used is to stabilize the vehicle to the line which has the correct orientation and contains the target location. Once on the line, the vehicle then servos to the desired pose. This strategy can accommodate velocity and steering loop dynamics, and input saturation. It also allows the use of linear control techniques for system analysis and tuning of control gains.

To perform pose stabilization, good estimates of vehicle pose are required. A simple yet robust method derived from the visual homing literature is to sum the range vectors to all the landmarks in the workspace and divide by the total number of landmarks: the Improved Average Landmark Vector (IALV). By subtracting the IALV at the target location from the currently calculated IALV, an estimate of vehicle pose is obtained. In this work, views of the world are provided by an omnidirectional camera, while a magnetic compass provides a reference direction. The landmarks used are red road cones, which are segmented from the omnidirectional colour images using a pre-learnt, two-dimensional lookup table of their colour profile. Range to each landmark is estimated using a model of the optics of the system, based on a flat-Earth assumption. A linked-list based method is used to filter the landmarks over time, and complementary filtering techniques, which combine the vision data with vehicle odometry, are used to improve the quality of the measurements.
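The IALV construction described above is compact enough to sketch directly. Below is a minimal illustration, assuming landmark range vectors are already expressed in a compass-aligned frame and that the same landmarks are visible at both poses; the names and numbers are ours, not Usher's.

```python
import numpy as np

def ialv(landmark_vectors):
    """Improved Average Landmark Vector: the mean of the range vectors
    from the vehicle to every visible landmark, expressed in a
    compass-aligned frame (omnidirectional camera plus magnetic compass)."""
    return np.mean(landmark_vectors, axis=0)

# Illustrative usage with made-up 2D range vectors to three landmarks.
target_ialv = ialv(np.array([[4.0, 1.0], [2.0, -3.0], [5.0, 2.0]]))   # at target
current_ialv = ialv(np.array([[6.0, 2.0], [4.0, -2.0], [7.0, 3.0]]))  # now

# Since IALV(p) = mean(landmarks) - p, the difference of the two IALVs
# approximates the vector from the current pose toward the target.
homing_vector = current_ialv - target_ialv  # drive in this direction
```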
265
The design and implementation of vision-based autonomous rotorcraft landing / De Jager, Andries Matthys, March 2011
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2011.

ENGLISH ABSTRACT: This thesis presents the design and implementation of all the subsystems required to perform precision autonomous helicopter landings within a low-cost framework. To obtain high-accuracy state estimates during the landing phase, a vision-based approach was used, with a downwards-facing camera on the helicopter and a known landing target. An efficient monocular-view pose estimation algorithm was developed to determine the helicopter's relative position and attitude during the landing phase. This algorithm was analysed and compared to existing algorithms in terms of sensitivity, robustness and runtime. An augmented kinematic state estimator was developed to combine measurements from low-cost GPS and inertial measurement units with the high-accuracy measurements from the camera system. High-level guidance algorithms, capable of performing waypoint navigation and autonomous landings, were developed. A visual position and attitude measurement (VPAM) node was designed and built to perform the pose estimation and execute the associated algorithms. To increase the node's throughput, a compression scheme is used between the image sensor and the processor to reduce the amount of data that needs to be processed. This reduces processing requirements and allows the entire system to remain on-board, with no reliance on radio links. The functionality of the VPAM node was confirmed through a number of practical tests. The node is able to provide measurements of sufficient accuracy for the subsequent systems in the autonomous landing system. The functionality of the full system was confirmed in a software environment, as well as through testing using a visually augmented hardware-in-the-loop environment.

AFRIKAANSE OPSOMMING (translated): This thesis describes the development of the subsystems required for accurate autonomous helicopter landings. An underlying goal was to complete all development within a low-cost framework. High-accuracy state estimates are required to ensure accurate landings. These measurements were obtained by developing an optical system consisting of a camera mounted on the helicopter and a known landing target. An efficient monocular-vision pose estimation algorithm was developed to determine the helicopter's position and orientation relative to the landing target. This algorithm was thoroughly investigated and compared with existing algorithms in terms of sensitivity, robustness and execution time. An optimal kinematic state estimator, which combines measurements from GPS and inertial sensors with the measurements from the optical system, was developed and verified through simulation. High-level guidance algorithms were developed that enable the helicopter to perform point-to-point navigation and the landing procedure. A visual position-and-orientation measurement node was developed to execute the monocular-vision pose estimation algorithms. To increase throughput, a compression algorithm was used to reduce the amount of data to be processed. This reduced the required processing power and ensured that all processing could take place on-board. The measurement node and monocular-vision algorithms were verified through practical tests and can provide measurements of sufficient accuracy to the autonomous landing system. The operation of the complete system was confirmed through simulations in software and hardware-in-the-loop environments.
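Pose from a known landing target seen by a single camera is an instance of the Perspective-n-Point (PnP) problem. The thesis develops and benchmarks its own algorithm; purely to illustrate the problem setup (not De Jager's method), the sketch below recovers relative pose with OpenCV's generic solver, using made-up target geometry, pixel detections and intrinsics.

```python
import numpy as np
import cv2

# Known landing-target geometry: corner coordinates in the target frame
# (metres). A flat, square 1 m target is assumed here for illustration.
object_points = np.array([[-0.5, -0.5, 0.0],
                          [ 0.5, -0.5, 0.0],
                          [ 0.5,  0.5, 0.0],
                          [-0.5,  0.5, 0.0]], dtype=np.float64)

# Corresponding pixel detections from the downward-facing camera
# (hypothetical measurements).
image_points = np.array([[310.0, 240.0],
                         [402.0, 238.0],
                         [405.0, 330.0],
                         [308.0, 333.0]], dtype=np.float64)

# Illustrative pinhole intrinsics; lens distortion neglected (None below).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
# tvec: target position in the camera frame; rvec: axis-angle rotation.
# The helicopter's position relative to the target is the inverse transform.
R, _ = cv2.Rodrigues(rvec)
helicopter_in_target = -R.T @ tvec
```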
266
Vision-based trailer pose estimation for articulated vehicles / de Saxe, Christopher Charles, January 2017
Articulated Heavy Goods Vehicles (HGVs) are more efficient than conventional rigid lorries, but exhibit reduced low-speed manoeuvrability and high-speed stability. Technologies such as autonomous reversing and path-following trailer steering can mitigate this, but practical limitations of the available sensing technologies restrict their commercialisation potential. This dissertation describes the development of practical vision-based articulation-angle and trailer off-tracking sensing for HGVs.

Chapter 1 provides a background and literature review, covering important vehicle technologies, existing commercial and experimental sensors for articulation-angle and off-tracking measurement, and relevant vision-based technologies. This is followed by an introduction to pertinent computer vision theory and terminology in Chapter 2.

Chapter 3 describes the development and simulation-based assessment of an articulation-angle sensing concept. It utilises a rear-facing camera mounted behind the truck or tractor, and one of two proposed image processing methods: template matching and Parallel Tracking and Mapping (PTAM). The PTAM-based method was shown to be the more accurate and versatile method in full-scale vehicle tests. RMS measurement errors of 0.4-1.6° were observed in tests on a tractor semi-trailer (Chapter 4), and 0.8-2.4° in tests on a Nordic combination with two articulation points (Chapter 5). The system requires no truck-trailer communication links or artificial markers, and is compatible with multiple trailer shapes, but was found to have increasing errors at higher articulation angles.

Chapter 6 describes the development and simulation-based assessment of a trailer off-tracking sensing concept, which utilises a trailer-mounted stereo camera pair and visual odometry. The concept was evaluated in full-scale tests on a tractor semi-trailer combination in which camera location and stereo baseline were varied, presented in Chapter 7. RMS measurement errors of 0.11-0.13 m were obtained in some tests, but in others a sensitivity to camera alignment was discovered which negatively affected results. A very stiff stereo camera mount with a sub-0.5 m baseline is suggested for future experiments.

A summary of the main conclusions, a review of the objectives, and recommendations for future work are given in Chapter 8. Recommendations include further refinement of both sensors, an investigation into lighting sensitivity, and alternative applications of the sensors.
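As a rough sketch of the template-matching variant (the simpler of the two methods; the dissertation found the PTAM-based method more accurate), an articulation-angle estimator can score a bank of trailer-face templates, pre-rendered at known angles, against the rear-camera image. The template bank and all names below are illustrative assumptions, not de Saxe's implementation.

```python
import numpy as np
import cv2

def articulation_angle(rear_image, template_bank):
    """Estimate the truck-trailer articulation angle by matching the
    rear-camera view (BGR image) against grayscale templates of the
    trailer front face pre-rendered at known angles.

    template_bank: dict mapping angle_degrees -> grayscale template image
    """
    gray = cv2.cvtColor(rear_image, cv2.COLOR_BGR2GRAY)
    best_angle, best_score = None, -np.inf
    for angle, template in template_bank.items():
        # Normalized cross-correlation is robust to global lighting changes.
        result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        score = result.max()
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle, best_score
```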
267
Motion synthesis for high degree-of-freedom robots in complex and changing environments / Yang, Yiming, January 2018
The use of robotics has recently seen significant growth in various domains such as unmanned ground/underwater/aerial vehicles, smart manufacturing, and humanoid robots. However, one of the most important capabilities required for long-term autonomy, the ability to operate robustly and safely in real-world environments rather than in industrial and laboratory setups, is largely missing. Designing robots that can operate reliably and efficiently in cluttered and changing environments is non-trivial, especially for high degree-of-freedom (DoF) systems, i.e. robots with multiple actuators. On one hand, the dexterity offered by kinematic redundancy allows the robot to perform dexterous manipulation tasks in complex environments; on the other hand, such a complex system also makes control and planning very challenging. To address these two interrelated problems, we approach robot motion synthesis from three perspectives that feed into each other: end-pose planning, motion planning and motion adaptation. We propose several novel ideas in each of the three phases, with which we can efficiently synthesise dexterous manipulation motion for fixed-base robotic arms, mobile manipulators, and humanoid robots in cluttered and potentially changing environments.

Collision-free inverse kinematics (IK), or so-called end-pose planning, is a key prerequisite for other modules such as motion planning, and an important yet unsolved problem in robotics. Such information is often assumed given, or provided manually in practice, which significantly limits high-level autonomy. In our research, by using novel data pre-processing and encoding techniques, we are able to efficiently search for collision-free end-poses in challenging scenarios, including in the presence of uneven terrain.

After an end-pose has been found, the motion planning module can proceed. Although motion planning is often claimed to be well studied, we find that existing algorithms are still unreliable for robust and safe operation in real-world applications, especially when the environment is cluttered and changing. We propose a novel resolution-complete motion planning algorithm, the Hierarchical Dynamic Roadmap, that is able to generate collision-free motion trajectories for redundant robotic arms in extremely complicated environments where other methods would fail. While planning for fixed-base robotic arms is relatively less challenging, we also investigate efficient motion planning algorithms for high-DoF (30-40) humanoid robots, where an extra balance constraint needs to be taken into account. The results show that our method is able to efficiently generate collision-free whole-body trajectories for different humanoid robots in complex environments where other methods would require much longer planning times.

Both the end-pose and motion planning algorithms compute solutions in static environments and assume the environment stays static during execution. While humans and most animals are remarkably good at handling environmental changes, state-of-the-art robotics technology is far from achieving such an ability. To address this issue, we propose a novel state-space representation, the Distance Mesh space, in which the robot is able to remap the pre-planned motion in real-time and adapt to environmental changes during execution.
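In its simplest form, the end-pose planning problem described above is collision-checked inverse kinematics over the robot's redundant solutions. The sketch below shows only that baseline idea (the thesis's contribution is the data pre-processing and encoding that make this search efficient); the IK and collision hooks are assumed to be provided by the robot model, and the joint limits are made up.

```python
import random

# Illustrative joint limits for a 7-DoF arm (radians); not from the thesis.
JOINT_LIMITS = [(-2.9, 2.9)] * 7

def plan_end_pose(target, ik_solver, in_collision, n_seeds=200):
    """Baseline collision-free end-pose search for a redundant robot:
    run IK from many random seed configurations and return the first
    solution that passes the collision check.

    ik_solver(target, seed) -> joint configuration, or None on failure
    in_collision(q)         -> True if configuration q collides
    """
    for _ in range(n_seeds):
        seed = [random.uniform(lo, hi) for lo, hi in JOINT_LIMITS]
        q = ik_solver(target, seed)
        if q is not None and not in_collision(q):
            return q
    return None  # no collision-free end-pose found within the budget
```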
By utilizing the proposed end-pose planning, motion planning and motion adaptation techniques, we obtain a robotic framework that significantly improves the level of autonomy. The proposed methods have been validated on various state-of-the-art robot platforms, such as the UR5 (6-DoF fixed-base robotic arm), KUKA LWR (7-DoF fixed-base robotic arm), Baxter (14-DoF fixed-base bi-manual manipulator), Husky with dual UR5s (15-DoF mobile bi-manual manipulator), PR2 (20-DoF mobile bi-manual manipulator), NASA Valkyrie (38-DoF humanoid) and many others, showing that our methods are applicable to high-dimensional motion planning in practical problems.
268
Volume Estimation of Airbags: A Visual Hull Approach / Anliot, Manne, January 2005
This thesis presents a complete and fully automatic method for estimating the volume of an airbag, through all stages of its inflation, with multiple synchronized high-speed cameras. Using recorded contours of the inflating airbag, its visual hull is reconstructed with a novel method: the intersections of all back-projected contours are first identified with an accelerated epipolar algorithm. These intersections, together with additional points sampled from concave surface regions of the visual hull, are then Delaunay-triangulated into a connected set of tetrahedra. Finally, the visual hull is extracted by carving away the tetrahedra that are classified as inconsistent with the contours, according to a voting procedure. The volume of an airbag's visual hull is always larger than the airbag's real volume. By projecting a known synthetic model of the airbag into the cameras, this volume offset is computed, and an accurate estimate of the real airbag volume is extracted. Even though volume estimates can be computed for any camera setup, the cameras should be specially posed to achieve optimal results. Such optimal poses are found for each airbag model with a separate, fully automatic simulated annealing algorithm. Satisfactory results are presented for both synthetic and real-world data.
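The thesis reconstructs the hull from Delaunay-triangulated contour intersections; a cruder but compact way to see the same visual-hull principle is voxel carving, sketched below with an illustrative grid resolution and camera interface (this is not Anliot's tetrahedron-based method).

```python
import numpy as np

def visual_hull_volume(project_fns, silhouettes, bounds, res=64):
    """Estimate an object's visual-hull volume by voxel carving.

    project_fns: per-camera functions mapping (N, 3) points to pixel arrays (u, v)
    silhouettes: per-camera boolean masks (True inside the object contour)
    bounds: ((xmin, xmax), (ymin, ymax), (zmin, zmax)) of the working volume
    """
    axes = [np.linspace(lo, hi, res) for lo, hi in bounds]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    pts = np.stack([X.ravel(), Y.ravel(), Z.ravel()], axis=1)
    inside = np.ones(len(pts), dtype=bool)
    for project, mask in zip(project_fns, silhouettes):
        u, v = project(pts)                      # vectorised projection
        h, w = mask.shape
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        keep = np.zeros(len(pts), dtype=bool)
        keep[valid] = mask[v[valid].astype(int), u[valid].astype(int)]
        inside &= keep                           # carve voxels outside any contour
    voxel_vol = np.prod([(hi - lo) / res for lo, hi in bounds])
    return inside.sum() * voxel_vol              # upper bound on the true volume
```

As the abstract notes, the hull volume always upper-bounds the true volume, which is why the thesis calibrates the offset by projecting a known synthetic airbag model through the same cameras.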
269
Object detection and pose estimation of randomly organized objects for a robotic bin picking system / Skalski, Tomasz; Zaborowski, Witold, January 2013
Today, modern industrial systems are almost fully automated. The high requirements regarding speed, flexibility, precision and reliability make some of them very difficult to create. One of the most actively researched solutions for carrying out many processes without human intervention is bin picking. Bin picking is a very complex process which integrates devices such as a robotic grasping arm, a vision system, collision-avoidance algorithms and many others. This paper describes the creation of a vision system, the most important part of the whole bin-picking system. The authors propose a model-based solution for estimating the position and orientation of the best pick-up candidate. In this method, a database is created from a 3D CAD model and compared with the processed image from a 3D scanner. The paper describes in detail the creation of the database from a 3D STL model, the configuration of the Sick IVP 3D scanner, and the design of the matching algorithm based on an autocorrelation function and morphological operators. The results show that the proposed solution is universal, time-efficient and robust, and opens opportunities for further work.
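A simplified sketch of the matching idea, correlating a scanner range image against depth templates rendered from the CAD model, is given below. The paper's actual comparison is built on an autocorrelation function and morphological operators, so the cross-correlation variant and all names here are our assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy import ndimage, signal

def best_pick_candidate(depth_image, cad_templates):
    """Locate the best pick-up candidate in a range image by correlating
    it against depth templates rendered from the part's CAD model at
    sampled orientations.

    cad_templates: dict mapping an orientation to a depth template (2D array)
    """
    # Morphological opening suppresses thin speckle noise from the scanner.
    cleaned = ndimage.grey_opening(depth_image, size=(3, 3))
    best = None
    for orientation, template in cad_templates.items():
        # Zero-mean correlation so absolute depth offsets do not dominate.
        corr = signal.correlate2d(cleaned - cleaned.mean(),
                                  template - template.mean(), mode="valid")
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        score = corr[peak]
        if best is None or score > best[0]:
            best = (score, orientation, peak)  # (match score, pose, image location)
    return best
```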
270
Simultaneous real-time object recognition and pose estimation for artificial systems operating in dynamic environments / Van Wyk, Frans Pieter, January 2013
Recent advances in technology have increased awareness of the necessity for automated systems in people's everyday lives. Artificial systems are more frequently being introduced into environments previously thought to be too perilous for humans to operate in. Some robots can be used to extract potentially hazardous materials from sites inaccessible to humans, while others are being developed to aid humans with laborious tasks.

A crucial aspect of all artificial systems is the manner in which they interact with their immediate surroundings. Developing such a deceptively simple aspect has proven to be significantly challenging, as it not only entails the methods through which the system perceives its environment, but also its ability to perform critical tasks. These undertakings often involve the coordination of numerous subsystems, each performing its own complex duty. To complicate matters further, it is becoming increasingly important for these artificial systems to perform their tasks in real-time.

The task of object recognition is typically described as the process of retrieving the object in a database that is most similar to an unknown, or query, object. Pose estimation, on the other hand, involves estimating the position and orientation of an object in three-dimensional space, as seen from an observer's viewpoint. These two tasks are regarded as vital to many computer vision techniques and regularly serve as input to more complex perception algorithms.

An approach is presented which regards the object recognition and pose estimation procedures as mutually dependent. The core idea is that dissimilar objects might appear similar when observed from certain viewpoints. A feature-based conceptualisation, which makes use of a database, is implemented and used to perform simultaneous object recognition and pose estimation. The design incorporates data compression techniques, originally suggested by the image-processing community, to facilitate fast processing of large databases.

System performance is quantified primarily on object recognition, pose estimation and execution-time characteristics. These aspects are investigated under ideal conditions by exploiting three-dimensional models of relevant objects. The performance of the system is also analysed for practical scenarios by acquiring input data from a structured-light implementation, which resembles that obtained from many commercial range scanners.

Practical experiments indicate that the system was capable of performing simultaneous object recognition and pose estimation in approximately 230 ms once a novel object had been sensed. An average object recognition accuracy of approximately 73% was achieved. The pose estimation results were reasonable but prompted further research. The results are comparable to what has been achieved using other suggested approaches such as Viewpoint Feature Histograms and Spin Images.

Dissertation (MEng)--University of Pretoria, 2013. Department of Electrical, Electronic and Computer Engineering.