
Visual servoing path-planning for generalized cameras and objects

Visual servoing (VS) is an automatic control technique that uses vision feedback to control robot motion. Eye-in-hand VS systems, with the vision sensor mounted directly on the robot end-effector, have received significant attention, in particular for the task of steering the vision sensor (usually a camera) from its present position to a desired one identified by image features shown in advance. The servo uses the difference between the present and the desired views of some objects to compute real-time driving signals; this approach is also known as the "teach-by-showing" method. Accomplishing such a task requires satisfying many constraints and limits, such as the camera field of view (FOV), robot joint limits, and collision and occlusion avoidance. Path-planning technologies, a branch of high-level control strategies, are explored in this thesis to impose these constraints on VS tasks for different types of cameras and objects.
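The teach-by-showing idea above can be sketched as a minimal discrete servo loop that drives the current feature vector toward the taught (desired) view with the classic proportional law s_dot = -lam * (s - s*). This is an illustrative sketch only; the gain lam, step dt, and pixel values are assumed, not taken from the thesis.

```python
# Minimal sketch of a discrete image-based visual servoing (IBVS) loop.
# The feature vector s is driven toward the taught reference view s_star by
# the proportional law  s_dot = -lam * (s - s_star), integrated with explicit
# Euler steps.  All parameter values (lam, dt, steps, tol) are illustrative.

def ibvs_step(s, s_star, lam=1.0, dt=0.1):
    """One Euler step of the proportional feature-error controller."""
    return [si - lam * (si - ri) * dt for si, ri in zip(s, s_star)]

def servo(s0, s_star, lam=1.0, dt=0.1, steps=200, tol=1e-3):
    """Iterate until the image error falls below tol (or steps run out)."""
    s = list(s0)
    for _ in range(steps):
        err = max(abs(si - ri) for si, ri in zip(s, s_star))
        if err < tol:
            break
        s = ibvs_step(s, s_star, lam, dt)
    return s

if __name__ == "__main__":
    current = [120.0, 80.0, 300.0, 95.0]    # pixel coordinates, present view
    desired = [160.0, 120.0, 340.0, 140.0]  # pixel coordinates, taught view
    print(servo(current, desired))
```

In a real IBVS controller the feature error is mapped to a camera velocity through the pseudoinverse of the interaction matrix; the sketch collapses that mapping into a scalar gain to show only the error-driven loop structure.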

First, a VS path-planning strategy is proposed for a class of cameras that includes conventional perspective cameras, fisheye cameras, and catadioptric systems. These cameras are described by a unified mathematical model, and the strategy consists of designing image trajectories that allow the camera to reach the desired position while satisfying the camera FOV limit and end-effector collision avoidance. To this end, the proposed strategy introduces the projection of the available image features onto a virtual plane and the computation of a feasible camera trajectory through polynomial programming. The computed image trajectory is then tracked by an image-based visual servoing (IBVS) controller. Experimental results with a fisheye camera mounted on a 6-degree-of-freedom (6-DoF) robot arm illustrate the proposed strategy.
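The unified model mentioned above is commonly formulated as a two-step projection: the 3D point is first projected onto a unit sphere and then onto an image plane offset by a mirror parameter xi, with xi = 0 recovering the pinhole (perspective) case. The sketch below illustrates this generic projection; the focal length and principal point values are assumptions, not the thesis's calibration.

```python
import math

# Sketch of the unified central-camera projection that covers perspective,
# fisheye and catadioptric sensors with one model: the 3D point is projected
# onto the unit sphere, then onto the image plane shifted by the mirror
# parameter xi.  xi = 0 recovers the pinhole model; the focal length f and
# principal point (cx, cy) below are illustrative values.

def unified_project(X, xi=0.0, f=400.0, cx=320.0, cy=240.0):
    """Project a 3D point X = (x, y, z) to pixel coordinates (u, v)."""
    x, y, z = X
    rho = math.sqrt(x * x + y * y + z * z)  # distance to the optical center
    denom = z + xi * rho                    # image-plane shift controlled by xi
    u = f * x / denom + cx
    v = f * y / denom + cy
    return u, v

if __name__ == "__main__":
    P = (0.2, -0.1, 2.0)
    print(unified_project(P, xi=0.0))  # pinhole case
    print(unified_project(P, xi=0.8))  # catadioptric/fisheye-like case
```

Because one formula covers the whole camera class, a planner written against this model applies unchanged to perspective, fisheye, and catadioptric sensors.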

Second, this thesis proposes a path-planning strategy for visual servoing with image moments, in which the observed features are not restricted to points. Image moments of solid objects such as circles and spheres are more intuitive features than the dominant feature points typically used in VS applications. The problem consists of planning a trajectory that ensures the convergence of the robot end-effector to the desired position while satisfying workspace (Cartesian-space) constraints of the robot end-effector and visibility constraints of these solid objects, in particular collision and occlusion avoidance. A solution based on polynomial parametrization is proposed and validated by simulation and experimental results.
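Image moments summarize a whole region of the image rather than isolated points. As a minimal sketch under assumed names, the raw moment m_pq of a binary blob is the sum of x^p * y^q over its pixels, from which area, centroid, and centered second-order moments follow:

```python
# Sketch of raw and centered image moments for a binary blob, the kind of
# feature used in moment-based visual servoing when the target is a solid
# shape (e.g. a disc or a sphere silhouette) rather than isolated points.
# The blob is given as a list of (x, y) pixel coordinates; all names and
# values are illustrative.

def raw_moment(pixels, p, q):
    """m_pq = sum over the blob of x^p * y^q."""
    return sum((x ** p) * (y ** q) for x, y in pixels)

def moment_features(pixels):
    """Area, centroid, and second-order centered moments of the blob."""
    m00 = raw_moment(pixels, 0, 0)       # area (pixel count)
    xg = raw_moment(pixels, 1, 0) / m00  # centroid x
    yg = raw_moment(pixels, 0, 1) / m00  # centroid y
    mu20 = sum((x - xg) ** 2 for x, _ in pixels)
    mu02 = sum((y - yg) ** 2 for _, y in pixels)
    return m00, (xg, yg), (mu20, mu02)

if __name__ == "__main__":
    # 3x3 square blob centered at (11, 21)
    blob = [(x, y) for x in (10, 11, 12) for y in (20, 21, 22)]
    print(moment_features(blob))
```

Planning in the space of such moments lets the trajectory control the apparent size and position of the whole object in the image, which is what makes visibility and occlusion constraints natural to express for solid targets.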

Third, constrained optimization is combined with robot teach-by-demonstration to simultaneously address the visibility constraint, joint limits, and whole-arm collisions for robust vision-based control of a robot manipulator. User demonstration data generates safe regions for robot motion with respect to joint limits and potential whole-arm collisions. Constrained optimization then uses these safe regions to generate new feasible trajectories, under the visibility constraint, that achieve the desired view of the target (e.g., a pre-grasping location) in new, undemonstrated locations. To fulfill these requirements, camera trajectories that traverse a set of selected control points are modeled and optimized using either quintic Hermite splines or polynomials with C2 continuity. Experiments with a 7-DoF articulated arm validate the proposed method.

published_or_final_version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
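The quintic Hermite segments mentioned above interpolate position, velocity, and acceleration at both endpoints, so chained segments that share these boundary values are C2-continuous. A minimal sketch of one such segment (scalar case, unit parameter interval, illustrative boundary values):

```python
# Sketch of one quintic Hermite segment, the C2 trajectory primitive used to
# model camera paths through selected control points.  Endpoint position,
# velocity and acceleration (p0, v0, a0, p1, v1, a1) are interpolated
# exactly, so consecutive segments sharing these boundary values join with
# C2 continuity.  The basis polynomials below are the standard quintic
# Hermite basis on t in [0, 1]; the demo values are illustrative.

def quintic_hermite(t, p0, v0, a0, p1, v1, a1):
    """Evaluate the quintic Hermite segment at parameter t in [0, 1]."""
    t2, t3, t4, t5 = t * t, t ** 3, t ** 4, t ** 5
    h0 = 1 - 10 * t3 + 15 * t4 - 6 * t5   # weights p0
    h1 = t - 6 * t3 + 8 * t4 - 3 * t5     # weights v0
    h2 = 0.5 * t2 - 1.5 * t3 + 1.5 * t4 - 0.5 * t5  # weights a0
    h3 = 10 * t3 - 15 * t4 + 6 * t5       # weights p1
    h4 = -4 * t3 + 7 * t4 - 3 * t5        # weights v1
    h5 = 0.5 * t3 - t4 + 0.5 * t5         # weights a1
    return h0 * p0 + h1 * v0 + h2 * a0 + h3 * p1 + h4 * v1 + h5 * a1

if __name__ == "__main__":
    # rest-to-rest move from 0 to 1: zero velocity and acceleration at both ends
    for t in (0.0, 0.5, 1.0):
        print(t, quintic_hermite(t, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0))
```

In the optimization setting, the free boundary values at interior control points become decision variables, and the safe regions and visibility constraint restrict where the evaluated curve may go.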

Identifier oai:union.ndltd.org:HKU/oai:hub.hku.hk:10722/192842
Date January 2013
Creators Shen, Tiantian., 沈添天.
Contributors Chesi, G, Hung, YS
Publisher The University of Hong Kong (Pokfulam, Hong Kong)
Source Sets Hong Kong University Theses
Language English
Detected Language English
Type PG_Thesis
Source http://hub.hku.hk/bib/B50899879
Rights The author retains all proprietary rights (such as patent rights) and the right to use in future works., Creative Commons: Attribution 3.0 Hong Kong License
Relation HKU Theses Online (HKUTO)