A Comparison of Vision-Based Autonomous Navigation for Target Grasping of Humanoid Robot by Enhanced SIFT and Traditional HT Algorithms / 加強型SIFT與傳統型Hough Transform於人形機器人視覺自動導引的目標抓取之比較

Master's thesis / Tamkang University / Department of Electrical Engineering, Master's Program / Academic year 101 / This thesis realizes a humanoid robot (HR) system that executes target grasping (TG) at an unknown 3-D world coordinate, one that lies beyond the recognizable distance of the vision system or is occluded by buildings. Suitable landmarks (LMs) with known 3-D world coordinates are placed at appropriate locations, or learned, along the path of the experimental environment. Before a landmark is detected and recognized, the HR is navigated along a pre-planned trajectory to the vicinity of the arranged LMs. After a specific LM is recognized via the scale-invariant feature transform (SIFT), the corresponding pre-trained multilayer neural network (MLNN) is employed to obtain, on-line, the relative distance between the HR and that LM. By correcting its localization through the LMs and searching for the target, the HR can be navigated accurately to the neighborhood of the target. Because the inverse kinematics (IK) of the two arms is time-consuming, another off-line MLNN model is applied to approximate the transform between the estimated ground truth of the target and the joint coordinates of the arm. Finally, comparisons between the so-called enhanced SIFT and the traditional Hough transform (HT) for straight-line detection, used to navigate the HR in the execution of target grasping, confirm the effectiveness and efficiency of the proposed method.
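The traditional Hough transform mentioned in the abstract detects a straight line by letting every edge point vote for all (theta, rho) parameter pairs consistent with it, then reading the line off the accumulator peak. The following is a minimal sketch of that voting scheme in plain Python, not the thesis's implementation; the point set and function name are illustrative only.

```python
import math

def hough_lines(points, theta_steps=180):
    """Minimal Hough transform sketch: each point (x, y) votes for every
    (theta, rho) pair satisfying rho = x*cos(theta) + y*sin(theta);
    the accumulator peak gives the dominant straight line."""
    acc = {}
    for theta_deg in range(theta_steps):
        t = math.radians(theta_deg)
        c, s = math.cos(t), math.sin(t)
        for x, y in points:
            rho = int(round(x * c + y * s))  # quantize rho to 1-pixel bins
            key = (theta_deg, rho)
            acc[key] = acc.get(key, 0) + 1
    # Return the (theta_deg, rho) bin with the most votes.
    return max(acc, key=acc.get)

# Illustrative input: points on the line y = x, whose normal form is
# theta = 135 degrees, rho = 0.
pts = [(i, i) for i in range(50)]
theta, rho = hough_lines(pts)  # -> (135, 0)
```

Real systems (e.g. OpenCV's `cv2.HoughLines`) use the same accumulator idea on an edge map, with finer angle/distance quantization and multiple peaks.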

Identifier: oai:union.ndltd.org:TW/101TKU05442001
Date: January 2013
Creators: Shang-Kai Su, 蘇上凱
Contributors: Chih-Lyang Hwang, 黃志良
Source Sets: National Digital Library of Theses and Dissertations in Taiwan
Language: zh-TW
Detected Language: English
Type: 學位論文 (thesis)
Format: 80
