Assistive robots have the potential to enable persons with motor disabilities to live more independent lives. Object retrieval has been rated a high-priority task for assistive robots. A key challenge in creating effective assistive robots lies in designing control interfaces through which the human user can direct the robot. This thesis builds on prior work that uses a laser pointer to let the person intuitively communicate their goals to a robot by creating a "clickable world". Specifically, this thesis reduces the infrastructure needed for the robot to recognize the user's goal by augmenting the laser pointer with a small camera, an inertial measurement unit (IMU), and a laser rangefinder to estimate the location of the object to be grasped. The robot drives to the approximate target location given by the laser-pointer input while using an onboard camera to detect an object near that location. The robot then uses local autonomy to visually navigate to the detected object and retrieve it.
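The pointing geometry described above can be made concrete with a short sketch. The following is a minimal illustration, not the thesis's actual implementation: it assumes the device's position in the robot's odometry frame is known, that the IMU orientation is available as a rotation matrix, and that the beam exits along the device's x-axis. All names and frame conventions here are assumptions for illustration.

```python
# Minimal sketch: estimate the lased object's location in the odometry frame
# from the pointing device's pose (IMU orientation) and rangefinder reading.
# Frame conventions and the beam axis are assumed, not taken from the thesis.
import numpy as np

def estimate_target_in_odom(device_pos_odom: np.ndarray,
                            device_rot_odom: np.ndarray,
                            range_m: float) -> np.ndarray:
    """Return the 3D point where the laser beam lands, in the odometry frame.

    device_pos_odom : (3,) device position in the odometry frame.
    device_rot_odom : (3, 3) rotation taking device-frame vectors into the
                      odometry frame (e.g., integrated from the IMU).
    range_m         : distance along the beam reported by the rangefinder.
    """
    beam_dir_device = np.array([1.0, 0.0, 0.0])   # beam along device x-axis (assumed)
    beam_dir_odom = device_rot_odom @ beam_dir_device
    return device_pos_odom + range_m * beam_dir_odom

# Example: device held 1 m above the ground, pitched 30 degrees downward
# (positive pitch about y in this convention), 2 m measured to the object.
pitch = np.deg2rad(30.0)
R = np.array([[ np.cos(pitch), 0.0, np.sin(pitch)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(pitch), 0.0, np.cos(pitch)]])
target = estimate_target_in_odom(np.array([0.0, 0.0, 1.0]), R, 2.0)
print(target)  # approximate goal the robot drives toward, e.g. [1.73, 0., 0.]
```

In a full system this estimate would only seed navigation; the onboard camera refines the goal once an object is detected near the estimated point, which is why coarse accuracy in the odometry frame suffices.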
Results show a successful proof of concept, demonstrating reasonable detection of user intent on a 1.23 m × 1.83 m test grid. Estimates of object location in the odometry frame fell within the range required for successful local-autonomy object retrieval in an environment with a single object. Future work includes testing with a wide variety of dropped objects and in cluttered environments, which is needed to validate the effectiveness of the system for potential end users.
Identifier | oai:union.ndltd.org:bu.edu/oai:open.bu.edu:2144/40939 |
Date | 15 May 2020 |
Creators | Hamilton, Kali |
Contributors | Khurshid, Rebecca P. |
Source Sets | Boston University |
Language | en_US |
Detected Language | English |
Type | Thesis/Dissertation |