Robots have become increasingly adept at performing a wide variety of tasks in the world. However, many of these tasks can benefit tremendously from having more than a single robot working on the problem simultaneously. Multiple robots can aid in a search-and-rescue mission, each scouting a subsection of the entire area in order to cover it more quickly than a single robot could. Alternatively, robots with different abilities can collaborate to achieve goals that would be more difficult, if not impossible, to achieve individually. In these cases, multi-robot collaboration can shorten search times, provide a broader mix of sensing, computing, and manipulation capabilities, or add redundancy to the system for communications or mission accomplishment. One principal challenge of multi-robot systems is how to efficiently and effectively generate plans that use each team member to its fullest extent, particularly with a heterogeneous mix of capabilities.

Towards this goal, I have developed a series of planning algorithms that incorporate this collaboration into the planning process. Starting with systems that use collaboration in an exploration task, I show teams of homogeneous ground robots planning to efficiently explore an initially unknown space. These robots share map information and, in a centralized fashion, determine the best goal location for each robot, taking into account the information gained by the other robots as they move. This work is followed by a similar exploration scheme, this time expanded to a heterogeneous air-ground robot team operating in a full three-dimensional environment. The extra dimension requires the robots to reason, during planning, about which portions of the environment they can sense. With an air-ground team, some portions of the environment can be sensed by only one of the two robot types, and that information informs the algorithm during the planning process. Finally, I extend the air-ground robot team beyond merely collaboratively constructing the map to using the other robots to provide pose information for the sensor- and computationally-limited team members. By explicitly reasoning about when and where the robots must collaborate during the planning process, this approach can generate trajectories that would not be feasible to execute if planning occurred on an individual-robot basis.

An additional contribution of this thesis is the development of the State Lattice Planning with Controller-based Motion Primitives (SLC) framework. While SLC was developed to support the collaborative localization of multiple robots, it can also be used by a single robot to provide a more robust means of planning. For example, using the SLC algorithm to plan with a combination of vision-based and metric-based motion primitives allows a robot to traverse a GPS-denied region.
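To make the centralized exploration idea concrete, the following is a minimal sketch (not taken from the thesis) of greedy frontier-goal assignment in which each assignment discounts the expected information gain of nearby frontiers for subsequently assigned robots. All function names, parameters, and the Euclidean-distance cost are illustrative assumptions; the actual algorithm would plan path costs on the shared map.

```python
import math


def assign_exploration_goals(robot_positions, frontier_cells, discount_radius=5.0):
    """Greedy centralized goal assignment for multi-robot exploration.

    Each frontier cell starts with unit utility; once a goal is chosen,
    the utility of frontiers near that goal is discounted so later robots
    are steered toward regions not already covered by a teammate.
    """
    utility = {cell: 1.0 for cell in frontier_cells}
    assignments = {}

    for robot, pos in robot_positions.items():
        # Pick the frontier with the best utility-minus-cost trade-off.
        best_cell = max(
            frontier_cells,
            key=lambda c: utility[c] - math.dist(pos, c),
        )
        assignments[robot] = best_cell

        # Discount frontiers the assigned robot is expected to observe.
        for cell in frontier_cells:
            if math.dist(best_cell, cell) < discount_radius:
                utility[cell] *= 0.1

    return assignments


if __name__ == "__main__":
    robots = {"r1": (0.0, 0.0), "r2": (10.0, 0.0)}
    frontiers = [(2.0, 3.0), (3.0, 2.0), (12.0, 1.0)]
    print(assign_exploration_goals(robots, frontiers))
```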
Identifier | oai:union.ndltd.org:cmu.edu/oai:repository.cmu.edu:dissertations-2158 |
Date | 01 November 2017 |
Creators | Butzke, Jonathan Michael |
Publisher | Research Showcase @ CMU |
Source Sets | Carnegie Mellon University |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Dissertations |