Robust color-based vision for mobile robots

An intelligent agent operating in the real world needs to be
fully aware of its surrounding environment to make the best possible
decision at any given moment. A robot can gather real-time information
about its surroundings from many kinds of input devices, such as video
cameras, laser/sonar range finders, and GPS, to name a few. In this
thesis, a vision system for a mobile robot navigating through different
illumination conditions is investigated.

Many state-of-the-art object recognition algorithms operate on
grayscale images, because using color is difficult for two reasons:
(a) the object of interest's true colors may not be recorded by the
camera hardware due to illumination artifacts, and (b) colors are often
too ambiguous to serve as a robust visual descriptor of an object. In
this dissertation, we address these two challenges and present new
color-based vision algorithms for mobile robots that are robust and
efficient.

The first part of this dissertation focuses on the problem of color
constancy for mobile robots under different lighting conditions.
Specifically, we use a generate-and-test methodology to evaluate which
simulated global illumination condition leads to the generated view that
most closely matches what the robot actually sees. We assume the
diagonal color model when generating views of the object of interest
under previously unseen conditions.
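As an illustration only (the abstract does not give the dissertation's
implementation details), the following minimal Python sketch shows the two
ingredients just described: the diagonal color model, which maps an image
between illuminants by scaling each color channel independently, and a
generate-and-test loop that keeps the candidate illuminant whose generated
view best matches the observation. The candidate gains and the squared-error
match score are assumptions for this sketch.

    import numpy as np

    def apply_diagonal_model(image, gains):
        """Map an RGB image from a canonical illuminant to a candidate
        illuminant by scaling each channel independently (the diagonal
        / von Kries color model)."""
        return np.clip(image * gains, 0.0, 1.0)

    def best_illuminant(reference, observed, candidate_gains):
        """Generate-and-test: generate the reference object's view under
        each candidate illuminant and keep the candidate whose generated
        view is closest to what the robot actually sees."""
        best, best_err = None, np.inf
        for gains in candidate_gains:
            generated = apply_diagonal_model(reference, gains)
            err = np.mean((generated - observed) ** 2)  # illustrative score
            if err < best_err:
                best, best_err = gains, err
        return best, best_err

    # Hypothetical usage: candidate illuminants as per-channel RGB gains.
    reference = np.random.rand(64, 64, 3)                   # canonical view
    observed = np.clip(reference * [1.2, 1.0, 0.8], 0, 1)   # reddish light
    candidates = [np.array(g) for g in ([1.0, 1.0, 1.0],
                                        [1.2, 1.0, 0.8],
                                        [0.8, 1.0, 1.2])]
    gains, err = best_illuminant(reference, observed, candidates)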

In the second part of the dissertation, we present a vision framework
for mobile robots that enables observation of illumination artifacts in
a scene and reasoning about the lighting conditions to achieve robust
color-based object tracking. Before implementing this framework, we
first devise a novel vision-based localization correction algorithm with
graphics hardware support, and show how to find possibly shaded regions
in the recorded scene using techniques from 3D computer graphics. We
then demonstrate how to integrate a color-based object tracker from the
first part of this dissertation with our vision framework.
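The abstract does not name the specific graphics technique used to find
possibly shaded regions; one standard candidate from 3D graphics is a
shadow-map test, in which a scene point is possibly shaded when it lies
farther from the light source than the first surface the light sees in that
direction. The sketch below assumes that technique and hypothetical inputs:
a depth map rendered from the light's viewpoint and 3D points reconstructed
for camera pixels.

    import numpy as np

    def possibly_shaded(points_world, light_view, light_proj,
                        light_depth, bias=1e-3):
        """Shadow-map style test: transform 3D scene points into the
        light's clip space, look up the depth the light recorded in that
        direction, and flag points lying behind it as possibly shaded.

        points_world: (N, 3) scene points reconstructed for camera pixels
        light_view, light_proj: 4x4 view/projection matrices of the light
        light_depth: (H, W) depth map rendered from the light's viewpoint
        """
        n = points_world.shape[0]
        homo = np.hstack([points_world, np.ones((n, 1))])
        clip = homo @ (light_proj @ light_view).T
        ndc = clip[:, :3] / clip[:, 3:4]            # perspective divide
        h, w = light_depth.shape
        u = ((ndc[:, 0] * 0.5 + 0.5) * (w - 1)).astype(int)
        v = ((ndc[:, 1] * 0.5 + 0.5) * (h - 1)).astype(int)
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        shaded = np.zeros(n, dtype=bool)
        depth_at = light_depth[v[inside], u[inside]]
        point_depth = ndc[inside, 2] * 0.5 + 0.5    # map depth to [0, 1]
        shaded[inside] = point_depth > depth_at + bias
        return shaded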

Even with the contributions from the first two parts of the
dissertation, some degree of uncertainty remains in the robot's
assessment of an object's true color. The final part of this
dissertation introduces a novel algorithm to overcome this uncertainty
in color-based object tracking. We show how an agent can achieve robust
color-based object tracking by combining multiple different visual
characteristics of an object, yielding more accurate robot vision in the
real world.
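As a hedged illustration of combining multiple visual characteristics (the
abstract does not specify which cues or fusion rule the dissertation uses),
the sketch below fuses a color similarity score with a second cue, weighting
the color cue by how confident the robot currently is about the object's
true color:

    import numpy as np

    def fuse_scores(color_score, shape_score, color_confidence):
        """Combine two visual cues into one match score. When the robot
        is unsure of the object's true color (low color_confidence), the
        shape cue dominates; the exponents act as soft weights."""
        w = np.clip(color_confidence, 0.0, 1.0)
        return (color_score ** w) * (shape_score ** (1.0 - w))

    # Hypothetical usage: candidate image regions scored by each cue.
    color_scores = np.array([0.9, 0.4, 0.7])  # e.g. histogram similarity
    shape_scores = np.array([0.3, 0.8, 0.7])  # e.g. contour similarity
    fused = fuse_scores(color_scores, shape_scores, color_confidence=0.5)
    best_region = int(np.argmax(fused))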

Identifier: oai:union.ndltd.org:UTEXAS/oai:repositories.lib.utexas.edu:2152/ETD-UT-2011-12-4770
Date: 31 January 2012
Creators: Lee, Juhyun
Source Sets: University of Texas
Language: English
Type: thesis
Format: application/pdf
