
Enabling the evaluation of learning in instructable software agents

An Instructable Software Agent (ISA) is a software agent that humans can teach through Natural Instruction Methods (NIMs), the methods humans naturally use to teach one another. Examples of NIMs include giving demonstrations, conducting guided practice sessions, and defining concepts. If software agents were instructable, humans would be able to impart knowledge to software systems through a more natural interface.
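To make the three NIMs above concrete, the following is a minimal, hypothetical Python sketch of an interface such an agent might expose. The class and method names are illustrative assumptions and do not come from the dissertation.

    # Hypothetical sketch (not from the dissertation): a minimal interface
    # an Instructable Software Agent (ISA) might expose for three NIMs.

    class InstructableAgent:
        """Toy ISA that accumulates knowledge from natural instruction."""

        def __init__(self):
            self.concepts = {}        # concept name -> definition text
            self.demonstrations = []  # observed task traces

        def define_concept(self, name, definition):
            # NIM 1: the teacher states a definition of a concept.
            self.concepts[name] = definition

        def observe_demonstration(self, trace):
            # NIM 2: the teacher performs the task while the agent watches.
            self.demonstrations.append(trace)

        def guided_practice(self, task, feedback_fn):
            # NIM 3: the agent attempts the task; the teacher corrects it.
            attempt = self.attempt(task)
            return feedback_fn(attempt)

        def attempt(self, task):
            # Placeholder policy: replay the latest demonstration, if any.
            return self.demonstrations[-1] if self.demonstrations else None

Under this sketch, evaluating learning ability amounts to measuring how an agent's performance on a task improves as a teacher applies these instruction methods.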

In this dissertation, I address the problem of generating benchmarks for evaluating the learning ability of ISAs despite the important differences that may exist between human learners and ISAs. I first present three years of case studies that uncover the challenges of such a comparison, and then make recommendations for future studies. The main contributions of this dissertation are

1. a theory of using humans to evaluate the learning ability of ISAs,

2. a refined method for developing curricula and benchmarks for evaluating ISAs, including a scalable lab configuration for performing human benchmarking and a suite of accompanying software tools, and

3. the case studies themselves, amounting to an in-depth ethnographic study of the issues involved in using humans to develop curricula and benchmarks for ISAs.

Identifier: oai:union.ndltd.org:UTEXAS/oai:repositories.lib.utexas.edu:2152/ETD-UT-2012-08-6229
Date: 11 October 2012
Creators: Grant, Robert David
Source Sets: University of Texas
Language: English
Type: thesis
Format: application/pdf
