Plan recognition in cooperative and adversarial situations requires the ability to reason with beliefs. The problem is harder in adversarial situations because an opponent may employ deception; in that case, an agent must be able to reason about the opponent's beliefs (nested beliefs) as well as its own. A system has been developed that permits agents to reason with nested beliefs using possible-worlds semantics; consistency maintenance allows agents to revise their beliefs when an inconsistency arises. The advantages over prior systems are that belief revision occurs without user interaction and that beliefs are treated as objects with the same status as facts, which permits complex interaction between beliefs and actions. The theory covers many of the areas needed to build a system of multiple autonomous reasoning agents. These agents are given the ability to deceive each other and to predict when they are being deceived. The system is shown to be practical by its implementation in a simulation.
Degree | Master of Science |
Identifier | oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/42193 |
Date | 25 April 2009 |
Creators | Klock, Brian Lee |
Contributors | Electrical Engineering, Roach, John W., Nutter, Jane Terry, Abbott, A. Lynn |
Publisher | Virginia Tech |
Source Sets | Virginia Tech Theses and Dissertations |
Language | English |
Detected Language | English |
Type | Thesis, Text |
Format | x, 349 leaves, BTD, application/pdf |
Rights | In Copyright, http://rightsstatements.org/vocab/InC/1.0/ |
Relation | OCLC# 23663419, LD5655.V855_1990.K576.pdf |
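The abstract's combination of nested beliefs and consistency maintenance can be sketched in miniature. The toy below is illustrative only and is not the thesis's actual system: it represents nested beliefs as (believer, proposition) pairs, retracts a directly contradictory belief when a new one is adopted (a crude stand-in for consistency maintenance), and flags possible deception when another agent asserts something it is believed not to believe. All names (`Agent`, `believe`, `suspects_deception`) are invented for this sketch.

```python
# Illustrative sketch of nested beliefs with simple consistency
# maintenance; NOT the thesis's possible-worlds implementation.

class Agent:
    def __init__(self, name):
        self.name = name
        self.beliefs = []  # list of (subject, proposition) pairs

    @staticmethod
    def _negate(proposition):
        # Toy negation: toggle a leading "not " on the proposition string.
        return proposition[4:] if proposition.startswith("not ") else "not " + proposition

    def believe(self, subject, proposition):
        """Adopt a belief about `subject`, retracting any direct contradiction."""
        negation = self._negate(proposition)
        # Consistency maintenance: drop the contradictory belief first.
        self.beliefs = [(s, p) for (s, p) in self.beliefs
                        if not (s == subject and p == negation)]
        self.beliefs.append((subject, proposition))

    def believes(self, subject, proposition):
        return (subject, proposition) in self.beliefs

    def suspects_deception(self, other, asserted):
        """Suspect deception if `other` asserts a proposition this agent
        believes `other` does not itself believe (a nested belief)."""
        return self.believes(other.name, self._negate(asserted))


a, b = Agent("A"), Agent("B")
a.believe("A", "door_locked")           # A's own belief
a.believe("B", "not door_locked")       # A's nested model of B's belief
print(a.suspects_deception(b, "door_locked"))  # B asserts door_locked
a.believe("A", "not door_locked")       # new evidence; old belief retracted
print(a.believes("A", "door_locked"))
```

When run, the sketch first reports that A suspects deception (B asserts what A believes B disbelieves), then shows that adopting the contrary belief has retracted A's original one.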