This dissertation investigates high-level decision making for agents that are both goal-driven and utility-driven. We develop a partially observable Markov decision process (POMDP) planner as an extension of the agent programming language DTGolog, itself an extension of the Golog language. Golog is based on the situation calculus, a logic for reasoning about action. A POMDP planner on its own cannot cope well with dynamically changing environments and complicated goals. This is precisely where the belief-desire-intention (BDI) model is strong: BDI theory was developed to design agents that can select goals intelligently, dynamically abandon and adopt goals, and yet commit to intentions for achieving them. The contribution of this research is twofold: (1) a relational POMDP planner for cognitive robotics, and (2) a preliminary BDI architecture that employs the planner to deal with stochasticity in action and perception.
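As a minimal illustration of the stochasticity in action and perception that such a planner must handle, the sketch below shows a generic Bayesian belief update for a discrete POMDP. It is written in Python with hypothetical transition and observation functions; it is not the dissertation's DTGolog-based implementation.

```python
# Minimal sketch of a discrete POMDP belief update (Bayes filter).
# Illustrative only: the transition model T and observation model O
# below are hypothetical placeholders, not the dissertation's planner.

def belief_update(belief, action, observation, T, O):
    """Return the posterior belief b'(s') after executing `action`
    and perceiving `observation`.

    belief: dict mapping state -> probability (sums to 1)
    T(s, a, s_next): probability of reaching s_next from s via a
    O(s_next, a, o): probability of observing o in s_next after a
    """
    new_belief = {}
    for s_next in belief:
        # Prediction: marginalise over the previous state.
        predicted = sum(belief[s] * T(s, action, s_next) for s in belief)
        # Correction: weight by the observation likelihood.
        new_belief[s_next] = O(s_next, action, observation) * predicted
    total = sum(new_belief.values())
    if total == 0.0:
        raise ValueError("observation has zero probability under current belief")
    return {s: p / total for s, p in new_belief.items()}


# Tiny two-state example with made-up probabilities: a noisy "look" action.
T = lambda s, a, s_next: 1.0 if s == s_next else 0.0      # looking does not change the world
O = lambda s_next, a, o: 0.8 if (o == "see-open") == (s_next == "open") else 0.2
belief = {"open": 0.5, "closed": 0.5}
belief = belief_update(belief, "look", "see-open", T, O)  # belief shifts towards "open" (0.8)
```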
Identifier | oai:union.ndltd.org:netd.ac.za/oai:union.ndltd.org:unisa/oai:umkn-dsp01.int.unisa.ac.za:10500/3517
Date | 02 1900
Creators | Rens, Gavin B. |
Contributors | Van der Poel, E., Ferrein, A. |
Source Sets | South African National ETD Portal |
Language | English |
Detected Language | English |
Type | Dissertation |
Format | 1 online resource (73 p.)
Department | Computing
Degree | M. Sc. (Computer Science)