In real-time control systems, the value of a control decision depends not
only on the correctness of the decision but also on the time when that decision
is available. Recent work in real-time decision making has used machine learning
techniques to automatically construct reactive controllers, that is, controllers
with little or no internal state and low-time-complexity pathways between sensors
and effectors. This paper presents research on 1) how a problem representation
affects the trade-offs between space and performance, and 2) off-line versus on-line
approaches for collecting training examples when using machine learning techniques
to construct reactive controllers. Empirical results show that for a partially
observable problem both the inclusion of history information in the problem representation
and the use of on-line rather than off-line learning can improve the
performance of the reactive controller.

Graduation date: 1994
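As context for the abstract above, the following is a minimal sketch, not the thesis's actual implementation, of the two ideas it highlights: augmenting a reactive controller's input with a short observation-history window (to cope with partial observability) and updating the policy on-line as training examples arrive. All class, parameter, and action names are illustrative assumptions.

```python
from collections import deque, defaultdict
import random

class ReactiveController:
    """Illustrative reactive controller: a low-latency mapping from the
    current observation (plus a short history window) to an action,
    with a simple on-line tabular value update. Names and update rule
    are assumptions for illustration, not the thesis's method."""

    def __init__(self, actions, history_len=2, epsilon=0.1, alpha=0.5):
        self.actions = actions
        # History information folded into the problem representation,
        # making a partially observable problem look more Markovian.
        self.history = deque(maxlen=history_len)
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.epsilon = epsilon        # exploration rate
        self.alpha = alpha            # on-line learning rate

    def _state(self, obs):
        # The "state" is the current observation plus recent history.
        return (obs,) + tuple(self.history)

    def act(self, obs):
        state = self._state(obs)
        if random.random() < self.epsilon:
            action = random.choice(self.actions)
        else:
            action = max(self.actions, key=lambda a: self.q[(state, a)])
        self.history.appendleft(obs)
        return state, action

    def learn(self, state, action, reward):
        # On-line learning: each example updates the policy immediately,
        # rather than being collected into an off-line training set.
        key = (state, action)
        self.q[key] += self.alpha * (reward - self.q[key])
```

Setting `history_len=0` recovers a purely reactive (memoryless) controller, which is the representational trade-off the abstract's empirical results speak to.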
Identifier | oai:union.ndltd.org:ORGSU/oai:ir.library.oregonstate.edu:1957/37128 |
Date | 21 May 1993 |
Creators | Westerberg, Caryl J. |
Contributors | D'Ambrosio, Bruce |
Source Sets | Oregon State University |
Language | en_US |
Detected Language | English |
Type | Thesis/Dissertation |