Traditionally, the fuzzy rules for a fuzzy controller are provided by experts. They cannot be trained from a set of input-output examples, because the correct response of the plant being controlled is delayed and cannot be obtained immediately. In this paper, we propose a novel approach that constructs the fuzzy rules of a fuzzy controller through reinforcement learning. The task is to learn, from delayed rewards, to choose sequences of actions that yield the best control. A neural network with delays is used to model the evaluation function Q. Fuzzy rules are constructed and added as learning proceeds, and both the weights of the Q-learning network and the parameters of the fuzzy rules are tuned by gradient descent. Experimental results show that the fuzzy rules obtained perform effectively for control.
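The abstract's core idea — a Q-function approximated by fuzzy rules whose premise and consequent parameters are tuned by gradient descent on the temporal-difference error — can be illustrated with a minimal sketch. This is not the thesis's implementation; the class name, Gaussian memberships, rule count, learning rate, and TD(0) target are all illustrative assumptions.

```python
# Minimal sketch (assumed structure, not the thesis's method): Q(s) modeled
# as a weighted sum of Gaussian fuzzy rules, tuned by gradient descent on
# the temporal-difference error from a delayed reward.
import numpy as np

class FuzzyQ:
    """Q-value approximator: Q(x) = sum_j w_j * mu_j(x)."""
    def __init__(self, n_rules, n_inputs, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = rng.uniform(-1, 1, (n_rules, n_inputs))  # rule premises
        self.widths = np.full((n_rules, n_inputs), 0.5)         # membership spreads
        self.weights = np.zeros(n_rules)                        # rule consequents

    def firing(self, x):
        # Firing strength of each rule: product of Gaussian memberships.
        d = (x - self.centers) / self.widths
        return np.exp(-0.5 * np.sum(d * d, axis=1))

    def q(self, x):
        mu = self.firing(x)
        return float(mu @ self.weights), mu

    def update(self, x, target, alpha=0.1):
        # One gradient-descent step on the squared error (target - Q(x))^2.
        q_val, mu = self.q(x)
        err = target - q_val
        self.weights += alpha * err * mu                 # consequent update
        # Premise (center) update via the chain rule through mu_j.
        grad_c = (x - self.centers) / self.widths**2
        self.centers += alpha * err * (self.weights * mu)[:, None] * grad_c
        return err

# Usage: push Q(s) toward the bootstrapped TD(0) target r + gamma * Q(s').
model = FuzzyQ(n_rules=4, n_inputs=2)
gamma = 0.9
s, s_next, reward = np.array([0.2, -0.1]), np.array([0.3, 0.0]), 1.0
for _ in range(200):
    target = reward + gamma * model.q(s_next)[0]
    model.update(s, target)
```

In the thesis the rule base also grows as learning proceeds and the Q-network contains delay elements; this sketch fixes the rule count and omits delays to keep the gradient-descent tuning step visible.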
Identifier | oai:union.ndltd.org:NSYSU/oai:NSYSU:etd-0910107-111129 |
Date | 10 September 2007 |
Creators | Pei, Shan-cheng |
Contributors | Shie-Jue Lee, Chih-Hong Wu, Shing-Tai Pan, Chen-Sen Ouyang, Chao-He Hsieh |
Publisher | NSYSU |
Source Sets | NSYSU Electronic Thesis and Dissertation Archive |
Language | Cholon |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0910107-111129 |
Rights | unrestricted, Copyright information available at source archive |