Design of a fuzzy logic controller with Evolutionary Q-Learning

In this paper, an Evolutionary Q-Learning (EQL) algorithm is proposed, based on a modified Q-learning method and an evolutionary algorithm. The objective of the proposed EQL algorithm is to find a fuzzy logic controller (FLC) when only a binary reinforcement signal is available from an unknown target environment. The proposed EQL algorithm utilizes and evolves a group of FLCs simultaneously to obtain a more feasible solution set. By defining Q-values as functional values of states and FLCs, all FLCs in the group undergo the Q-learning process together during the same generation. The Q-learning process helps the proposed EQL algorithm find better FLCs with high-quality consequent parts. At the end of each generation, the best FLC is constructed by a unique elite construction algorithm. Usually, when an evolutionary process, which is inherently parallel, is combined with reinforcement learning, multiple instances of the target system are needed for the algorithm to be applied on-line; otherwise, each individual must be evaluated in turn through serial experimentation on a single target system. The proposed EQL algorithm removes these requirements and is applicable on-line with only a single target system. The feasibility of the proposed EQL algorithm is demonstrated through simulations on the well-known cart-pole balancing problem.
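The loop described in the abstract — a population of FLCs whose rule consequents are refined by Q-learning from a binary reward while the population itself evolves — can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual algorithm: it assumes a simplified one-dimensional balancing task in place of the cart-pole simulation, and the class names, parameter values, and rule-wise elite construction shown here are all illustrative assumptions.

```python
import random

random.seed(0)

N_RULES = 5            # fuzzy partitions over the (scalar) state
ACTIONS = [-1.0, 1.0]  # two force directions, as in cart-pole

def memberships(x):
    """Normalized triangular membership degrees over [-1, 1] (assumed partition)."""
    centers = [-1.0, -0.5, 0.0, 0.5, 1.0]
    mu = [max(0.0, 1.0 - abs(x - c) / 0.5) for c in centers]
    s = sum(mu) or 1.0
    return [m / s for m in mu]

class FLC:
    """One individual: a Q-value per (rule, action) pair; a rule's
    consequent is implicitly the action with the larger Q-value."""
    def __init__(self):
        self.q = [[random.uniform(-0.1, 0.1) for _ in ACTIONS]
                  for _ in range(N_RULES)]

    def act(self, x):
        mu = memberships(x)
        scores = [sum(mu[r] * self.q[r][a] for r in range(N_RULES))
                  for a in range(len(ACTIONS))]
        return scores.index(max(scores))

    def learn(self, x, a, reward, x_next, alpha=0.5, gamma=0.9):
        """Fuzzy-interpolated Q-learning update of the consequent values."""
        mu, mu_n = memberships(x), memberships(x_next)
        q_next = max(sum(mu_n[r] * self.q[r][b] for r in range(N_RULES))
                     for b in range(len(ACTIONS)))
        td = reward + gamma * q_next - sum(
            mu[r] * self.q[r][a] for r in range(N_RULES))
        for r in range(N_RULES):
            self.q[r][a] += alpha * mu[r] * td

def episode(flc, steps=30):
    """Toy task: keep x near 0; only a binary failure signal is given."""
    x, score = random.uniform(-0.5, 0.5), 0
    for _ in range(steps):
        a = flc.act(x)
        x_next = x + 0.2 * ACTIONS[a] + random.uniform(-0.05, 0.05)
        failed = abs(x_next) > 1.0
        flc.learn(x, a, -1.0 if failed else 0.0, x_next)
        if failed:
            break
        score += 1
        x = x_next
    return score

def evolve(pop, fits):
    """Keep the fitter half; refill with mutated copies of the survivors."""
    order = sorted(range(len(pop)), key=lambda i: -fits[i])
    survivors = [pop[i] for i in order[:len(pop) // 2]]
    children = []
    for s in survivors:
        child = FLC()
        child.q = [[v + random.gauss(0, 0.05) for v in row] for row in s.q]
        children.append(child)
    return survivors + children

pop = [FLC() for _ in range(8)]
for gen in range(10):
    fits = [episode(f) for f in pop]   # one on-line trial per individual
    pop = evolve(pop, fits)

# Rule-wise elite construction (an assumed scheme): for each rule, take
# the Q-row from the individual whose consequent is most decisive there.
elite = FLC()
for r in range(N_RULES):
    best = max(pop, key=lambda f: abs(f.q[r][0] - f.q[r][1]))
    elite.q[r] = list(best.q[r])

print("elite balancing steps:", episode(elite))
```

Because every individual shares the same state/FLC Q-value formulation, each one can be trialed in turn on the single target system within a generation, which is the property the abstract highlights.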
Publisher
AUTOSOFT PRESS
Issue Date
2006
Language
English
Article Type
Article
Keywords
NEURAL-NETWORK; REINFORCEMENTS; SYSTEM
Citation
INTELLIGENT AUTOMATION AND SOFT COMPUTING, v.12, no.4, pp.369 - 381
ISSN
1079-8587
URI
http://hdl.handle.net/10203/90724
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
