Solving Continuous Action/State Problem in Q-Learning Using Extended Rule Based Fuzzy Inference System

Q-learning is a type of reinforcement learning in which the agent solves a given task based on rewards received from the environment. Most research on Q-learning has focused on discrete domains, although the environment with which the agent must interact is generally continuous. Methods are therefore needed to make Q-learning applicable to continuous problem domains. In this paper, an extended fuzzy rule is proposed that incorporates Q-learning. An interpolation technique widely used in memory-based learning is adopted to represent the appropriate Q value for the current state-action pair in each extended fuzzy rule. The resulting structure, based on a fuzzy inference system, is capable of handling the continuous states of the environment. The effectiveness of the proposed structure is demonstrated through simulation of the cart-pole system.
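The approach described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it is a generic fuzzy-inference Q-learning scheme under assumed simplifications: a one-dimensional state, triangular membership functions as the fuzzy rules, a small set of discrete actions, and the Q value for a continuous state obtained by interpolating per-rule Q values weighted by normalized firing strengths. All names (`FuzzyQ`, `tri`, `centers`) are hypothetical.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b with support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

class FuzzyQ:
    """Sketch of Q-learning over fuzzy rules with interpolated Q values.

    Each rule i (centered at centers[i]) stores a Q value per action.
    The Q value for a continuous state is the firing-strength-weighted
    average of the per-rule Q values, and the TD update is distributed
    back over the rules in proportion to their firing strengths.
    """

    def __init__(self, centers, n_actions, alpha=0.1, gamma=0.95):
        self.centers = list(centers)                    # rule centers on the state axis
        self.q = np.zeros((len(self.centers), n_actions))  # per-rule Q table
        self.alpha, self.gamma = alpha, gamma

    def strengths(self, s):
        """Normalized firing strength of each rule for state s."""
        w = np.array([tri(s, c - 1.0, c, c + 1.0) for c in self.centers])
        total = w.sum()
        return w / total if total > 0 else np.ones_like(w) / len(w)

    def q_values(self, s):
        """Interpolated Q values (one per action) for a continuous state."""
        return self.strengths(s) @ self.q

    def update(self, s, a, r, s_next):
        """One Q-learning step: spread the TD error over the firing rules."""
        w = self.strengths(s)
        target = r + self.gamma * self.q_values(s_next).max()
        td_error = target - self.q_values(s)[a]
        self.q[:, a] += self.alpha * td_error * w
```

Because each continuous state fires several overlapping rules, a single update adjusts neighboring rules together, which is what gives the scheme its generalization over the continuous state space.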
Publisher
The Institute of Control, Automation and Systems Engineers
Issue Date
2001-09
Language
English
Citation

JOURNAL OF THE INSTITUTE OF CONTROL, AUTOMATION AND SYSTEMS ENGINEERS, v.3, no.3, pp.170 - 175

ISSN
1229-5140
URI
http://hdl.handle.net/10203/8420
Appears in Collection
EE-Journal Papers (Journal Papers)