Building a Fuzzy Inference System (FIS) generally requires expert knowledge. However, expert knowledge is not
always available. When little expert knowledge is available, it becomes difficult to build a FIS using supervised
learning methods.
Meanwhile, Q-learning is a kind of reinforcement learning in which an agent can acquire knowledge from its experience
even without a model of the environment or expert knowledge. Q-learning, however, has a weakness: the original
algorithm cannot deal with continuous states and continuous actions.
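The tabular Q-learning update mentioned above can be sketched as follows. This is a minimal generic illustration, not the paper's proposed algorithm; the function and variable names are our own. The dictionary-based table makes the continuous-state/action limitation concrete: every state-action pair must be a discrete key.

```python
def q_learning_step(Q, state, action, reward, next_state, actions,
                    alpha=0.1, gamma=0.9):
    """One tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).

    Q is a dict keyed by (state, action) pairs, so states and actions
    must be discrete, hashable values; this is the limitation a FIS-based
    approximation aims to lift for continuous domains.
    """
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return Q[(state, action)]

# Example: a single update from an empty table.
Q = {}
q_learning_step(Q, state=0, action="left", reward=1.0,
                next_state=1, actions=["left", "right"])
```

With an empty table, the update reduces to `alpha * reward`, so `Q[(0, "left")]` becomes 0.1 after this step.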
In this paper, we propose a FIS that can perform Q-learning. The proposed FIS structure is made up of several extended rules.
Based on these extended rules, a Q-learning algorithm for the proposed structure is developed. It is shown that this
combination results in a FIS that can learn from its experience without expert knowledge. In addition, the proposed structure
can resolve the continuous state/action problem in Q-learning by virtue of the FIS. The effectiveness of the proposed
structure is demonstrated through simulation on the cart-pole system.