Self-organizing fuzzy inference system by Q-learning

Q-learning is a kind of reinforcement learning in which the agent solves a given task based on rewards received from the environment. Most research in the field of reinforcement learning has focused on discrete domains, but the environments with which an agent must interact are often continuous. Thus a method is needed that makes Q-learning applicable to continuous problem domains. In this thesis, the basic fuzzy rule is extended so that it can incorporate Q-learning, and the interpolation technique widely used in memory-based learning is adopted to represent the appropriate Q-value for the current state-action pair. The resulting structure, built on a fuzzy inference system, can solve continuous-state, continuous-action problems in Q-learning. In addition, the resulting Self-Organizing Fuzzy Inference System by Q-learning (SOFIS-Q) can generate fuzzy rules by interacting with the environment, without a priori knowledge of the environment. The effectiveness of the proposed structure is shown through simulation on a cart-pole system.
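The interpolation idea in the abstract can be sketched as follows: each fuzzy rule stores one q-value per discrete action, the Q-value of a continuous state is the firing-strength-weighted average of the rule consequents, and the temporal-difference error is distributed back to the rules in proportion to their normalized firing strengths. This is a minimal illustrative sketch of generic fuzzy Q-learning, not the thesis's actual SOFIS-Q implementation; all names, the triangular membership functions, and the fixed rule centers are assumptions.

```python
def triangular(x, center, width):
    """Triangular membership function centered at `center` (assumed shape)."""
    return max(0.0, 1.0 - abs(x - center) / width)

class FuzzyQ:
    """Sketch of fuzzy Q-learning over a 1-D continuous state.

    Each rule i holds a q-value q[i][a] per discrete action a;
    Q(s, a) is interpolated from the rule consequents using the
    rules' normalized firing strengths at state s.
    """

    def __init__(self, centers, n_actions, width=1.0, alpha=0.1, gamma=0.9):
        self.centers = list(centers)          # fixed rule centers (assumed)
        self.width = width
        self.n_actions = n_actions
        self.q = [[0.0] * n_actions for _ in self.centers]
        self.alpha, self.gamma = alpha, gamma

    def strengths(self, s):
        """Normalized firing strengths of all rules at state s."""
        w = [triangular(s, c, self.width) for c in self.centers]
        total = sum(w) or 1.0                 # guard against no rule firing
        return [wi / total for wi in w]

    def value(self, s, a):
        """Interpolated Q(s, a) from rule consequents."""
        return sum(wi * qi[a] for wi, qi in zip(self.strengths(s), self.q))

    def update(self, s, a, r, s_next):
        """One Q-learning step; TD error is shared by firing strength."""
        w = self.strengths(s)
        best_next = max(self.value(s_next, b) for b in range(self.n_actions))
        td = r + self.gamma * best_next - self.value(s, a)
        for i, wi in enumerate(w):
            self.q[i][a] += self.alpha * td * wi
```

Because the same consequents are shared by neighboring rules, a single update generalizes smoothly to nearby continuous states, which is the property the abstract attributes to the fuzzy inference structure.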
Advisors
Lee, Ju-Jang
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
1999
Identifier
150829/325007 / 000973090
Language
eng
Description

Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST): Department of Electrical and Electronic Engineering, 1999.2, [v, 57 p.]

Keywords

Fuzzy inference system; Reinforcement learning; Q-learning; Learning

URI
http://hdl.handle.net/10203/37140
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=150829&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
