Design of a fuzzy logic controller with Evolutionary Q-Learning

Cited 0 times in Web of Science; cited 0 times in Scopus
  • Hit: 482
  • Download: 0
DC Field | Value | Language
dc.contributor.author | Kim, MS | ko
dc.contributor.author | Lee, Ju-Jang | ko
dc.date.accessioned | 2013-03-07T16:50:54Z | -
dc.date.available | 2013-03-07T16:50:54Z | -
dc.date.created | 2012-02-06 | -
dc.date.issued | 2006 | -
dc.identifier.citation | INTELLIGENT AUTOMATION AND SOFT COMPUTING, v.12, no.4, pp.369 - 381 | -
dc.identifier.issn | 1079-8587 | -
dc.identifier.uri | http://hdl.handle.net/10203/90724 | -
dc.description.abstract | In this paper, an Evolutionary Q-Learning (EQL) algorithm is proposed, based on a modified Q-learning and an evolutionary algorithm. The objective of the proposed EQL algorithm is to find a fuzzy logic controller (FLC) when only a binary reinforcement signal is available from an unknown target environment. The proposed EQL algorithm utilizes and evolves a group of FLCs simultaneously to obtain a more feasible solution set. By defining Q-values as functional values of states and FLCs, all FLCs in the group undergo the Q-learning process together during the same generation. The Q-learning process assists the proposed EQL algorithm in finding better FLCs with high-quality consequent parts. At the end of each generation, the best FLC is constructed by a unique elite-construction algorithm. In the usual case where an evolutionary process, which is inherently parallel, is combined with reinforcement learning, multiple instances of the target system are necessary for the algorithm to be applied on-line; otherwise, experiments must be performed serially for each individual on a single target system. The proposed EQL algorithm removes these requirements and is applicable on-line with only a single target system. The feasibility of the proposed EQL algorithm is shown through simulations on the well-known cart-pole balancing problem. | -
dc.language | English | -
dc.publisher | AUTOSOFT PRESS | -
dc.subject | NEURAL-NETWORK | -
dc.subject | REINFORCEMENTS | -
dc.subject | SYSTEM | -
dc.title | Design of a fuzzy logic controller with Evolutionary Q-Learning | -
dc.type | Article | -
dc.identifier.wosid | 000242981700001 | -
dc.identifier.scopusid | 2-s2.0-33846551798 | -
dc.type.rims | ART | -
dc.citation.volume | 12 | -
dc.citation.issue | 4 | -
dc.citation.beginningpage | 369 | -
dc.citation.endingpage | 381 | -
dc.citation.publicationname | INTELLIGENT AUTOMATION AND SOFT COMPUTING | -
dc.contributor.localauthor | Lee, Ju-Jang | -
dc.contributor.nonIdAuthor | Kim, MS | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | fuzzy logic controller | -
dc.subject.keywordAuthor | Q-learning | -
dc.subject.keywordAuthor | reinforcement learning | -
dc.subject.keywordPlus | NEURAL-NETWORK | -
dc.subject.keywordPlus | REINFORCEMENTS | -
dc.subject.keywordPlus | SYSTEM | -
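The abstract describes the overall shape of the EQL loop: a population of FLCs is evaluated on a single target system, each individual's Q-value is updated from a binary reinforcement signal, and an elite controller seeds the next generation. The sketch below illustrates that loop only in outline; the toy environment, the linear stand-in for an FLC, the Q-update, and all parameter values are assumptions for illustration, not the authors' implementation.

```python
import random

POP_SIZE = 8
GENERATIONS = 20
ALPHA, GAMMA = 0.5, 0.9  # assumed Q-learning rate and discount factor


def toy_env_step(flc_params, state):
    """Toy stand-in for the unknown target system: returns the next state
    and a binary reinforcement signal (1 = failure, 0 = still balanced)."""
    action = sum(p * s for p, s in zip(flc_params, state))  # crude 'FLC' output
    next_state = [s + 0.1 * action for s in state]
    failed = 1 if abs(next_state[0]) > 1.0 else 0
    return next_state, failed


def evaluate(flc_params, q_value):
    """Run one episode on the single target system, updating this FLC's
    Q-value from the binary reinforcement (simplified TD-style update)."""
    state = [0.1, 0.0]
    steps = 0
    for _ in range(50):
        state, failed = toy_env_step(flc_params, state)
        reward = -1.0 if failed else 0.0
        q_value += ALPHA * (reward + GAMMA * q_value - q_value)
        steps += 1
        if failed:
            break
    return q_value, steps


def evolve(random_seed=0):
    rng = random.Random(random_seed)
    pop = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(POP_SIZE)]
    q_values = [0.0] * POP_SIZE
    for _ in range(GENERATIONS):
        scores = []
        for i, flc in enumerate(pop):
            q_values[i], steps = evaluate(flc, q_values[i])
            scores.append((steps, i))
        # crude stand-in for 'elite construction': keep the FLC that balanced
        # longest and fill the rest of the population with mutated copies
        _, best = max(scores)
        elite = pop[best]
        pop = [elite] + [
            [p + rng.gauss(0, 0.1) for p in elite] for _ in range(POP_SIZE - 1)
        ]
        q_values = [q_values[best]] * POP_SIZE
    return elite


controller = evolve()
print(len(controller))  # prints 2: the toy 'FLC' is a 2-parameter vector
```

Note how every individual is evaluated in sequence on the same toy system, which is the single-target-system, on-line setting the abstract claims EQL makes feasible; a real implementation would use fuzzy rule bases with consequent parts rather than this linear parameter vector.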
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
