A Sensor-Based Navigation for a Mobile Robot using Fuzzy Logic and Reinforcement Learning

Cited 166 times in Web of Science; cited 0 times in Scopus
DC Field: Value (Language)
dc.contributor.author: Cho, Hyungsuck (ko)
dc.contributor.author: Beom, H. R. (ko)
dc.date.accessioned: 2013-02-27T22:26:15Z
dc.date.available: 2013-02-27T22:26:15Z
dc.date.created: 2012-02-06
dc.date.issued: 1995-01
dc.identifier.citation: IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS, v.25, no.3, pp.464 - 477
dc.identifier.issn: 1083-4427
dc.identifier.uri: http://hdl.handle.net/10203/71198
dc.description.abstract: This paper proposes a sensor-based navigation method that uses fuzzy logic and reinforcement learning for navigating a mobile robot in uncertain environments. The proposed navigator consists of an obstacle-avoidance behavior and a goal-seeking behavior. The two behaviors are designed independently at the design stage and then combined by a behavior selector at run time. The behavior selector, which uses a bistable switching function, chooses one behavior at each action step so that the mobile robot can reach the goal position without colliding with obstacles. Fuzzy logic maps the input fuzzy sets, representing the mobile robot's state space as determined by sensor readings, to the output fuzzy sets, representing the robot's action space. The fuzzy rule bases are built through reinforcement learning, which requires only simple evaluation data rather than thousands of input-output training pairs. Since the fuzzy rules for each behavior are learned by reinforcement learning, rule bases can easily be constructed for more complex environments. Ultrasonic sensors mounted on the mobile robot are used to determine its present state. The effectiveness of the proposed method is verified by a series of simulations.
dc.language: English
dc.publisher: IEEE-Inst Electrical Electronics Engineers Inc
dc.subject: TIME OBSTACLE AVOIDANCE
dc.title: A Sensor-Based Navigation for a Mobile Robot using Fuzzy Logic and Reinforcement Learning
dc.type: Article
dc.identifier.wosid: A1995QH11400007
dc.identifier.scopusid: 2-s2.0-0029277469
dc.type.rims: ART
dc.citation.volume: 25
dc.citation.issue: 3
dc.citation.beginningpage: 464
dc.citation.endingpage: 477
dc.citation.publicationname: IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS
dc.contributor.localauthor: Cho, Hyungsuck
dc.contributor.nonIdAuthor: Beom, H. R.
dc.type.journalArticle: Article
dc.subject.keywordPlus: TIME OBSTACLE AVOIDANCE
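
The abstract describes a navigator built from two independently designed behaviors (goal seeking and obstacle avoidance) that a bistable switching function selects between at each action step. The following Python sketch illustrates only that switching structure and is not the authors' implementation: the function names, the crisp (non-fuzzy) stand-in rules, the eight-sonar layout, the hysteresis thresholds (0.5 m and 1.0 m), and the gains are all illustrative assumptions; in the paper itself each behavior is realized as a fuzzy rule base learned through reinforcement learning.

```python
# Minimal sketch (assumptions noted above) of a two-behavior navigator
# with a bistable switching behavior selector.

import math


def goal_seeking(robot_heading, goal_bearing):
    """Goal-seeking behavior: steer proportionally to the heading error."""
    # Wrap the heading error to [-pi, pi] before applying an illustrative gain.
    error = math.atan2(math.sin(goal_bearing - robot_heading),
                       math.cos(goal_bearing - robot_heading))
    return 0.8 * error  # steering command (rad/s)


def obstacle_avoidance(sonar_ranges):
    """Avoidance behavior: turn away from the side with the nearer obstacle.

    Crisp stand-in for the paper's learned fuzzy rule base; assumes eight
    forward-facing sonars, the first four on the left, the last four on the right.
    """
    left, right = min(sonar_ranges[:4]), min(sonar_ranges[4:])
    return -0.8 if left < right else 0.8


def select_behavior(sonar_ranges, state, near=0.5, far=1.0):
    """Bistable (hysteretic) switch between the two behaviors.

    Avoidance engages when the closest obstacle comes within `near` metres
    and releases only beyond `far` metres, so the selector does not chatter.
    """
    closest = min(sonar_ranges)
    if state == "goal" and closest < near:
        state = "avoid"
    elif state == "avoid" and closest > far:
        state = "goal"
    return state


def navigate_step(sonar_ranges, robot_heading, goal_bearing, state):
    """One action step: pick a behavior, then compute its steering command."""
    state = select_behavior(sonar_ranges, state)
    if state == "avoid":
        steer = obstacle_avoidance(sonar_ranges)
    else:
        steer = goal_seeking(robot_heading, goal_bearing)
    return steer, state


# Example: one step with eight sonar readings in metres (hypothetical values).
steer, mode = navigate_step([1.2, 0.9, 0.4, 0.8, 1.5, 2.0, 1.8, 2.2],
                            robot_heading=0.0, goal_bearing=0.6, state="goal")
```

The hysteresis band in select_behavior is one simple way to realize a bistable selector that does not oscillate between behaviors when an obstacle sits near a single distance threshold.
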
Appears in Collection:
ME - Journal Papers (저널논문)