Landing posture control of a robotic cat using fast converging reinforcement learning

Dublin Core metadata (DC Field: Value)
dc.contributor.advisor: Lee, Ju-Jang
dc.contributor.advisor: 이주장
dc.contributor.author: Shin, Bong-Gun
dc.contributor.author: 신봉근
dc.date.accessioned: 2011-12-14T02:07:46Z
dc.date.available: 2011-12-14T02:07:46Z
dc.date.issued: 2009
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=308823&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/38703
dc.description: Thesis (Master's) - KAIST (한국과학기술원), Department of Electrical and Electronics Engineering, 2009.2, [vi, 37 p.]
dc.description.abstract: In this thesis, a reinforcement learning controller for the robotic cat problem is introduced. When a cat is dropped in an upside-down posture, it can rotate its body in mid-air to land on its feet; this research is inspired by that posture control ability. Aerial posture control is required when designing advanced robots that can run, jump, and land, enabling them to perform tasks in workplaces that ordinary robots cannot reach. Likewise, a space robot operating in orbit cannot easily rotate toward a desired direction because no external forces are available; a robot with the cat's rotating ability could reorient itself using internal motion alone. In addition, the principle of the falling cat can be applied to dexterous manipulation with multi-fingered robotic hands and to path planning for mobile robots subject to nonholonomic constraints. Previous research considered only a simple robot shape consisting of two symmetric rigid bodies, which makes the dynamics easy to compute because many terms in the equations cancel. Although such approaches provide an accurate controller, they work only for a specific initial condition. With the proposed method, once the algorithm is designed it can be applied to any kind of robot structure; a complex, asymmetric robot structure is used throughout this research, and the learned controller can find a solution from a random initial condition. The controller is based on gradient-descent Sarsa($\lambda$) augmented with Selective Experience Replay (SER); adding SER makes the algorithm converge faster than plain Sarsa($\lambda$). For simulation, a robotic cat simulator is developed, and all of this work is verified through the simulator. (A minimal illustrative sketch of the Sarsa($\lambda$)-with-replay update follows the metadata record below.)
dc.language: eng
dc.publisher: 한국과학기술원 (KAIST)
dc.subject: reinforcement learning
dc.subject: experience replay
dc.subject: robotic cat
dc.subject: nonholonomic motion planning
dc.subject: 강화학습
dc.subject: 경험된 재연정보
dc.subject: 고양이 로봇
dc.subject: 비홀로노믹 운동 계획
dc.title: Landing posture control of a robotic cat using fast converging reinforcement learning
dc.title.alternative: 빠르게 수렴하는 강화학습을 이용한 고양이 로봇의 착지자세 제어
dc.type: Thesis (Master)
dc.identifier.CNRN: 308823/325007
dc.description.department: 한국과학기술원 : 전기및전자공학전공
dc.identifier.uid: 020073269
dc.contributor.localauthor: Lee, Ju-Jang
dc.contributor.localauthor: 이주장
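
The abstract describes the controller as gradient-descent Sarsa($\lambda$) with Selective Experience Replay added. The sketch below illustrates how such an agent could be structured, assuming linear function approximation over a feature vector, epsilon-greedy action selection, and a TD-error-based rule for deciding which transitions to store and replay. The class name LinearSarsaLambdaSER, the td_threshold parameter, and the replay schedule are illustrative assumptions, not the thesis's exact formulation.

import numpy as np

class LinearSarsaLambdaSER:
    """Sketch: gradient-descent Sarsa(lambda) with a selective replay buffer.

    Linear function approximation, accumulating eligibility traces, and a
    TD-error-based storage rule (an assumption made for illustration)."""

    def __init__(self, n_features, n_actions, alpha=0.05, gamma=0.99,
                 lam=0.9, epsilon=0.1, buffer_size=200, replay_batch=8,
                 td_threshold=0.5, seed=0):
        self.w = np.zeros((n_actions, n_features))   # linear weights per action
        self.z = np.zeros_like(self.w)               # eligibility traces
        self.alpha, self.gamma, self.lam = alpha, gamma, lam
        self.epsilon = epsilon
        self.n_actions = n_actions
        self.buffer = []                             # selectively stored transitions
        self.buffer_size, self.replay_batch = buffer_size, replay_batch
        self.td_threshold = td_threshold
        self.rng = np.random.default_rng(seed)

    def q(self, phi, a):
        # action value as a linear function of the feature vector phi
        return float(self.w[a] @ phi)

    def act(self, phi):
        # epsilon-greedy selection over the linear Q-values
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(self.n_actions))
        return int(np.argmax([self.q(phi, a) for a in range(self.n_actions)]))

    def update(self, phi, a, r, phi_next, a_next, done):
        # standard gradient-descent Sarsa(lambda) step with accumulating traces
        target = r if done else r + self.gamma * self.q(phi_next, a_next)
        delta = target - self.q(phi, a)
        self.z *= self.gamma * self.lam
        self.z[a] += phi
        self.w += self.alpha * delta * self.z
        if done:
            self.z[:] = 0.0
        # selective storage: keep only "surprising" (high TD error) transitions
        if abs(delta) > self.td_threshold and len(self.buffer) < self.buffer_size:
            self.buffer.append((phi, a, r, phi_next, a_next, done))

    def replay(self):
        # reuse stored transitions with one-step (trace-free) updates
        if not self.buffer:
            return
        idx = self.rng.integers(len(self.buffer),
                                size=min(self.replay_batch, len(self.buffer)))
        for i in idx:
            phi, a, r, phi_next, a_next, done = self.buffer[i]
            target = r if done else r + self.gamma * self.q(phi_next, a_next)
            delta = target - self.q(phi, a)
            self.w[a] += self.alpha * delta * phi

In a full setup along the lines described in the abstract, the agent would be driven by the robotic cat simulator: each control step encodes the body state into a feature vector phi, act() selects a joint command, update() applies the on-policy trace update, and replay() is called periodically so that the stored high-error transitions are reused, which is what is meant to speed up convergence relative to plain Sarsa($\lambda$).
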
Appears in Collection
EE-Theses_Master(석사논문)
Files in This Item
There are no files associated with this item.
