Bridging the gap between deterministic and probabilistic recurrent neural networks in a predictive coding framework
예측코딩 기반 회귀신경망의 결정론적 모형과 확률론적 모형의 간극 해소

DC Field / Value
dc.contributor.advisor: Yoo, Hoi-Jun
dc.contributor.advisor: 유회준
dc.contributor.author: Ahmadi, Ahmadreza
dc.date.accessioned: 2019-08-25T02:45:56Z
dc.date.available: 2019-08-25T02:45:56Z
dc.date.issued: 2019
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=842228&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/265239
dc.description: Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering, 2019.2, [v, 85 p.]
dc.description.abstract: This thesis considers two approaches to dealing with fluctuations in sequential patterns: a deterministic and a probabilistic one. A deterministic recurrent neural network (RNN) is first proposed within the predictive-coding framework, and the proposed model is tested in both simulation and robotic experiments. It is shown that when fluctuating sequential patterns are provided to the model during learning, transient states form in the model's internal dynamics alongside the limit-cycle attractor. The deterministic model can cope with fluctuations in unknown test patterns by exploiting these transient states during error regression, a method based on the predictive-coding framework. However, the model's capability is limited as the number of training patterns increases. This limitation is addressed by proposing a stochastic RNN based on the predictive-coding framework and recent advances in variational Bayes. It is shown, in both simulation and robotic experiments, that the stochastic model can deal with fluctuating patterns by balancing prediction errors against the divergence between its prior and posterior models. We also examine the link between deterministic and stochastic RNNs by shifting a network parameter, referred to as the meta-prior, during the learning process of the stochastic model. With a larger meta-prior the network becomes more deterministic, resulting in exact learning, whereas with a smaller meta-prior it becomes more random, losing details of the training patterns; generalization is maximized at an intermediate value. Finally, we compare the deterministic and stochastic RNNs on fluctuating patterns in a robotic experiment with more training patterns than in the first robotic experiment of the deterministic model. The stochastic model significantly outperforms the deterministic one by using time-varying variance, showing that the limitation of the deterministic model can be overcome by introducing uncertainty into its hidden layers.
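The abstract's balancing of prediction errors against the divergence between the prior and posterior models, weighted by the meta-prior, can be sketched as a weighted objective. The sketch below is illustrative only: the function names, the unit-variance Gaussian likelihood, and the diagonal-Gaussian latent distributions are assumptions for this sketch, not the thesis's actual model.

```python
import numpy as np

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, summed
    over latent dimensions."""
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

def stochastic_rnn_loss(pred, target, mu_q, var_q, mu_p, var_p, meta_prior):
    """Illustrative per-step objective: squared prediction error plus the
    meta-prior-weighted divergence between the posterior q and the prior p.
    A larger meta_prior penalizes prior/posterior divergence more strongly;
    a smaller one lets reconstruction dominate."""
    prediction_error = 0.5 * np.sum((pred - target) ** 2)
    kl = gaussian_kl(mu_q, var_q, mu_p, var_p)
    return prediction_error + meta_prior * kl
```

When the posterior matches the prior, the KL term vanishes and the objective reduces to the prediction error alone, so the meta-prior only matters insofar as the two distributions disagree.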
dc.language: eng
dc.publisher: Korea Advanced Institute of Science and Technology (KAIST)
dc.subject: predictive coding; recurrent neural networks; variational Bayes; generative model; error regression
dc.subject: 예측 부호화; 회귀신경망; 변분 베이지안; 생성 모델; 오류 회귀
dc.title: Bridging the gap between deterministic and probabilistic recurrent neural networks in a predictive coding framework
dc.title.alternative: 예측코딩 기반 회귀신경망의 결정론적 모형과 확률론적 모형의 간극 해소
dc.type: Thesis (Ph.D.)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology: School of Electrical Engineering
dc.contributor.alternativeauthor: 아하마디 아하마드레자
Appears in Collection: EE-Theses_Ph.D. (박사논문)
Files in This Item
There are no files associated with this item.
