The disharmony between batch normalization and ReLU causes the gradient explosion, but is offset by the correlation between activations

Deep neural networks that employ batch normalization and ReLU-like activation functions suffer from instability in the early stages of training due to the large gradients induced by a temporary gradient explosion. In this study, we analyze the occurrence and mitigation of the gradient explosion both theoretically and empirically, and find that the correlation between activations plays a key role in preventing the gradient explosion from persisting throughout training. Finally, based on our observations, we propose an improved adaptive learning rate algorithm that effectively controls the training instability.
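The thesis's proposed adaptive learning rate algorithm is not included in this record. As a hedged illustration of the phenomenon the abstract describes, the sketch below measures how the first-layer gradient norm grows with network depth in a BatchNorm + ReLU network at initialization, the early-training regime in which the gradient explosion is reported to occur. PyTorch is assumed, and all names and hyperparameters are hypothetical choices for the sketch, not the author's setup.

```python
# Illustration only (not the thesis's code): gradient norm at the first layer
# of a Linear -> BatchNorm1d -> ReLU stack at initialization, for several depths.
import torch
import torch.nn as nn

def make_bn_relu_mlp(depth: int, width: int = 256) -> nn.Sequential:
    """Fully connected stack of Linear -> BatchNorm1d -> ReLU blocks."""
    layers = []
    for _ in range(depth):
        layers += [nn.Linear(width, width), nn.BatchNorm1d(width), nn.ReLU()]
    layers.append(nn.Linear(width, 1))
    return nn.Sequential(*layers)

torch.manual_seed(0)
x = torch.randn(512, 256)  # i.i.d. Gaussian inputs: no correlation across samples

for depth in (5, 20, 50):
    model = make_bn_relu_mlp(depth)
    loss = model(x).pow(2).mean()
    loss.backward()
    first_linear = model[0]
    grad_norm = first_linear.weight.grad.norm().item()
    print(f"depth={depth:3d}  ||grad of first layer|| = {grad_norm:.3e}")
```

With uncorrelated inputs at initialization, the printed norms should grow markedly with depth, consistent with the early-training gradient explosion the abstract attributes to the BatchNorm + ReLU combination.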
Advisors
최재식 (Jaesik Choi)
Publisher
한국과학기술원
Issue Date
2023
Identifier
325007
Language
eng
Description

Thesis (Master's) - KAIST : Kim Jaechul Graduate School of AI, 2023.8, [iv, 26 p.]

Keywords

Deep learning; Gradient explosion; Training instability; WarmUp; LARS

URI
http://hdl.handle.net/10203/320551
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1045739&flag=dissertation
Appears in Collection
AI-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
