Revisiting Softmax masking for stability in continual learning

DC Field | Value | Language
dc.contributor.advisor: 김준모
dc.contributor.author: Kwon, Min-Chan
dc.contributor.author: 권민찬
dc.date.accessioned: 2024-07-30T19:30:37Z
dc.date.available: 2024-07-30T19:30:37Z
dc.date.issued: 2024
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1096058&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/321353
dc.description: Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST): Kim Jaechul Graduate School of AI, 2024.2, [iii, 16 p.]
dc.description.abstract: In continual learning, many classifiers use the softmax function to learn confidence. However, numerous studies have pointed out its inability to accurately estimate confidence distributions for outliers, an issue often referred to as epistemic uncertainty. This inherent limitation also hinders accurate decisions about what to forget and what to keep in previously trained confidence distributions over the course of continual learning. To address this issue, we revisit the effects of masking the softmax function. While this method is simple and prevalent in the literature, its implications for retaining confidence distributions during continual learning, also known as stability, have been under-investigated. In this paper, we revisit the impact of softmax masking and introduce a methodology that exploits its confidence-preservation effects. On class- and task-incremental learning benchmarks, with and without memory replay, our approach significantly increases stability while maintaining sufficiently large plasticity. Overall, our methodology shows better performance than state-of-the-art methods, particularly when used with zero or small memory. This lays a simple and effective foundation for strongly stable replay-based continual learning.
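The softmax masking that the abstract revisits can be sketched roughly as follows. This is a minimal illustration of the general technique, not the thesis's actual implementation: logits of classes outside the current task are masked to negative infinity before the softmax, so the normalization (and hence the gradient of a cross-entropy loss) ignores them and does not push down confidence already learned for old classes. The function name and the NumPy setup are assumptions for the sketch.

```python
import numpy as np

def masked_softmax(logits, current_classes):
    """Softmax restricted to the current task's classes.

    Logits of classes not in `current_classes` are set to -inf,
    so exp() maps them to exactly 0 and they drop out of the
    normalization, leaving previously learned confidence
    distributions untouched by the current task's loss.
    """
    mask = np.full_like(logits, -np.inf)
    mask[current_classes] = 0.0
    z = logits + mask
    z = z - z.max()        # subtract max for numerical stability
    e = np.exp(z)          # masked entries become exp(-inf) = 0
    return e / e.sum()     # normalize over current classes only

# Example: 4-class head, current task covers classes 2 and 3.
probs = masked_softmax(np.array([2.0, 1.0, 0.5, 0.2]), [2, 3])
# probs[0] and probs[1] are exactly 0; probs[2:] sum to 1.
```

The key design point is that masking before the softmax, rather than zeroing probabilities after it, keeps the output a valid distribution over the active classes and yields zero gradient on the masked logits.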
dc.language: eng
dc.publisher: Korea Advanced Institute of Science and Technology (KAIST)
dc.subject: Continual learning; Asymmetric softmax; Image classification; Machine learning; Deep learning
dc.subject: Continual learning; Softmax masking; Image classification; Machine learning; Deep learning
dc.title: Revisiting Softmax masking for stability in continual learning
dc.title.alternative: Improving the stability of continual image learning using an asymmetric softmax function
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology (KAIST): Kim Jaechul Graduate School of AI
dc.contributor.alternativeauthor: Kim, Junmo
Appears in Collection
AI-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
