Rethinking the value of representation pairs via supervised momentum contrastive learning

The recent development of contrastive learning has driven significant advances in self-supervised representation learning, with particularly strong results in the fully supervised setting, where label information is available. The performance of this batch contrastive approach depends directly on the number of representations used in the loss computation. In this paper, we present supervised momentum contrastive learning, which can arbitrarily increase the number of representations by adding a memory queue to supervised contrastive learning. We show that, under a fixed GPU memory or time budget, increasing the number of representation pairs in this way can exceed the performance of existing supervised contrastive learning. This makes it possible to improve performance in various contrastive learning settings without high-performance computing equipment.
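
To make the idea concrete, below is a minimal sketch (not the thesis's actual code) of how a MoCo-style momentum encoder and memory queue can be combined with a supervised contrastive (SupCon-style) loss, so that queries are contrasted against far more representations than the current batch provides. All names and hyperparameters (`SupMoCo`, `queue_size`, `momentum`, `temperature`) are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SupMoCo(nn.Module):
    """Sketch: supervised contrastive loss with a momentum encoder and label queue."""

    def __init__(self, encoder_q, encoder_k, dim=128, queue_size=4096,
                 momentum=0.999, temperature=0.07):
        super().__init__()
        self.encoder_q = encoder_q  # query encoder, updated by backprop
        self.encoder_k = encoder_k  # key encoder, updated by momentum only
        self.m = momentum
        self.t = temperature
        # initialize the key encoder as a frozen copy of the query encoder
        self.encoder_k.load_state_dict(self.encoder_q.state_dict())
        for p in self.encoder_k.parameters():
            p.requires_grad = False
        # queue of past key embeddings and their class labels (-1 = empty slot)
        self.register_buffer("queue", F.normalize(torch.randn(queue_size, dim), dim=1))
        self.register_buffer("queue_labels", torch.full((queue_size,), -1, dtype=torch.long))
        self.register_buffer("ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def _momentum_update(self):
        # exponential moving average of query-encoder weights
        for pq, pk in zip(self.encoder_q.parameters(), self.encoder_k.parameters()):
            pk.data = pk.data * self.m + pq.data * (1.0 - self.m)

    @torch.no_grad()
    def _enqueue(self, keys, labels):
        # circular buffer: overwrite the oldest entries
        n = keys.shape[0]
        ptr = int(self.ptr)
        idx = torch.arange(ptr, ptr + n, device=keys.device) % self.queue.shape[0]
        self.queue[idx] = keys
        self.queue_labels[idx] = labels
        self.ptr[0] = (ptr + n) % self.queue.shape[0]

    def forward(self, x_q, x_k, labels):
        q = F.normalize(self.encoder_q(x_q), dim=1)      # (B, dim)
        with torch.no_grad():
            self._momentum_update()
            k = F.normalize(self.encoder_k(x_k), dim=1)  # (B, dim)
        # contrast each query against current keys plus everything in the queue
        feats = torch.cat([k, self.queue.clone().detach()], dim=0)     # (B+Q, dim)
        targs = torch.cat([labels, self.queue_labels], dim=0)          # (B+Q,)
        logits = q @ feats.t() / self.t                                # (B, B+Q)
        # supervised positives: every representation sharing the query's label
        pos_mask = (labels.unsqueeze(1) == targs.unsqueeze(0)).float()
        log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
        # SupCon-style loss: mean log-likelihood over all positives per query
        loss = -(pos_mask * log_prob).sum(1) / pos_mask.sum(1).clamp(min=1)
        self._enqueue(k, labels)
        return loss.mean()

The design point the abstract relies on is visible in `forward`: the number of representations entering the loss is the batch size plus the queue size, so the positive/negative pool can be grown arbitrarily without enlarging the batch, which is what keeps GPU memory requirements fixed.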
Advisors
Lee, Taesik (이태식)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2022
Identifier
325007
Language
eng
Description

Thesis (Master's) - Korea Advanced Institute of Science and Technology: Department of Industrial and Systems Engineering, 2022.2, [iii, 22 p.]

URI
http://hdl.handle.net/10203/308806
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=997784&flag=dissertation
Appears in Collection
IE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
