Supervised vs. Self-supervised Pre-trained models for Hand Pose Estimation

DC Field | Value | Language
dc.contributor.author | Cho, Gyusang | ko
dc.contributor.author | Youn, Chan-Hyun | ko
dc.date.accessioned | 2023-09-15T06:00:27Z | -
dc.date.available | 2023-09-15T06:00:27Z | -
dc.date.created | 2023-09-15 | -
dc.date.issued | 2022-10 | -
dc.identifier.citation | 13th International Conference on Information and Communication Technology Convergence, ICTC 2022, pp.467 - 470 | -
dc.identifier.uri | http://hdl.handle.net/10203/312666 | -
dc.description.abstract | Fully-supervised learning and self-supervised learning are two standard frameworks for training visual representations. While their relative merits during pre-training are well documented, this paper compares their transfer performance on the hand pose estimation task. We conduct experiments on one supervised pre-trained model and five self-supervised pre-trained models. From these experiments, we conclude that self-supervised pre-trained models do not necessarily outperform their supervised pre-trained counterparts, although they do lead to faster convergence of the neural network. | -
dc.language | English | -
dc.publisher | IEEE Computer Society | -
dc.title | Supervised vs. Self-supervised Pre-trained models for Hand Pose Estimation | -
dc.type | Conference | -
dc.identifier.scopusid | 2-s2.0-85143256808 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 467 | -
dc.citation.endingpage | 470 | -
dc.citation.publicationname | 13th International Conference on Information and Communication Technology Convergence, ICTC 2022 | -
dc.identifier.conferencecountry | KO | -
dc.identifier.conferencelocation | Jeju Island | -
dc.identifier.doi | 10.1109/ICTC55196.2022.9953011 | -
dc.contributor.localauthor | Youn, Chan-Hyun | -
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
