Test-Time Style Shifting: Handling Arbitrary Styles in Domain Generalization

DC Field: Value (Language)
dc.contributor.author: Park, JungWuk (ko)
dc.contributor.author: Han, Dong Jun (ko)
dc.contributor.author: Kim, Soyeong (ko)
dc.contributor.author: Moon, Jaekyun (ko)
dc.date.accessioned: 2023-12-06T05:02:49Z
dc.date.available: 2023-12-06T05:02:49Z
dc.date.created: 2023-11-24
dc.date.issued: 2023-07-26
dc.identifier.citation: 40th International Conference on Machine Learning, ICML 2023, pp. 27114-27131
dc.identifier.uri: http://hdl.handle.net/10203/315823
dc.description.abstract: In domain generalization (DG), the target domain is unknown when the model is being trained, and the trained model should successfully work on an arbitrary (and possibly unseen) target domain during inference. This is a difficult problem, and despite active studies in recent years, it remains a great challenge. In this paper, we take a simple yet effective approach to tackle this issue. We propose test-time style shifting, which shifts the style of the test sample (that has a large style gap with the source domains) to the nearest source domain that the model is already familiar with, before making the prediction. This strategy enables the model to handle any target domains with arbitrary style statistics, without additional model update at test-time. Additionally, we propose style balancing, which provides a great platform for maximizing the advantage of test-time style shifting by handling the DG-specific imbalance issues. The proposed ideas are easy to implement and successfully work in conjunction with various other DG schemes. Experimental results on different datasets show the effectiveness of our methods.
dc.language: English
dc.publisher: ML Research Press
dc.title: Test-Time Style Shifting: Handling Arbitrary Styles in Domain Generalization
dc.type: Conference
dc.identifier.scopusid: 2-s2.0-85174414842
dc.type.rims: CONF
dc.citation.beginningpage: 27114
dc.citation.endingpage: 27131
dc.citation.publicationname: 40th International Conference on Machine Learning, ICML 2023
dc.identifier.conferencecountry: US
dc.identifier.conferencelocation: Honolulu, HI
dc.contributor.localauthor: Moon, Jaekyun
dc.contributor.nonIdAuthor: Han, Dong Jun
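The abstract above describes shifting a test sample's style to the nearest source domain before prediction. A minimal sketch of that idea, assuming "style" means AdaIN-style channel-wise feature statistics (mean and standard deviation); the function names, the distance measure, and the array shapes here are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def style_stats(features):
    # Channel-wise mean and std over spatial dimensions for a (C, H, W)
    # feature map; this is the AdaIN-style notion of "style".
    mu = features.mean(axis=(1, 2))
    sigma = features.std(axis=(1, 2)) + 1e-6  # avoid division by zero
    return mu, sigma

def shift_to_nearest_source(features, source_styles):
    # source_styles: list of (mu, sigma) pairs, one per source domain,
    # e.g. running statistics collected during training (an assumption).
    mu_t, sigma_t = style_stats(features)
    # Pick the source domain whose style statistics are closest
    # (Euclidean distance over concatenated mean/std, for illustration).
    dists = [np.linalg.norm(mu_t - mu_s) + np.linalg.norm(sigma_t - sigma_s)
             for mu_s, sigma_s in source_styles]
    mu_s, sigma_s = source_styles[int(np.argmin(dists))]
    # Remove the test sample's style, then apply the nearest source style.
    normalized = (features - mu_t[:, None, None]) / sigma_t[:, None, None]
    return normalized * sigma_s[:, None, None] + mu_s[:, None, None]
```

No model parameters are touched: only the test sample's feature statistics are replaced, which is what lets the approach handle arbitrary target styles without any test-time model update.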
Appears in Collection
EE-Conference Papers (conference papers)
Files in This Item
There are no files associated with this item.
