Self-Supervised Dense Consistency Regularization for Image-to-Image Translation

Cited 10 times in Web of Science; 0 times in Scopus
Abstract
Unsupervised image-to-image translation has gained considerable attention due to recent impressive advances in generative adversarial networks (GANs). This paper presents a simple but effective regularization technique for improving GAN-based image-to-image translation. To generate images with realistic local semantics and structures, we propose an auxiliary self-supervision loss that enforces point-wise consistency over the overlapping region between a pair of patches cropped from a single real image while training the discriminator of a GAN. Our experiments show that the proposed dense consistency regularization substantially improves performance in various image-to-image translation scenarios. It also yields additional gains when combined with instance-level regularization methods. Furthermore, we verify that the proposed model captures domain-specific characteristics more effectively with only a small fraction of the training data.
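The core idea in the abstract can be sketched as follows: crop two overlapping patches from one real image, run both through a discriminator that produces dense (per-location) outputs, and penalize point-wise disagreement on the shared region. The sketch below is a minimal, framework-agnostic illustration in NumPy, not the authors' implementation; `disc`, the patch/overlap sizes, and the MSE penalty are all illustrative assumptions (the discriminator is assumed to preserve spatial resolution so outputs can be aligned by pixel coordinates).

```python
import numpy as np

def dense_consistency_loss(disc, image, patch=128, overlap=64, rng=None):
    """Hypothetical sketch of a dense consistency regularizer.

    disc  : callable mapping an array (..., h, w) to a dense output of the
            same spatial size (an assumption for this illustration).
    image : a single real image, shape (C, H, W) or (N, C, H, W).
    """
    rng = rng or np.random.default_rng()
    H, W = image.shape[-2:]
    shift = patch - overlap  # offset so the crops share an overlap x overlap region
    y = int(rng.integers(0, H - patch - shift + 1))
    x = int(rng.integers(0, W - patch - shift + 1))
    # Two crops of the same real image, displaced diagonally by `shift`.
    p1 = image[..., y:y + patch, x:x + patch]
    p2 = image[..., y + shift:y + shift + patch, x + shift:x + shift + patch]
    f1, f2 = disc(p1), disc(p2)
    # The shared region is the bottom-right corner of patch 1
    # and the top-left corner of patch 2.
    o1 = f1[..., shift:, shift:]
    o2 = f2[..., :overlap, :overlap]
    # Point-wise consistency penalty (MSE here, as one plausible choice).
    return float(np.mean((o1 - o2) ** 2))
```

In training, such a term would be added to the discriminator's usual adversarial loss on real images; because both crops come from the same image, the dense outputs over the shared pixels should agree, which encourages the discriminator to attend to local semantics and structure.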
Publisher
IEEE COMPUTER SOC
Issue Date
2022-06
Language
English
Citation

IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 18280-18289

ISSN
1063-6919
DOI
10.1109/CVPR52688.2022.01776
URI
http://hdl.handle.net/10203/305866
Appears in Collection
AI-Conference Papers (conference papers)
Files in This Item
There are no files associated with this item.
