Sampling Strategies for GAN Synthetic Data

Cited 12 times in Web of Science; cited 0 times in Scopus
Generative Adversarial Networks (GANs) are widely used to generate large volumes of synthetic data, which is then mixed with real examples to augment the training of deep Convolutional Neural Networks (CNNs). Studies have shown, however, that the generated examples often lack the realism and diversity needed to train deep CNNs effectively. Unlike previous work that augments real data with randomly selected synthetic examples, we present simple, effective, and easy-to-implement sampling methods for synthetic data that train deep CNNs more efficiently and accurately. To this end, we propose to maximally exploit signals learned during training of the GAN itself, namely the discriminator's realism confidence score and the confidence on the target label of each synthetic example. In addition, we explore reinforcement learning (RL) to automatically search for a subset of meaningful synthetic examples from a large pool of GAN-generated data. We evaluate our method on two challenging face attribute classification data sets, AffectNet and CelebA. Our extensive experiments clearly demonstrate the need to sample synthetic data before augmentation, and show that doing so also improves the performance of a state-of-the-art deep CNN.
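
The abstract describes selecting synthetic examples by the discriminator's realism score and by the confidence on the target label before augmentation. As a rough illustration only, the PyTorch-style sketch below shows one way such confidence-based filtering could look; the model interfaces, thresholds, and function name are hypothetical assumptions, not the paper's implementation.

    # Minimal sketch of confidence-based sampling of GAN synthetic data.
    # All thresholds and model interfaces here are illustrative assumptions.
    import torch

    @torch.no_grad()
    def sample_synthetic(generator, discriminator, classifier, target_label,
                         num_candidates=1000, latent_dim=128,
                         realism_threshold=0.7, label_threshold=0.8):
        """Keep only synthetic images that the discriminator scores as realistic
        and that an auxiliary classifier confidently assigns the target label."""
        z = torch.randn(num_candidates, latent_dim)
        fake_images = generator(z)                                    # (N, C, H, W)

        # Discriminator realism confidence in [0, 1]
        realism = torch.sigmoid(discriminator(fake_images)).squeeze(-1)
        # Classifier confidence on the desired target label
        label_conf = torch.softmax(classifier(fake_images), dim=1)[:, target_label]

        keep = (realism >= realism_threshold) & (label_conf >= label_threshold)
        return fake_images[keep]

The retained examples would then be mixed with real training data; the RL-based subset search mentioned in the abstract would replace the fixed thresholds with a learned selection policy.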
Publisher
The Institute of Electrical and Electronics Engineers, Signal Processing Society
Issue Date
2020-05-04
Language
English
Citation

2020 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020), pp. 2303-2307

ISSN
1520-6149
DOI
10.1109/ICASSP40776.2020.9054677
URI
http://hdl.handle.net/10203/289749
Appears in Collection
CS-Conference Papers (conference papers)
Files in This Item
There are no files associated with this item.