Towards Robust Neural Networks with Lipschitz Continuity

Deep neural networks have shown remarkable performance across a wide range of vision-based tasks, owing in particular to the availability of large-scale training datasets and improved architectures. However, data encountered in the real world are often affected by distortions that are not accounted for in the training datasets. In this paper, we address the challenge of robustness and stability of neural networks and propose a general training method that makes existing neural network architectures more robust and stable to input visual perturbations while using only the available datasets for training. The proposed training method is convenient to use, as it requires neither data augmentation nor changes to the network architecture. We provide theoretical proof as well as empirical evidence of the efficiency of the proposed training method through experiments with existing neural network architectures, and demonstrate that the same architecture, when trained with the proposed method, performs better in the presence of noisy data than when trained with the conventional approach.
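The abstract does not spell out the training method, but a common way to encourage Lipschitz continuity without data augmentation or architecture changes is to penalize each layer's spectral norm (its Lipschitz constant as a linear map). The sketch below is an illustrative assumption, not the paper's exact method: it estimates per-layer spectral norms by power iteration and sums them into a penalty that could be added to a training loss.

```python
import numpy as np

def spectral_norm(W, n_iters=50):
    """Estimate the largest singular value of W, i.e. the Lipschitz
    constant of the linear map x -> W @ x, via power iteration."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    v = W.T @ u
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = W @ v
        u /= np.linalg.norm(u) + 1e-12
    # With u, v normalized, u @ W @ v converges to the top singular value.
    return float(u @ W @ v)

def lipschitz_penalty(weight_matrices):
    """Sum of per-layer spectral norms. For 1-Lipschitz activations
    (ReLU, etc.), the product of these norms upper-bounds the network's
    Lipschitz constant; the sum is a simple differentiable surrogate
    that could be added to the usual training loss."""
    return sum(spectral_norm(W) for W in weight_matrices)
```

In an actual training loop, such a penalty would typically be weighted by a hyperparameter and combined with the task loss, e.g. `loss = task_loss + lam * lipschitz_penalty(weights)`; the function names and this formulation are hypothetical here.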
Publisher
IWDW
Issue Date
2018-10
Language
English
Citation

17th International Workshop on Digital Forensics and Watermarking (IWDW), pp.373 - 389

DOI
10.1007/978-3-030-11389-6_28
URI
http://hdl.handle.net/10203/247341
Appears in Collection
EE-Conference Papers (Conference Proceedings)
Files in This Item
There are no files associated with this item.
