Shallow Neural Network can Perfectly Classify an Object following Separable Probability Distribution

Abstract
Guiding the design of neural networks is of great importance for saving the enormous resources consumed by empirical decisions on architectural parameters. This paper constructs shallow sigmoid-type neural networks that achieve 100% classification accuracy on datasets satisfying a separability condition that is more relaxed than the widely used linear separability. Moreover, the constructed neural network guarantees perfect classification for any dataset sampled from a separable probability distribution. This generalization capability comes from the saturation of the sigmoid function, which exploits the small margins near the boundaries of the intervals formed by the separable probability distribution.
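
As a rough illustration of the mechanism described in the abstract (a minimal sketch under assumed settings, not the paper's construction): suppose one-dimensional data in which class 1 occupies an interval (b1, b2) and class 0 lies at least a margin eps outside it. A single hidden layer of steep sigmoid units, one per interval boundary, saturates to approximately 0 or 1 within that margin, so a hand-set output unit classifies every such sample correctly. The values b1, b2, eps, and the gain below are illustrative assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed separable distribution on the real line: class 1 occupies (b1, b2),
# class 0 lies in (-inf, b1 - eps] or [b2 + eps, inf). These values are illustrative.
b1, b2, eps = 0.0, 1.0, 0.1

# Hidden layer: two steep sigmoid units, one per interval boundary. The large gain
# makes each unit saturate to ~0 or ~1 once x is at least eps away from its boundary.
gain = 50.0 / eps
W1 = np.array([[gain], [-gain]])        # unit 1 fires for x > b1, unit 2 fires for x < b2
c1 = np.array([-gain * b1, gain * b2])

# Output unit: fires only when both hidden units are (nearly) 1, i.e. when b1 < x < b2.
W2 = np.array([10.0, 10.0])
c2 = -15.0

def predict(x):
    h = sigmoid(W1 @ np.atleast_1d(x) + c1)   # hidden activations, each ~0 or ~1
    return int(sigmoid(W2 @ h + c2) > 0.5)

# Every sample at least eps away from the interval boundaries is classified correctly.
xs = [-0.5, b1 - eps, (b1 + b2) / 2, b2 + eps, 2.0]
labels = [1 if b1 < x < b2 else 0 for x in xs]
assert [predict(x) for x in xs] == labels

The saturation is what gives this sketch its robustness: as long as samples stay at least eps away from the boundaries, the hidden activations are effectively binary, mirroring the role the abstract attributes to the sigmoid's saturation near the interval boundaries.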
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2019-07-10
Language
English
Citation
2019 IEEE International Symposium on Information Theory, ISIT 2019, pp. 1812-1816
DOI
10.1109/ISIT.2019.8849497
URI
http://hdl.handle.net/10203/269350
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
