Rethinking Efficacy of Softmax for Lightweight Non-local Neural Networks

The non-local (NL) block is a popular module for modeling global context. However, the NL block generally incurs heavy computation and memory costs, making it impractical to apply to high-resolution feature maps. In this paper, to investigate the efficacy of the NL block, we empirically analyze whether the magnitude and direction of input feature vectors properly affect the attention between vectors. The results reveal the inefficacy of the softmax operation commonly used to normalize the attention map of the NL block: attention maps normalized with softmax rely heavily on the magnitude of the key vectors, and performance degrades when this magnitude information is removed. By replacing the softmax operation with a scaling factor, we demonstrate improved performance on CIFAR-10, CIFAR-100, and Tiny-ImageNet. In addition, our method is robust to embedding channel reduction and embedding weight initialization. Notably, our method makes multi-head attention employable without additional computational cost.
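
The change described in the abstract (normalizing the NL attention map with a scaling factor rather than softmax) can be illustrated with a minimal PyTorch-style sketch. This is an assumption-laden illustration, not the paper's code: the 1x1-convolution embeddings, the residual output projection, and the 1/N scaling (N = number of spatial positions) are assumptions, since the abstract does not specify the exact factor.

# Minimal sketch of a non-local block whose attention map is normalized by a
# scaling factor instead of softmax, as the abstract describes. The 1/N factor
# and the 1x1-conv embeddings are assumptions; the paper's exact formulation
# may differ.
import torch
import torch.nn as nn


class ScaledNonLocalBlock(nn.Module):
    def __init__(self, channels, reduction=2):
        super().__init__()
        inner = channels // reduction
        self.theta = nn.Conv2d(channels, inner, kernel_size=1)  # query embedding
        self.phi = nn.Conv2d(channels, inner, kernel_size=1)    # key embedding
        self.g = nn.Conv2d(channels, inner, kernel_size=1)      # value embedding
        self.out = nn.Conv2d(inner, channels, kernel_size=1)    # restore channels

    def forward(self, x):
        b, c, h, w = x.shape
        n = h * w
        q = self.theta(x).flatten(2).transpose(1, 2)  # (b, n, c')
        k = self.phi(x).flatten(2)                     # (b, c', n)
        v = self.g(x).flatten(2).transpose(1, 2)       # (b, n, c')
        attn = torch.bmm(q, k) / n                     # scaling factor replaces softmax
        y = torch.bmm(attn, v)                         # (b, n, c')
        y = y.transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # residual connection


if __name__ == "__main__":
    block = ScaledNonLocalBlock(channels=64)
    print(block(torch.randn(2, 64, 16, 16)).shape)  # torch.Size([2, 64, 16, 16])

Replacing the line attn = torch.bmm(q, k) / n with attn = torch.softmax(torch.bmm(q, k), dim=-1) recovers the standard softmax-normalized NL block for comparison.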
Publisher
IEEE
Issue Date
2022-10-16
Language
English
Citation
IEEE International Conference on Image Processing (ICIP 2022), pp. 1031-1035
ISSN
1522-4880
DOI
10.1109/icip46576.2022.9897905
URI
http://hdl.handle.net/10203/300288
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
