A Deep Convolutional Neural Network with Selection Units for Super-Resolution

Cited 87 times in Web of Science; 0 times in Scopus
Rectified linear units (ReLU) are known to be effective in many deep learning methods. Inspired by the linear-mapping technique used in other super-resolution (SR) methods, we reinterpret ReLU as the point-wise multiplication of an identity mapping and a switch, and from this reinterpretation present a novel nonlinear unit called a selection unit (SU). While conventional ReLU offers no direct control over which data are passed, the proposed SU optimizes this on-off switching control and can therefore handle nonlinearity more flexibly than ReLU. Our proposed deep network with SUs, called SelNet, ranked fifth in the NTIRE 2017 Challenge while having much lower computational complexity than the top four entries. Further experimental results show that SelNet outperforms both our ReLU-only baseline (without SUs) and other state-of-the-art deep-learning-based SR methods.
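The reinterpretation in the abstract can be illustrated with a minimal sketch: ReLU is an identity mapping multiplied by a hard 0/1 switch, while a selection unit replaces that hard switch with a soft, optimizable gate. The sigmoid gate below is an assumption for illustration; the paper's actual selection module is learned and may differ in form.

```python
import numpy as np

def relu(x):
    # ReLU viewed as identity * hard switch: x * 1[x > 0]
    return x * (x > 0.0)

def selection_unit(x):
    # Hypothetical SU sketch: identity mapping point-wise multiplied
    # by a soft switch in [0, 1]. A sigmoid gate is assumed here;
    # in SelNet the switch is produced by a learned selection module.
    switch = 1.0 / (1.0 + np.exp(-x))
    return x * switch

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))            # hard on-off gating
print(selection_unit(x))  # soft, graded gating
```

Unlike ReLU's binary pass/block decision, the soft switch lets every activation be scaled continuously, which is the "more flexible" control the abstract refers to.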
Publisher
IEEE Computer Society and the Computer Vision Foundation (CVF)
Issue Date
2017-07-21
Language
English
Citation

30th IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp.1150 - 1156

ISSN
2160-7516
DOI
10.1109/CVPRW.2017.153
URI
http://hdl.handle.net/10203/227659
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
