Can Maxout Units Downsize Restoration Networks? - Single Image Super-Resolution Using Lightweight CNN with Maxout Units

Rectified linear units (ReLU) are well known to yield high performance in deep-learning-based applications. However, networks with ReLU tend to perform poorly when the number of parameters is constrained. To overcome this, we propose a novel network utilizing maxout units (MU) and show its effectiveness on super-resolution (SR). In this paper, we first reveal that MU can halve the filter sizes in restoration problems, leading to a more compact network. To the best of our knowledge, we are the first to incorporate MU into SR applications, and we show promising results. In MU, feature maps from a previous convolutional layer are divided into two parts along the channel dimension; these are compared element-wise, and only their maximum values are passed to the next layer. In addition to analyzing interesting properties of MU, we further investigate other variants of MU. Our MU-based SR method reconstructs images with quality comparable to previous SR methods, even with fewer parameters.
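The channel-split-and-max operation described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes NCHW tensor layout and an even channel count, and uses plain numpy for clarity:

```python
import numpy as np

def maxout_unit(feature_maps):
    """Maxout unit as described in the abstract: split the feature maps
    into two halves along the channel axis and keep the element-wise
    maximum. Illustrative sketch only; assumes NCHW layout.
    """
    n, c, h, w = feature_maps.shape
    assert c % 2 == 0, "channel count must be even to split in two"
    first = feature_maps[:, : c // 2]
    second = feature_maps[:, c // 2 :]
    # Element-wise comparison: only the larger activation survives.
    return np.maximum(first, second)

# Example: a 64-channel feature map is reduced to 32 channels,
# which is how MU allows the subsequent layer's filters to be smaller.
x = np.random.randn(1, 64, 16, 16)
y = maxout_unit(x)
print(y.shape)  # (1, 32, 16, 16)
```

Note that the output has half as many channels as the input, which is the source of the parameter savings the paper claims: the next convolutional layer only needs filters over half the channel depth.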
Publisher
Asian Conference on Computer Vision (ACCV)
Issue Date
2018-12-06
Language
English
Citation

14th Asian Conference on Computer Vision (ACCV), pp. 471-487

DOI
10.1007/978-3-030-20876-9_30
URI
http://hdl.handle.net/10203/247205
Appears in Collection
EE-Conference Papers (학술회의논문: Conference Papers)
Files in This Item
There are no files associated with this item.
