A Deep Motion Deblurring Network Based on Per-Pixel Adaptive Kernels With Residual Down-Up and Up-Down Modules

Due to object motion during the camera exposure time, latent pixel information appears scattered in a blurred image. A large dataset of dynamic motion blur and blur-free frame pairs enables deep neural networks to learn deblurring operations directly in an end-to-end manner. In this paper, we propose a novel motion deblurring kernel learning network that predicts a per-pixel deblur kernel and a residual image. The learned deblur kernel filters and linearly combines neighboring pixels to restore the clean pixel at the corresponding location. This per-pixel adaptive convolution with the learned deblur kernel can effectively handle non-uniform blur. At the same time, the generated residual image is added to the adaptive convolution result to compensate for the limited receptive field of the learned deblur kernel. That is, the adaptive convolution and the residual image play different but complementary roles, reconstructing the latent clean image in a collaborative manner. We also propose residual down-up (RDU) and residual up-down (RUD) blocks that help improve motion deblurring performance. The RDU and RUD blocks are designed to adjust the spatial size and the number of channels of the intermediate features within the blocks. We demonstrate the effectiveness of our motion deblurring kernel learning network through extensive experimental comparisons with state-of-the-art methods.
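The restoration step described above can be sketched as follows. This is a minimal NumPy illustration of the per-pixel adaptive convolution plus residual-image combination, not the paper's implementation: in the actual network the k×k kernels and the residual image are predicted by the deblurring network, whereas here they are simply passed in as arrays, and the function name and edge-padding choice are assumptions for illustration.

```python
import numpy as np

def per_pixel_adaptive_conv(blurred, kernels, residual):
    """Restore each pixel by filtering its k x k neighborhood with that
    pixel's own predicted kernel, then add the predicted residual image.

    blurred  : (H, W) blurred input image
    kernels  : (H, W, k, k) per-pixel deblur kernels (network output)
    residual : (H, W) residual image (network output)
    """
    H, W = blurred.shape
    k = kernels.shape[-1]
    pad = k // 2
    # Edge padding so every pixel has a full k x k neighborhood
    # (an assumption; the paper does not specify the boundary handling).
    padded = np.pad(blurred, pad, mode="edge")
    out = np.empty_like(blurred)
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + k, x:x + k]
            # Linear combination of neighboring pixels with the
            # kernel predicted for this specific location.
            out[y, x] = np.sum(patch * kernels[y, x])
    # The residual compensates for the kernel's limited receptive field.
    return out + residual
```

With a delta kernel at every location (all weight on the center tap) and a zero residual, the function reproduces the input unchanged, which is a quick sanity check that the indexing is correct.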
Publisher
Computer Vision Foundation
Issue Date
2019-06-17
Language
English
Citation

Computer Vision and Pattern Recognition Workshops (CVPRW 2019), pp. 1–8

URI
http://hdl.handle.net/10203/268691
Appears in Collection
EE-Conference Papers (Conference Papers)