A Deep Motion Deblurring Network Based on Per-Pixel Adaptive Kernels With Residual Down-Up and Up-Down Modules

Cited 27 times in Web of Science; cited 0 times in Scopus
  • Hit: 156
  • Download: 0
DC Field | Value | Language
dc.contributor.author | Kim, Munchurl | ko
dc.contributor.author | Sim, Hyeonjun | ko
dc.date.accessioned | 2019-11-28T08:26:27Z | -
dc.date.available | 2019-11-28T08:26:27Z | -
dc.date.created | 2019-11-25 | -
dc.date.issued | 2019-06-17 | -
dc.identifier.citation | Computer Vision and Pattern Recognition Workshops (CVPRW 2019), pp. 1 - 8 | -
dc.identifier.issn | 2160-7508 | -
dc.identifier.uri | http://hdl.handle.net/10203/268691 | -
dc.description.abstract | Due to object motion during the camera exposure time, latent pixel information appears scattered in a blurred image. A large dataset of dynamic-motion-blurred and blur-free frame pairs enables deep neural networks to learn deblurring operations directly in an end-to-end manner. In this paper, we propose a novel motion deblurring kernel learning network that predicts a per-pixel deblur kernel and a residual image. The learned deblur kernel filters and linearly combines neighboring pixels to restore the clean pixel at the corresponding location. Per-pixel adaptive convolution with the learned deblur kernel can effectively handle non-uniform blur. At the same time, the generated residual image is added to the adaptive convolution result to compensate for the limited receptive field of the learned deblur kernel. That is, the adaptive convolution and the residual image play different but complementary roles, reconstructing the latent clean images in a collaborative manner. We also propose residual down-up (RDU) and residual up-down (RUD) blocks that help improve motion deblurring performance. The RDU and RUD blocks are designed to adjust the spatial size and the number of channels of the intermediate features within the blocks. We demonstrate the effectiveness of our motion deblurring kernel learning network through extensive experimental comparisons with state-of-the-art methods. | -
dc.language | English | -
dc.publisher | Computer Vision Foundation | -
dc.title | A Deep Motion Deblurring Network Based on Per-Pixel Adaptive Kernels With Residual Down-Up and Up-Down Modules | -
dc.type | Conference | -
dc.identifier.wosid | 000569983600261 | -
dc.identifier.scopusid | 2-s2.0-85083288811 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 1 | -
dc.citation.endingpage | 8 | -
dc.citation.publicationname | Computer Vision and Pattern Recognition Workshops (CVPRW 2019) | -
dc.identifier.conferencecountry | US | -
dc.identifier.conferencelocation | Long Beach, California | -
dc.identifier.doi | 10.1109/CVPRW.2019.00267 | -
dc.contributor.localauthor | Kim, Munchurl | -
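The abstract's core operation — filtering each pixel with its own learned deblur kernel and adding a predicted residual image — can be sketched as follows. This is an illustrative numpy sketch under assumed shapes and names (`per_pixel_adaptive_deblur`, a single-channel image, `k x k` kernels), not the paper's actual implementation, which learns the kernels and residual with a deep network.

```python
import numpy as np

def per_pixel_adaptive_deblur(blurred, kernels, residual):
    """Restore an image by per-pixel adaptive convolution plus a residual.

    blurred  : (H, W) blurred grayscale image
    kernels  : (H, W, k, k) per-pixel deblur kernels (assumed network output)
    residual : (H, W) residual image (assumed network output), which
               compensates for the kernel's limited receptive field
    """
    H, W, k, _ = kernels.shape
    r = k // 2
    # Pad so every pixel has a full k x k neighborhood.
    padded = np.pad(blurred, r, mode="edge")
    out = np.empty((H, W), dtype=np.float64)
    for y in range(H):
        for x in range(W):
            # Linearly combine neighboring pixels with this pixel's own kernel.
            patch = padded[y:y + k, x:x + k]
            out[y, x] = np.sum(kernels[y, x] * patch)
    return out + residual
```

With identity kernels (a 1 at the center tap) and a zero residual, the function returns the input unchanged, which is a convenient sanity check for the indexing.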
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.