Single Image Deraining Using Time-Lapse Data

Cited 7 times in Web of Science; cited 0 times in Scopus
DC Field | Value | Language
dc.contributor.author | Cho, Jaehoon | ko
dc.contributor.author | Kim, Seungryong | ko
dc.contributor.author | Min, Dongbo | ko
dc.contributor.author | Sohn, Kwanghoon | ko
dc.date.accessioned | 2024-08-16T02:00:16Z | -
dc.date.available | 2024-08-16T02:00:16Z | -
dc.date.created | 2024-08-16 | -
dc.date.issued | 2020 | -
dc.identifier.citation | IEEE TRANSACTIONS ON IMAGE PROCESSING, v.29, pp.7274 - 7289 | -
dc.identifier.issn | 1057-7149 | -
dc.identifier.uri | http://hdl.handle.net/10203/322318 | -
dc.description.abstract | Leveraging recent advances in deep convolutional neural networks (CNNs), single image deraining has been studied as a learning task, achieving performance that surpasses traditional hand-designed approaches. Current CNN-based deraining approaches adopt a supervised learning framework that uses massive training data generated with synthetic rain streaks, which limits their ability to generalize to real rainy images. To address this problem, we propose a novel learning framework for single image deraining that leverages time-lapse sequences instead of synthetic image pairs. The deraining networks are trained on time-lapse sequences in which both the camera and the scene are static except for time-varying rain streaks. Specifically, we formulate a background consistency loss such that the deraining networks consistently generate the same derained image from the time-lapse sequences. We additionally introduce two loss functions: a structure similarity loss that encourages the derained image to be structurally similar to the input rainy image, and a directional gradient loss based on the assumption that the estimated rain streaks are likely to be sparse and have dominant directions. To handle various rain conditions, we leverage a dynamic fusion module that effectively fuses multi-scale features. We also build a novel large-scale time-lapse dataset providing real-world rainy images under various rain conditions. Experiments demonstrate that the proposed method outperforms state-of-the-art techniques on synthetic and real rainy images, both qualitatively and quantitatively. For high-level vision tasks under severe rainy conditions, the proposed method can also serve as a pre-processing step for subsequent tasks. (A code sketch of the three losses appears after the metadata fields below.) | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | Single Image Deraining Using Time-Lapse Data | -
dc.type | Article | -
dc.identifier.wosid | 000553851400003 | -
dc.identifier.scopusid | 2-s2.0-85088304878 | -
dc.type.rims | ART | -
dc.citation.volume | 29 | -
dc.citation.beginningpage | 7274 | -
dc.citation.endingpage | 7289 | -
dc.citation.publicationname | IEEE TRANSACTIONS ON IMAGE PROCESSING | -
dc.identifier.doi | 10.1109/TIP.2020.3000612 | -
dc.contributor.localauthor | Kim, Seungryong | -
dc.contributor.nonIdAuthor | Cho, Jaehoon | -
dc.contributor.nonIdAuthor | Min, Dongbo | -
dc.contributor.nonIdAuthor | Sohn, Kwanghoon | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Rain | -
dc.subject.keywordAuthor | Training data | -
dc.subject.keywordAuthor | Task analysis | -
dc.subject.keywordAuthor | Convolutional neural networks | -
dc.subject.keywordAuthor | Rendering (computer graphics) | -
dc.subject.keywordAuthor | Training | -
dc.subject.keywordAuthor | Feature extraction | -
dc.subject.keywordAuthor | Single image deraining | -
dc.subject.keywordAuthor | dynamic fusion module | -
dc.subject.keywordAuthor | convolutional neural networks (CNNs) | -
dc.subject.keywordAuthor | time-lapse dataset | -
dc.subject.keywordPlus | RAIN | -
dc.subject.keywordPlus | REMOVAL | -
dc.subject.keywordPlus | MODEL | -
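
The abstract describes three training losses: a background consistency loss across time-lapse frames, a structure similarity loss against the rainy input, and a directional gradient loss on the estimated rain streaks. The PyTorch sketch below illustrates one plausible form of each; the tensor shapes, the 3x3 SSIM window, and the gradient weights are assumptions for illustration, not the paper's exact formulations.

import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Local means, variances, and covariance via 3x3 average pooling
    # (stride 1, padding 1); a common lightweight SSIM approximation.
    mu_x = F.avg_pool2d(x, 3, 1, 1)
    mu_y = F.avg_pool2d(y, 3, 1, 1)
    var_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    cov_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def background_consistency_loss(derained):
    # derained: (T, C, H, W) outputs for one time-lapse sequence. The
    # camera and scene are static, so every frame should yield the same
    # background; penalize deviation from the per-sequence mean image.
    mean_bg = derained.mean(dim=0, keepdim=True)
    return F.l1_loss(derained, mean_bg.expand_as(derained))

def structure_similarity_loss(derained, rainy):
    # Keep the derained image structurally close to the rainy input.
    return 1.0 - ssim(derained, rainy)

def directional_gradient_loss(streaks, horiz_weight=1.0, vert_weight=0.1):
    # streaks: estimated rain layer (rainy - derained). Streaks are assumed
    # sparse with a dominant, near-vertical direction, so horizontal
    # gradients are penalized more strongly than vertical ones (the
    # weighting here is an assumption).
    dx = (streaks[..., :, 1:] - streaks[..., :, :-1]).abs().mean()
    dy = (streaks[..., 1:, :] - streaks[..., :-1, :]).abs().mean()
    return horiz_weight * dx + vert_weight * dy

A full objective would combine the three terms with scalar weights chosen as hyperparameters; the relative weights are not specified in this record.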
Appears in Collection
AI-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.