Come-Closer-Diffuse-Faster: Accelerating Conditional Diffusion Models for Inverse Problems through Stochastic Contraction

Cited 63 times in Web of Science, 0 times in Scopus
  • Hits: 97
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Chung, Hyungjin | ko
dc.contributor.author | Sim, Byeongsu | ko
dc.contributor.author | Ye, Jong Chul | ko
dc.date.accessioned | 2023-09-19T11:01:20Z | -
dc.date.available | 2023-09-19T11:01:20Z | -
dc.date.created | 2023-09-19 | -
dc.date.issued | 2022-06 | -
dc.identifier.citation | 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, pp.12403 - 12412 | -
dc.identifier.issn | 1063-6919 | -
dc.identifier.uri | http://hdl.handle.net/10203/312775 | -
dc.description.abstract | Diffusion models have recently attracted significant interest within the community owing to their strong performance as generative models. Furthermore, their application to inverse problems has demonstrated state-of-the-art performance. Unfortunately, diffusion models have a critical downside: they are inherently slow to sample from, requiring a few thousand iteration steps to generate images from pure Gaussian noise. In this work, we show that starting from Gaussian noise is unnecessary. Instead, a single forward diffusion from a better initialization significantly reduces the number of sampling steps in the reverse conditional diffusion. This phenomenon is formally explained by the contraction theory of stochastic difference equations such as our conditional diffusion strategy - alternating applications of a reverse diffusion step followed by a non-expansive data-consistency step. The new sampling strategy, dubbed Come-Closer-Diffuse-Faster (CCDF), also reveals a new insight into how existing feed-forward neural network approaches for inverse problems can be synergistically combined with diffusion models. Experimental results on super-resolution, image inpainting, and compressed sensing MRI demonstrate that our method achieves state-of-the-art reconstruction performance with significantly fewer sampling steps. | -
dc.language | English | -
dc.publisher | IEEE Computer Society | -
dc.title | Come-Closer-Diffuse-Faster: Accelerating Conditional Diffusion Models for Inverse Problems through Stochastic Contraction | -
dc.type | Conference | -
dc.identifier.wosid | 000870759105048 | -
dc.identifier.scopusid | 2-s2.0-85131646789 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 12403 | -
dc.citation.endingpage | 12412 | -
dc.citation.publicationname | 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022 | -
dc.identifier.conferencecountry | US | -
dc.identifier.conferencelocation | New Orleans, LA | -
dc.identifier.doi | 10.1109/CVPR52688.2022.01209 | -
dc.contributor.localauthor | Ye, Jong Chul | -
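The CCDF procedure described in the abstract, forward-diffuse a good initialization to an intermediate time t0 < T, then alternate a reverse diffusion step with a non-expansive data-consistency step, can be sketched as follows. This is a minimal illustrative sketch for a generic linear inverse problem, not the authors' implementation: the names `ccdf_sample` and `score_fn`, the DDPM-style noise schedule, and the pseudoinverse projection used for data consistency are all assumptions.

```python
import numpy as np

def ccdf_sample(x_init, y, A, score_fn, betas, t0, rng):
    """CCDF sketch for the linear inverse problem y = A x.

    x_init   : initial estimate (e.g., from a feed-forward network)
    score_fn : (hypothetical) approximator of the score at noise level t
    betas    : DDPM-style variance schedule of length T; t0 <= T
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)

    # "Come closer": a single forward diffusion of the initialization
    # to time t0 < T, instead of starting from pure Gaussian noise.
    a = alpha_bar[t0 - 1]
    x = np.sqrt(a) * x_init + np.sqrt(1.0 - a) * rng.standard_normal(x_init.shape)

    # "Diffuse faster": reverse diffusion from t0 down to 0, alternating
    # a denoising step with a non-expansive data-consistency projection.
    for t in range(t0 - 1, -1, -1):
        z = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = (x + betas[t] * score_fn(x, t)) / np.sqrt(alphas[t]) \
            + np.sqrt(betas[t]) * z
        # Project onto the affine set {x : A x = y} (a non-expansive map).
        x = x + A.T @ np.linalg.pinv(A @ A.T) @ (y - A @ x)
    return x
```

Because the final iteration ends with the projection, the returned sample satisfies the measurement constraint A x = y up to floating-point error, while the reverse diffusion steps supply the generative prior.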
Appears in Collection
AI-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS
