Practical Sharpness-Aware Minimization Cannot Converge All the Way to Optima

DC Field | Value | Language
dc.contributor.author | Si, Dongkuk | ko
dc.contributor.author | Yun, Chulhee | ko
dc.date.accessioned | 2024-02-04T14:00:20Z | -
dc.date.available | 2024-02-04T14:00:20Z | -
dc.date.created | 2024-02-04 | -
dc.date.issued | 2023-12-12 | -
dc.identifier.citation | 37th Annual Conference on Neural Information Processing Systems | -
dc.identifier.uri | http://hdl.handle.net/10203/317996 | -
dc.description.abstract | Sharpness-Aware Minimization (SAM) is an optimizer that takes a descent step based on the gradient at a perturbation y_t = x_t + ρ·∇f(x_t)/‖∇f(x_t)‖ of the current point x_t. Existing studies prove convergence of SAM for smooth functions, but they do so by assuming decaying perturbation size ρ and/or no gradient normalization in y_t, which is detached from practice. To address this gap, we study deterministic/stochastic versions of SAM with practical configurations (i.e., constant ρ and gradient normalization in y_t) and explore their convergence properties on smooth functions with (non)convexity assumptions. Perhaps surprisingly, in many scenarios, we find out that SAM has limited capability to converge to global minima or stationary points. For smooth strongly convex functions, we show that while deterministic SAM enjoys tight global convergence rates of Θ̃(1/T²), the convergence bound of stochastic SAM suffers an inevitable additive term O(ρ²), indicating convergence only up to neighborhoods of optima. In fact, such O(ρ²) factors arise for stochastic SAM in all the settings we consider, and also for deterministic SAM in nonconvex cases; importantly, we prove by examples that such terms are unavoidable. Our results highlight vastly different characteristics of SAM with vs. without decaying perturbation size or gradient normalization, and suggest that the intuitions gained from one version may not apply to the other. | -
dc.language | English | -
dc.publisher | Neural Information Processing Systems | -
dc.title | Practical Sharpness-Aware Minimization Cannot Converge All the Way to Optima | -
dc.type | Conference | -
dc.type.rims | CONF | -
dc.citation.publicationname | 37th Annual Conference on Neural Information Processing Systems | -
dc.identifier.conferencecountry | US | -
dc.identifier.conferencelocation | New Orleans, LA | -
dc.contributor.localauthor | Yun, Chulhee | -
dc.contributor.nonIdAuthor | Si, Dongkuk | -
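
The abstract above describes SAM's two-step update. As a quick illustration only (not the authors' code), the minimal NumPy sketch below applies that update with a constant ρ and gradient normalization; the names practical_sam_step, grad_fn, rho, and lr are hypothetical.

```python
import numpy as np

def practical_sam_step(x, grad_fn, rho=0.05, lr=0.1, eps=1e-12):
    """One SAM step with constant perturbation size rho and gradient
    normalization (the "practical" configuration studied in the paper).
    All names here are illustrative, not taken from the authors' code."""
    g = grad_fn(x)
    # Ascent step: y_t = x_t + rho * grad / ||grad||
    y = x + rho * g / (np.linalg.norm(g) + eps)
    # Descent step: update x_t using the gradient evaluated at the perturbed point y_t
    return x - lr * grad_fn(y)

# Toy run on the smooth, strongly convex function f(x) = 0.5 * ||x||^2
grad = lambda x: x                      # gradient of 0.5 * ||x||^2
x = np.array([1.0, -2.0])
for _ in range(200):
    x = practical_sam_step(x, grad)
print(np.linalg.norm(x))  # the iterates stall in a small neighborhood of the optimum
```

Note that this toy run uses a constant step size, so the residual distance to the optimum here reflects the fixed lr as well as ρ; the paper's deterministic strongly convex rate assumes a more careful step-size choice.
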
Appears in Collection
AI-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
