(The) convergence analysis of sharpness-aware minimization under practical settings

DC Field: Value
dc.contributor.advisor: 윤철희
dc.contributor.author: Si, Dongkuk
dc.contributor.author: 시동국
dc.date.accessioned: 2024-07-30T19:30:36Z
dc.date.available: 2024-07-30T19:30:36Z
dc.date.issued: 2024
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1096053&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/321348
dc.description: Master's thesis - Korea Advanced Institute of Science and Technology (KAIST): Kim Jaechul Graduate School of AI, 2024.2, [iv, 43 p.]
dc.description.abstract: Sharpness-Aware Minimization (SAM) is an optimizer that takes a descent step based on the gradient at a perturbation $y_t = x_t + \rho \frac{\nabla f(x_t)}{\| \nabla f(x_t) \|}$ of the current point~$x_t$. Existing studies prove convergence of SAM for smooth functions, but they do so by assuming decaying perturbation size $\rho$ and/or no gradient normalization in $y_t$, which is detached from practice. To address this gap, we study deterministic/stochastic versions of SAM with practical configurations (i.e., constant $\rho$ and gradient normalization in $y_t$) and explore their convergence properties on smooth functions with (non)convexity assumptions. Perhaps surprisingly, in many scenarios, we find out that SAM has \emph{limited} capability to converge to global minima or stationary points. For smooth strongly convex functions, we show that while deterministic SAM enjoys tight global convergence rates of $\tilde \Theta(\frac{1}{T^2})$, the convergence bound of stochastic SAM suffers an \emph{inevitable} additive term $\mathcal O(\rho^2)$, indicating convergence only up to \emph{neighborhoods} of optima. In fact, such $\mathcal O(\rho^2)$ factors arise for stochastic SAM in all the settings we consider, and also for deterministic SAM in nonconvex cases; importantly, we prove by examples that such terms are \emph{unavoidable}. Our results highlight vastly different characteristics of SAM with vs.\ without decaying perturbation size or gradient normalization, and suggest that the intuitions gained from one version may not apply to the other.
dc.language: eng
dc.publisher: Korea Advanced Institute of Science and Technology (KAIST)
dc.subject: 첨예도 감지 최소화; 볼록 최적화
dc.subject: Sharpness-aware minimization; Convex optimization
dc.title: (The) convergence analysis of sharpness-aware minimization under practical settings
dc.title.alternative: 실제 환경에서의 첨예도 감지 최소화에 대한 수렴성 분석
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: Korea Advanced Institute of Science and Technology (KAIST): Kim Jaechul Graduate School of AI
dc.contributor.alternativeauthor: Yun, Chulhee
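The abstract above describes the update rule analyzed in the thesis: an ascent step to the perturbed point $y_t = x_t + \rho \frac{\nabla f(x_t)}{\|\nabla f(x_t)\|}$, followed by a descent step using the gradient evaluated at $y_t$, with constant perturbation size $\rho$ and gradient normalization (the "practical" configuration). Below is a minimal NumPy sketch of one deterministic SAM step under those assumptions; the step size, the value of $\rho$, the small constant added to the norm, and the quadratic test function are illustrative choices, not taken from the thesis.

import numpy as np

def sam_step(x, grad_fn, rho=0.05, lr=0.1, eps=1e-12):
    """One deterministic SAM step with gradient normalization and constant rho.

    Ascent:  y_t     = x_t + rho * grad f(x_t) / ||grad f(x_t)||
    Descent: x_{t+1} = x_t - lr * grad f(y_t)
    (rho, lr, and eps are illustrative values, not taken from the thesis.)
    """
    g = grad_fn(x)
    y = x + rho * g / (np.linalg.norm(g) + eps)  # normalized ascent to the perturbed point y_t
    return x - lr * grad_fn(y)                   # descent step using the gradient at y_t

# Toy run on a smooth strongly convex quadratic f(x) = 0.5 * ||x||^2, so grad f(x) = x.
if __name__ == "__main__":
    grad_fn = lambda x: x
    x = np.array([2.0, -1.0])
    for _ in range(100):
        x = sam_step(x, grad_fn)
    print(x)  # ends close to the minimizer at the origin

In this toy run the iterates shrink rapidly toward the minimizer and then hover in a small neighborhood of it; that residual hovering comes from the fixed step size used in the sketch and is not meant to reproduce any specific rate or bound stated in the abstract.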
Appears in Collection
AI-Theses_Master(석사논문)
Files in This Item
There are no files associated with this item.
