Projected variable three-term conjugate gradient algorithm for enhancing generalization performance in deep neural network training

DC Field: Value (Language)
dc.contributor.author: Kim, Sanghyuk (ko)
dc.contributor.author: Kim, Hansu (ko)
dc.contributor.author: Kang, Namwoo (ko)
dc.contributor.author: Lee, Tae Hee (ko)
dc.date.accessioned: 2025-10-17T07:00:10Z
dc.date.available: 2025-10-17T07:00:10Z
dc.date.created: 2025-10-17
dc.date.issued: 2025-12
dc.identifier.citation: NEUROCOMPUTING, v.657
dc.identifier.issn: 0925-2312
dc.identifier.uri: http://hdl.handle.net/10203/334601
dc.description.abstract: Deep learning optimization faces a fundamental trade-off between convergence efficiency and generalization. First-order methods such as stochastic gradient descent (SGD) and adaptive moment estimation (Adam) tend to find flatter minima but converge slowly, while higher-order methods converge rapidly but are often drawn to sharp minima that generalize poorly. To address this, we introduce the projected variable three-term conjugate gradient (PVTTCG) algorithm. Motivated by the geometric instabilities in modern networks that use techniques such as batch normalization (BN), PVTTCG integrates an orthogonal projection into the higher-order optimization framework. This mechanism eliminates radial components from the search direction, inherently guiding the optimization toward flatter regions without requiring additional regularization terms or hyperparameters. The effectiveness of PVTTCG is validated across diverse tasks, including language modeling, large-scale image classification, and a real-world engineering application. In complex scenarios, PVTTCG consistently improves upon its higher-order baseline, achieving up to a 3.92 percentage point gain on CIFAR-100 while remaining competitive with leading first-order methods. A systematic analysis reveals that PVTTCG demonstrates superior robustness to batch size variations, particularly excelling at larger batch sizes. This robustness enables the algorithm to process batch sizes up to 2,048 in engineering applications, achieving a 35.9% test loss reduction compared to Adam. These findings establish PVTTCG as an effective solution for bridging the convergence-generalization trade-off.
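The abstract's core mechanism is an orthogonal projection that strips the radial component (the part parallel to the weight vector) from the search direction. A minimal NumPy sketch of that projection idea is below; the function name `project_out_radial` and the formulation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def project_out_radial(direction, weights):
    """Remove the component of an update direction parallel to the
    weight vector, keeping only the tangential part.

    Sketch of the orthogonal-projection idea: for scale-invariant
    layers (e.g. weights followed by batch normalization), the loss is
    unchanged along the radial direction, so discarding that component
    leaves the effective update intact.
    """
    w = weights.ravel()
    d = direction.ravel()
    # Radial component: orthogonal projection of d onto w
    radial = (d @ w) / (w @ w) * w
    return (d - radial).reshape(direction.shape)

# Toy example: the projected direction is orthogonal to the weights.
w = np.array([3.0, 4.0])
d = np.array([1.0, 1.0])
d_tan = project_out_radial(d, w)
```

In this sketch `d_tan @ w` is zero up to floating-point error, i.e. the update no longer changes the norm of the weights to first order.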
dc.language: English
dc.publisher: ELSEVIER
dc.title: Projected variable three-term conjugate gradient algorithm for enhancing generalization performance in deep neural network training
dc.type: Article
dc.identifier.wosid: 001578610400001
dc.identifier.scopusid: 2-s2.0-105016527430
dc.type.rims: ART
dc.citation.volume: 657
dc.citation.publicationname: NEUROCOMPUTING
dc.identifier.doi: 10.1016/j.neucom.2025.131568
dc.contributor.localauthor: Kang, Namwoo
dc.contributor.nonIdAuthor: Kim, Hansu
dc.contributor.nonIdAuthor: Lee, Tae Hee
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Optimization algorithm
dc.subject.keywordAuthor: Generalization performance
dc.subject.keywordAuthor: Conjugate gradient method
dc.subject.keywordAuthor: Vehicle crashworthiness
dc.subject.keywordAuthor: Image classification
dc.subject.keywordAuthor: Language modeling
Appears in Collection
GT-Journal Papers (저널논문: Journal Papers)
Files in This Item
There are no files associated with this item.
