A new error function at hidden layers for fast training of multilayer perceptrons

Cited 16 times in Web of Science; cited 0 times in Scopus
DC Field: Value (Language)
dc.contributor.author: Oh, SH (ko)
dc.contributor.author: Lee, Soo-Young (ko)
dc.date.accessioned: 2013-03-02T22:09:42Z
dc.date.available: 2013-03-02T22:09:42Z
dc.date.created: 2012-02-06
dc.date.issued: 1999-07
dc.identifier.citation: IEEE TRANSACTIONS ON NEURAL NETWORKS, v.10, no.4, pp.960 - 964
dc.identifier.issn: 1045-9227
dc.identifier.uri: http://hdl.handle.net/10203/75792
dc.description.abstract: This letter proposes a new error function at hidden layers to speed up the training of multilayer perceptrons (MLPs). With this new hidden error function, the layer-by-layer (LBL) algorithm approximately converges to the error backpropagation algorithm with optimum learning rates. In particular, the optimum learning rate for a hidden weight vector appears approximately as a product of two optimum factors, one for minimizing the new hidden error function and the other for assigning hidden targets. The effectiveness of the proposed error function was demonstrated on handwritten digit recognition and isolated-word recognition tasks. Very fast learning convergence was obtained for MLPs without the stalling problem experienced with conventional LBL algorithms.
dc.language: English
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.subject: BACKPROPAGATION ALGORITHM
dc.subject: LEARNING ALGORITHM
dc.subject: NEURAL NETWORKS
dc.subject: BY-LAYER
dc.subject: CONVERGENCE
dc.title: A new error function at hidden layers for fast training of multilayer perceptrons
dc.type: Article
dc.identifier.wosid: 000081385700024
dc.identifier.scopusid: 2-s2.0-0032678016
dc.type.rims: ART
dc.citation.volume: 10
dc.citation.issue: 4
dc.citation.beginningpage: 960
dc.citation.endingpage: 964
dc.citation.publicationname: IEEE TRANSACTIONS ON NEURAL NETWORKS
dc.contributor.localauthor: Lee, Soo-Young
dc.contributor.nonIdAuthor: Oh, SH
dc.type.journalArticle: Letter
dc.subject.keywordAuthor: error function
dc.subject.keywordAuthor: fast learning
dc.subject.keywordAuthor: hidden nodes
dc.subject.keywordAuthor: layer-by-layer algorithm
dc.subject.keywordAuthor: optimum learning rates
dc.subject.keywordPlus: BACKPROPAGATION ALGORITHM
dc.subject.keywordPlus: LEARNING ALGORITHM
dc.subject.keywordPlus: NEURAL NETWORKS
dc.subject.keywordPlus: BY-LAYER
dc.subject.keywordPlus: CONVERGENCE
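The abstract above positions the proposed hidden error function against conventional error backpropagation. The paper's specific hidden error function and layer-by-layer update are not given in this record, so they are not reproduced here; the following is only a minimal sketch of the conventional backpropagation baseline for a two-layer MLP (NumPy, sigmoid activations, squared error, a fixed learning rate rather than the paper's optimum rates), trained on a hypothetical XOR toy task standing in for the recognition tasks mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical XOR toy data (stand-in for the digit/word tasks in the paper).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
lr = 1.0                       # fixed rate; NOT the paper's optimum learning rate

def mse():
    """Mean squared error of the current network on the toy data."""
    H = sigmoid(X @ W1)
    Y = sigmoid(H @ W2)
    return float(np.mean((Y - T) ** 2))

loss_before = mse()
for _ in range(2000):
    H = sigmoid(X @ W1)             # hidden activations
    Y = sigmoid(H @ W2)             # output activations
    dY = (Y - T) * Y * (1 - Y)      # output delta (squared error + sigmoid)
    dH = (dY @ W2.T) * H * (1 - H)  # hidden delta, backpropagated
    W2 -= lr * (H.T @ dY)           # gradient-descent weight updates
    W1 -= lr * (X.T @ dH)
loss_after = mse()

print(loss_before, loss_after)
```

The point of contrast is the hidden delta `dH`: plain backpropagation derives it by propagating the output error, whereas (per the abstract) the LBL approach instead assigns explicit hidden targets and minimizes a separate error function at the hidden layer.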
Appears in Collection
EE-Journal Papers (저널논문)
Files in This Item
There are no files associated with this item.
