A new error function at hidden layers for fast training of multilayer perceptrons

This letter proposes a new error function at hidden layers to speed up the training of multilayer perceptrons (MLPs). With this new hidden error function, the layer-by-layer (LBL) algorithm approximately converges to the error backpropagation algorithm with optimum learning rates. In particular, the optimum learning rate for a hidden weight vector appears approximately as the product of two optimum factors: one for minimizing the new hidden error function and the other for assigning hidden targets. The effectiveness of the proposed error function was demonstrated on handwritten-digit recognition and isolated-word recognition tasks. Very fast learning convergence was obtained for MLPs without the stalling problem experienced with conventional LBL algorithms.
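To make the two-factor idea in the abstract concrete, the following is a minimal NumPy sketch of an LBL-style update for a one-hidden-layer MLP. It is not the paper's exact formulation: the hidden-target assignment rule, the squared-error hidden error function, and all learning-rate values below are illustrative assumptions. It only shows how a hidden-target step and a hidden-error-minimization step combine into a product of two factors on the hidden weights.

    import numpy as np

    # Hedged sketch of a layer-by-layer (LBL) style update for a one-hidden-layer
    # MLP. The hidden targets, the squared-error hidden error function, and the
    # learning-rate values are illustrative assumptions, not the paper's exact method.

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Toy data: 4 samples, 3 inputs, 2 outputs (hypothetical).
    X = rng.standard_normal((4, 3))
    T = rng.random((4, 2))

    W1 = rng.standard_normal((3, 5)) * 0.1   # input -> hidden weights
    W2 = rng.standard_normal((5, 2)) * 0.1   # hidden -> output weights

    eta_target = 0.5   # assumed factor for assigning hidden targets
    eta_hidden = 0.5   # assumed factor for minimizing the hidden error function
    eta_output = 0.5   # assumed output-layer learning rate

    for epoch in range(200):
        # Forward pass.
        H = sigmoid(X @ W1)          # hidden activations
        Y = sigmoid(H @ W2)          # network outputs

        # Output-layer delta for a squared-error cost (standard backprop).
        delta_out = (Y - T) * Y * (1.0 - Y)

        # Step 1 (assumed): assign hidden targets by moving the hidden
        # activations down the gradient of the output error w.r.t. H.
        H_target = H - eta_target * (delta_out @ W2.T)

        # Step 2 (assumed): minimize a squared hidden error ||H - H_target||^2
        # with respect to W1. Because H - H_target is eta_target times the
        # backpropagated hidden delta, the effective hidden learning rate is the
        # product eta_hidden * eta_target, echoing the two-factor form above.
        delta_hid = (H - H_target) * H * (1.0 - H)
        W1 -= eta_hidden * (X.T @ delta_hid)

        # Output weights are updated on the output error directly.
        W2 -= eta_output * (H.T @ delta_out)

Under these assumptions the hidden-weight update equals ordinary backpropagation scaled by eta_hidden * eta_target, which is the sense in which a product of two optimum factors can stand in for a single optimum learning rate.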
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Issue Date
1999-07
Language
English
Article Type
Letter
Keywords

BACKPROPAGATION ALGORITHM; LEARNING ALGORITHM; NEURAL NETWORKS; LAYER-BY-LAYER; CONVERGENCE

Citation

IEEE TRANSACTIONS ON NEURAL NETWORKS, v.10, no.4, pp.960 - 964

ISSN
1045-9227
URI
http://hdl.handle.net/10203/75792
Appears in Collection
EE-Journal Papers (Journal Papers)