Self-organizing neural networks for function approximation

Abstract
In this dissertation, we propose self-organization algorithms for RBF and MLP networks. To construct an RBF network, we develop a modified orthogonal least squares (OLS) algorithm. The proposed construction algorithm applies Levenberg-Marquardt (LM) learning at each step of the OLS algorithm to tune the parameters of the activation functions and the other weights in the network. As a result, the constructed networks need far fewer neurons to reach a given training goal than those built by the original OLS algorithm.

We also develop a new pruning algorithm that prevents over-fitting and yields a smaller network. The proposed algorithm combines two measures, called the least squares (LS) measure and the least squares contribution ratio (LSCR). The LS measure identifies the least significant neuron by measuring how much the error increases when that neuron is removed, and the LSCR measure identifies neurons that cause over-fitting. Conventional pruning algorithms cannot find such over-fitting neurons and stop pruning as soon as the pruned network can no longer be trained to satisfy the training goal, even if the network still over-fits; the proposed pruning algorithm, by contrast, obtains a network without over-fitting by compelling the network to prune the neurons that have a high probability of causing over-fitting. A mathematical proof is provided for a lower bound on the probability that the generalization error increases by more than $\epsilon$, and on the probability that it decreases by more than $\epsilon$, when a neuron is pruned.

While the proposed algorithms show good results for RBF networks, they cannot be used as they are to construct or prune MLP networks, because they rely on the linearity between the outputs of the neurons and the output of the network, which holds for RBF networks but not for MLP networks. To extend the proposed algorithms to MLP networks, we develop a target back-propagation method that obtains a target for the output of each hidden layer in the network. Then, by approximating the erro...
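As a rough illustration of the LS measure described in the abstract, the Python sketch below estimates, for each hidden neuron of a trained RBF network, how much the training error would increase if that neuron were removed and the remaining output weights refit, which is possible exactly because the network output is linear in the hidden-layer outputs. The Gaussian basis and all names here (rbf_design_matrix, ls_measure) are assumptions for illustration, not the dissertation's exact formulation.

```python
import numpy as np

# Sketch of an LS-style pruning measure (assumed formulation, not the
# dissertation's exact one): the neuron whose removal increases the
# training error the least is the least significant one.

def rbf_design_matrix(X, centers, widths):
    """Gaussian RBF activations: Phi[n, j] = exp(-||x_n - c_j||^2 / (2 s_j^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths ** 2))

def ls_measure(Phi, w, y):
    """Error increase caused by deleting each hidden neuron in turn.

    Phi : (N, M) hidden-layer outputs, w : (M,) output weights, y : (N,) targets.
    Returns an array of length M; the smallest entry marks the prune candidate.
    """
    base_err = np.sum((Phi @ w - y) ** 2)
    incr = np.empty(len(w))
    for j in range(len(w)):
        # Refit the remaining output weights by linear least squares; the
        # output layer is linear in the hidden outputs, so this is exact.
        Phi_j = np.delete(Phi, j, axis=1)
        w_j, *_ = np.linalg.lstsq(Phi_j, y, rcond=None)
        incr[j] = np.sum((Phi_j @ w_j - y) ** 2) - base_err
    return incr

# Toy usage: 1-D regression with 5 Gaussian neurons.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(3 * X[:, 0])
centers = rng.uniform(-1, 1, size=(5, 1))
widths = np.full(5, 0.5)
Phi = rbf_design_matrix(X, centers, widths)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(ls_measure(Phi, w, y))  # smallest entry -> least significant neuron
```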
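The abstract is truncated before the target back-propagation method is fully described, so the following is only a hedged guess at what obtaining a hidden-layer target might look like for a one-hidden-layer MLP: solve the output layer's linear map backwards for a hidden target, clipped into the activation's range. The pseudo-inverse solve, the clipping, and the name hidden_target are assumptions, not the dissertation's method.

```python
import numpy as np

# Speculative sketch of a target back-propagation step: given the network's
# output target t and output layer y = W2 @ h + b2 with h = tanh(.), recover
# a target h* for the hidden-layer output so that layer-wise least-squares
# fitting (as in the RBF case) becomes applicable.

def hidden_target(W2, b2, t, eps=1e-3):
    """Least-squares hidden target h* with W2 @ h* + b2 ~= t,
    clipped into the open range of tanh so it is attainable."""
    h_star = np.linalg.pinv(W2) @ (t - b2)
    return np.clip(h_star, -1 + eps, 1 - eps)

# Toy usage: 3 hidden neurons, 2 output units.
rng = np.random.default_rng(1)
W2 = rng.normal(size=(2, 3))
b2 = rng.normal(size=2)
t = np.array([0.4, -0.2])            # desired network output
h_star = hidden_target(W2, b2, t)    # back-propagated hidden-layer target
print(h_star, W2 @ h_star + b2)      # second vector should be close to t
```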
Advisors
Park, Cheol-Hoon (박철훈)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2009
Identifier
309320/325007  / 020035245
Language
eng
Description

Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology (KAIST) : Department of Electrical and Electronics Engineering, 2009.2, [vii, 90 p.]

Keywords

Neural Networks; orthogonal least squares; pruning; LSCR

URI
http://hdl.handle.net/10203/35506
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=309320&flag=dissertation
Appears in Collection
EE-Theses_Ph.D. (Ph.D. theses)
Files in This Item
There are no files associated with this item.
