A hybrid optimization method of evolutionary and gradient search

Cited 16 times in Web of Science; cited 0 times in Scopus
Abstract
This article proposes a hybrid optimization algorithm that combines evolutionary algorithms (EA) with gradient search for problems with continuous parameters. Inheriting the advantages of both approaches, the new method is fast and capable of global search. Its main structure is similar to that of an EA, except that a special individual called the gradient individual is introduced and the EA individuals are placed symmetrically. The gradient individual is propagated through the generations by the quasi-Newton method. The gradient information required by the quasi-Newton method is computed from the costs of the EA individuals produced by evolution strategies (ES); the symmetric placement of the individuals about the best individual allows the gradient vector to be calculated by the central-difference method. For estimating the inverse Hessian matrix, the symmetric rank-one (SR1) update performs better than BFGS and DFP. Numerical tests on various benchmark problems and a practical control-design example demonstrate that the new hybrid algorithm converges faster than an EA without sacrificing global search capability.
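The two numerical ingredients described above, the central-difference gradient built from symmetrically placed individuals and the SR1 inverse-Hessian update, can be sketched as follows. This is a minimal illustration of those formulas only, not the authors' implementation: the function names, the step size h, the steepest-descent fallback, and the backtracking safeguard are assumptions, and the surrounding EA loop that supplies the individuals is omitted.

import numpy as np

def central_difference_gradient(f, x, h=1e-4):
    # Cost evaluations at 2n points placed symmetrically about x, one
    # pair per coordinate; in the paper's scheme these points double as
    # EA individuals, so their costs are reused rather than recomputed.
    n = x.size
    g = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def sr1_update(H, s, y, tol=1e-8):
    # Symmetric rank-one update of the inverse Hessian estimate:
    #   H+ = H + (s - H y)(s - H y)^T / ((s - H y)^T y)
    # The update is skipped when the denominator is tiny, the usual
    # SR1 safeguard against numerical blow-up.
    v = s - H @ y
    denom = v @ y
    if abs(denom) <= tol * max(1.0, np.linalg.norm(v) * np.linalg.norm(y)):
        return H
    return H + np.outer(v, v) / denom

def quasi_newton_step(f, x, g, H, shrink=0.5, max_backtracks=30):
    # One move of the "gradient individual". SR1 does not guarantee a
    # positive definite H, so fall back to steepest descent when the
    # quasi-Newton direction is not a descent direction (an assumed
    # safeguard, not taken from the paper).
    d = -H @ g
    if d @ g >= 0:
        d = -g
    fx, t = f(x), 1.0
    for _ in range(max_backtracks):
        if f(x + t * d) < fx:
            return x + t * d
        t *= shrink
    return x

# Demo: a single quasi-Newton trajectory on the Rosenbrock function,
# without the surrounding EA generations described in the abstract.
f = lambda x: (1 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2
x = np.array([-1.2, 1.0])
H = np.eye(2)
g = central_difference_gradient(f, x)
for _ in range(200):
    x_new = quasi_newton_step(f, x, g, H)
    g_new = central_difference_gradient(f, x_new)
    H = sr1_update(H, x_new - x, g_new - g)
    x, g = x_new, g_new
print(x, f(x))  # should approach the minimum at (1, 1)

In the full algorithm the quasi-Newton trajectory runs alongside the ES population, so the gradient comes essentially for free from cost evaluations the EA performs anyway; the sketch above isolates the quasi-Newton side only.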
Publisher
TAYLOR & FRANCIS LTD
Issue Date
2007-01
Language
English
Article Type
Article
Keywords
RANK-ONE UPDATE; LOCAL SEARCH; MEMETIC ALGORITHMS; GENETIC ALGORITHM; DESIGN; MODEL; COST
Citation
ENGINEERING OPTIMIZATION, v.39, no.1, pp. 87-104
ISSN
0305-215X
DOI
10.1080/03052150600957314
URI
http://hdl.handle.net/10203/91676
Appears in Collection
AE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.