Reinforcement Learning Based Multi-Step Look-Ahead Bayesian Optimization

This paper considers situations where data-based optimization must be performed but data sampling is limited by high cost and time. Such situations demand highly efficient sampling and use of data, and Bayesian optimization (BO) is the most commonly used method because it lets users balance exploration and exploitation when deciding where to sample next in the design space. However, the standard acquisition functions used in BO, such as expected improvement, have been criticized as greedy and myopic in many situations. To address this near-sighted behavior of the standard acquisition functions, this paper proposes a novel reinforcement-learning-based method that enables multi-step lookahead Bayesian optimization. Several benchmark functions are used to compare the performance of the RL-based method against traditional BO methods using expected improvement and its rollout-based extensions. The proposed method outperformed popular Bayesian optimization methods in the case study.
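For context, the expected-improvement acquisition the abstract critiques is a one-step (greedy) criterion: it scores a candidate point only by the immediate expected gain over the best value observed so far, given the Gaussian-process posterior mean and standard deviation at that point. A minimal sketch for the minimization case is below; the function signature and the exploration margin `xi` are illustrative assumptions, not the paper's implementation.

```python
import math

def expected_improvement(mu, sigma, f_best, xi=0.0):
    """One-step (greedy) expected improvement for minimization.

    mu, sigma : GP posterior mean and standard deviation at a candidate point
    f_best    : best (lowest) objective value observed so far
    xi        : optional exploration margin (illustrative parameter)
    """
    if sigma <= 0.0:
        # No posterior uncertainty: no expected improvement.
        return 0.0
    z = (f_best - mu - xi) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal cdf
    return (f_best - mu - xi) * cdf + sigma * pdf
```

Because this score depends only on the current posterior, it ignores how a sample would reshape the model for later iterations; the multi-step lookahead and rollout methods compared in the paper are aimed at exactly that limitation.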
Publisher
IFAC
Issue Date
2022-06-15
Language
English
Citation

DYCOPS 2022, pp.100 - 105

ISSN
2405-8963
DOI
10.1016/j.ifacol.2022.07.428
URI
http://hdl.handle.net/10203/298470
Appears in Collection
CBE-Conference Papers (conference proceedings)
