Controlled exploration of state space in off-line ADP and its application to stochastic shortest path problems

This paper addresses the problem of finding a control policy that drives a generic discrete event stochastic system from an initial state to a set of goal states with a specified probability. The control policy is iteratively constructed via an approximate dynamic programming (ADP) technique over a small subset of the state space that is evolved via Monte Carlo simulations. The effect of certain user-chosen parameters on the performance of the algorithm is investigated. The method is evaluated on several stochastic shortest path (SSP) examples and on a manufacturing job shop problem. We solve SSP problems containing up to one million states to illustrate how the computational and memory benefits scale with problem size. In the case of the manufacturing job shop example, the proposed ADP approach outperforms a traditional rolling-horizon mathematical programming approach.
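The paper itself provides no code; the sketch below is only a rough illustration of the idea in the abstract: value estimates are maintained on the small subset of states visited by Monte Carlo rollouts, and Bellman backups are restricted to that subset. The toy random-walk SSP, the slip probability, the epsilon-greedy exploration rate, and the distance-to-goal default value are all illustrative assumptions standing in for the paper's user-chosen parameters, not the authors' actual algorithm or problem instances.

```python
import random

# Toy SSP: a 1-D random walk on states 0..N that must reach the
# absorbing goal state N at minimum expected cost (all assumed values).
N = 20          # goal state index
SLIP = 0.2      # probability an intended move is reversed
STEP_COST = 1.0
ACTIONS = (-1, +1)

def next_states(s, a):
    """Transition model: intended move with prob 1-SLIP, reversed with
    prob SLIP; positions are clipped to the interval [0, N]."""
    fwd = max(0, min(N, s + a))
    back = max(0, min(N, s - a))
    return [(1 - SLIP, fwd), (SLIP, back)]

def simulate(s, a):
    """Monte Carlo sample of one transition."""
    (p_fwd, fwd), (_, back) = next_states(s, a)
    return fwd if random.random() < p_fwd else back

def greedy_action(V, s):
    """One-step lookahead with the current value estimates; states outside
    the explored subset fall back to a coarse distance-to-goal heuristic."""
    def q(a):
        return STEP_COST + sum(p * V.get(sp, N - sp) for p, sp in next_states(s, a))
    return min(ACTIONS, key=q)

def adp(iterations=50, runs=20, horizon=200, eps=0.3, start=0):
    V = {N: 0.0}  # value estimates kept only on the explored subset
    for _ in range(iterations):
        # 1) Grow the state subset with eps-greedy Monte Carlo rollouts.
        visited = set()
        for _ in range(runs):
            s = start
            for _ in range(horizon):
                visited.add(s)
                if s == N:
                    break
                if random.random() < eps:
                    a = random.choice(ACTIONS)   # exploration
                else:
                    a = greedy_action(V, s)      # exploitation
                s = simulate(s, a)
        # 2) Bellman backups restricted to the visited subset.
        for s in visited:
            if s != N:
                V[s] = min(
                    STEP_COST + sum(p * V.get(sp, N - sp) for p, sp in next_states(s, a))
                    for a in ACTIONS
                )
    return V

if __name__ == "__main__":
    random.seed(0)
    V = adp()
    print(sorted(V.items()))  # value estimates over the explored subset only
```

In this sketch the dictionary V never covers the full state space, which is the source of the memory savings the abstract reports; the exploration rate eps and the number of rollouts control how the subset grows, mirroring the user-chosen parameters whose effect the paper investigates.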
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
Issue Date
2009-12
Language
English
Article Type
Article
Citation

COMPUTERS & CHEMICAL ENGINEERING, v.33, no.12, pp.2111 - 2122

ISSN
0098-1354
DOI
10.1016/j.compchemeng.2009.06.012
URI
http://hdl.handle.net/10203/101338
Appears in Collection
CBE-Journal Papers (Journal Papers)