The application of actor-critic reinforcement learning for fab dispatching scheduling

This paper applies actor-critic reinforcement learning to lot dispatching scheduling in a reentrant line manufacturing model. To minimize Work-In-Process (WIP) and Cycle Time (CT), the lot dispatching policy is optimized directly with an actor-critic algorithm. The results show that the optimized dispatching policy yields smaller average WIP and CT than traditional dispatching policies such as Shortest Processing Time, Latest-Step-First-Served, and Least-Work-Next-Queue.
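
The approach summarized in the abstract can be illustrated with a standard one-step actor-critic update. The sketch below is not the authors' implementation: the toy two-step reentrant line sharing one machine, the WIP-based reward, and all hyperparameters are illustrative assumptions.

    import numpy as np

    # Minimal actor-critic sketch on an assumed toy reentrant line:
    # two process steps share one machine, the action chooses which
    # step's queue to dispatch from, and the reward penalizes WIP.
    rng = np.random.default_rng(0)
    CAP = 10                                  # truncate queue lengths for a finite state space
    GAMMA, ALPHA_V, ALPHA_P = 0.99, 0.05, 0.02
    n_states = (CAP + 1) ** 2                 # (queue1, queue2) pairs
    n_actions = 2                             # 0: serve step-1 queue, 1: serve step-2 queue

    V = np.zeros(n_states)                    # critic: state values
    theta = np.zeros((n_states, n_actions))   # actor: softmax preferences

    def state_id(q1, q2):
        return min(q1, CAP) * (CAP + 1) + min(q2, CAP)

    def policy(s):
        prefs = theta[s] - theta[s].max()
        p = np.exp(prefs)
        return p / p.sum()

    def step(q1, q2, a):
        """One decision epoch: dispatch a lot, then a possible arrival; reward = -WIP."""
        if a == 0 and q1 > 0:
            q1, q2 = q1 - 1, q2 + 1           # step-1 lot moves on to step 2
        elif a == 1 and q2 > 0:
            q2 -= 1                           # step-2 lot leaves the line
        if rng.random() < 0.45:               # assumed arrival probability
            q1 += 1
        return q1, q2, -(q1 + q2)

    q1 = q2 = 0
    s = state_id(q1, q2)
    for t in range(200_000):
        pi = policy(s)
        a = rng.choice(n_actions, p=pi)
        q1, q2, r = step(q1, q2, a)
        s_next = state_id(q1, q2)
        delta = r + GAMMA * V[s_next] - V[s]  # TD error drives both updates
        V[s] += ALPHA_V * delta               # critic update
        grad_log = -pi
        grad_log[a] += 1.0                    # gradient of log softmax policy at chosen action
        theta[s] += ALPHA_P * delta * grad_log
        s = s_next

    print("learned dispatch probabilities at (q1=3, q2=3):", policy(state_id(3, 3)))

In a fab-scale version of this idea, the tabular state would be replaced by features of the WIP profile and the simple queue dynamics by a discrete-event simulation of the reentrant line.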
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2017-12
Language
English
Citation

2017 Winter Simulation Conference, WSC 2017, pp.4570 - 4571

ISSN
0891-7736
DOI
10.1109/WSC.2017.8248209
URI
http://hdl.handle.net/10203/310368
Appears in Collection
IE-Conference Papers (Conference Papers)