Model Predictive Control and Dynamic Programming

Cited 8 times in Web of Science; cited 0 times in Scopus
Model Predictive Control (MPC) and Dynamic Programming (DP) are two methods for obtaining an optimal feedback control law. The former uses on-line optimization to solve an open-loop optimal control problem posed over a finite time window at each sample time. A feedback control law is defined implicitly by repeating the optimization after a feedback update of the state at each sample time. In contrast, the latter attempts to derive an explicit feedback law off-line by deriving and solving the so-called Bellman optimality equation. Both have been used successfully to solve optimal control problems: the former for constrained control problems and the latter for the unconstrained linear quadratic optimal control problem. In this paper, we examine their differences and similarities as well as their relative merits and demerits. We also propose ways to integrate the two methods to alleviate each other's shortcomings.
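The contrast in the abstract can be made concrete for the unconstrained linear quadratic case, where the two approaches coincide. The sketch below (an illustrative assumption, not from the paper; the double-integrator matrices, horizon, and weights are chosen arbitrarily) derives the DP feedback law via a backward Riccati recursion, then uses the same finite-horizon solution inside a receding-horizon MPC loop that applies only the first input at each sample time:

```python
import numpy as np

# Hypothetical discrete-time double integrator (dt = 0.1); all values illustrative.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.eye(2)          # state weight
R = np.array([[0.1]])  # input weight
N = 20                 # prediction horizon

def dp_gains(A, B, Q, R, N):
    """DP: solve the Bellman equation backward (Riccati recursion) off-line,
    yielding an explicit time-varying feedback law u_k = -K_k x_k."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # gains[0] applies at the start of the horizon

def mpc_step(A, B, Q, R, N, x):
    """MPC: at the current state, solve a finite-horizon open-loop problem
    and apply only the first input. With no constraints this reduces to the
    same Riccati recursion, making the DP/MPC equivalence explicit."""
    K0 = dp_gains(A, B, Q, R, N)[0]
    return -K0 @ x

x = np.array([1.0, 0.0])
for _ in range(50):
    u = mpc_step(A, B, Q, R, N, x)  # receding-horizon feedback update
    x = A @ x + B @ u
print(np.linalg.norm(x))  # state is driven toward the origin
```

In the constrained case MPC replaces the Riccati step with a constrained quadratic program at each sample, which is where the two methods diverge and where on-line computation becomes the cost of handling constraints.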
Publisher
ICROS
Issue Date
2011-10-28
Language
English
Citation

ICCAS 2011, pp. 1807-1809

URI
http://hdl.handle.net/10203/171583
Appears in Collection
CBE-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
