Learning Stochastic Optimal Policies via Gradient Descent

We systematically develop a learning-based treatment of stochastic optimal control (SOC), relying on direct optimization of parametric control policies. We propose a derivation of adjoint sensitivity results for stochastic differential equations through direct application of variational calculus. Then, given an objective function for a predetermined task specifying the desiderata for the controller, we optimize the policy parameters via iterative gradient descent methods. In doing so, we extend the range of applicability of classical SOC techniques, which often require strict assumptions on the functional form of the system and the controller. We verify the performance of the proposed approach on a continuous-time, finite-horizon portfolio optimization problem with proportional transaction costs.
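
The general recipe described in the abstract can be illustrated with a minimal sketch: parameterize a feedback policy, simulate the controlled stochastic differential equation with an Euler-Maruyama scheme, and apply gradient descent to a Monte Carlo estimate of the cost by differentiating through the simulated paths. This is a hypothetical, generic pathwise-gradient sketch in PyTorch, not the paper's adjoint-based implementation; the drift, diffusion, cost terms, and hyperparameters are illustrative placeholders rather than the portfolio problem studied in the article.

```python
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Time-dependent feedback policy u_theta(t, x) as a small neural network."""
    def __init__(self, state_dim=1, control_dim=1, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, control_dim),
        )

    def forward(self, t, x):
        # Concatenate time and state, then map to a control action.
        return self.net(torch.cat([t.expand(x.shape[0], 1), x], dim=-1))


def rollout_cost(policy, batch=256, steps=100, T=1.0):
    """Monte Carlo cost estimate from Euler-Maruyama rollouts of a toy SDE."""
    dt = T / steps
    x = torch.zeros(batch, 1)          # initial state (placeholder)
    running = torch.zeros(batch, 1)    # accumulated running cost
    for k in range(steps):
        t = torch.full((1,), k * dt)
        u = policy(t, x)
        drift = -x + u                 # placeholder drift f(x, u)
        diffusion = 0.5                # placeholder constant diffusion g
        dW = torch.randn_like(x) * dt ** 0.5
        x = x + drift * dt + diffusion * dW
        running = running + (x ** 2 + 0.1 * u ** 2) * dt  # quadratic stage cost
    terminal = x ** 2                  # placeholder terminal cost
    return (running + terminal).mean()


policy = Policy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for it in range(500):
    opt.zero_grad()
    loss = rollout_cost(policy)
    loss.backward()                    # pathwise gradient through the rollout
    opt.step()
```

In this sketch the gradient of the expected cost with respect to the policy parameters is obtained by automatic differentiation through the discretized sample paths; the paper instead derives continuous-time adjoint sensitivity results via variational calculus, which serve the same role of supplying gradients for iterative descent.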
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Issue Date
2022
Language
English
Article Type
Article
Citation

IEEE CONTROL SYSTEMS LETTERS, v.6, pp.1094 - 1099

ISSN
2475-1456
DOI
10.1109/LCSYS.2021.3086672
URI
http://hdl.handle.net/10203/286888
Appears in Collection
IE-Journal Papers (Journal Papers)