
Nonlinear Monte Carlo methods with polynomial runtime for Bellman equations of discrete time high-dimensional stochastic optimal control problems.

Preprint, arXiv:2303.03390 [math.OC] (2023).
Summary: Discrete time stochastic optimal control problems and Markov decision processes (MDPs) serve as fundamental models for problems that involve sequential decision making under uncertainty and as such constitute the theoretical foundation of reinforcement learning. In this article we study the numerical approximation of MDPs with infinite time horizon, finite control set, and general state spaces. Our set-up in particular covers infinite-horizon optimal stopping problems of discrete time Markov processes. A key tool for solving MDPs are the Bellman equations, which characterize the value functions of the MDPs and determine the optimal control strategies. By combining ideas from the full-history recursive multilevel Picard approximation method, which was recently introduced to solve certain nonlinear partial differential equations, with ideas from \(Q\)-learning, we introduce a class of suitable nonlinear Monte Carlo methods and prove that the proposed methods overcome the curse of dimensionality in the numerical approximation of the solutions of Bellman equations and the associated discrete time stochastic optimal control problems.
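The Bellman fixed-point equation at the heart of this approach, for an MDP with discount factor \(\gamma \in (0,1)\), one-step reward \(r\), and next state \(X^{x,a}\) given the current state-action pair \((x,a)\), reads
\[
Q(x,a) = r(x,a) + \gamma\,\mathbb{E}\bigl[\max_{a'} Q(X^{x,a}, a')\bigr].
\]
The following is a minimal sketch of how a full-history recursive multilevel Picard recursion can be applied to this fixed-point equation. The toy dynamics, reward, and all parameter choices (`d`, `GAMMA`, `ACTIONS`, `reward`, `step`, `M`) are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy MDP (illustrative assumptions, not the paper's setting) ---
d = 10            # state dimension
GAMMA = 0.9       # discount factor, assumed in (0, 1)
ACTIONS = (0, 1)  # finite control set

def reward(x, a):
    """One-step reward r(x, a); a bounded toy choice."""
    return np.tanh(x.mean()) + 0.1 * a

def step(x, a):
    """Sample the next state X^{x,a}: toy linear dynamics plus Gaussian noise."""
    drift = 0.1 if a == 1 else -0.1
    return 0.9 * x + drift + 0.1 * rng.standard_normal(d)

# --- MLP-style nonlinear Monte Carlo recursion for the Q-function ---
def mlp_q(n, M, x, a):
    """Level-n multilevel Picard approximation of Q(x, a).

    Q_0 = 0 and, for n >= 1,
      Q_n(x, a) = r(x, a)
                  + gamma * sum_{l=0}^{n-1} M^{-(n-l)}
                    * sum_i [ V_l(X_i) - V_{l-1}(X_i) ],
    where V_l(y) = max_{a'} Q_l(y, a'), V_{-1} = 0, and the X_i are
    independent samples of the next state given (x, a).  The telescoping
    sum over levels reproduces the Picard step Q_n = r + gamma E[V_{n-1}]
    in expectation, with most samples spent on the coarse levels.
    """
    if n == 0:
        return 0.0
    total = reward(x, a)
    for l in range(n):
        m = M ** (n - l)  # more Monte Carlo samples on coarser levels
        acc = 0.0
        for _ in range(m):
            y = step(x, a)  # fresh next-state sample for each summand
            v_l = max(mlp_q(l, M, y, b) for b in ACTIONS)
            v_prev = max(mlp_q(l - 1, M, y, b) for b in ACTIONS) if l > 0 else 0.0
            acc += v_l - v_prev  # the l = 0 term vanishes since Q_0 = 0
        total += GAMMA * acc / m
    return total

# Usage: approximate Q at the origin for both controls.
x0 = np.zeros(d)
print({a: mlp_q(3, 2, x0, a) for a in ACTIONS})
```

The key design point of the multilevel scheme is that the expensive high-level differences \(V_l - V_{l-1}\) are small and therefore need few samples, while the cheap low-level terms get many; this sample allocation is what yields the polynomial runtime claimed in the title.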

MSC:

90C40 Markov and semi-Markov decision processes
90C39 Dynamic programming
60J05 Discrete-time Markov processes on general state spaces
93E20 Optimal stochastic control
65C05 Monte Carlo methods