
Pitfalls in applying optimal control to dynamical systems: an overview and editorial perspective. (English) Zbl 1498.49001

The dynamical systems under consideration are \[ x'(t) = f(x(t), u(t)) \, , \quad x(0) = x_0 \, , \quad 0 \le t \le T \, , \] where \(x(t) \in M\) (a manifold in \(\mathbb{R}^n\)) and \(u(t) \in U \subseteq \mathbb{R}^m\); the time interval \(0 \le t \le T\) may be fixed or variable. The target condition is \((T, x(T)) \in N \subseteq [0, \infty) \times M\), defined by \(\Psi(t, x) = 0\) (\(\Psi\) a vector-valued function). The objective is to minimize \[ J(u) = \int_0^T L(x(s), u(s))\, ds + \varphi(T, x(T)) \] over the space \(\mathcal{U}\) of controls \(u(\cdot)\) that take values in \(U\). Pontryagin's minimum principle gives a necessary condition for a minimizer \(\bar u(t)\) in terms of the Hamiltonian \(H(\lambda, x, u) = L(x, u) + \langle \lambda, f(x, u)\rangle\): there exists a covector \(\lambda(t)\) satisfying the adjoint equation \[ \lambda'(t) = - \frac{\partial H}{\partial x}(\lambda(t), \bar x(t), \bar u(t)) \] (\(\bar x(t)\) being the trajectory corresponding to \(\bar u(t)\)) such that \((H, - \lambda + \partial \varphi / \partial x)\) is orthogonal to \(N\) at the terminal point, and the optimal control satisfies \[ H(\lambda(t), \bar x(t), \bar u(t)) = \min_{u \in U} H(\lambda(t), \bar x(t), u). \tag{1} \] This reduces the infinite-dimensional control problem to a finite-dimensional one, although not explicitly, since the function to be minimized depends on \(\bar u(t)\) and \(\bar x(t)\). However, in certain situations (1) can be manipulated into yielding an explicit solution; it can also serve as the basis for iterative methods.
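To illustrate the first point (a standard example, not taken from the paper under review): if the dynamics are linear, \(f(x, u) = Ax + Bu\), the running cost is quadratic, \(L(x, u) = \frac{1}{2}\bigl(\langle x, Qx\rangle + \langle u, Ru\rangle\bigr)\) with \(R\) positive definite, and \(U = \mathbb{R}^m\), then the minimum in (1) is attained at the unique stationary point of \(H\) in \(u\), \[ \frac{\partial H}{\partial u} = Ru + B^T \lambda = 0 \quad \Longrightarrow \quad \bar u(t) = - R^{-1} B^T \lambda(t) \, , \] and substituting this expression into the state and adjoint equations reduces the problem to a two-point boundary value problem (equivalently, to a Riccati equation).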
The subject of this expository paper is the case in which (1) does not provide enough information to identify the optimal control (for instance, the minimizer is not unique, or there are subintervals of \([0, T]\) on which (1) yields no information). This may lead to extremals that are not true minima. The authors also discuss the consequences of certain choices of cost functional (e.g., quadratic vs. affine in the controls \(u_j\)) and the relation of the results to the Hamilton-Jacobi-Bellman approach. There are also observations on numerical approximation.
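A typical instance of this phenomenon (again, only for illustration) occurs when the problem is affine in a scalar control: \(f(x, u) = g_0(x) + u\, g_1(x)\), \(L(x, u) = L_0(x) + u\, L_1(x)\), \(U = [a, b]\). Then \(H\) is affine in \(u\) and (1) reduces to a sign condition on the switching function \(\Phi(t) = L_1(\bar x(t)) + \langle \lambda(t), g_1(\bar x(t))\rangle\): \[ \bar u(t) = \begin{cases} b, & \Phi(t) < 0 \, , \\ a, & \Phi(t) > 0 \, , \end{cases} \] while on intervals where \(\Phi\) vanishes identically (singular arcs) condition (1) gives no information, and the candidate control must be obtained by repeatedly differentiating \(\Phi\) along the trajectory.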

MSC:

49-01 Introductory exposition (textbooks, tutorial papers, etc.) pertaining to calculus of variations and optimal control
49K15 Optimality conditions for problems involving ordinary differential equations
49L20 Dynamic programming in optimal control and differential games
93C10 Nonlinear systems in control theory
93C15 Control/observation systems governed by ordinary differential equations
49M25 Discrete approximations in optimal control
37N35 Dynamical systems in control
65K15 Numerical methods for variational inequalities and related problems
