On the complexity of steepest descent, Newton’s and regularized Newton’s methods for nonconvex unconstrained optimization problems. (English) Zbl 1211.90225
Summary: It is shown that, under standard assumptions, the steepest-descent and Newton's methods for unconstrained nonconvex optimization may both require a number of iterations and function evaluations arbitrarily close to \(O(\varepsilon^{-2})\) to drive the norm of the gradient below \(\varepsilon\). This shows that the upper bound of \(O(\varepsilon^{-2})\) evaluations known for the steepest-descent method is tight, and that Newton's method may be as slow as the steepest-descent method in the worst case. The improved evaluation complexity bound of \(O(\varepsilon^{-3/2})\) evaluations known for cubically regularized Newton's methods is also shown to be tight.
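The complexity bounds above count gradient (or function) evaluations until the stopping test \(\|\nabla f(x_k)\| \le \varepsilon\) is met. A minimal sketch of this counting for fixed-step steepest descent is given below; the one-dimensional nonconvex objective \(f(x) = x^2 + 3\sin^2 x\) and the step size are illustrative choices, not the worst-case examples constructed in the paper.

```python
import math

def steepest_descent(grad, x0, eps, step, max_evals=10**6):
    """Fixed-step steepest descent in one dimension.

    Returns (x, evals), where evals is the number of gradient
    evaluations used before |grad(x)| dropped below eps.
    """
    x = x0
    for k in range(1, max_evals + 1):
        g = grad(x)
        if abs(g) < eps:
            return x, k
        x = x - step * g
    return x, max_evals

# Illustrative nonconvex objective (our choice, not from the paper):
# f(x) = x^2 + 3*sin(x)^2, so f'(x) = 2x + 3*sin(2x).
grad = lambda x: 2 * x + 3 * math.sin(2 * x)

# f' is 8-Lipschitz here, so a step of 0.1 < 2/8 guarantees descent.
x, evals = steepest_descent(grad, x0=2.0, eps=1e-4, step=0.1)
```

On this well-behaved example the iteration converges quickly to the stationary point \(x = 0\); the paper's contribution is to construct objectives on which such first-order (and even Newton) iterations provably need close to \(\varepsilon^{-2}\) evaluations.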
MSC:
90C30 Nonlinear programming
65K05 Numerical mathematical programming methods
49M37 Numerical methods based on nonlinear programming
49M15 Newton-type methods
49M05 Numerical methods based on necessary conditions
58C15 Implicit function theorems; global Newton methods on manifolds
90C60 Abstract computational complexity for mathematical programming problems
68Q25 Analysis of algorithms and problem complexity