Abstract.
Based on the notion of the ε-subgradient, we present a unified technique to establish convergence properties of several methods for nonsmooth convex minimization problems. Starting from the technical results, we obtain the global convergence of: (i) the variable metric proximal methods presented by Bonnans, Gilbert, Lemaréchal, and Sagastizábal, (ii) some algorithms proposed by Correa and Lemaréchal, and (iii) the proximal point algorithm given by Rockafellar. In particular, we prove that the Rockafellar–Todd phenomenon does not occur for any of the above-mentioned methods. Moreover, we explore the convergence rate of {‖x_k‖} and {f(x_k)} when {x_k} is unbounded and {f(x_k)} is bounded for the nonsmooth minimization methods (i), (ii), and (iii).
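The proximal point algorithm of Rockafellar mentioned in (iii) generates iterates by x_{k+1} = argmin_x { f(x) + (1/(2c_k)) ‖x − x_k‖² }. A minimal Python sketch of this iteration, applied to the nonsmooth convex function f(x) = |x| (whose proximal map has the closed-form soft-thresholding solution); the choice of f, the starting point, and the constant step size are illustrative assumptions, not taken from the paper:

```python
def prox_abs(x, c):
    """Proximal map of f(x) = |x| with parameter c > 0.

    Solves argmin_y |y| + (1 / (2*c)) * (y - x)**2, which is the
    soft-thresholding operator.
    """
    if x > c:
        return x - c
    if x < -c:
        return x + c
    return 0.0

def proximal_point(x0, c=1.0, iters=20):
    """Run the proximal point iteration x_{k+1} = prox_{c f}(x_k)."""
    x = x0
    for _ in range(iters):
        x = prox_abs(x, c)
    return x

# Starting from x0 = 5 with c = 1, the iterates decrease by 1 each step
# until they reach the minimizer x* = 0 of f(x) = |x|.
print(proximal_point(5.0))  # -> 0.0
```

Because f(x) = |x| is nondifferentiable at its minimizer, this is exactly the kind of problem where subgradient-based convergence arguments, rather than gradient-based ones, are needed.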
Accepted 15 October 1996
Birge, J., Qi, L. & Wei, Z. A General Approach to Convergence Properties of Some Methods for Nonsmooth Convex Optimization. Appl Math Optim 38, 141–158 (1998). https://doi.org/10.1007/s002459900086