Abstract
This paper presents an accelerated proximal gradient method for multiobjective optimization, in which each objective function is the sum of a continuously differentiable, convex function and a closed, proper, convex function. Extending first-order methods to multiobjective problems without scalarization has been widely studied, but deriving accelerated methods with rigorous proofs of convergence rates remains an open problem. Our proposed method is a multiobjective generalization of the accelerated proximal gradient method, also known as the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA), for scalar optimization. The key to this extension is solving a subproblem with terms exclusive to the multiobjective case. This approach allows us to establish the global convergence rate \(O(1 / k^2)\) of the proposed method, using a merit function to measure the complexity. Furthermore, we present an efficient way to solve the subproblem via its dual representation, and we confirm the validity of the proposed method through numerical experiments.
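For readers unfamiliar with the scalar algorithm being generalized, the following is a minimal sketch of classical (scalar) FISTA of Beck and Teboulle applied to a small lasso instance, not the paper's multiobjective method; the function names (`fista`, `soft_threshold`) and the problem data are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (componentwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(grad_f, prox_g, L, x0, iters=200):
    """Scalar FISTA: minimize f(x) + g(x), where f is convex with
    L-Lipschitz gradient and g is convex with a computable prox."""
    x = x0.copy()
    y = x0.copy()          # extrapolated point
    t = 1.0                # momentum parameter
    for _ in range(iters):
        # Proximal gradient step from the extrapolated point.
        x_next = prox_g(y - grad_f(y) / L, 1.0 / L)
        # Nesterov momentum update, giving the O(1/k^2) rate.
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x

# Tiny lasso instance: f(x) = 0.5 * ||Ax - b||^2, g(x) = lam * ||x||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(20)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad f
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, step: soft_threshold(v, lam * step)
x_star = fista(grad_f, prox_g, L, np.zeros(5))
```

The multiobjective method of the paper replaces the single proximal gradient step above with the solution of a subproblem that accounts for all objectives simultaneously, while retaining the same momentum structure.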
Data availability
We do not analyse or generate any datasets, because our work is theoretical and mathematical in nature.
Code availability
The source code used for the numerical experiments is available at https://github.com/zalgo3/zfista.
Funding
This work was supported by the Grant-in-Aid for Scientific Research (C) (21K11769 and 19K11840) and Grant-in-Aid for JSPS Fellows (20J21961) from the Japan Society for the Promotion of Science.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Tanabe, H., Fukuda, E.H. & Yamashita, N. An accelerated proximal gradient method for multiobjective optimization. Comput Optim Appl 86, 421–455 (2023). https://doi.org/10.1007/s10589-023-00497-w
Keywords
- Multiobjective optimization
- Proximal gradient method
- Pareto optimality
- Global rate of convergence
- First-order method
- FISTA