Online one-sided smooth function maximization. (English) Zbl 07724743

Zhang, Yong (ed.) et al., Computing and combinatorics. 28th international conference, COCOON 2022, Shenzhen, China, October 22–24, 2022. Proceedings. Cham: Springer. Lect. Notes Comput. Sci. 13595, 177-185 (2023).
Summary: In this paper, we consider the problem of online one-sided smooth (OSS) function maximization. The concept of an OSS function was first proposed by Ghadiri et al. [1] to capture properties of the multilinear extension of set functions. For the offline problem, they provided a tight \((1-e^{-(1-\alpha )(\frac{\alpha }{1+\alpha })^{2\sigma }})\)-approximation (\(\alpha \in (0,1]\), \(\sigma \ge 0\)). We propose an online Jump Start Meta Frank-Wolfe algorithm that has access to the full gradient of the objective functions. We show that it achieves a \((1-e^{-(1-\alpha )(\frac{\alpha }{1+\alpha })^{2\sigma }})\)-approximation with \(O(\sqrt{T})\) regret (where \(T\) is the horizon of the online optimization problem) over any convex set. Note that this approximation ratio matches that of the offline version of the OSS maximization problem.
For the entire collection see [Zbl 1516.68030].
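The paper's Jump Start Meta Frank-Wolfe algorithm is not reproduced in this summary, but the classical Frank-Wolfe ascent step it builds on — repeatedly querying a linear maximization oracle over the convex feasible set and taking a convex combination with the returned vertex — can be sketched as follows. This is an illustrative sketch only, not the authors' method: the simplex feasible set, the toy concave objective, and all function names are assumptions chosen for concreteness.

```python
import numpy as np

def lmo_simplex(grad):
    """Linear maximization oracle over the probability simplex:
    argmax_{v in simplex} <grad, v> is the vertex (standard basis
    vector) at the largest gradient coordinate."""
    v = np.zeros_like(grad)
    v[np.argmax(grad)] = 1.0
    return v

def frank_wolfe_ascent(grad_f, dim, K=50):
    """Run K projection-free Frank-Wolfe ascent steps over the simplex.
    (Illustrative sketch; the paper's algorithm is an online meta
    variant with per-round gradient feedback, not this offline loop.)"""
    x = np.full(dim, 1.0 / dim)        # feasible starting point
    for k in range(K):
        v = lmo_simplex(grad_f(x))     # oracle call: best vertex
        gamma = 2.0 / (k + 2)          # standard diminishing step size
        x = (1 - gamma) * x + gamma * v  # stays feasible by convexity
    return x

# Toy concave objective f(x) = <c, x> - 0.5 * ||x||^2, gradient c - x.
c = np.array([0.3, 0.9, 0.5])
x_star = frank_wolfe_ascent(lambda x: c - x, dim=3)
```

Because each iterate is a convex combination of feasible points, the method never needs a projection step, which is what makes Frank-Wolfe-style updates attractive over general convex sets.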

MSC:

68Rxx Discrete mathematics in relation to computer science
Full Text: DOI

References:

[1] Ghadiri, M., Santiago, R., Shepherd, F.B.: Beyond submodular maximization via one-sided smoothness. In: Proceedings of SODA, pp. 1006-1025 (2021) · Zbl 07788401
[2] Ghadiri, M., Santiago, R., Shepherd, F.B.: Beyond submodular maximization. arXiv preprint arXiv:1904.09216, pp. 1-60 (2019)
[3] Chekuri, C., Quanrud, K.: Submodular function maximization in parallel via the multilinear relaxation. In: Proceedings of SODA, pp. 303-322 (2019) · Zbl 1431.68148
[4] Radlinski, F., Dumais, S.: Improving personalized web search using result diversification. In: Proceedings of ACM, SIGIR, pp. 691-692 (2006)
[5] Ghadiri, M., Schmidt, M.: Distributed maximization of submodular plus diversity functions for multilabel feature selection on huge datasets. In: Proceedings of AISTATS, pp. 2077-2086 (2019)
[6] Zadeh, S., Ghadiri, M., Mirrokni, V., Zadimoghaddam, M.: Scalable feature selection via distributed diversity maximization. In: Proceedings of AAAI, pp. 2876-2883 (2017)
[7] Abbassi, Z., Mirrokni, V., Thakur, M.: Diversity maximization under matroid constraints. In: Proceedings of ACM SIGKDD, pp. 32-40 (2013)
[8] Xin, D., Cheng, H., Yan, X., Han, J.: Extracting redundancy-aware top-\(k\) patterns. In: Proceedings of ACM SIGKDD, pp. 444-453 (2006)
[9] Carbonell, J., Goldstein, J.: The use of MMR, diversity-based reranking for reordering documents and producing summaries. In: Proceedings of ACM SIGIR, pp. 335-336 (1998)
[10] Bian, A., Levy, K., Krause, A., Buhmann, J.: Continuous DR-submodular maximization: structure and algorithms. In: Proceedings of NeurIPS, pp. 486-496 (2017)
[11] Sadeghi, O., Fazel, M.: Online continuous DR-submodular maximization with long-term budget constraints. In: Proceedings of AISTATS, pp. 4410-4419 (2020)
[12] Chen, L., Hassani, H., Karbasi, A.: Online continuous submodular maximization. In: Proceedings of AISTATS, pp. 1896-1905 (2018)
[13] Raut, P., Sadeghi, O., Fazel, M.: Online DR-submodular maximization: minimizing regret and constraint violation. In: Proceedings of AAAI, pp. 9395-9402 (2021)
[14] Shalev-Shwartz, S.: Online learning and online convex optimization. Found. Trends® Mach. Learn. 4, 107-194 (2012) · Zbl 1253.68190
[15] Farina, G., Kroer, C., Sandholm, T.: Online convex optimization for sequential decision processes and extensive-form games. In: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 1917-1925 (2019)
[16] Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Springer, Heidelberg (2003) · Zbl 1086.90045
[17] Hazan, E.: Introduction to online convex optimization. Found. Trends® Optim. 2, 157-325 (2016)