{"oa_version":"Preprint","department":[{"_id":"VlKo"}],"year":"2021","author":[{"last_name":"Kolmogorov","full_name":"Kolmogorov, Vladimir","id":"3D50B0BA-F248-11E8-B48F-1D18A9856A87","first_name":"Vladimir"},{"last_name":"Pock","first_name":"Thomas","full_name":"Pock, Thomas"}],"quality_controlled":"1","abstract":[{"lang":"eng","text":"We study a class of convex-concave saddle-point problems of the form minxmaxy⟨Kx,y⟩+fP(x)−h∗(y) where K is a linear operator, fP is the sum of a convex function f with a Lipschitz-continuous gradient and the indicator function of a bounded convex polytope P, and h∗ is a convex (possibly nonsmooth) function. Such problem arises, for example, as a Lagrangian relaxation of various discrete optimization problems. Our main assumptions are the existence of an efficient linear minimization oracle (lmo) for fP and an efficient proximal map for h∗ which motivate the solution via a blend of proximal primal-dual algorithms and Frank-Wolfe algorithms. In case h∗ is the indicator function of a linear constraint and function f is quadratic, we show a O(1/n2) convergence rate on the dual objective, requiring O(nlogn) calls of lmo. If the problem comes from the constrained optimization problem minx∈Rd{fP(x)|Ax−b=0} then we additionally get bound O(1/n2) both on the primal gap and on the infeasibility gap. In the most general case, we show a O(1/n) convergence rate of the primal-dual gap again requiring O(nlogn) calls of lmo. To the best of our knowledge, this improves on the known convergence rates for the considered class of saddle-point problems. We show applications to labeling problems frequently appearing in machine learning and computer vision."}],"publication_status":"published","oa":1,"date_published":"2021-07-01T00:00:00Z","article_processing_charge":"No","month":"07","user_id":"8b945eb4-e2f2-11eb-945a-df72226e66a9","main_file_link":[{"open_access":"1","url":"https://arxiv.org/abs/2101.12617"}],"date_created":"2021-12-16T12:41:20Z","day":"01","ec_funded":1,"date_updated":"2021-12-17T09:06:46Z","status":"public","_id":"10552","publication":"38th International Conference on Machine Learning","citation":{"ama":"Kolmogorov V, Pock T. One-sided Frank-Wolfe algorithms for saddle problems. In: 38th International Conference on Machine Learning. ; 2021.","apa":"Kolmogorov, V., & Pock, T. (2021). One-sided Frank-Wolfe algorithms for saddle problems. In 38th International Conference on Machine Learning. Virtual.","ieee":"V. Kolmogorov and T. Pock, “One-sided Frank-Wolfe algorithms for saddle problems,” in 38th International Conference on Machine Learning, Virtual, 2021.","mla":"Kolmogorov, Vladimir, and Thomas Pock. “One-Sided Frank-Wolfe Algorithms for Saddle Problems.” 38th International Conference on Machine Learning, 2021.","short":"V. Kolmogorov, T. Pock, in:, 38th International Conference on Machine Learning, 2021.","ista":"Kolmogorov V, Pock T. 2021. One-sided Frank-Wolfe algorithms for saddle problems. 38th International Conference on Machine Learning. ICML: International Conference on Machine Learning.","chicago":"Kolmogorov, Vladimir, and Thomas Pock. 
“One-Sided Frank-Wolfe Algorithms for Saddle Problems.” In 38th International Conference on Machine Learning, 2021."},"project":[{"grant_number":"616160","_id":"25FBA906-B435-11E9-9278-68D0E5697425","name":"Discrete Optimization in Computer Vision: Theory and Practice","call_identifier":"FP7"}],"type":"conference","title":"One-sided Frank-Wolfe algorithms for saddle problems","acknowledgement":"Vladimir Kolmogorov was supported by the European Research Council under the European Unions Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement no 616160. Thomas Pock acknowledges support by an ERC grant HOMOVIS, no 640156.","external_id":{"arxiv":["2101.12617"]},"conference":{"end_date":"2021-07-24","location":"Virtual","name":"ICML: International Conference on Machine Learning","start_date":"2021-07-18"},"language":[{"iso":"eng"}]}