{"oa":1,"date_created":"2018-12-11T12:01:57Z","page":"71 - 106","status":"public","main_file_link":[{"open_access":"1","url":"https://hal.inria.fr/hal-00773608"}],"day":"01","publication_status":"published","publication":"Journal of Machine Learning Research","month":"01","date_updated":"2021-01-12T07:41:44Z","title":"An analysis of convex relaxations for MAP estimation of discrete MRFs","publisher":"Microtome Publishing","extern":1,"date_published":"2009-01-01T00:00:00Z","intvolume":" 10","year":"2009","quality_controlled":0,"volume":10,"type":"journal_article","_id":"3197","publist_id":"3484","citation":{"apa":"Kumar, M. P., Kolmogorov, V., & Torr, P. (2009). An analysis of convex relaxations for MAP estimation of discrete MRFs. Journal of Machine Learning Research. Microtome Publishing.","chicago":"Kumar, M Pawan, Vladimir Kolmogorov, and Philip Torr. “An Analysis of Convex Relaxations for MAP Estimation of Discrete MRFs.” Journal of Machine Learning Research. Microtome Publishing, 2009.","ieee":"M. P. Kumar, V. Kolmogorov, and P. Torr, “An analysis of convex relaxations for MAP estimation of discrete MRFs,” Journal of Machine Learning Research, vol. 10. Microtome Publishing, pp. 71–106, 2009.","ista":"Kumar MP, Kolmogorov V, Torr P. 2009. An analysis of convex relaxations for MAP estimation of discrete MRFs. Journal of Machine Learning Research. 10, 71–106.","ama":"Kumar MP, Kolmogorov V, Torr P. An analysis of convex relaxations for MAP estimation of discrete MRFs. Journal of Machine Learning Research. 2009;10:71-106.","short":"M.P. Kumar, V. Kolmogorov, P. Torr, Journal of Machine Learning Research 10 (2009) 71–106.","mla":"Kumar, M. Pawan, et al. “An Analysis of Convex Relaxations for MAP Estimation of Discrete MRFs.” Journal of Machine Learning Research, vol. 10, Microtome Publishing, 2009, pp. 71–106."},"abstract":[{"text":"The problem of obtaining the maximum a posteriori estimate of a general discrete Markov random field (i.e., a Markov random field defined using a discrete set of labels) is known to be NP-hard. However, due to its central importance in many applications, several approximation algorithms have been proposed in the literature. In this paper, we present an analysis of three such algorithms based on convex relaxations: (i) LP-S: the linear programming (LP) relaxation proposed by Schlesinger (1976) for a special case and independently in Chekuri et al. (2001), Koster et al. (1998), and Wainwright et al. (2005) for the general case; (ii) QP-RL: the quadratic programming (QP) relaxation of Ravikumar and Lafferty (2006); and (iii) SOCP-MS: the second order cone programming (SOCP) relaxation first proposed by Muramatsu and Suzuki (2003) for two label problems and later extended by Kumar et al. (2006) for a general label set.\n\nWe show that the SOCP-MS and the QP-RL relaxations are equivalent. Furthermore, we prove that despite the flexibility in the form of the constraints/objective function offered by QP and SOCP, the LP-S relaxation strictly dominates (i.e., provides a better approximation than) QP-RL and SOCP-MS. We generalize these results by defining a large class of SOCP (and equivalent QP) relaxations which is dominated by the LP-S relaxation. Based on these results we propose some novel SOCP relaxations which define constraints using random variables that form cycles or cliques in the graphical model representation of the random field. Using some examples we show that the new SOCP relaxations strictly dominate the previous approaches.","lang":"eng"}],"author":[{"first_name":"M Pawan","full_name":"Kumar, M Pawan","last_name":"Kumar"},{"full_name":"Kolmogorov, Vladimir","last_name":"Kolmogorov","first_name":"Vladimir","id":"3D50B0BA-F248-11E8-B48F-1D18A9856A87"},{"full_name":"Torr, Philip H","last_name":"Torr","first_name":"Philip"}]}