{"date_created":"2018-12-11T11:59:48Z","scopus_import":1,"conference":{"name":"NIPS: Neural Information Processing Systems","location":"Lake Tahoe, NV, United States","start_date":"2012-12-03","end_date":"2012-12-06"},"publication_status":"published","day":"01","department":[{"_id":"ChLa"}],"status":"public","page":"82 - 90","publisher":"Neural Information Processing Systems","corr_author":"1","title":"Dynamic pruning of factor graphs for maximum marginal prediction","date_updated":"2024-10-09T20:54:58Z","month":"12","user_id":"3E5EF7F0-F248-11E8-B48F-1D18A9856A87","oa_version":"None","date_published":"2012-12-01T00:00:00Z","intvolume":" 1","quality_controlled":"1","volume":1,"year":"2012","_id":"2825","language":[{"iso":"eng"}],"type":"conference","author":[{"orcid":"0000-0001-8622-7887","id":"40C20FD2-F248-11E8-B48F-1D18A9856A87","first_name":"Christoph","last_name":"Lampert","full_name":"Lampert, Christoph"}],"abstract":[{"text":"We study the problem of maximum marginal prediction (MMP) in probabilistic graphical models, a task that occurs, for example, as the Bayes optimal decision rule under a Hamming loss. MMP is typically performed as a two-stage procedure: one estimates each variable's marginal probability and then forms a prediction from the states of maximal probability. In this work we propose a simple yet effective technique for accelerating MMP when inference is sampling-based: instead of the above two-stage procedure we directly estimate the posterior probability of each decision variable. This allows us to identify the point in time when we are sufficiently certain about any individual decision. Whenever this is the case, we dynamically prune the variables we are confident about from the underlying factor graph. Consequently, at any time only samples of variables whose decision is still uncertain need to be created. Experiments in two prototypical scenarios, multi-label classification and image inpainting, show that adaptive sampling can drastically accelerate MMP without sacrificing prediction accuracy.","lang":"eng"}],"citation":{"apa":"Lampert, C. (2012). Dynamic pruning of factor graphs for maximum marginal prediction (Vol. 1, pp. 82–90). Presented at the NIPS: Neural Information Processing Systems, Lake Tahoe, NV, United States: Neural Information Processing Systems.","chicago":"Lampert, Christoph. “Dynamic Pruning of Factor Graphs for Maximum Marginal Prediction,” 1:82–90. Neural Information Processing Systems, 2012.","ama":"Lampert C. Dynamic pruning of factor graphs for maximum marginal prediction. In: Vol 1. Neural Information Processing Systems; 2012:82-90.","ista":"Lampert C. 2012. Dynamic pruning of factor graphs for maximum marginal prediction. NIPS: Neural Information Processing Systems vol. 1, 82–90.","ieee":"C. Lampert, “Dynamic pruning of factor graphs for maximum marginal prediction,” presented at the NIPS: Neural Information Processing Systems, Lake Tahoe, NV, United States, 2012, vol. 1, pp. 82–90.","mla":"Lampert, Christoph. Dynamic Pruning of Factor Graphs for Maximum Marginal Prediction. Vol. 1, Neural Information Processing Systems, 2012, pp. 82–90.","short":"C. Lampert, in:, Neural Information Processing Systems, 2012, pp. 82–90."},"publist_id":"3975"}