{"publication_status":"submitted","type":"preprint","user_id":"8b945eb4-e2f2-11eb-945a-df72226e66a9","status":"public","department":[{"_id":"JaMa"},{"_id":"MaMo"}],"related_material":{"record":[{"id":"17336","relation":"dissertation_contains","status":"for_moderation"}]},"main_file_link":[{"open_access":"1","url":"https://doi.org/10.48550/arXiv.2305.14164"}],"date_updated":"2024-07-31T08:56:38Z","title":"Improved convergence of score-based diffusion models via prediction-correction","oa":1,"article_processing_charge":"No","language":[{"iso":"eng"}],"citation":{"ama":"Pedrotti F, Maas J, Mondelli M. Improved convergence of score-based diffusion models via prediction-correction. arXiv. doi:10.48550/arXiv.2305.14164","apa":"Pedrotti, F., Maas, J., & Mondelli, M. (n.d.). Improved convergence of score-based diffusion models via prediction-correction. arXiv. https://doi.org/10.48550/arXiv.2305.14164","ieee":"F. Pedrotti, J. Maas, and M. Mondelli, “Improved convergence of score-based diffusion models via prediction-correction,” arXiv, doi: 10.48550/arXiv.2305.14164.","short":"F. Pedrotti, J. Maas, M. Mondelli, ArXiv (n.d.).","mla":"Pedrotti, Francesco, et al. “Improved Convergence of Score-Based Diffusion Models via Prediction-Correction.” ArXiv, doi:10.48550/arXiv.2305.14164.","ista":"Pedrotti F, Maas J, Mondelli M. Improved convergence of score-based diffusion models via prediction-correction. arXiv, 10.48550/arXiv.2305.14164.","chicago":"Pedrotti, Francesco, Jan Maas, and Marco Mondelli. “Improved Convergence of Score-Based Diffusion Models via Prediction-Correction.” ArXiv, n.d. https://doi.org/10.48550/arXiv.2305.14164."},"month":"06","date_published":"2024-06-06T00:00:00Z","corr_author":"1","_id":"17350","external_id":{"arxiv":["2305.14164"]},"doi":"10.48550/arXiv.2305.14164","author":[{"full_name":"Pedrotti, Francesco","first_name":"Francesco","id":"d3ac8ac6-dc8d-11ea-abe3-e2a9628c4c3c","last_name":"Pedrotti"},{"orcid":"0000-0002-0845-1338","first_name":"Jan","full_name":"Maas, Jan","id":"4C5696CE-F248-11E8-B48F-1D18A9856A87","last_name":"Maas"},{"id":"27EB676C-8706-11E9-9510-7717E6697425","last_name":"Mondelli","orcid":"0000-0002-3242-7020","full_name":"Mondelli, Marco","first_name":"Marco"}],"day":"06","publication":"arXiv","oa_version":"Preprint","date_created":"2024-07-31T07:56:40Z","abstract":[{"text":"Score-based generative models (SGMs) are powerful tools to sample from complex data distributions. Their underlying idea is to (i) run a forward process for time $T_1$ by adding noise to the data, (ii) estimate its score function, and (iii) use this estimate to run a reverse process. As the reverse process is initialized with the stationary distribution of the forward one, the existing analysis paradigm requires $T_1\\to\\infty$. This is, however, problematic: from a theoretical viewpoint, for a given precision of the score approximation, the convergence guarantee fails as $T_1$ diverges; from a practical viewpoint, a large $T_1$ increases computational costs and leads to error propagation. This paper addresses the issue by considering a version of the popular predictor-corrector scheme: after running the forward process, we first estimate the final distribution via an inexact Langevin dynamics and then revert the process. Our key technical contribution is to provide convergence guarantees which require running the forward process only for a fixed finite time $T_1$. Our bounds exhibit a mild logarithmic dependence on the input dimension and the subgaussian norm of the target distribution, have minimal assumptions on the data, and require only control of the $L^2$ loss on the score approximation, which is the quantity minimized in practice.","lang":"eng"}],"year":"2024","project":[{"grant_number":"F6504","name":"Taming Complexity in Partial Differential Systems","_id":"fc31cba2-9c52-11eb-aca3-ff467d239cd2"},{"name":"Prix Lopez-Loretta 2019 - Marco Mondelli","_id":"059876FA-7A3F-11EA-A408-12923DDC885E"}]}