{"article_type":"original","status":"public","date_published":"2024-02-20T00:00:00Z","quality_controlled":"1","_id":"15172","scopus_import":"1","publication_identifier":{"eissn":["1557-9654"],"issn":["0018-9448"]},"related_material":{"record":[{"relation":"earlier_version","status":"public","id":"14922"}]},"citation":{"ieee":"A. R. Esposito and M. Mondelli, “Concentration without independence via information measures,” IEEE Transactions on Information Theory. IEEE.","chicago":"Esposito, Amedeo Roberto, and Marco Mondelli. “Concentration without Independence via Information Measures.” IEEE Transactions on Information Theory. IEEE, n.d. https://doi.org/10.1109/TIT.2024.3367767.","ama":"Esposito AR, Mondelli M. Concentration without independence via information measures. IEEE Transactions on Information Theory. doi:10.1109/TIT.2024.3367767","apa":"Esposito, A. R., & Mondelli, M. (n.d.). Concentration without independence via information measures. IEEE Transactions on Information Theory. IEEE. https://doi.org/10.1109/TIT.2024.3367767","short":"A.R. Esposito, M. Mondelli, IEEE Transactions on Information Theory (n.d.).","mla":"Esposito, Amedeo Roberto, and Marco Mondelli. “Concentration without Independence via Information Measures.” IEEE Transactions on Information Theory, IEEE, doi:10.1109/TIT.2024.3367767.","ista":"Esposito AR, Mondelli M. Concentration without independence via information measures. IEEE Transactions on Information Theory."},"title":"Concentration without independence via information measures","article_processing_charge":"No","type":"journal_article","date_updated":"2024-03-25T07:15:51Z","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","project":[{"name":"Prix Lopez-Loretta 2019 - Marco Mondelli","_id":"059876FA-7A3F-11EA-A408-12923DDC885E"}],"oa_version":"None","department":[{"_id":"MaMo"}],"doi":"10.1109/TIT.2024.3367767","author":[{"id":"9583e921-e1ad-11ec-9862-cef099626dc9","full_name":"Esposito, Amedeo Roberto","last_name":"Esposito","first_name":"Amedeo Roberto"},{"orcid":"0000-0002-3242-7020","last_name":"Mondelli","first_name":"Marco","id":"27EB676C-8706-11E9-9510-7717E6697425","full_name":"Mondelli, Marco"}],"publication":"IEEE Transactions on Information Theory","language":[{"iso":"eng"}],"day":"20","publication_status":"inpress","date_created":"2024-03-24T23:01:00Z","month":"02","external_id":{"arxiv":["2303.07245"]},"year":"2024","abstract":[{"lang":"eng","text":"We propose a novel approach to concentration for non-independent random variables. The main idea is to “pretend” that the random variables are independent and pay a multiplicative price measuring how far they are from actually being independent. This price is encapsulated in the Hellinger integral between the joint and the product of the marginals, which is then upper bounded leveraging tensorisation properties. Our bounds represent a natural generalisation of concentration inequalities in the presence of dependence: we recover exactly the classical bounds (McDiarmid’s inequality) when the random variables are independent. Furthermore, in a “large deviations” regime, we obtain the same decay in the probability as for the independent case, even when the random variables display non-trivial dependencies. To show this, we consider a number of applications of interest. First, we provide a bound for Markov chains with finite state space. 
Then, we consider the Simple Symmetric Random Walk, which is a non-contracting Markov chain, and a non-Markovian setting in which the stochastic process depends on its entire past. To conclude, we propose an application to Markov Chain Monte Carlo methods, where our approach leads to an improved lower bound on the minimum burn-in period required to reach a certain accuracy. In all of these settings, we provide a regime of parameters in which our bound fares better than what the state of the art can provide."}],"publisher":"IEEE"}