[{"abstract":[{"text":"The role of nuclear pore complexes (NPCs) in genome organization remains poorly characterized due to technical limitations in probing genome-wide protein-DNA interactions specific to the nuclear periphery. Here, we developed a new sensitive method, NPC-DamID, which combines in vitro reconstitution of nuclear import and DamID technology. The fixation-free method identifies chromatin interactions at the NPCs in intact nuclei from cells and tissues. We found that NPCs are preferentially associated with common and hierarchically arranged super-enhancers (SEs) across multiple cell types. We also uncovered phase-separated condensates at NPCs that compartmentalize and concentrate transcriptional coactivators and structural proteins at SE-regulated genes. Our results support NPCs as anchoring sites for SE regulatory hubs and cell-type-specific transcriptional control.","lang":"eng"}],"main_file_link":[{"open_access":"1","url":"https://doi.org/10.7554/eLife.87462.1"}],"article_type":"original","language":[{"iso":"eng"}],"publication":"eLife","day":"23","oa_version":"Submitted Version","date_created":"2024-01-22T12:21:56Z","status":"public","title":"High-precision mapping of nuclear pore-chromatin interactions reveals new principles of genome organization at the nuclear envelope","type":"journal_article","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","acknowledgement":"This work was supported by M.H.’s NIH R01 grants (NS096786, GM126829) and Salk Cancer Center Support Grant P30 CA014195. M.H. also received financial support from the W.M. Keck Foundation and the NOMIS Foundation. Further, M.H. received support from the AHA-Allen Initiative in Brain Health and Cognitive Impairment award made jointly through the American Heart Association and The Paul G. Allen Frontiers Group (19PABH134610000).\r\n\r\nS.T. and J.C. were supported by Salk’s Women & Science Awards. S.T. also received financial support from the Hewitt Foundation fellowship, and J.C. 
is a Paul F. Glenn Biology of Aging fellow. J.H. was supported by the National Natural Science Foundation of China (31871317 and 32070635).\r\n\r\nWe thank Roberta Schulte for assistance with in vitro transport assays, for comments that greatly improved the manuscript, and for helping refine the figures presented in this work. We thank Shefali Krishna for creating the diagram for the NPC-DamID method, for her input on super-resolution microscopy analysis, and her insightful comments on this manuscript. We thank all members of the Hetzer lab for helpful discussions of these research ideas and their thoughtful comments on this manuscript. We are also grateful to Salk’s core facilities for their assistance. Specifically, we thank the Next Generation Sequencing Core (NGS) for sequencing our DamID and RNA NGS libraries, the Advanced Biophotonics Core for assistance with super-resolution microscopy, and the Razavi Newman Integrative Genomics and Bioinformatics Core (IGC) for their input on analysis methods for DamID experiments.","author":[{"last_name":"Tyagi","full_name":"Tyagi, Swati","first_name":"Swati"},{"first_name":"Juliana S.","last_name":"Capitanio","full_name":"Capitanio, Juliana S."},{"full_name":"Xu, Jiawei","last_name":"Xu","first_name":"Jiawei"},{"last_name":"Chen","full_name":"Chen, Fei","first_name":"Fei"},{"first_name":"Rahul","last_name":"Sharma","full_name":"Sharma, Rahul"},{"last_name":"Huang","full_name":"Huang, Jialiang","first_name":"Jialiang"},{"last_name":"HETZER","full_name":"HETZER, Martin W","id":"86c0d31b-b4eb-11ec-ac5a-eae7b2e135ed","first_name":"Martin W","orcid":"0000-0002-2111-992X"}],"month":"06","department":[{"_id":"MaHe"}],"corr_author":"1","date_updated":"2024-07-31T11:56:25Z","publication_status":"epub_ahead","_id":"14868","doi":"10.7554/elife.87462","article_processing_charge":"Yes","publisher":"eLife Sciences Publications","citation":{"ama":"Tyagi S, Capitanio JS, Xu J, et al. 
High-precision mapping of nuclear pore-chromatin interactions reveals new principles of genome organization at the nuclear envelope. <i>eLife</i>. 2023. doi:<a href=\"https://doi.org/10.7554/elife.87462\">10.7554/elife.87462</a>","chicago":"Tyagi, Swati, Juliana S. Capitanio, Jiawei Xu, Fei Chen, Rahul Sharma, Jialiang Huang, and Martin Hetzer. “High-Precision Mapping of Nuclear Pore-Chromatin Interactions Reveals New Principles of Genome Organization at the Nuclear Envelope.” <i>ELife</i>. eLife Sciences Publications, 2023. <a href=\"https://doi.org/10.7554/elife.87462\">https://doi.org/10.7554/elife.87462</a>.","short":"S. Tyagi, J.S. Capitanio, J. Xu, F. Chen, R. Sharma, J. Huang, M. Hetzer, ELife (2023).","apa":"Tyagi, S., Capitanio, J. S., Xu, J., Chen, F., Sharma, R., Huang, J., &#38; Hetzer, M. (2023). High-precision mapping of nuclear pore-chromatin interactions reveals new principles of genome organization at the nuclear envelope. <i>ELife</i>. eLife Sciences Publications. <a href=\"https://doi.org/10.7554/elife.87462\">https://doi.org/10.7554/elife.87462</a>","ieee":"S. Tyagi <i>et al.</i>, “High-precision mapping of nuclear pore-chromatin interactions reveals new principles of genome organization at the nuclear envelope,” <i>eLife</i>. eLife Sciences Publications, 2023.","mla":"Tyagi, Swati, et al. “High-Precision Mapping of Nuclear Pore-Chromatin Interactions Reveals New Principles of Genome Organization at the Nuclear Envelope.” <i>ELife</i>, eLife Sciences Publications, 2023, doi:<a href=\"https://doi.org/10.7554/elife.87462\">10.7554/elife.87462</a>.","ista":"Tyagi S, Capitanio JS, Xu J, Chen F, Sharma R, Huang J, Hetzer M. 2023. High-precision mapping of nuclear pore-chromatin interactions reveals new principles of genome organization at the nuclear envelope. 
eLife."},"year":"2023","oa":1,"date_published":"2023-06-23T00:00:00Z"},{"abstract":[{"lang":"eng","text":"We entangled microwave and optical photons for the first time as verified by a measured two-mode vacuum squeezing of 0.7 dB. This electro-optic entanglement is the key resource needed to connect cryogenic quantum circuits."}],"conference":{"name":"Laser Science","end_date":"2023-10-12","location":"Tacoma, WA, United States","start_date":"2023-10-09"},"date_created":"2024-01-22T12:29:41Z","status":"public","day":"01","oa_version":"None","language":[{"iso":"eng"}],"publication":"Frontiers in Optics + Laser Science 2023","quality_controlled":"1","month":"10","department":[{"_id":"JoFi"}],"date_updated":"2024-10-09T21:07:59Z","corr_author":"1","type":"conference","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","author":[{"last_name":"Sahu","full_name":"Sahu, Rishabh","id":"47D26E34-F248-11E8-B48F-1D18A9856A87","first_name":"Rishabh","orcid":"0000-0001-6264-2162"},{"last_name":"Qiu","full_name":"Qiu, Liu","first_name":"Liu"},{"first_name":"William J","orcid":"0000-0001-9868-2166","last_name":"Hease","id":"29705398-F248-11E8-B48F-1D18A9856A87","full_name":"Hease, William J"},{"orcid":"0000-0003-1397-7876","first_name":"Georg M","last_name":"Arnold","id":"3770C838-F248-11E8-B48F-1D18A9856A87","full_name":"Arnold, Georg M"},{"full_name":"Minoguchi, Yuri","last_name":"Minoguchi","first_name":"Yuri"},{"last_name":"Rabl","full_name":"Rabl, Peter","first_name":"Peter"},{"last_name":"Fink","id":"4B591CBA-F248-11E8-B48F-1D18A9856A87","full_name":"Fink, Johannes M","orcid":"0000-0001-8112-028X","first_name":"Johannes M"}],"article_number":"LM1F.3","title":"Entangling microwaves and telecom wavelength light","citation":{"ieee":"R. Sahu <i>et al.</i>, “Entangling microwaves and telecom wavelength light,” in <i>Frontiers in Optics + Laser Science 2023</i>, Tacoma, WA, United States, 2023.","mla":"Sahu, Rishabh, et al. 
“Entangling Microwaves and Telecom Wavelength Light.” <i>Frontiers in Optics + Laser Science 2023</i>, LM1F.3, Optica Publishing Group, 2023, doi:<a href=\"https://doi.org/10.1364/ls.2023.lm1f.3\">10.1364/ls.2023.lm1f.3</a>.","ista":"Sahu R, Qiu L, Hease WJ, Arnold GM, Minoguchi Y, Rabl P, Fink JM. 2023. Entangling microwaves and telecom wavelength light. Frontiers in Optics + Laser Science 2023. Laser Science, LM1F.3.","ama":"Sahu R, Qiu L, Hease WJ, et al. Entangling microwaves and telecom wavelength light. In: <i>Frontiers in Optics + Laser Science 2023</i>. Optica Publishing Group; 2023. doi:<a href=\"https://doi.org/10.1364/ls.2023.lm1f.3\">10.1364/ls.2023.lm1f.3</a>","chicago":"Sahu, Rishabh, Liu Qiu, William J Hease, Georg M Arnold, Yuri Minoguchi, Peter Rabl, and Johannes M Fink. “Entangling Microwaves and Telecom Wavelength Light.” In <i>Frontiers in Optics + Laser Science 2023</i>. Optica Publishing Group, 2023. <a href=\"https://doi.org/10.1364/ls.2023.lm1f.3\">https://doi.org/10.1364/ls.2023.lm1f.3</a>.","short":"R. Sahu, L. Qiu, W.J. Hease, G.M. Arnold, Y. Minoguchi, P. Rabl, J.M. Fink, in:, Frontiers in Optics + Laser Science 2023, Optica Publishing Group, 2023.","apa":"Sahu, R., Qiu, L., Hease, W. J., Arnold, G. M., Minoguchi, Y., Rabl, P., &#38; Fink, J. M. (2023). Entangling microwaves and telecom wavelength light. In <i>Frontiers in Optics + Laser Science 2023</i>. Tacoma, WA, United States: Optica Publishing Group. <a href=\"https://doi.org/10.1364/ls.2023.lm1f.3\">https://doi.org/10.1364/ls.2023.lm1f.3</a>"},"date_published":"2023-10-01T00:00:00Z","year":"2023","doi":"10.1364/ls.2023.lm1f.3","publication_identifier":{"isbn":["9781957171296"]},"publisher":"Optica Publishing Group","article_processing_charge":"No","publication_status":"published","_id":"14872"},{"citation":{"ista":"Feitosa Tomé D. 2023. 
douglastome/dynamic-engrams: Dynamic and selective engrams emerge with memory consolidation, Zenodo, <a href=\"https://doi.org/10.5281/ZENODO.10251087\">10.5281/ZENODO.10251087</a>.","mla":"Feitosa Tomé, Douglas. <i>Douglastome/Dynamic-Engrams: Dynamic and Selective Engrams Emerge with Memory Consolidation</i>. Zenodo, 2023, doi:<a href=\"https://doi.org/10.5281/ZENODO.10251087\">10.5281/ZENODO.10251087</a>.","ieee":"D. Feitosa Tomé, “douglastome/dynamic-engrams: Dynamic and selective engrams emerge with memory consolidation.” Zenodo, 2023.","apa":"Feitosa Tomé, D. (2023). douglastome/dynamic-engrams: Dynamic and selective engrams emerge with memory consolidation. Zenodo. <a href=\"https://doi.org/10.5281/ZENODO.10251087\">https://doi.org/10.5281/ZENODO.10251087</a>","short":"D. Feitosa Tomé, (2023).","chicago":"Feitosa Tomé, Douglas. “Douglastome/Dynamic-Engrams: Dynamic and Selective Engrams Emerge with Memory Consolidation.” Zenodo, 2023. <a href=\"https://doi.org/10.5281/ZENODO.10251087\">https://doi.org/10.5281/ZENODO.10251087</a>.","ama":"Feitosa Tomé D. douglastome/dynamic-engrams: Dynamic and selective engrams emerge with memory consolidation. 2023. 
doi:<a href=\"https://doi.org/10.5281/ZENODO.10251087\">10.5281/ZENODO.10251087</a>"},"date_published":"2023-12-02T00:00:00Z","oa":1,"related_material":{"record":[{"relation":"used_in_publication","id":"14887","status":"public"}]},"year":"2023","_id":"14892","doi":"10.5281/ZENODO.10251087","article_processing_charge":"No","publisher":"Zenodo","tmp":{"legal_code_url":"https://creativecommons.org/licenses/by/4.0/legalcode","image":"/images/cc_by.png","name":"Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)","short":"CC BY (4.0)"},"month":"12","department":[{"_id":"TiVo"}],"date_updated":"2025-04-23T07:40:21Z","corr_author":"1","title":"douglastome/dynamic-engrams: Dynamic and selective engrams emerge with memory consolidation","type":"research_data_reference","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","author":[{"first_name":"Douglas","last_name":"Feitosa Tomé","full_name":"Feitosa Tomé, Douglas","id":"0eed2d40-3d48-11ec-8d38-f789cc2e40b2"}],"oa_version":"None","day":"02","date_created":"2024-01-29T09:06:43Z","status":"public","has_accepted_license":"1","abstract":[{"lang":"eng","text":"Code and data necessary to reproduce the simulations and data analyses reported in our manuscript: Tomé, D.F., Zhang, Y., Aida, T., Mosto, O., Lu, Y., Chen, M., Sadeh, S., Roy, D. S., Clopath, C. Dynamic and selective engrams emerge with memory consolidation. 2023."}],"main_file_link":[{"open_access":"1","url":"https://doi.org/10.5281/zenodo.10251087"}],"ddc":["570"]},{"related_material":{"record":[{"status":"public","id":"14885","relation":"used_in_publication"}]},"oa":1,"year":"2023","date_published":"2023-08-23T00:00:00Z","citation":{"ama":"Shaw T, Buri P, McCarthy M, Miles E, Pellicciotti F. Air temperature and near-surface meteorology datasets on three Swiss glaciers - Extreme 2022 Summer. 2023. 
doi:<a href=\"https://doi.org/10.5281/ZENODO.8277285\">10.5281/ZENODO.8277285</a>","chicago":"Shaw, Thomas, Pascal Buri, Michael McCarthy, Evan Miles, and Francesca Pellicciotti. “Air Temperature and Near-Surface Meteorology Datasets on Three Swiss Glaciers - Extreme 2022 Summer.” Zenodo, 2023. <a href=\"https://doi.org/10.5281/ZENODO.8277285\">https://doi.org/10.5281/ZENODO.8277285</a>.","short":"T. Shaw, P. Buri, M. McCarthy, E. Miles, F. Pellicciotti, (2023).","apa":"Shaw, T., Buri, P., McCarthy, M., Miles, E., &#38; Pellicciotti, F. (2023). Air temperature and near-surface meteorology datasets on three Swiss glaciers - Extreme 2022 Summer. Zenodo. <a href=\"https://doi.org/10.5281/ZENODO.8277285\">https://doi.org/10.5281/ZENODO.8277285</a>","ieee":"T. Shaw, P. Buri, M. McCarthy, E. Miles, and F. Pellicciotti, “Air temperature and near-surface meteorology datasets on three Swiss glaciers - Extreme 2022 Summer.” Zenodo, 2023.","mla":"Shaw, Thomas, et al. <i>Air Temperature and Near-Surface Meteorology Datasets on Three Swiss Glaciers - Extreme 2022 Summer</i>. Zenodo, 2023, doi:<a href=\"https://doi.org/10.5281/ZENODO.8277285\">10.5281/ZENODO.8277285</a>.","ista":"Shaw T, Buri P, McCarthy M, Miles E, Pellicciotti F. 2023. 
Air temperature and near-surface meteorology datasets on three Swiss glaciers - Extreme 2022 Summer, Zenodo, <a href=\"https://doi.org/10.5281/ZENODO.8277285\">10.5281/ZENODO.8277285</a>."},"article_processing_charge":"No","publisher":"Zenodo","doi":"10.5281/ZENODO.8277285","_id":"14919","corr_author":"1","date_updated":"2025-09-04T11:58:38Z","department":[{"_id":"FrPe"}],"month":"08","author":[{"last_name":"Shaw","id":"3caa3f91-1f03-11ee-96ce-e0e553054d6e","full_name":"Shaw, Thomas","first_name":"Thomas","orcid":"0000-0001-7640-6152"},{"first_name":"Pascal","last_name":"Buri","full_name":"Buri, Pascal","id":"317987aa-9421-11ee-ac5a-b941b041abba"},{"first_name":"Michael","last_name":"McCarthy","full_name":"McCarthy, Michael"},{"first_name":"Evan","last_name":"Miles","full_name":"Miles, Evan"},{"orcid":"0000-0002-5554-8087","first_name":"Francesca","last_name":"Pellicciotti","full_name":"Pellicciotti, Francesca","id":"b28f055a-81ea-11ed-b70c-a9fe7f7b0e70"}],"user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","type":"research_data_reference","title":"Air temperature and near-surface meteorology datasets on three Swiss glaciers - Extreme 2022 Summer","status":"public","date_created":"2024-01-31T12:08:26Z","day":"23","oa_version":"Published Version","ddc":["550"],"main_file_link":[{"url":"https://doi.org/10.5281/ZENODO.8277285","open_access":"1"}],"abstract":[{"text":"GLACIER METEOROLOGICAL DATA SWISS ALPS -2022\r\n","lang":"eng"}]},{"abstract":[{"text":"We consider fixpoint algorithms for two-player games on graphs with $\\omega$-regular winning conditions, where the environment is constrained by a strong transition fairness assumption. Strong transition fairness is a widely occurring special case of strong fairness, which requires that any execution is strongly fair with respect to a specified set of live edges: whenever the\r\nsource vertex of a live edge is visited infinitely often along a play, the edge itself is traversed infinitely often along the play as well. 
We show that, surprisingly, strong transition fairness retains the algorithmic characteristics of the fixpoint algorithms for $\\omega$-regular games -- the new algorithms have the same alternation depth as the classical algorithms but invoke a new type of predecessor operator. For Rabin games with $k$ pairs, the complexity of the new algorithm is $O(n^{k+2}k!)$ symbolic steps, which is independent of the number of live edges in the strong transition fairness assumption. Further, we show that GR(1) specifications with strong transition fairness assumptions can be solved with a 3-nested fixpoint algorithm, same as the usual algorithm. In contrast, strong fairness necessarily requires increasing the alternation depth depending on the number of fairness assumptions. We get symbolic algorithms for (generalized) Rabin, parity and GR(1) objectives under strong transition fairness assumptions as well as a direct symbolic algorithm for qualitative winning in stochastic\r\n$\\omega$-regular games that runs in $O(n^{k+2}k!)$ symbolic steps, improving the state of the art. Finally, we have implemented a BDD-based synthesis engine based on our algorithm. 
We show on a set of synthetic and real benchmarks that our algorithm is scalable, parallelizable, and outperforms previous algorithms by orders of magnitude.","lang":"eng"}],"article_type":"original","project":[{"call_identifier":"H2020","name":"Vigilant Algorithmic Monitoring of Software","_id":"62781420-2b32-11ec-9570-8d9b63373d4d","grant_number":"101020093"}],"quality_controlled":"1","external_id":{"arxiv":["2202.07480"]},"volume":2,"day":"24","oa_version":"Published Version","ec_funded":1,"title":"Fast symbolic algorithms for omega-regular games under strong transition fairness","article_number":"4","tmp":{"legal_code_url":"https://creativecommons.org/licenses/by/4.0/legalcode","image":"/images/cc_by.png","name":"Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)","short":"CC BY (4.0)"},"department":[{"_id":"ToHe"}],"month":"02","date_updated":"2025-04-14T07:55:57Z","publication_status":"published","doi":"10.46298/theoretics.23.4","publication_identifier":{"issn":["2751-4838"]},"publisher":"EPI Sciences","article_processing_charge":"Yes","citation":{"chicago":"Banerjee, Tamajit, Rupak Majumdar, Kaushik Mallik, Anne-Kathrin Schmuck, and Sadegh Soudjani. “Fast Symbolic Algorithms for Omega-Regular Games under Strong Transition Fairness.” <i>TheoretiCS</i>. EPI Sciences, 2023. <a href=\"https://doi.org/10.46298/theoretics.23.4\">https://doi.org/10.46298/theoretics.23.4</a>.","ama":"Banerjee T, Majumdar R, Mallik K, Schmuck A-K, Soudjani S. Fast symbolic algorithms for omega-regular games under strong transition fairness. <i>TheoretiCS</i>. 2023;2. doi:<a href=\"https://doi.org/10.46298/theoretics.23.4\">10.46298/theoretics.23.4</a>","apa":"Banerjee, T., Majumdar, R., Mallik, K., Schmuck, A.-K., &#38; Soudjani, S. (2023). Fast symbolic algorithms for omega-regular games under strong transition fairness. <i>TheoretiCS</i>. EPI Sciences. <a href=\"https://doi.org/10.46298/theoretics.23.4\">https://doi.org/10.46298/theoretics.23.4</a>","short":"T. 
Banerjee, R. Majumdar, K. Mallik, A.-K. Schmuck, S. Soudjani, TheoretiCS 2 (2023).","mla":"Banerjee, Tamajit, et al. “Fast Symbolic Algorithms for Omega-Regular Games under Strong Transition Fairness.” <i>TheoretiCS</i>, vol. 2, 4, EPI Sciences, 2023, doi:<a href=\"https://doi.org/10.46298/theoretics.23.4\">10.46298/theoretics.23.4</a>.","ieee":"T. Banerjee, R. Majumdar, K. Mallik, A.-K. Schmuck, and S. Soudjani, “Fast symbolic algorithms for omega-regular games under strong transition fairness,” <i>TheoretiCS</i>, vol. 2. EPI Sciences, 2023.","ista":"Banerjee T, Majumdar R, Mallik K, Schmuck A-K, Soudjani S. 2023. Fast symbolic algorithms for omega-regular games under strong transition fairness. TheoretiCS. 2, 4."},"year":"2023","date_published":"2023-02-24T00:00:00Z","intvolume":"2","file":[{"creator":"dernst","file_id":"14940","content_type":"application/pdf","file_name":"2023_TheoretiCS_Banerjee.pdf","date_updated":"2024-02-05T10:19:35Z","file_size":917076,"relation":"main_file","success":1,"access_level":"open_access","checksum":"2972d531122a6f15727b396110fb3f5c","date_created":"2024-02-05T10:19:35Z"}],"ddc":["000"],"has_accepted_license":"1","language":[{"iso":"eng"}],"publication":"TheoretiCS","date_created":"2024-01-31T13:40:49Z","status":"public","type":"journal_article","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","author":[{"first_name":"Tamajit","last_name":"Banerjee","full_name":"Banerjee, Tamajit"},{"first_name":"Rupak","last_name":"Majumdar","full_name":"Majumdar, Rupak"},{"full_name":"Mallik, Kaushik","id":"0834ff3c-6d72-11ec-94e0-b5b0a4fb8598","last_name":"Mallik","first_name":"Kaushik","orcid":"0000-0001-9864-7475"},{"first_name":"Anne-Kathrin","last_name":"Schmuck","full_name":"Schmuck, Anne-Kathrin"},{"last_name":"Soudjani","full_name":"Soudjani, Sadegh","first_name":"Sadegh"}],"acknowledgement":"A previous version of this paper has appeared in TACAS 2022. Authors ordered alphabetically. T. 
Banerjee was interning with MPI-SWS when this research was conducted. R. Majumdar and A.-K. Schmuck are partially supported by DFG project 389792660 TRR 248–CPEC. A.-K. Schmuck is additionally funded through DFG project (SCHM 3541/1-1). K. Mallik is supported by the ERC project ERC-2020-AdG 101020093.","file_date_updated":"2024-02-05T10:19:35Z","corr_author":"1","_id":"14920","arxiv":1,"oa":1},{"quality_controlled":"1","external_id":{"arxiv":["2305.13165"]},"project":[{"name":"Prix Lopez-Loretta 2019 - Marco Mondelli","_id":"059876FA-7A3F-11EA-A408-12923DDC885E"}],"publication":"37th Annual Conference on Neural Information Processing Systems","alternative_title":["NeurIPS"],"language":[{"iso":"eng"}],"day":"15","oa_version":"Preprint","status":"public","date_created":"2024-02-02T11:17:41Z","conference":{"name":"NeurIPS: Neural Information Processing Systems","end_date":"2023-12-16","location":"New Orleans, LA, United States","start_date":"2023-12-10"},"main_file_link":[{"url":"https://doi.org/10.48550/arXiv.2305.13165","open_access":"1"}],"abstract":[{"lang":"eng","text":"Neural collapse (NC) refers to the surprising structure of the last layer of deep neural networks in the terminal phase of gradient descent training. Recently, an increasing amount of experimental evidence has pointed to the propagation of NC to earlier layers of neural networks. However, while the NC in the last layer is well studied theoretically, much less is known about its multi-layered counterpart - deep neural collapse (DNC). In particular, existing work focuses either on linear layers or only on the last two layers at the price of an extra assumption. Our paper fills this gap by generalizing the established analytical framework for NC - the unconstrained features model - to multiple non-linear layers. Our key technical contribution is to show that, in a deep unconstrained features model, the unique global optimum for binary classification exhibits all the properties typical of DNC. 
This explains the existing experimental evidence of DNC. We also empirically show that (i) by optimizing deep unconstrained features models via gradient descent, the resulting solution agrees well with our theory, and (ii) trained networks recover the unconstrained features suitable for the occurrence of DNC, thus supporting the validity of this modeling principle."}],"_id":"14921","publication_status":"published","article_processing_charge":"No","arxiv":1,"year":"2023","oa":1,"date_published":"2023-12-15T00:00:00Z","citation":{"mla":"Súkeník, Peter, et al. “Deep Neural Collapse Is Provably Optimal for the Deep Unconstrained Features Model.” <i>37th Annual Conference on Neural Information Processing Systems</i>, 2023.","ieee":"P. Súkeník, M. Mondelli, and C. Lampert, “Deep neural collapse is provably optimal for the deep unconstrained features model,” in <i>37th Annual Conference on Neural Information Processing Systems</i>, New Orleans, LA, United States, 2023.","ista":"Súkeník P, Mondelli M, Lampert C. 2023. Deep neural collapse is provably optimal for the deep unconstrained features model. 37th Annual Conference on Neural Information Processing Systems. NeurIPS: Neural Information Processing Systems, NeurIPS.","chicago":"Súkeník, Peter, Marco Mondelli, and Christoph Lampert. “Deep Neural Collapse Is Provably Optimal for the Deep Unconstrained Features Model.” In <i>37th Annual Conference on Neural Information Processing Systems</i>, 2023.","ama":"Súkeník P, Mondelli M, Lampert C. Deep neural collapse is provably optimal for the deep unconstrained features model. In: <i>37th Annual Conference on Neural Information Processing Systems</i>; 2023.","apa":"Súkeník, P., Mondelli, M., &#38; Lampert, C. (2023). Deep neural collapse is provably optimal for the deep unconstrained features model. In <i>37th Annual Conference on Neural Information Processing Systems</i>. New Orleans, LA, United States.","short":"P. Súkeník, M. Mondelli, C. 
Lampert, in:, 37th Annual Conference on Neural Information Processing Systems, 2023."},"title":"Deep neural collapse is provably optimal for the deep unconstrained features model","author":[{"last_name":"Súkeník","full_name":"Súkeník, Peter","id":"d64d6a8d-eb8e-11eb-b029-96fd216dec3c","first_name":"Peter"},{"first_name":"Marco","orcid":"0000-0002-3242-7020","last_name":"Mondelli","full_name":"Mondelli, Marco","id":"27EB676C-8706-11E9-9510-7717E6697425"},{"first_name":"Christoph","orcid":"0000-0001-8622-7887","full_name":"Lampert, Christoph","id":"40C20FD2-F248-11E8-B48F-1D18A9856A87","last_name":"Lampert"}],"acknowledgement":"M. M. is partially supported by the 2019 Lopez-Loreta Prize. The authors would like to thank Eugenia Iofinova, Bernd Prach and Simone Bombari for valuable feedback on the manuscript.","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","type":"conference","date_updated":"2025-04-15T07:50:16Z","corr_author":"1","department":[{"_id":"MaMo"},{"_id":"ChLa"}],"month":"12"},{"date_published":"2023-06-30T00:00:00Z","year":"2023","citation":{"ista":"Esposito AR, Mondelli M. 2023. Concentration without independence via information measures. Proceedings of 2023 IEEE International Symposium on Information Theory. ISIT: International Symposium on Information Theory, 400–405.","mla":"Esposito, Amedeo Roberto, and Marco Mondelli. “Concentration without Independence via Information Measures.” <i>Proceedings of 2023 IEEE International Symposium on Information Theory</i>, IEEE, 2023, pp. 400–05, doi:<a href=\"https://doi.org/10.1109/isit54713.2023.10206899\">10.1109/isit54713.2023.10206899</a>.","ieee":"A. R. Esposito and M. Mondelli, “Concentration without independence via information measures,” in <i>Proceedings of 2023 IEEE International Symposium on Information Theory</i>, Taipei, Taiwan, 2023, pp. 400–405.","apa":"Esposito, A. R., &#38; Mondelli, M. (2023). Concentration without independence via information measures. 
In <i>Proceedings of 2023 IEEE International Symposium on Information Theory</i> (pp. 400–405). Taipei, Taiwan: IEEE. <a href=\"https://doi.org/10.1109/isit54713.2023.10206899\">https://doi.org/10.1109/isit54713.2023.10206899</a>","short":"A.R. Esposito, M. Mondelli, in:, Proceedings of 2023 IEEE International Symposium on Information Theory, IEEE, 2023, pp. 400–405.","chicago":"Esposito, Amedeo Roberto, and Marco Mondelli. “Concentration without Independence via Information Measures.” In <i>Proceedings of 2023 IEEE International Symposium on Information Theory</i>, 400–405. IEEE, 2023. <a href=\"https://doi.org/10.1109/isit54713.2023.10206899\">https://doi.org/10.1109/isit54713.2023.10206899</a>.","ama":"Esposito AR, Mondelli M. Concentration without independence via information measures. In: <i>Proceedings of 2023 IEEE International Symposium on Information Theory</i>. IEEE; 2023:400-405. doi:<a href=\"https://doi.org/10.1109/isit54713.2023.10206899\">10.1109/isit54713.2023.10206899</a>"},"article_processing_charge":"No","publisher":"IEEE","publication_identifier":{"eisbn":["9781665475549"],"eissn":["2157-8117"]},"doi":"10.1109/isit54713.2023.10206899","publication_status":"published","date_updated":"2025-09-04T13:06:52Z","month":"06","department":[{"_id":"MaMo"}],"title":"Concentration without independence via information measures","oa_version":"Preprint","day":"30","quality_controlled":"1","external_id":{"arxiv":["2303.07245"]},"project":[{"name":"Prix Lopez-Loretta 2019 - Marco Mondelli","_id":"059876FA-7A3F-11EA-A408-12923DDC885E"}],"main_file_link":[{"url":"https://doi.org/10.48550/arXiv.2303.07245","open_access":"1"}],"abstract":[{"text":"We propose a novel approach to concentration for non-independent random variables. The main idea is to ``pretend'' that the random variables are independent and pay a multiplicative price measuring how far they are from actually being independent. 
This price is encapsulated in the Hellinger integral between the joint and the product of the marginals, which is then upper bounded leveraging tensorisation properties. Our bounds represent a natural generalisation of concentration inequalities in the presence of dependence: we recover exactly the classical bounds (McDiarmid's inequality) when the random variables are independent. Furthermore, in a ``large deviations'' regime, we obtain the same decay in the probability as for the independent case, even when the random variables display non-trivial dependencies. To show this, we consider a number of applications of interest. First, we provide a bound for Markov chains with finite state space. Then, we consider the Simple Symmetric Random Walk, which is a non-contracting Markov chain, and a non-Markovian setting in which the stochastic process depends on its entire past. To conclude, we propose an application to Markov Chain Monte Carlo methods, where our approach leads to an improved lower bound on the minimum burn-in period required to reach a certain accuracy. In all of these settings, we provide a regime of parameters in which our bound fares better than what the state of the art can provide.","lang":"eng"}],"conference":{"location":"Taipei, Taiwan","start_date":"2023-06-25","name":"ISIT: International Symposium on Information Theory","end_date":"2023-06-30"},"oa":1,"related_material":{"record":[{"relation":"later_version","status":"public","id":"15172"}]},"arxiv":1,"_id":"14922","corr_author":"1","acknowledgement":"The authors are partially supported by the 2019 Lopez-Loreta Prize. 
They would also like to thank Professor Jan Maas for providing valuable suggestions and comments on an early version of the work.","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","author":[{"id":"9583e921-e1ad-11ec-9862-cef099626dc9","full_name":"Esposito, Amedeo Roberto","last_name":"Esposito","first_name":"Amedeo Roberto"},{"full_name":"Mondelli, Marco","id":"27EB676C-8706-11E9-9510-7717E6697425","last_name":"Mondelli","orcid":"0000-0002-3242-7020","first_name":"Marco"}],"type":"conference","status":"public","date_created":"2024-02-02T11:18:40Z","scopus_import":"1","publication":"Proceedings of 2023 IEEE International Symposium on Information Theory","language":[{"iso":"eng"}],"page":"400-405"},{"quality_controlled":"1","external_id":{"arxiv":["2302.03306"]},"oa_version":"Preprint","day":"30","conference":{"location":"Taipei, Taiwan","start_date":"2023-06-25","name":"ISIT: International Symposium on Information Theory","end_date":"2023-06-30"},"abstract":[{"lang":"eng","text":"We study the performance of a Bayesian statistician who estimates a rank-one signal corrupted by non-symmetric rotationally invariant noise with a generic distribution of singular values. As the signal-to-noise ratio and the noise structure are unknown, a Gaussian setup is incorrectly assumed. We derive the exact analytic expression for the error of the mismatched Bayes estimator and also provide the analysis of an approximate message passing (AMP) algorithm. The first result exploits the asymptotic behavior of spherical integrals for rectangular matrices and of low-rank matrix perturbations; the second one relies on the design and analysis of an auxiliary AMP. 
The numerical experiments show that there is a performance gap between the AMP and Bayes estimators, which is due to the incorrect estimation of the signal norm."}],"main_file_link":[{"open_access":"1","url":"https://doi.org/10.48550/arXiv.2302.03306"}],"publication_identifier":{"eissn":["2157-8117"],"isbn":["9781665475549"]},"article_processing_charge":"No","publisher":"IEEE","doi":"10.1109/isit54713.2023.10206671","publication_status":"published","year":"2023","date_published":"2023-06-30T00:00:00Z","citation":{"mla":"Fu, Teng, et al. “Mismatched Estimation of Non-Symmetric Rank-One Matrices Corrupted by Structured Noise.” <i>Proceedings of 2023 IEEE International Symposium on Information Theory</i>, IEEE, 2023, pp. 1178–83, doi:<a href=\"https://doi.org/10.1109/isit54713.2023.10206671\">10.1109/isit54713.2023.10206671</a>.","ieee":"T. Fu, Y. Liu, J. Barbier, M. Mondelli, S. Liang, and T. Hou, “Mismatched estimation of non-symmetric rank-one matrices corrupted by structured noise,” in <i>Proceedings of 2023 IEEE International Symposium on Information Theory</i>, Taipei, Taiwan, 2023, pp. 1178–1183.","ista":"Fu T, Liu Y, Barbier J, Mondelli M, Liang S, Hou T. 2023. Mismatched estimation of non-symmetric rank-one matrices corrupted by structured noise. Proceedings of 2023 IEEE International Symposium on Information Theory. ISIT: International Symposium on Information Theory, 1178–1183.","chicago":"Fu, Teng, YuHao Liu, Jean Barbier, Marco Mondelli, ShanSuo Liang, and TianQi Hou. “Mismatched Estimation of Non-Symmetric Rank-One Matrices Corrupted by Structured Noise.” In <i>Proceedings of 2023 IEEE International Symposium on Information Theory</i>, 1178–83. IEEE, 2023. <a href=\"https://doi.org/10.1109/isit54713.2023.10206671\">https://doi.org/10.1109/isit54713.2023.10206671</a>.","ama":"Fu T, Liu Y, Barbier J, Mondelli M, Liang S, Hou T. Mismatched estimation of non-symmetric rank-one matrices corrupted by structured noise. 
In: <i>Proceedings of 2023 IEEE International Symposium on Information Theory</i>. IEEE; 2023:1178-1183. doi:<a href=\"https://doi.org/10.1109/isit54713.2023.10206671\">10.1109/isit54713.2023.10206671</a>","apa":"Fu, T., Liu, Y., Barbier, J., Mondelli, M., Liang, S., &#38; Hou, T. (2023). Mismatched estimation of non-symmetric rank-one matrices corrupted by structured noise. In <i>Proceedings of 2023 IEEE International Symposium on Information Theory</i> (pp. 1178–1183). Taipei, Taiwan: IEEE. <a href=\"https://doi.org/10.1109/isit54713.2023.10206671\">https://doi.org/10.1109/isit54713.2023.10206671</a>","short":"T. Fu, Y. Liu, J. Barbier, M. Mondelli, S. Liang, T. Hou, in:, Proceedings of 2023 IEEE International Symposium on Information Theory, IEEE, 2023, pp. 1178–1183."},"title":"Mismatched estimation of non-symmetric rank-one matrices corrupted by structured noise","date_updated":"2025-07-10T11:51:04Z","department":[{"_id":"MaMo"}],"month":"06","publication":"Proceedings of 2023 IEEE International Symposium on Information Theory","language":[{"iso":"eng"}],"status":"public","date_created":"2024-02-02T11:20:39Z","scopus_import":"1","page":"1178-1183","arxiv":1,"_id":"14923","oa":1,"user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","author":[{"first_name":"Teng","full_name":"Fu, Teng","last_name":"Fu"},{"first_name":"YuHao","full_name":"Liu, YuHao","last_name":"Liu"},{"first_name":"Jean","last_name":"Barbier","full_name":"Barbier, Jean"},{"orcid":"0000-0002-3242-7020","first_name":"Marco","full_name":"Mondelli, Marco","id":"27EB676C-8706-11E9-9510-7717E6697425","last_name":"Mondelli"},{"last_name":"Liang","full_name":"Liang, ShanSuo","first_name":"ShanSuo"},{"full_name":"Hou, TianQi","last_name":"Hou","first_name":"TianQi"}],"type":"conference","corr_author":"1"},{"user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","acknowledgement":"D. Wu and M. Mondelli are partially supported by the 2019 Lopez-Loreta Prize. V. 
Kungurtsev was supported by the OP VVV project CZ.02.1.01/0.0/0.0/16_019/0000765 \"Research Center for Informatics\".","author":[{"first_name":"Diyuan","last_name":"Wu","id":"1a5914c2-896a-11ed-bdf8-fb80621a0635","full_name":"Wu, Diyuan"},{"last_name":"Kungurtsev","full_name":"Kungurtsev, Vyacheslav","first_name":"Vyacheslav"},{"id":"27EB676C-8706-11E9-9510-7717E6697425","full_name":"Mondelli, Marco","last_name":"Mondelli","orcid":"0000-0002-3242-7020","first_name":"Marco"}],"type":"conference","corr_author":"1","arxiv":1,"_id":"14924","oa":1,"publication":"Transactions on Machine Learning Research","language":[{"iso":"eng"}],"has_accepted_license":"1","status":"public","date_created":"2024-02-02T11:21:56Z","title":"Mean-field analysis for heavy ball methods: Dropout-stability, connectivity, and global convergence","date_updated":"2025-04-15T07:50:17Z","month":"02","department":[{"_id":"MaMo"}],"tmp":{"legal_code_url":"https://creativecommons.org/licenses/by/4.0/legalcode","image":"/images/cc_by.png","name":"Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)","short":"CC BY (4.0)"},"publisher":"ML Research Press","article_processing_charge":"No","publication_status":"published","year":"2023","date_published":"2023-02-28T00:00:00Z","citation":{"short":"D. Wu, V. Kungurtsev, M. Mondelli, in:, Transactions on Machine Learning Research, ML Research Press, 2023.","apa":"Wu, D., Kungurtsev, V., &#38; Mondelli, M. (2023). Mean-field analysis for heavy ball methods: Dropout-stability, connectivity, and global convergence. In <i>Transactions on Machine Learning Research</i>. ML Research Press.","ama":"Wu D, Kungurtsev V, Mondelli M. Mean-field analysis for heavy ball methods: Dropout-stability, connectivity, and global convergence. In: <i>Transactions on Machine Learning Research</i>. ML Research Press; 2023.","chicago":"Wu, Diyuan, Vyacheslav Kungurtsev, and Marco Mondelli. 
“Mean-Field Analysis for Heavy Ball Methods: Dropout-Stability, Connectivity, and Global Convergence.” In <i>Transactions on Machine Learning Research</i>. ML Research Press, 2023.","ista":"Wu D, Kungurtsev V, Mondelli M. 2023. Mean-field analysis for heavy ball methods: Dropout-stability, connectivity, and global convergence. Transactions on Machine Learning Research. , TMLR, .","ieee":"D. Wu, V. Kungurtsev, and M. Mondelli, “Mean-field analysis for heavy ball methods: Dropout-stability, connectivity, and global convergence,” in <i>Transactions on Machine Learning Research</i>, 2023.","mla":"Wu, Diyuan, et al. “Mean-Field Analysis for Heavy Ball Methods: Dropout-Stability, Connectivity, and Global Convergence.” <i>Transactions on Machine Learning Research</i>, ML Research Press, 2023."},"abstract":[{"lang":"eng","text":"The stochastic heavy ball method (SHB), also known as stochastic gradient descent (SGD) with Polyak's momentum, is widely used in training neural networks. However, despite the remarkable success of such an algorithm in practice, its theoretical characterization remains limited. In this paper, we focus on neural networks with two and three layers and provide a rigorous understanding of the properties of the solutions found by SHB: \emph{(i)} stability after dropping out part of the neurons, \emph{(ii)} connectivity along a low-loss path, and \emph{(iii)} convergence to the global optimum.\r\nTo achieve this goal, we take a mean-field view and relate the SHB dynamics to a certain partial differential equation in the limit of large network widths. This mean-field perspective has inspired a recent line of work focusing on SGD while, in contrast, our paper considers an algorithm with momentum. More specifically, after proving existence and uniqueness of the limit differential equations, we show convergence to the global optimum and give a quantitative bound between the mean-field limit and the SHB dynamics of a finite-width network. 
Armed with this last bound, we are able to establish the dropout-stability and connectivity of SHB solutions."}],"main_file_link":[{"open_access":"1","url":"https://doi.org/10.48550/arXiv.2210.06819"}],"alternative_title":["TMLR"],"external_id":{"arxiv":["2210.06819"]},"quality_controlled":"1","project":[{"name":"Prix Lopez-Loretta 2019 - Marco Mondelli","_id":"059876FA-7A3F-11EA-A408-12923DDC885E"}],"oa_version":"Published Version","day":"28"},{"citation":{"apa":"Kori, A., Locatello, F., Ribeiro, F. D. S., Toni, F., &#38; Glocker, B. (n.d.). Grounded object centric learning. <i>arXiv</i>. <a href=\"https://doi.org/10.48550/arXiv.2307.09437\">https://doi.org/10.48550/arXiv.2307.09437</a>","short":"A. Kori, F. Locatello, F.D.S. Ribeiro, F. Toni, B. Glocker, ArXiv (n.d.).","chicago":"Kori, Avinash, Francesco Locatello, Fabio De Sousa Ribeiro, Francesca Toni, and Ben Glocker. “Grounded Object Centric Learning.” <i>ArXiv</i>, n.d. <a href=\"https://doi.org/10.48550/arXiv.2307.09437\">https://doi.org/10.48550/arXiv.2307.09437</a>.","ama":"Kori A, Locatello F, Ribeiro FDS, Toni F, Glocker B. Grounded object centric learning. <i>arXiv</i>. doi:<a href=\"https://doi.org/10.48550/arXiv.2307.09437\">10.48550/arXiv.2307.09437</a>","ista":"Kori A, Locatello F, Ribeiro FDS, Toni F, Glocker B. Grounded object centric learning. arXiv, 2307.09437.","mla":"Kori, Avinash, et al. “Grounded Object Centric Learning.” <i>ArXiv</i>, 2307.09437, doi:<a href=\"https://doi.org/10.48550/arXiv.2307.09437\">10.48550/arXiv.2307.09437</a>.","ieee":"A. Kori, F. Locatello, F. D. S. Ribeiro, F. Toni, and B. Glocker, “Grounded object centric learning,” <i>arXiv</i>. 
."},"year":"2023","date_published":"2023-07-18T00:00:00Z","oa":1,"doi":"10.48550/arXiv.2307.09437","arxiv":1,"article_processing_charge":"No","publication_status":"submitted","_id":"14948","month":"07","department":[{"_id":"FrLo"}],"date_updated":"2024-02-12T08:13:12Z","type":"preprint","author":[{"first_name":"Avinash","last_name":"Kori","full_name":"Kori, Avinash"},{"first_name":"Francesco","orcid":"0000-0002-4850-0683","last_name":"Locatello","full_name":"Locatello, Francesco","id":"26cfd52f-2483-11ee-8040-88983bcc06d4"},{"last_name":"Ribeiro","full_name":"Ribeiro, Fabio De Sousa","first_name":"Fabio De Sousa"},{"first_name":"Francesca","full_name":"Toni, Francesca","last_name":"Toni"},{"last_name":"Glocker","full_name":"Glocker, Ben","first_name":"Ben"}],"user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","acknowledgement":"This work was supported by UKRI (grant agreement no. EP/S023356/1), in the UKRI\r\nCentre for Doctoral Training in Safe and Trusted AI via A. Kori.","article_number":"2307.09437","title":"Grounded object centric learning","date_created":"2024-02-07T14:47:04Z","status":"public","oa_version":"Preprint","day":"18","language":[{"iso":"eng"}],"publication":"arXiv","external_id":{"arxiv":["2307.09437"]},"main_file_link":[{"open_access":"1","url":"https://doi.org/10.48550/arXiv.2307.09437"}],"abstract":[{"text":"The extraction of modular object-centric representations for downstream tasks\r\nis an emerging area of research. Learning grounded representations of objects\r\nthat are guaranteed to be stable and invariant promises robust performance\r\nacross different tasks and environments. Slot Attention (SA) learns\r\nobject-centric representations by assigning objects to \textit{slots}, but\r\npresupposes a \textit{single} distribution from which all slots are randomly\r\ninitialised. 
This results in an inability to learn \\textit{specialized} slots\r\nwhich bind to specific object types and remain invariant to identity-preserving\r\nchanges in object appearance. To address this, we present\r\n\\emph{\\textsc{Co}nditional \\textsc{S}lot \\textsc{A}ttention} (\\textsc{CoSA})\r\nusing a novel concept of \\emph{Grounded Slot Dictionary} (GSD) inspired by\r\nvector quantization. Our proposed GSD comprises (i) canonical object-level\r\nproperty vectors and (ii) parametric Gaussian distributions, which define a\r\nprior over the slots. We demonstrate the benefits of our method in multiple\r\ndownstream tasks such as scene generation, composition, and task adaptation,\r\nwhilst remaining competitive with SA in popular object discovery benchmarks.","lang":"eng"}]},{"file_date_updated":"2024-02-07T14:57:32Z","type":"journal_article","author":[{"first_name":"Max","full_name":"Burg, Max","last_name":"Burg"},{"first_name":"Florian","full_name":"Wenzel, Florian","last_name":"Wenzel"},{"first_name":"Dominik","full_name":"Zietlow, Dominik","last_name":"Zietlow"},{"first_name":"Max","full_name":"Horn, Max","last_name":"Horn"},{"first_name":"Osama","full_name":"Makansi, Osama","last_name":"Makansi"},{"full_name":"Locatello, Francesco","id":"26cfd52f-2483-11ee-8040-88983bcc06d4","last_name":"Locatello","first_name":"Francesco","orcid":"0000-0002-4850-0683"},{"full_name":"Russell, Chris","last_name":"Russell","first_name":"Chris"}],"user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","acknowledgement":"The authors would like to thank Varad Gunjal and Vishaal Udandarao. 
MFB thanks the International Max Planck Research School for Intelligent Systems (IMPRS-IS).","oa":1,"_id":"14949","ddc":["000"],"file":[{"checksum":"af87ddea7908923426365347b9c87ba7","date_created":"2024-02-07T14:57:32Z","file_size":27325153,"relation":"main_file","access_level":"open_access","content_type":"application/pdf","date_updated":"2024-02-07T14:57:32Z","file_name":"Burg_et_al_2023_Image_retrieval_outperforms.pdf","creator":"ptazenko","file_id":"14950"}],"date_created":"2024-02-07T14:57:39Z","status":"public","language":[{"iso":"eng"}],"publication":"Journal of Machine Learning Research","has_accepted_license":"1","month":"12","department":[{"_id":"FrLo"}],"date_updated":"2024-02-12T08:30:21Z","tmp":{"legal_code_url":"https://creativecommons.org/licenses/by/4.0/legalcode","image":"/images/cc_by.png","name":"Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)","short":"CC BY (4.0)"},"title":"Image retrieval outperforms diffusion models on data augmentation","citation":{"ista":"Burg M, Wenzel F, Zietlow D, Horn M, Makansi O, Locatello F, Russell C. 2023. Image retrieval outperforms diffusion models on data augmentation. Journal of Machine Learning Research.","mla":"Burg, Max, et al. “Image Retrieval Outperforms Diffusion Models on Data Augmentation.” <i>Journal of Machine Learning Research</i>, ML Research Press, 2023.","ieee":"M. Burg <i>et al.</i>, “Image retrieval outperforms diffusion models on data augmentation,” <i>Journal of Machine Learning Research</i>. ML Research Press, 2023.","apa":"Burg, M., Wenzel, F., Zietlow, D., Horn, M., Makansi, O., Locatello, F., &#38; Russell, C. (2023). Image retrieval outperforms diffusion models on data augmentation. <i>Journal of Machine Learning Research</i>. ML Research Press.","short":"M. Burg, F. Wenzel, D. Zietlow, M. Horn, O. Makansi, F. Locatello, C. 
Russell, Journal of Machine Learning Research (2023).","chicago":"Burg, Max, Florian Wenzel, Dominik Zietlow, Max Horn, Osama Makansi, Francesco Locatello, and Chris Russell. “Image Retrieval Outperforms Diffusion Models on Data Augmentation.” <i>Journal of Machine Learning Research</i>. ML Research Press, 2023.","ama":"Burg M, Wenzel F, Zietlow D, et al. Image retrieval outperforms diffusion models on data augmentation. <i>Journal of Machine Learning Research</i>. 2023."},"year":"2023","date_published":"2023-12-10T00:00:00Z","article_processing_charge":"No","publisher":"ML Research Press","publication_identifier":{"eissn":["2835-8856"]},"publication_status":"published","article_type":"original","abstract":[{"text":"Many approaches have been proposed to use diffusion models to augment training datasets for downstream tasks, such as classification. However, diffusion models are themselves trained on large datasets, often with noisy annotations, and it remains an open question to which extent these models contribute to downstream classification performance. In particular, it remains unclear if they generalize enough to improve over directly using the additional data of their pre-training process for augmentation. We systematically evaluate a range of existing methods to generate images from diffusion models and study new extensions to assess their benefit for data augmentation. Personalizing diffusion models towards the target data outperforms simpler prompting strategies. However, using the pre-training data of the diffusion model alone, via a simple nearest-neighbor retrieval procedure, leads to even stronger downstream performance. 
Our study explores the potential of diffusion models in generating new training data, and surprisingly finds that these sophisticated models are not yet able to beat a simple and strong image retrieval baseline on simple downstream vision tasks.","lang":"eng"}],"main_file_link":[{"open_access":"1","url":"https://openreview.net/forum?id=xflYdGZMpv"}],"day":"10","oa_version":"Published Version","alternative_title":["TMLR"],"quality_controlled":"1"},{"external_id":{"arxiv":["2311.00664"]},"language":[{"iso":"eng"}],"publication":"arXiv","day":"01","oa_version":"Preprint","date_created":"2024-02-07T15:08:55Z","status":"public","abstract":[{"lang":"eng","text":"While different neural models often exhibit latent spaces that are alike when exposed to semantically related data, this intrinsic similarity is not always immediately discernible. Towards a better understanding of this phenomenon, our work shows how representations learned from these neural modules can be translated between different pre-trained networks via simpler transformations than previously thought. An advantage of this approach is the ability to\r\nestimate these transformations using standard, well-understood algebraic procedures that have closed-form solutions. Our method directly estimates a transformation between two given latent spaces, thereby enabling effective stitching of encoders and decoders without additional training. We extensively validate the adaptability of this translation procedure in different\r\nexperimental settings: across various trainings, domains, architectures (e.g., ResNet, CNN, ViT), and in multiple downstream tasks (classification, reconstruction). 
Notably, we show how it is possible to zero-shot stitch text encoders and vision decoders, or vice-versa, yielding surprisingly good classification performance in this multimodal setting."}],"main_file_link":[{"url":"https://doi.org/10.48550/arXiv.2311.00664","open_access":"1"}],"publication_status":"submitted","_id":"14952","arxiv":1,"doi":"10.48550/arXiv.2311.00664","article_processing_charge":"No","citation":{"ista":"Maiorca V, Moschella L, Norelli A, Fumero M, Locatello F, Rodolà E. Latent space translation via semantic alignment. arXiv, 2311.00664.","mla":"Maiorca, Valentino, et al. “Latent Space Translation via Semantic Alignment.” <i>ArXiv</i>, 2311.00664, doi:<a href=\"https://doi.org/10.48550/arXiv.2311.00664\">10.48550/arXiv.2311.00664</a>.","ieee":"V. Maiorca, L. Moschella, A. Norelli, M. Fumero, F. Locatello, and E. Rodolà, “Latent space translation via semantic alignment,” <i>arXiv</i>. .","apa":"Maiorca, V., Moschella, L., Norelli, A., Fumero, M., Locatello, F., &#38; Rodolà, E. (n.d.). Latent space translation via semantic alignment. <i>arXiv</i>. <a href=\"https://doi.org/10.48550/arXiv.2311.00664\">https://doi.org/10.48550/arXiv.2311.00664</a>","short":"V. Maiorca, L. Moschella, A. Norelli, M. Fumero, F. Locatello, E. Rodolà, ArXiv (n.d.).","chicago":"Maiorca, Valentino, Luca Moschella, Antonio Norelli, Marco Fumero, Francesco Locatello, and Emanuele Rodolà. “Latent Space Translation via Semantic Alignment.” <i>ArXiv</i>, n.d. <a href=\"https://doi.org/10.48550/arXiv.2311.00664\">https://doi.org/10.48550/arXiv.2311.00664</a>.","ama":"Maiorca V, Moschella L, Norelli A, Fumero M, Locatello F, Rodolà E. Latent space translation via semantic alignment. <i>arXiv</i>. 
doi:<a href=\"https://doi.org/10.48550/arXiv.2311.00664\">10.48550/arXiv.2311.00664</a>"},"year":"2023","date_published":"2023-11-01T00:00:00Z","oa":1,"title":"Latent space translation via semantic alignment","type":"preprint","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","author":[{"full_name":"Maiorca, Valentino","last_name":"Maiorca","first_name":"Valentino"},{"full_name":"Moschella, Luca","last_name":"Moschella","first_name":"Luca"},{"first_name":"Antonio","full_name":"Norelli, Antonio","last_name":"Norelli"},{"first_name":"Marco","last_name":"Fumero","full_name":"Fumero, Marco"},{"last_name":"Locatello","id":"26cfd52f-2483-11ee-8040-88983bcc06d4","full_name":"Locatello, Francesco","orcid":"0000-0002-4850-0683","first_name":"Francesco"},{"full_name":"Rodolà, Emanuele","last_name":"Rodolà","first_name":"Emanuele"}],"acknowledgement":"This work is supported by the ERC grant no.802554 (SPECGEO), PRIN 2020 project no.2020TA3K9N (LEGO.AI), and PNRR MUR project PE0000013-FAIR. Francesco\r\nLocatello did not contribute to this work at Amazon.","article_number":"2311.00664","month":"11","department":[{"_id":"FrLo"}],"date_updated":"2024-02-12T09:40:23Z"},{"main_file_link":[{"open_access":"1","url":"https://doi.org/10.48550/arXiv.2310.18123"}],"abstract":[{"text":"This paper provides statistical sample complexity bounds for score-matching and its applications in causal discovery. We demonstrate that accurate estimation of the score function is achievable by training a standard deep ReLU neural network using stochastic gradient descent. We establish bounds on the error rate of recovering causal relationships using the score-matching-based causal discovery method of Rolland et al. [2022], assuming a sufficiently good estimation of the score function. 
Finally, we analyze the upper bound of score-matching estimation within the score-based generative modeling, which has been applied for causal discovery but is also of independent interest within the domain of generative models.","lang":"eng"}],"oa_version":"Preprint","day":"27","date_created":"2024-02-07T15:11:11Z","status":"public","external_id":{"arxiv":["2310.18123"]},"publication":"arXiv","language":[{"iso":"eng"}],"date_updated":"2024-02-12T09:45:58Z","month":"10","department":[{"_id":"FrLo"}],"title":"Sample complexity bounds for score-matching: Causal discovery and generative modeling","author":[{"last_name":"Zhu","full_name":"Zhu, Zhenyu","first_name":"Zhenyu"},{"first_name":"Francesco","orcid":"0000-0002-4850-0683","full_name":"Locatello, Francesco","id":"26cfd52f-2483-11ee-8040-88983bcc06d4","last_name":"Locatello"},{"full_name":"Cevher, Volkan","last_name":"Cevher","first_name":"Volkan"}],"article_number":"2310.18123","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","acknowledgement":"We are thankful to the reviewers for providing constructive feedback and Kun Zhang and Dominik Janzing for helpful discussion on the special case of deterministic children. This work was supported by Hasler Foundation Program: Hasler Responsible AI (project number 21043). This work was supported by the Swiss National Science Foundation (SNSF) under grant number 200021_205011. Francesco Locatello did not contribute to this work at Amazon. ","type":"preprint","oa":1,"year":"2023","date_published":"2023-10-27T00:00:00Z","citation":{"ista":"Zhu Z, Locatello F, Cevher V. Sample complexity bounds for score-matching: Causal discovery and generative modeling. arXiv, 2310.18123.","mla":"Zhu, Zhenyu, et al. “Sample Complexity Bounds for Score-Matching: Causal Discovery and Generative Modeling.” <i>ArXiv</i>, 2310.18123, doi:<a href=\"https://doi.org/10.48550/arXiv.2310.18123\">10.48550/arXiv.2310.18123</a>.","ieee":"Z. Zhu, F. Locatello, and V. 
Cevher, “Sample complexity bounds for score-matching: Causal discovery and generative modeling,” <i>arXiv</i>. .","apa":"Zhu, Z., Locatello, F., &#38; Cevher, V. (n.d.). Sample complexity bounds for score-matching: Causal discovery and generative modeling. <i>arXiv</i>. <a href=\"https://doi.org/10.48550/arXiv.2310.18123\">https://doi.org/10.48550/arXiv.2310.18123</a>","short":"Z. Zhu, F. Locatello, V. Cevher, ArXiv (n.d.).","chicago":"Zhu, Zhenyu, Francesco Locatello, and Volkan Cevher. “Sample Complexity Bounds for Score-Matching: Causal Discovery and Generative Modeling.” <i>ArXiv</i>, n.d. <a href=\"https://doi.org/10.48550/arXiv.2310.18123\">https://doi.org/10.48550/arXiv.2310.18123</a>.","ama":"Zhu Z, Locatello F, Cevher V. Sample complexity bounds for score-matching: Causal discovery and generative modeling. <i>arXiv</i>. doi:<a href=\"https://doi.org/10.48550/arXiv.2310.18123\">10.48550/arXiv.2310.18123</a>"},"_id":"14953","publication_status":"submitted","article_processing_charge":"No","doi":"10.48550/arXiv.2310.18123","arxiv":1},{"day":"20","oa_version":"Preprint","date_created":"2024-02-07T15:11:56Z","status":"public","external_id":{"arxiv":["2310.13387"]},"language":[{"iso":"eng"}],"publication":"arXiv","main_file_link":[{"url":"https://doi.org/10.48550/arXiv.2310.13387","open_access":"1"}],"abstract":[{"text":"When domain knowledge is limited and experimentation is restricted by ethical, financial, or time constraints, practitioners turn to observational causal discovery methods to recover the causal structure, exploiting the statistical properties of their data. Because causal discovery without further assumptions is an ill-posed problem, each algorithm comes with its own set of\r\nusually untestable assumptions, some of which are hard to meet in real datasets. Motivated by these considerations, this paper extensively benchmarks the empirical performance of recent causal discovery methods on observational i.i.d. 
data generated under different background conditions, allowing for violations of the critical assumptions required by each selected approach. Our experimental findings show that score matching-based methods demonstrate\r\nsurprising performance in the false positive and false negative rate of the inferred graph in these challenging scenarios, and we provide theoretical insights into their performance. This work is also the first effort to benchmark the stability of causal discovery algorithms with respect to the values of their hyperparameters. Finally, we hope this paper will set a new standard for the evaluation of causal discovery methods and can serve as an accessible entry point for practitioners interested in the field, highlighting the empirical implications of different algorithm choices.","lang":"eng"}],"citation":{"chicago":"Montagna, Francesco, Atalanti A. Mastakouri, Elias Eulig, Nicoletta Noceti, Lorenzo Rosasco, Dominik Janzing, Bryon Aragam, and Francesco Locatello. “Assumption Violations in Causal Discovery and the Robustness of Score Matching.” <i>ArXiv</i>, n.d. <a href=\"https://doi.org/10.48550/arXiv.2310.13387\">https://doi.org/10.48550/arXiv.2310.13387</a>.","ama":"Montagna F, Mastakouri AA, Eulig E, et al. Assumption violations in causal discovery and the robustness of score matching. <i>arXiv</i>. doi:<a href=\"https://doi.org/10.48550/arXiv.2310.13387\">10.48550/arXiv.2310.13387</a>","apa":"Montagna, F., Mastakouri, A. A., Eulig, E., Noceti, N., Rosasco, L., Janzing, D., … Locatello, F. (n.d.). Assumption violations in causal discovery and the robustness of score matching. <i>arXiv</i>. <a href=\"https://doi.org/10.48550/arXiv.2310.13387\">https://doi.org/10.48550/arXiv.2310.13387</a>","short":"F. Montagna, A.A. Mastakouri, E. Eulig, N. Noceti, L. Rosasco, D. Janzing, B. Aragam, F. Locatello, ArXiv (n.d.).","mla":"Montagna, Francesco, et al. 
“Assumption Violations in Causal Discovery and the Robustness of Score Matching.” <i>ArXiv</i>, 2310.13387, doi:<a href=\"https://doi.org/10.48550/arXiv.2310.13387\">10.48550/arXiv.2310.13387</a>.","ieee":"F. Montagna <i>et al.</i>, “Assumption violations in causal discovery and the robustness of score matching,” <i>arXiv</i>. .","ista":"Montagna F, Mastakouri AA, Eulig E, Noceti N, Rosasco L, Janzing D, Aragam B, Locatello F. Assumption violations in causal discovery and the robustness of score matching. arXiv, 2310.13387."},"date_published":"2023-10-20T00:00:00Z","oa":1,"year":"2023","publication_status":"submitted","_id":"14954","doi":"10.48550/arXiv.2310.13387","arxiv":1,"article_processing_charge":"No","department":[{"_id":"FrLo"}],"month":"10","date_updated":"2024-02-12T09:51:15Z","title":"Assumption violations in causal discovery and the robustness of score matching","type":"preprint","acknowledgement":"We thank Kun Zhang and Carl-Johann Simon-Gabriel for the insightful discussions. This work\r\nhas been supported by AFOSR, grant n. FA8655-20-1-7035. FM is supported by Programma\r\nOperativo Nazionale ricerca e innovazione 2014-2020. FM partially contributed to this work during an internship at Amazon Web Services with FL. 
FL partially contributed while at AWS.","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","author":[{"first_name":"Francesco","full_name":"Montagna, Francesco","last_name":"Montagna"},{"first_name":"Atalanti A.","full_name":"Mastakouri, Atalanti A.","last_name":"Mastakouri"},{"last_name":"Eulig","full_name":"Eulig, Elias","first_name":"Elias"},{"last_name":"Noceti","full_name":"Noceti, Nicoletta","first_name":"Nicoletta"},{"first_name":"Lorenzo","last_name":"Rosasco","full_name":"Rosasco, Lorenzo"},{"first_name":"Dominik","last_name":"Janzing","full_name":"Janzing, Dominik"},{"first_name":"Bryon","last_name":"Aragam","full_name":"Aragam, Bryon"},{"id":"26cfd52f-2483-11ee-8040-88983bcc06d4","full_name":"Locatello, Francesco","last_name":"Locatello","first_name":"Francesco","orcid":"0000-0002-4850-0683"}],"article_number":"2310.13387"},{"file":[{"checksum":"484efc27bda75ed6666044989695d9b6","date_created":"2024-02-13T08:50:53Z","file_size":552357,"relation":"main_file","access_level":"open_access","success":1,"content_type":"application/pdf","file_name":"2023_CRL_Xu.pdf","date_updated":"2024-02-13T08:50:53Z","file_id":"14982","creator":"dernst"}],"ddc":["000"],"has_accepted_license":"1","language":[{"iso":"eng"}],"publication":"Causal Representation Learning Workshop at NeurIPS 2023","status":"public","date_created":"2024-02-07T15:17:51Z","type":"conference","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","acknowledgement":"This work was initiated at the Second Bellairs Workshop on Causality held at the Bellairs Research Institute, January 6–13, 2022; we thank all workshop participants for providing a stimulating research environment. The research of DX and SM was supported by the Air Force Office of Scientific Research under award number FA8655-22-1-7155. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force. 
We also thank SURF for the support in using the Dutch National Supercomputer Snellius. DY was supported by an Amazon fellowship and the International Max Planck Research School for Intelligent Systems (IMPRS-IS). Work done outside of Amazon. SL was supported by an IVADO excellence PhD scholarship and by Samsung Electronics Co., Ltd. JvK acknowledges support from the German Federal Ministry of Education and Research (BMBF)\r\nthrough the Tübingen AI Center (FKZ: 01IS18039B).\r\n","author":[{"first_name":"Danru","last_name":"Xu","full_name":"Xu, Danru"},{"last_name":"Yao","full_name":"Yao, Dingling","id":"d3e02e50-48a8-11ee-8f62-c108061797fa","first_name":"Dingling"},{"last_name":"Lachapelle","full_name":"Lachapelle, Sebastien","first_name":"Sebastien"},{"last_name":"Taslakian","full_name":"Taslakian, Perouz","first_name":"Perouz"},{"first_name":"Julius","full_name":"von Kügelgen, Julius","last_name":"von Kügelgen"},{"full_name":"Locatello, Francesco","id":"26cfd52f-2483-11ee-8040-88983bcc06d4","last_name":"Locatello","orcid":"0000-0002-4850-0683","first_name":"Francesco"},{"first_name":"Sara","last_name":"Magliacane","full_name":"Magliacane, Sara"}],"file_date_updated":"2024-02-13T08:50:53Z","OA_place":"repository","_id":"14958","oa":1,"conference":{"location":"New Orleans, LA, United States","start_date":"2023-12-15","end_date":"2023-12-15","name":"CRL: Causal Representation Learning Workshop at NeurIPS"},"OA_type":"green","main_file_link":[{"url":"https://openreview.net/forum?id=Whr6uobelR","open_access":"1"}],"abstract":[{"text":"Causal representation learning (CRL) aims at identifying high-level causal variables from low-level data, e.g. images. Current methods usually assume that all causal variables are captured in the high-dimensional observations. 
In this work, we focus on learning causal representations from data under partial observability, i.e., when some of the causal variables are not observed in the measurements, and the set of masked variables changes across the different samples. We introduce some initial theoretical results for identifying causal variables under partial observability by exploiting a sparsity regularizer, focusing in particular on the linear and piecewise linear mixing function case. We provide a theorem that allows us to identify the causal variables up to permutation and element-wise linear transformations in the linear case and a lemma that allows us to identify causal variables up to linear transformation in the piecewise case. Finally, we provide a conjecture that would allow us to identify the causal variables up to permutation and element-wise linear transformations also in the piecewise linear case. We test the theorem and conjecture on simulated data, showing the effectiveness of our method.","lang":"eng"}],"quality_controlled":"1","oa_version":"Published Version","day":"05","title":"A sparsity principle for partially observable causal representation learning","article_number":"54","tmp":{"legal_code_url":"https://creativecommons.org/licenses/by/4.0/legalcode","image":"/images/cc_by.png","name":"Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)","short":"CC BY (4.0)"},"month":"12","department":[{"_id":"FrLo"}],"date_updated":"2025-02-04T12:37:34Z","publication_status":"published","publisher":"OpenReview","article_processing_charge":"No","citation":{"ieee":"D. Xu <i>et al.</i>, “A sparsity principle for partially observable causal representation learning,” in <i>Causal Representation Learning Workshop at NeurIPS 2023</i>, New Orleans, LA, United States, 2023.","mla":"Xu, Danru, et al. 
“A Sparsity Principle for Partially Observable Causal Representation Learning.” <i>Causal Representation Learning Workshop at NeurIPS 2023</i>, 54, OpenReview, 2023.","ista":"Xu D, Yao D, Lachapelle S, Taslakian P, von Kügelgen J, Locatello F, Magliacane S. 2023. A sparsity principle for partially observable causal representation learning. Causal Representation Learning Workshop at NeurIPS 2023. CRL: Causal Representation Learning Workshop at NeurIPS, 54.","ama":"Xu D, Yao D, Lachapelle S, et al. A sparsity principle for partially observable causal representation learning. In: <i>Causal Representation Learning Workshop at NeurIPS 2023</i>. OpenReview; 2023.","chicago":"Xu, Danru, Dingling Yao, Sebastien Lachapelle, Perouz Taslakian, Julius von Kügelgen, Francesco Locatello, and Sara Magliacane. “A Sparsity Principle for Partially Observable Causal Representation Learning.” In <i>Causal Representation Learning Workshop at NeurIPS 2023</i>. OpenReview, 2023.","short":"D. Xu, D. Yao, S. Lachapelle, P. Taslakian, J. von Kügelgen, F. Locatello, S. Magliacane, in:, Causal Representation Learning Workshop at NeurIPS 2023, OpenReview, 2023.","apa":"Xu, D., Yao, D., Lachapelle, S., Taslakian, P., von Kügelgen, J., Locatello, F., &#38; Magliacane, S. (2023). A sparsity principle for partially observable causal representation learning. In <i>Causal Representation Learning Workshop at NeurIPS 2023</i>. New Orleans, LA, United States: OpenReview."},"date_published":"2023-12-05T00:00:00Z","year":"2023"},{"publication":"arXiv","language":[{"iso":"eng"}],"external_id":{"arxiv":["2310.14246"]},"status":"public","date_created":"2024-02-08T15:31:46Z","oa_version":"Preprint","day":"22","main_file_link":[{"url":"https://doi.org/10.48550/arXiv.2310.14246","open_access":"1"}],"abstract":[{"text":"The use of simulated data in the field of causal discovery is ubiquitous due to the scarcity of annotated real data. 
Recently, Reisach et al., 2021 highlighted the emergence of patterns in simulated linear data, which displays increasing marginal variance in the causal direction. As an ablation in their experiments, Montagna et al., 2023 found that similar patterns may emerge in\r\nnonlinear models for the variance of the score vector $\\nabla \\log p_{\\mathbf{X}}$, and introduced the ScoreSort algorithm. In this work, we formally define and characterize this score-sortability pattern of nonlinear additive noise models. We find that it defines a class of identifiable (bivariate) causal models overlapping with nonlinear additive noise models. We\r\ntheoretically demonstrate the advantages of ScoreSort in terms of statistical efficiency compared to prior state-of-the-art score matching-based methods and empirically show the score-sortability of the most common synthetic benchmarks in the literature. Our findings highlight (1) the lack of diversity in the data as an important limitation in the evaluation of nonlinear causal discovery approaches, (2) the importance of thoroughly testing different settings within a problem class, and (3) the importance of analyzing statistical properties in\r\ncausal discovery, where research is often limited to defining identifiability conditions of the model.","lang":"eng"}],"article_processing_charge":"No","doi":"10.48550/arXiv.2310.14246","arxiv":1,"_id":"14961","publication_status":"submitted","oa":1,"date_published":"2023-10-22T00:00:00Z","year":"2023","citation":{"ista":"Montagna F, Noceti N, Rosasco L, Locatello F. Shortcuts for causal discovery of nonlinear models by score matching. arXiv, 2310.14246.","ieee":"F. Montagna, N. Noceti, L. Rosasco, and F. Locatello, “Shortcuts for causal discovery of nonlinear models by score matching,” <i>arXiv</i>. .","mla":"Montagna, Francesco, et al. 
“Shortcuts for Causal Discovery of Nonlinear Models by Score Matching.” <i>ArXiv</i>, 2310.14246, doi:<a href=\"https://doi.org/10.48550/arXiv.2310.14246\">10.48550/arXiv.2310.14246</a>.","short":"F. Montagna, N. Noceti, L. Rosasco, F. Locatello, ArXiv (n.d.).","apa":"Montagna, F., Noceti, N., Rosasco, L., &#38; Locatello, F. (n.d.). Shortcuts for causal discovery of nonlinear models by score matching. <i>arXiv</i>. <a href=\"https://doi.org/10.48550/arXiv.2310.14246\">https://doi.org/10.48550/arXiv.2310.14246</a>","ama":"Montagna F, Noceti N, Rosasco L, Locatello F. Shortcuts for causal discovery of nonlinear models by score matching. <i>arXiv</i>. doi:<a href=\"https://doi.org/10.48550/arXiv.2310.14246\">10.48550/arXiv.2310.14246</a>","chicago":"Montagna, Francesco, Nicoletta Noceti, Lorenzo Rosasco, and Francesco Locatello. “Shortcuts for Causal Discovery of Nonlinear Models by Score Matching.” <i>ArXiv</i>, n.d. <a href=\"https://doi.org/10.48550/arXiv.2310.14246\">https://doi.org/10.48550/arXiv.2310.14246</a>."},"author":[{"first_name":"Francesco","full_name":"Montagna, Francesco","last_name":"Montagna"},{"first_name":"Nicoletta","last_name":"Noceti","full_name":"Noceti, Nicoletta"},{"last_name":"Rosasco","full_name":"Rosasco, Lorenzo","first_name":"Lorenzo"},{"first_name":"Francesco","orcid":"0000-0002-4850-0683","id":"26cfd52f-2483-11ee-8040-88983bcc06d4","full_name":"Locatello, Francesco","last_name":"Locatello"}],"user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","article_number":"2310.14246","type":"preprint","title":"Shortcuts for causal discovery of nonlinear models by score matching","date_updated":"2024-10-09T21:08:10Z","corr_author":"1","month":"10","department":[{"_id":"FrLo"}]},{"main_file_link":[{"open_access":"1","url":"https://doi.org/10.48550/arXiv.2309.09858"}],"abstract":[{"lang":"eng","text":"In this paper, we show that recent advances in video representation learning\r\nand pre-trained vision-language models allow for substantial improvements 
in\r\nself-supervised video object localization. We propose a method that first\r\nlocalizes objects in videos via a slot attention approach and then assigns text\r\nto the obtained slots. The latter is achieved in an unsupervised way by reading\r\nlocalized semantic information from the pre-trained CLIP model. The resulting\r\nvideo object localization is entirely unsupervised apart from the implicit\r\nannotation contained in CLIP, and it is effectively the first unsupervised\r\napproach that yields good results on regular video benchmarks."}],"status":"public","date_created":"2024-02-08T15:33:39Z","day":"18","oa_version":"Preprint","publication":"arXiv","language":[{"iso":"eng"}],"external_id":{"arxiv":["2309.09858"]},"date_updated":"2024-02-12T10:12:22Z","department":[{"_id":"FrLo"}],"month":"09","author":[{"first_name":"Ke","last_name":"Fan","full_name":"Fan, Ke"},{"first_name":"Zechen","full_name":"Bai, Zechen","last_name":"Bai"},{"full_name":"Xiao, Tianjun","last_name":"Xiao","first_name":"Tianjun"},{"first_name":"Dominik","last_name":"Zietlow","full_name":"Zietlow, Dominik"},{"full_name":"Horn, Max","last_name":"Horn","first_name":"Max"},{"first_name":"Zixu","full_name":"Zhao, Zixu","last_name":"Zhao"},{"first_name":"Carl-Johann","last_name":"Simon-Gabriel","full_name":"Simon-Gabriel, Carl-Johann"},{"last_name":"Shou","full_name":"Shou, Mike Zheng","first_name":"Mike Zheng"},{"last_name":"Locatello","full_name":"Locatello, Francesco","id":"26cfd52f-2483-11ee-8040-88983bcc06d4","orcid":"0000-0002-4850-0683","first_name":"Francesco"},{"last_name":"Schiele","full_name":"Schiele, Bernt","first_name":"Bernt"},{"first_name":"Thomas","full_name":"Brox, Thomas","last_name":"Brox"},{"full_name":"Zhang, Zheng","last_name":"Zhang","first_name":"Zheng"},{"full_name":"Fu, Yanwei","last_name":"Fu","first_name":"Yanwei"},{"last_name":"He","full_name":"He, 
Tong","first_name":"Tong"}],"user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","article_number":"2309.09858","type":"preprint","title":"Unsupervised open-vocabulary object localization in videos","extern":"1","year":"2023","oa":1,"date_published":"2023-09-18T00:00:00Z","citation":{"ama":"Fan K, Bai Z, Xiao T, et al. Unsupervised open-vocabulary object localization in videos. <i>arXiv</i>. doi:<a href=\"https://doi.org/10.48550/arXiv.2309.09858\">10.48550/arXiv.2309.09858</a>","chicago":"Fan, Ke, Zechen Bai, Tianjun Xiao, Dominik Zietlow, Max Horn, Zixu Zhao, Carl-Johann Simon-Gabriel, et al. “Unsupervised Open-Vocabulary Object Localization in Videos.” <i>ArXiv</i>, n.d. <a href=\"https://doi.org/10.48550/arXiv.2309.09858\">https://doi.org/10.48550/arXiv.2309.09858</a>.","short":"K. Fan, Z. Bai, T. Xiao, D. Zietlow, M. Horn, Z. Zhao, C.-J. Simon-Gabriel, M.Z. Shou, F. Locatello, B. Schiele, T. Brox, Z. Zhang, Y. Fu, T. He, ArXiv (n.d.).","apa":"Fan, K., Bai, Z., Xiao, T., Zietlow, D., Horn, M., Zhao, Z., … He, T. (n.d.). Unsupervised open-vocabulary object localization in videos. <i>arXiv</i>. <a href=\"https://doi.org/10.48550/arXiv.2309.09858\">https://doi.org/10.48550/arXiv.2309.09858</a>","ieee":"K. Fan <i>et al.</i>, “Unsupervised open-vocabulary object localization in videos,” <i>arXiv</i>. .","mla":"Fan, Ke, et al. “Unsupervised Open-Vocabulary Object Localization in Videos.” <i>ArXiv</i>, 2309.09858, doi:<a href=\"https://doi.org/10.48550/arXiv.2309.09858\">10.48550/arXiv.2309.09858</a>.","ista":"Fan K, Bai Z, Xiao T, Zietlow D, Horn M, Zhao Z, Simon-Gabriel C-J, Shou MZ, Locatello F, Schiele B, Brox T, Zhang Z, Fu Y, He T. Unsupervised open-vocabulary object localization in videos. 
arXiv, 2309.09858."},"article_processing_charge":"No","doi":"10.48550/arXiv.2309.09858","arxiv":1,"_id":"14962","publication_status":"submitted"},{"publication":"arXiv","language":[{"iso":"eng"}],"external_id":{"arxiv":["2309.00233"]},"status":"public","date_created":"2024-02-08T15:34:43Z","day":"01","oa_version":"Preprint","main_file_link":[{"open_access":"1","url":"https://doi.org/10.48550/arXiv.2309.00233"}],"abstract":[{"lang":"eng","text":"Unsupervised object-centric learning methods allow the partitioning of scenes\r\ninto entities without additional localization information and are excellent\r\ncandidates for reducing the annotation burden of multiple-object tracking (MOT)\r\npipelines. Unfortunately, they lack two key properties: objects are often split\r\ninto parts and are not consistently tracked over time. In fact,\r\nstate-of-the-art models achieve pixel-level accuracy and temporal consistency\r\nby relying on supervised object detection with additional ID labels for the\r\nassociation through time. This paper proposes a video object-centric model for\r\nMOT. It consists of an index-merge module that adapts the object-centric slots\r\ninto detection outputs and an object memory module that builds complete object\r\nprototypes to handle occlusions. Benefiting from object-centric learning, we\r\nonly require sparse detection labels (0%-6.25%) for object localization and\r\nfeature binding. Relying on our self-supervised\r\nExpectation-Maximization-inspired loss for object association, our approach\r\nrequires no ID labels. 
Our experiments significantly narrow the gap between the\r\nexisting object-centric model and the fully supervised state-of-the-art and\r\noutperform several unsupervised trackers."}],"article_processing_charge":"No","arxiv":1,"doi":"10.48550/arXiv.2309.00233","_id":"14963","publication_status":"submitted","oa":1,"date_published":"2023-09-01T00:00:00Z","year":"2023","citation":{"chicago":"Zhao, Zixu, Jiaze Wang, Max Horn, Yizhuo Ding, Tong He, Zechen Bai, Dominik Zietlow, et al. “Object-Centric Multiple Object Tracking.” <i>ArXiv</i>, n.d. <a href=\"https://doi.org/10.48550/arXiv.2309.00233\">https://doi.org/10.48550/arXiv.2309.00233</a>.","ama":"Zhao Z, Wang J, Horn M, et al. Object-centric multiple object tracking. <i>arXiv</i>. doi:<a href=\"https://doi.org/10.48550/arXiv.2309.00233\">10.48550/arXiv.2309.00233</a>","apa":"Zhao, Z., Wang, J., Horn, M., Ding, Y., He, T., Bai, Z., … Xiao, T. (n.d.). Object-centric multiple object tracking. <i>arXiv</i>. <a href=\"https://doi.org/10.48550/arXiv.2309.00233\">https://doi.org/10.48550/arXiv.2309.00233</a>","short":"Z. Zhao, J. Wang, M. Horn, Y. Ding, T. He, Z. Bai, D. Zietlow, C.-J. Simon-Gabriel, B. Shuai, Z. Tu, T. Brox, B. Schiele, Y. Fu, F. Locatello, Z. Zhang, T. Xiao, ArXiv (n.d.).","mla":"Zhao, Zixu, et al. “Object-Centric Multiple Object Tracking.” <i>ArXiv</i>, 2309.00233, doi:<a href=\"https://doi.org/10.48550/arXiv.2309.00233\">10.48550/arXiv.2309.00233</a>.","ieee":"Z. Zhao <i>et al.</i>, “Object-centric multiple object tracking,” <i>arXiv</i>. .","ista":"Zhao Z, Wang J, Horn M, Ding Y, He T, Bai Z, Zietlow D, Simon-Gabriel C-J, Shuai B, Tu Z, Brox T, Schiele B, Fu Y, Locatello F, Zhang Z, Xiao T. Object-centric multiple object tracking. 
arXiv, 2309.00233."},"user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","author":[{"full_name":"Zhao, Zixu","last_name":"Zhao","first_name":"Zixu"},{"first_name":"Jiaze","full_name":"Wang, Jiaze","last_name":"Wang"},{"first_name":"Max","last_name":"Horn","full_name":"Horn, Max"},{"full_name":"Ding, Yizhuo","last_name":"Ding","first_name":"Yizhuo"},{"full_name":"He, Tong","last_name":"He","first_name":"Tong"},{"full_name":"Bai, Zechen","last_name":"Bai","first_name":"Zechen"},{"full_name":"Zietlow, Dominik","last_name":"Zietlow","first_name":"Dominik"},{"last_name":"Simon-Gabriel","full_name":"Simon-Gabriel, Carl-Johann","first_name":"Carl-Johann"},{"first_name":"Bing","full_name":"Shuai, Bing","last_name":"Shuai"},{"full_name":"Tu, Zhuowen","last_name":"Tu","first_name":"Zhuowen"},{"first_name":"Thomas","full_name":"Brox, Thomas","last_name":"Brox"},{"last_name":"Schiele","full_name":"Schiele, Bernt","first_name":"Bernt"},{"first_name":"Yanwei","last_name":"Fu","full_name":"Fu, Yanwei"},{"full_name":"Locatello, Francesco","id":"26cfd52f-2483-11ee-8040-88983bcc06d4","last_name":"Locatello","first_name":"Francesco","orcid":"0000-0002-4850-0683"},{"full_name":"Zhang, Zheng","last_name":"Zhang","first_name":"Zheng"},{"full_name":"Xiao, Tianjun","last_name":"Xiao","first_name":"Tianjun"}],"article_number":"2309.00233","type":"preprint","title":"Object-centric multiple object tracking","extern":"1","date_updated":"2024-02-12T10:16:21Z","department":[{"_id":"FrLo"}],"month":"09"},{"day":"25","oa_version":"Published Version","abstract":[{"text":"A method of determining a correspondence between a first biological property of a cell and one or more further biological properties of cells is provided. The first biological property and the further biological properties are determined by different analysis techniques and each are contained in a respective one of a plurality of sets of biological properties. 
The method includes the steps of: converting the plurality of sets of biological properties into corresponding representations in a representation format which is invariant to the technologies used to derive the biological properties; determining, in said representation format, a representation from each of the converted sets of further biological properties which most closely matches the first representation of the first biological property; and re-converting the determined representations from the representation format back to the biological properties associated with the determined representations and thereby determining a correspondence between the first biological property and each of the further biological properties.","lang":"eng"}],"article_processing_charge":"No","citation":{"chicago":"Ficek, Joanna, Kjong-Van Lehmann, Francesco Locatello, Gunnar  Raetsch, and Stefan Stark. “Methods of Determining Correspondences between Biological Properties of Cells,” 2023.","ama":"Ficek J, Lehmann K-V, Locatello F, Raetsch G, Stark S. Methods of determining correspondences between biological properties of cells. 2023.","apa":"Ficek, J., Lehmann, K.-V., Locatello, F., Raetsch, G., &#38; Stark, S. (2023). Methods of determining correspondences between biological properties of cells.","short":"J. Ficek, K.-V. Lehmann, F. Locatello, G. Raetsch, S. Stark, (2023).","mla":"Ficek, Joanna, et al. <i>Methods of Determining Correspondences between Biological Properties of Cells</i>. 2023.","ieee":"J. Ficek, K.-V. Lehmann, F. Locatello, G. Raetsch, and S. Stark, “Methods of determining correspondences between biological properties of cells.” 2023.","ista":"Ficek J, Lehmann K-V, Locatello F, Raetsch G, Stark S. 2023. 
Methods of determining correspondences between biological properties of cells."},"year":"2023","date_published":"2023-05-25T00:00:00Z","application_date":"2021-04-21","publication_date":"2023-05-25","title":"Methods of determining correspondences between biological properties of cells","department":[{"_id":"FrLo"}],"month":"05","date_updated":"2025-01-29T10:53:48Z","has_accepted_license":"1","status":"public","date_created":"2024-02-08T15:52:21Z","applicant":["ETH Zürich"],"page":"9","ddc":["540"],"file":[{"access_level":"open_access","success":1,"relation":"main_file","file_size":2893462,"date_created":"2024-02-08T15:41:51Z","checksum":"55ed444b176b48e4fb4d609ea895de36","file_id":"14966","creator":"ptazenko","date_updated":"2024-02-08T15:41:51Z","file_name":"Patent_FrLo_US20230162818A1.pdf","content_type":"application/pdf"}],"OA_place":"repository","_id":"14965","oa":1,"ipn":"US20230162818A1","type":"patent","author":[{"last_name":"Ficek","full_name":"Ficek, Joanna","first_name":"Joanna"},{"first_name":"Kjong-Van","last_name":"Lehmann","full_name":"Lehmann, Kjong-Van"},{"id":"26cfd52f-2483-11ee-8040-88983bcc06d4","full_name":"Locatello, Francesco","last_name":"Locatello","first_name":"Francesco","orcid":"0000-0002-4850-0683"},{"full_name":"Raetsch, Gunnar ","last_name":"Raetsch","first_name":"Gunnar "},{"first_name":"Stefan","full_name":"Stark, Stefan","last_name":"Stark"}],"user_id":"8b945eb4-e2f2-11eb-945a-df72226e66a9","application_number":"PCT/EP2021/060318","extern":"1","ipc":"C12Q1/68 ; G06V10/82 ; G06V20/69 ; G16B40/30","file_date_updated":"2024-02-08T15:41:51Z"},{"file":[{"relation":"main_file","file_size":215629,"access_level":"open_access","checksum":"105ff58e55de866ce76967f3a95e82f7","date_created":"2024-02-08T16:03:08Z","file_id":"14975","creator":"ptazenko","content_type":"application/pdf","file_name":"CLeaR23_roundtable_discussion.pdf","date_updated":"2024-02-08T16:03:08Z"}],"abstract":[{"text":"The field of machine learning and AI has witnessed 
remarkable breakthroughs with the emergence of LLMs, which have also sparked a lively debate in the causal community. As researchers in this field, we are interested in exploring how LLMs relate to causality research, and how we can leverage the technology to advance it. At the second Conference on Causal Learning and Reasoning (CLeaR 2023), we held a round table discussion to gather and integrate the diverse perspectives of the CLeaR community on this topic.\r\nThere is a general consensus that, as of CLeaR 2023, LLMs are not yet capable of causal reasoning at the current\r\nstage but have considerable potential when combined with publicly available information. Enhancing causal machine learning is vital not only for its own sake but also to help LLMs improve their performance, especially regarding trustworthiness. In this document, we present both the summary and the raw outcome of the round table discussion. We acknowledge that with the progress of both fields, the opportunities and impact may rapidly change. 
We will repeat the same exercise in CLeaR 2024 to document the evolution.","lang":"eng"}],"ddc":["000"],"conference":{"name":"CLeaR: Conference on Causal Learning and Reasoning","end_date":"2023-04-14","location":"Tübingen, Germany","start_date":"2023-04-11"},"oa_version":"Submitted Version","day":"01","status":"public","date_created":"2024-02-08T16:03:18Z","has_accepted_license":"1","quality_controlled":"1","language":[{"iso":"eng"}],"publication":"2nd Conference on Causal Learning and Reasoning","file_date_updated":"2024-02-08T16:03:08Z","month":"05","department":[{"_id":"FrLo"}],"date_updated":"2025-08-05T11:19:37Z","extern":"1","title":"Causality in the time of LLMs: Round table discussion results of CLeaR 2023","type":"conference","author":[{"last_name":"Zhang","full_name":"Zhang, Cheng","first_name":"Cheng"},{"first_name":"Dominik","last_name":"Janzing","full_name":"Janzing, Dominik"},{"first_name":"Mihaela ","full_name":"van der Schaar, Mihaela ","last_name":"van der Schaar"},{"first_name":"Francesco","orcid":"0000-0002-4850-0683","last_name":"Locatello","full_name":"Locatello, Francesco","id":"26cfd52f-2483-11ee-8040-88983bcc06d4"},{"first_name":"Peter","last_name":"Spirtes","full_name":"Spirtes, Peter"},{"full_name":"Zhang, Kun","last_name":"Zhang","first_name":"Kun"},{"last_name":"Schölkopf","full_name":"Schölkopf, Bernhard","first_name":"Bernhard"},{"orcid":"0000-0002-7008-0216","first_name":"Caroline","last_name":"Uhler","id":"49ADD78E-F248-11E8-B48F-1D18A9856A87","full_name":"Uhler, Caroline"}],"user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","citation":{"chicago":"Zhang, Cheng, Dominik Janzing, Mihaela  van der Schaar, Francesco Locatello, Peter Spirtes, Kun Zhang, Bernhard Schölkopf, and Caroline Uhler. “Causality in the Time of LLMs: Round Table Discussion Results of CLeaR 2023.” In <i>2nd Conference on Causal Learning and Reasoning</i>, n.d.","ama":"Zhang C, Janzing D, van der Schaar M, et al. 
Causality in the time of LLMs: Round table discussion results of CLeaR 2023. In: <i>2nd Conference on Causal Learning and Reasoning</i>.","apa":"Zhang, C., Janzing, D., van der Schaar, M., Locatello, F., Spirtes, P., Zhang, K., … Uhler, C. (n.d.). Causality in the time of LLMs: Round table discussion results of CLeaR 2023. In <i>2nd Conference on Causal Learning and Reasoning</i>. Tübingen, Germany.","short":"C. Zhang, D. Janzing, M. van der Schaar, F. Locatello, P. Spirtes, K. Zhang, B. Schölkopf, C. Uhler, in:, 2nd Conference on Causal Learning and Reasoning, n.d.","mla":"Zhang, Cheng, et al. “Causality in the Time of LLMs: Round Table Discussion Results of CLeaR 2023.” <i>2nd Conference on Causal Learning and Reasoning</i>.","ieee":"C. Zhang <i>et al.</i>, “Causality in the time of LLMs: Round table discussion results of CLeaR 2023,” in <i>2nd Conference on Causal Learning and Reasoning</i>, Tübingen, Germany.","ista":"Zhang C, Janzing D, van der Schaar M, Locatello F, Spirtes P, Zhang K, Schölkopf B, Uhler C. Causality in the time of LLMs: Round table discussion results of CLeaR 2023. 2nd Conference on Causal Learning and Reasoning. CLeaR: Conference on Causal Learning and Reasoning."},"date_published":"2023-05-01T00:00:00Z","year":"2023","oa":1,"publication_status":"submitted","_id":"14974","article_processing_charge":"No"}]
