{"title":"Wasserstein distances, neuronal entanglement, and sparsity","has_accepted_license":"1","publisher":"ICLR","publication_status":"published","scopus_import":"1","conference":{"start_date":"2025-04-24","name":"ICLR: International Conference on Learning Representations","location":"Singapore, Singapore","end_date":"2025-04-28"},"quality_controlled":"1","oa_version":"Published Version","file_date_updated":"2025-08-04T08:14:09Z","corr_author":"1","publication":"13th International Conference on Learning Representations","day":"01","date_updated":"2025-08-04T08:16:43Z","date_created":"2025-07-20T22:02:03Z","type":"conference","date_published":"2025-04-01T00:00:00Z","page":"26244-26274","publication_identifier":{"isbn":["9798331320850"]},"ddc":["000"],"abstract":[{"text":"Disentangling polysemantic neurons is at the core of many current approaches to interpretability of large language models. Here we attempt to study how disentanglement can be used to understand performance, particularly under weight sparsity, a leading post-training optimization technique. We suggest a novel measure for estimating neuronal entanglement: the Wasserstein distance of a neuron's output distribution to a Gaussian. Moreover, we show the existence of a small number of highly entangled \"Wasserstein Neurons\" in each linear layer of an LLM, characterized by their highly non-Gaussian output distributions, their role in mapping similar inputs to dissimilar outputs, and their significant impact on model accuracy. To study these phenomena, we propose a new experimental framework for disentangling polysemantic neurons. Our framework separates each layer's inputs to create a mixture of experts where each neuron's output is computed by a mixture of neurons of lower Wasserstein distance, each better at maintaining accuracy when sparsified without retraining. We provide strong evidence that this is because the mixture of sparse experts is effectively disentangling the input-output relationship of individual neurons, in particular the difficult Wasserstein neurons.","lang":"eng"}],"status":"public","OA_place":"publisher","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","language":[{"iso":"eng"}],"citation":{"short":"S. Sawmya, L. Kong, I. Markov, D.-A. Alistarh, N. Shavit, in:, 13th International Conference on Learning Representations, ICLR, 2025, pp. 26244–26274.","chicago":"Sawmya, Shashata, Linghao Kong, Ilia Markov, Dan-Adrian Alistarh, and Nir Shavit. “Wasserstein Distances, Neuronal Entanglement, and Sparsity.” In 13th International Conference on Learning Representations, 26244–74. ICLR, 2025.","apa":"Sawmya, S., Kong, L., Markov, I., Alistarh, D.-A., & Shavit, N. (2025). Wasserstein distances, neuronal entanglement, and sparsity. In 13th International Conference on Learning Representations (pp. 26244–26274). Singapore, Singapore: ICLR.","ama":"Sawmya S, Kong L, Markov I, Alistarh D-A, Shavit N. Wasserstein distances, neuronal entanglement, and sparsity. In: 13th International Conference on Learning Representations. ICLR; 2025:26244-26274.","mla":"Sawmya, Shashata, et al. “Wasserstein Distances, Neuronal Entanglement, and Sparsity.” 13th International Conference on Learning Representations, ICLR, 2025, pp. 26244–74.","ista":"Sawmya S, Kong L, Markov I, Alistarh D-A, Shavit N. 2025. Wasserstein distances, neuronal entanglement, and sparsity. 13th International Conference on Learning Representations. ICLR: International Conference on Learning Representations, 26244–26274.","ieee":"S. Sawmya, L. Kong, I. Markov, D.-A. Alistarh, and N. Shavit, “Wasserstein distances, neuronal entanglement, and sparsity,” in 13th International Conference on Learning Representations, Singapore, Singapore, 2025, pp. 26244–26274."},"external_id":{"arxiv":["2405.15756"]},"OA_type":"diamond","author":[{"first_name":"Shashata","full_name":"Sawmya, Shashata","last_name":"Sawmya"},{"first_name":"Linghao","full_name":"Kong, Linghao","last_name":"Kong"},{"id":"D0CF4148-C985-11E9-8066-0BDEE5697425","first_name":"Ilia","last_name":"Markov","full_name":"Markov, Ilia"},{"orcid":"0000-0003-3650-940X","full_name":"Alistarh, Dan-Adrian","last_name":"Alistarh","first_name":"Dan-Adrian","id":"4A899BFC-F248-11E8-B48F-1D18A9856A87"},{"full_name":"Shavit, Nir","last_name":"Shavit","first_name":"Nir"}],"tmp":{"short":"CC BY (4.0)","legal_code_url":"https://creativecommons.org/licenses/by/4.0/legalcode","name":"Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)","image":"/images/cc_by.png"},"department":[{"_id":"DaAl"}],"related_material":{"link":[{"relation":"software","url":"https://github.com/Shavit-Lab/Sparse-Expansion"}]},"article_processing_charge":"No","oa":1,"acknowledgement":"The authors would like to extend their gratitude to Lori Leu for her insightful comments on the application of the Wasserstein distance metric. We also wish to thank Elias Frantar for his help in working with the SparseGPT implementation and his advice for the project. Additionally, we would like to thank Tony Tong Wang and Thomas Athey for their valuable feedback and constructive discussions. This work was supported by an NIH Brains CONNECTS U01 grant and AMD’s AI & HPC Fund.","_id":"20037","month":"04","year":"2025","arxiv":1,"file":[{"relation":"main_file","file_id":"20110","date_created":"2025-08-04T08:14:09Z","date_updated":"2025-08-04T08:14:09Z","file_name":"2025_ICLR_Sawmya.pdf","access_level":"open_access","content_type":"application/pdf","creator":"dernst","success":1,"checksum":"39a8fa7dbdd7029859e156f53f20f6bc","file_size":5447177}]}