{"has_accepted_license":"1","publisher":"ICLR","title":"The journey matters: Average parameter count over pre-training unifies sparse and dense scaling laws","publication_status":"published","oa_version":"Published Version","scopus_import":"1","quality_controlled":"1","conference":{"name":"ICLR: International Conference on Learning Representations","start_date":"2025-04-24","end_date":"2025-04-28","location":"Singapore, Singapore"},"publication":"13th International Conference on Learning Representations","file_date_updated":"2025-08-04T08:23:47Z","date_updated":"2025-08-04T08:24:59Z","date_created":"2025-07-20T22:02:03Z","day":"01","type":"conference","date_published":"2025-04-01T00:00:00Z","publication_identifier":{"isbn":["9798331320850"]},"ddc":["000"],"page":"85165-85181","OA_place":"publisher","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","language":[{"iso":"eng"}],"abstract":[{"lang":"eng","text":"Pruning eliminates unnecessary parameters in neural networks; it offers a promising solution to the growing computational demands of large language models (LLMs). While many focus on post-training pruning, sparse pre-training--which combines pruning and pre-training into a single phase--provides a simpler alternative. In this work, we present the first systematic exploration of optimal sparse pre-training configurations for LLMs through an examination of 80 unique pruning schedules across different sparsity levels and training durations. We find that initiating pruning at 25% of total training compute and concluding at 75% achieves near-optimal final evaluation loss. These findings provide valuable insights for efficient and effective sparse pre-training of LLMs. Furthermore, we propose a new scaling law that modifies the Chinchilla scaling law to use the average parameter count over pre-training. 
Through empirical and theoretical validation, we demonstrate that this modified scaling law accurately models evaluation loss for both sparsely and densely pre-trained LLMs, unifying scaling laws across pre-training paradigms. Our findings indicate that while sparse pre-training achieves the same final model quality as dense pre-training for equivalent compute budgets, it provides substantial benefits through reduced model size, enabling significant potential computational savings during inference."}],"status":"public","tmp":{"short":"CC BY (4.0)","legal_code_url":"https://creativecommons.org/licenses/by/4.0/legalcode","name":"Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)","image":"/images/cc_by.png"},"department":[{"_id":"DaAl"}],"external_id":{"arxiv":["2501.12486"]},"author":[{"first_name":"Tian","full_name":"Jin, Tian","last_name":"Jin"},{"full_name":"Humayun, Ahmed Imtiaz","last_name":"Humayun","first_name":"Ahmed Imtiaz"},{"first_name":"Utku","full_name":"Evci, Utku","last_name":"Evci"},{"full_name":"Subramanian, Suvinay","last_name":"Subramanian","first_name":"Suvinay"},{"first_name":"Amir","full_name":"Yazdanbakhsh, Amir","last_name":"Yazdanbakhsh"},{"first_name":"Dan-Adrian","last_name":"Alistarh","orcid":"0000-0003-3650-940X","full_name":"Alistarh, Dan-Adrian","id":"4A899BFC-F248-11E8-B48F-1D18A9856A87"},{"full_name":"Dziugaite, Gintare Karolina","last_name":"Dziugaite","first_name":"Gintare Karolina"}],"citation":{"ista":"Jin T, Humayun AI, Evci U, Subramanian S, Yazdanbakhsh A, Alistarh D-A, Dziugaite GK. 2025. The journey matters: Average parameter count over pre-training unifies sparse and dense scaling laws. 13th International Conference on Learning Representations. ICLR: International Conference on Learning Representations, 85165–85181.","ieee":"T. 
Jin et al., “The journey matters: Average parameter count over pre-training unifies sparse and dense scaling laws,” in 13th International Conference on Learning Representations, Singapore, Singapore, 2025, pp. 85165–85181.","mla":"Jin, Tian, et al. “The Journey Matters: Average Parameter Count over Pre-Training Unifies Sparse and Dense Scaling Laws.” 13th International Conference on Learning Representations, ICLR, 2025, pp. 85165–81.","chicago":"Jin, Tian, Ahmed Imtiaz Humayun, Utku Evci, Suvinay Subramanian, Amir Yazdanbakhsh, Dan-Adrian Alistarh, and Gintare Karolina Dziugaite. “The Journey Matters: Average Parameter Count over Pre-Training Unifies Sparse and Dense Scaling Laws.” In 13th International Conference on Learning Representations, 85165–81. ICLR, 2025.","apa":"Jin, T., Humayun, A. I., Evci, U., Subramanian, S., Yazdanbakhsh, A., Alistarh, D.-A., & Dziugaite, G. K. (2025). The journey matters: Average parameter count over pre-training unifies sparse and dense scaling laws. In 13th International Conference on Learning Representations (pp. 85165–85181). Singapore, Singapore: ICLR.","ama":"Jin T, Humayun AI, Evci U, et al. The journey matters: Average parameter count over pre-training unifies sparse and dense scaling laws. In: 13th International Conference on Learning Representations. ICLR; 2025:85165-85181.","short":"T. Jin, A.I. Humayun, U. Evci, S. Subramanian, A. Yazdanbakhsh, D.-A. Alistarh, G.K. Dziugaite, in: 13th International Conference on Learning Representations, ICLR, 2025, pp. 85165–85181."},"OA_type":"diamond","oa":1,"article_processing_charge":"No","_id":"20038","acknowledgement":"We are deeply grateful to Elias Frantar, Naveen Kumar, Sanjiv Kumar, Daniel M. Roy, and Clemens Schaefer for their valuable feedback and thoughtful review of this paper. We also acknowledge the critical support provided by the Google CoreML Performance Team, and Google Research during this project. 
We further recognize the extended team at Google DeepMind, who enabled and supported this research direction. This work was in part supported by the Sloan Foundation, the MIT-IBM Watson AI Lab, Apple, and SRC JUMP 2.0 (CoCoSys).","arxiv":1,"month":"04","year":"2025","file":[{"date_updated":"2025-08-04T08:23:47Z","date_created":"2025-08-04T08:23:47Z","file_id":"20111","relation":"main_file","access_level":"open_access","file_name":"2025_ICLR_Jin.pdf","creator":"dernst","content_type":"application/pdf","file_size":704989,"checksum":"dbc27120e9aba67dffbd9e5d513a6803","success":1}]}