{"acknowledgement":"The authors would like to thank Eldar Kurtic for experimental support and useful suggestions throughout the project","intvolume":" 235","date_created":"2024-09-22T22:01:44Z","type":"conference","volume":235,"conference":{"name":"ICML: International Conference on Machine Learning","location":"Vienna, Austria","end_date":"2024-07-27","start_date":"2024-07-21"},"page":"38187-38206","oa":1,"quality_controlled":"1","publication_identifier":{"eissn":["2640-3498"]},"publisher":"ML Research Press","day":"01","publication":"Proceedings of the 41st International Conference on Machine Learning","main_file_link":[{"url":"https://doi.org/10.48550/arXiv.2401.04679","open_access":"1"}],"department":[{"_id":"DaAl"},{"_id":"GradSch"}],"oa_version":"Preprint","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","scopus_import":"1","related_material":{"link":[{"url":"https://github.com/IST-DASLab/RoSA","relation":"software"}]},"corr_author":"1","year":"2024","author":[{"id":"66374281-f394-11eb-9cf6-869147deecc0","full_name":"Nikdan, Mahdi","last_name":"Nikdan","first_name":"Mahdi"},{"last_name":"Tabesh","orcid":"0009-0003-4119-6281","first_name":"Soroush","id":"06000900-6068-11ef-8d61-c2472ef2e752","full_name":"Tabesh, Soroush"},{"last_name":"Crncevic","first_name":"Elvir","id":"41888001-440d-11ef-8299-d0e838b8185e","full_name":"Crncevic, Elvir"},{"orcid":"0000-0003-3650-940X","last_name":"Alistarh","first_name":"Dan-Adrian","id":"4A899BFC-F248-11E8-B48F-1D18A9856A87","full_name":"Alistarh, Dan-Adrian"}],"language":[{"iso":"eng"}],"month":"09","citation":{"ieee":"M. Nikdan, S. Tabesh, E. Crncevic, and D.-A. Alistarh, “RoSA: Accurate parameter-efficient fine-tuning via robust adaptation,” in Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria, 2024, vol. 235, pp. 38187–38206.","ista":"Nikdan M, Tabesh S, Crncevic E, Alistarh D-A. 2024. RoSA: Accurate parameter-efficient fine-tuning via robust adaptation. Proceedings of the 41st International Conference on Machine Learning. ICML: International Conference on Machine Learning vol. 235, 38187–38206.","mla":"Nikdan, Mahdi, et al. “RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation.” Proceedings of the 41st International Conference on Machine Learning, vol. 235, ML Research Press, 2024, pp. 38187–206.","apa":"Nikdan, M., Tabesh, S., Crncevic, E., & Alistarh, D.-A. (2024). RoSA: Accurate parameter-efficient fine-tuning via robust adaptation. In Proceedings of the 41st International Conference on Machine Learning (Vol. 235, pp. 38187–38206). Vienna, Austria: ML Research Press.","ama":"Nikdan M, Tabesh S, Crncevic E, Alistarh D-A. RoSA: Accurate parameter-efficient fine-tuning via robust adaptation. In: Proceedings of the 41st International Conference on Machine Learning. Vol 235. ML Research Press; 2024:38187-38206.","chicago":"Nikdan, Mahdi, Soroush Tabesh, Elvir Crncevic, and Dan-Adrian Alistarh. “RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation.” In Proceedings of the 41st International Conference on Machine Learning, 235:38187–206. ML Research Press, 2024.","short":"M. Nikdan, S. Tabesh, E. Crncevic, D.-A. Alistarh, in:, Proceedings of the 41st International Conference on Machine Learning, ML Research Press, 2024, pp. 
38187–38206."},"status":"public","publication_status":"published","date_published":"2024-09-01T00:00:00Z","title":"RoSA: Accurate parameter-efficient fine-tuning via robust adaptation","_id":"18117","article_processing_charge":"No","date_updated":"2024-10-01T08:22:01Z","abstract":[{"text":"We investigate parameter-efficient fine-tuning (PEFT) methods that can provide good accuracy under limited computational and memory budgets in the context of large language models (LLMs). We present a new PEFT method called Robust Adaptation (RoSA) inspired by robust principal component analysis that jointly trains low-rank\r\n and highly-sparse components on top of a set of fixed pretrained weights to efficiently approximate the performance of a full-fine-tuning (FFT) solution. Across a series of challenging generative tasks such as grade-school math and SQL query generation, which require fine-tuning for good performance, we show that RoSA outperforms LoRA, pure sparse fine-tuning, and alternative hybrid methods at the same parameter budget, and can even recover the performance of FFT on some tasks. We provide system support for RoSA to complement the training algorithm, specifically in the form of sparse GPU kernels which enable memory- and computationally-efficient training, and show that it is also compatible with low-precision base weights, resulting in the first joint representation combining quantization, low-rank and sparse approximations. Our code is available at https://github.com/IST-DASLab/RoSA.","lang":"eng"}],"external_id":{"arxiv":["2401.04679"]}}
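
For reference, the core idea described in the abstract is: keep the pretrained weight matrix frozen and learn a low-rank term plus a highly sparse term on top of it. Below is a minimal PyTorch sketch of that decomposition. It is an illustration of the idea only, not the repository's actual implementation: the class and parameter names here are hypothetical, the sparse support mask is chosen at random purely for brevity, and the paper's mask selection and custom sparse GPU kernels are more involved.

# Minimal sketch of the RoSA idea (not the IST-DASLab/RoSA implementation):
# effective weight = frozen pretrained W + low-rank B @ A + sparse S.
# Names are hypothetical; the random mask stands in for the paper's mask selection.
import torch
import torch.nn as nn

class RobustAdapterLinear(nn.Module):
    def __init__(self, pretrained: nn.Linear, rank: int = 8, density: float = 0.01):
        super().__init__()
        out_f, in_f = pretrained.weight.shape
        # Frozen base weights (bias omitted for brevity).
        self.register_buffer("base_weight", pretrained.weight.detach().clone())
        # Low-rank component L = B @ A (LoRA-style), initialized so that L = 0.
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))
        # Highly sparse component S: trainable values on a fixed support mask.
        self.register_buffer("mask", torch.rand(out_f, in_f) < density)
        self.S = nn.Parameter(torch.zeros(out_f, in_f))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Joint low-rank + sparse update on top of the frozen weights;
        # multiplying by the mask zeroes gradients outside the support.
        delta = self.B @ self.A + self.S * self.mask
        return x @ (self.base_weight + delta).T

# Usage: wrap a pretrained layer; only A, B, and (masked) S receive gradients.
base = nn.Linear(512, 512)
layer = RobustAdapterLinear(base, rank=8, density=0.01)
y = layer(torch.randn(4, 512))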