{"file":[{"date_updated":"2023-05-24T16:11:16Z","success":1,"creator":"epeste","access_level":"open_access","file_name":"PhD_Thesis_Alexandra_Peste_final.pdf","checksum":"6b3354968403cb9d48cc5a83611fb571","content_type":"application/pdf","file_size":2152072,"relation":"main_file","date_created":"2023-05-24T16:11:16Z","file_id":"13087"},{"date_updated":"2023-05-24T16:12:59Z","access_level":"closed","creator":"epeste","content_type":"application/zip","file_name":"PhD_Thesis_APeste.zip","checksum":"8d0df94bbcf4db72c991f22503b3fd60","file_id":"13088","date_created":"2023-05-24T16:12:59Z","file_size":1658293,"relation":"source_file"}],"date_published":"2023-05-23T00:00:00Z","oa_version":"Published Version","project":[{"name":"International IST Doctoral Program","grant_number":"665385","call_identifier":"H2020","_id":"2564DBCA-B435-11E9-9278-68D0E5697425"},{"_id":"268A44D6-B435-11E9-9278-68D0E5697425","call_identifier":"H2020","grant_number":"805223","name":"Elastic Coordination for Scalable Machine Learning"}],"year":"2023","degree_awarded":"PhD","ddc":["000"],"type":"dissertation","language":[{"iso":"eng"}],"citation":{"ama":"Krumes A. Efficiency and generalization of sparse neural networks. 2023. doi:10.15479/at:ista:13074","ista":"Krumes A. 2023. Efficiency and generalization of sparse neural networks. Institute of Science and Technology Austria.","ieee":"A. Krumes, “Efficiency and generalization of sparse neural networks,” Institute of Science and Technology Austria, 2023.","mla":"Krumes, Alexandra. Efficiency and Generalization of Sparse Neural Networks. Institute of Science and Technology Austria, 2023, doi:10.15479/at:ista:13074.","short":"A. Krumes, Efficiency and Generalization of Sparse Neural Networks, Institute of Science and Technology Austria, 2023.","apa":"Krumes, A. (2023). Efficiency and generalization of sparse neural networks. Institute of Science and Technology Austria. https://doi.org/10.15479/at:ista:13074","chicago":"Krumes, Alexandra. “Efficiency and Generalization of Sparse Neural Networks.” Institute of Science and Technology Austria, 2023. https://doi.org/10.15479/at:ista:13074."},"abstract":[{"lang":"eng","text":"Deep learning has become an integral part of a large number of important applications, and many of the recent breakthroughs have been enabled by the ability to train very large models, capable to capture complex patterns and relationships from the data. At the same time, the massive sizes of modern deep learning models have made their deployment to smaller devices more challenging; this is particularly important, as in many applications the users rely on accurate deep learning predictions, but they only have access to devices with limited memory and compute power. One solution to this problem is to prune neural networks, by setting as many of their parameters as possible to zero, to obtain accurate sparse models with lower memory footprint. Despite the great research progress in obtaining sparse models that preserve accuracy, while satisfying memory and computational constraints, there are still many challenges associated with efficiently training sparse models, as well as understanding their generalization properties.\r\n\r\nThe focus of this thesis is to investigate how the training process of sparse models can be made more efficient, and to understand the differences between sparse and dense models in terms of how well they can generalize to changes in the data distribution. 
We first study a method for co-training sparse and dense models at a lower cost than regular training. With our method we obtain highly accurate sparse networks, as well as dense models that recover the baseline accuracy. Furthermore, we can more easily analyze the differences between the sparse-dense model pairs at the prediction level. Next, we investigate the generalization properties of sparse neural networks in more detail, studying how well different sparse models trained on a larger task can adapt to smaller, more specialized tasks in a transfer learning scenario. Our analysis across multiple pruning methods and sparsity levels reveals that sparse models provide features that transfer as well as or better than those of the dense baseline. However, the choice of pruning method plays an important role and can influence the results both when the features are fixed (linear finetuning) and when they are allowed to adapt to the new task (full finetuning). Using sparse models with fixed masks for finetuning on new tasks has an important practical advantage, as it enables training neural networks on smaller devices. However, one drawback of current pruning methods is that the entire training cycle has to be repeated for every sparsity target to obtain the initial sparse model; as a consequence, training is costly and multiple models need to be stored. In the last part of the thesis we propose a method for training accurate dense models that are compressible in a single step to multiple sparsity levels, without additional finetuning. Our method results in sparse models that are competitive with existing pruning methods and that can also successfully generalize to new tasks."}],"has_accepted_license":"1","ec_funded":1,"acknowledged_ssus":[{"_id":"ScienComp"}],"status":"public","day":"23","related_material":{"record":[{"relation":"part_of_dissertation","status":"public","id":"11458"},{"id":"12299","status":"public","relation":"part_of_dissertation"},{"status":"public","relation":"part_of_dissertation","id":"13053"}]},"date_updated":"2024-10-09T21:05:41Z","publisher":"Institute of Science and Technology Austria","corr_author":"1","publication_identifier":{"issn":["2663-337X"]},"_id":"13074","file_date_updated":"2023-05-24T16:12:59Z","author":[{"id":"32D78294-F248-11E8-B48F-1D18A9856A87","full_name":"Peste, Elena-Alexandra","last_name":"Peste","first_name":"Elena-Alexandra"}],"supervisor":[{"first_name":"Christoph","last_name":"Lampert","full_name":"Lampert, Christoph","orcid":"0000-0001-8622-7887","id":"40C20FD2-F248-11E8-B48F-1D18A9856A87"},{"full_name":"Alistarh, Dan-Adrian","last_name":"Alistarh","first_name":"Dan-Adrian","orcid":"0000-0003-3650-940X","id":"4A899BFC-F248-11E8-B48F-1D18A9856A87"}],"oa":1,"date_created":"2023-05-23T17:07:53Z","page":"147","article_processing_charge":"No","doi":"10.15479/at:ista:13074","department":[{"_id":"GradSch"},{"_id":"DaAl"},{"_id":"ChLa"}],"publication_status":"published","alternative_title":["ISTA Thesis"],"month":"05","user_id":"8b945eb4-e2f2-11eb-945a-df72226e66a9","title":"Efficiency and generalization of sparse neural networks"}