{"language":[{"iso":"eng"}],"department":[{"_id":"DaAl"}],"user_id":"4359f0d1-fa6c-11eb-b949-802e58b17ae8","title":"Efficiency guarantees for parallel incremental algorithms under relaxed schedulers","author":[{"orcid":"0000-0003-3650-940X","last_name":"Alistarh","full_name":"Alistarh, Dan-Adrian","first_name":"Dan-Adrian","id":"4A899BFC-F248-11E8-B48F-1D18A9856A87"},{"id":"3279A00C-F248-11E8-B48F-1D18A9856A87","first_name":"Giorgi","full_name":"Nadiradze, Giorgi","last_name":"Nadiradze","orcid":"0000-0001-5634-0731"},{"first_name":"Nikita","id":"2F4DB10C-F248-11E8-B48F-1D18A9856A87","full_name":"Koval, Nikita","last_name":"Koval"}],"abstract":[{"text":"Several classic problems in graph processing and computational geometry are solved via incremental algorithms, which split computation into a series of small tasks acting on shared state, which gets updated progressively. While the sequential variant of such algorithms usually specifies a fixed (but sometimes random) order in which the tasks should be performed, a standard approach to parallelizing such algorithms is to relax this constraint to allow for out-of-order parallel execution. This is the case for parallel implementations of Dijkstra's single-source shortest-paths (SSSP) algorithm, and for parallel Delaunay mesh triangulation. While many software frameworks parallelize incremental computation in this way, it is still not well understood whether this relaxed ordering approach can still provide any complexity guarantees. In this paper, we address this problem, and analyze the efficiency guarantees provided by a range of incremental algorithms when parallelized via relaxed schedulers. We show that, for algorithms such as Delaunay mesh triangulation and sorting by insertion, schedulers with a maximum relaxation factor of k in terms of the maximum priority inversion allowed will introduce a maximum amount of wasted work of O(log n poly(k)), where n is the number of tasks to be executed. For SSSP, we show that the additional work is O(poly(k) dmax / wmin), where dmax is the maximum distance between two nodes, and wmin is the minimum such distance. In practical settings where n >> k, this suggests that the overheads of relaxation will be outweighed by the improved scalability of the relaxed scheduler. On the negative side, we provide lower bounds showing that certain algorithms will inherently incur a non-trivial amount of wasted work due to scheduler relaxation, even for relatively benign relaxed schedulers.","lang":"eng"}],
"isi":1,"citation":{"apa":"Alistarh, D.-A., Nadiradze, G., & Koval, N. (2019). Efficiency guarantees for parallel incremental algorithms under relaxed schedulers. In 31st ACM Symposium on Parallelism in Algorithms and Architectures (pp. 145–154). Phoenix, AZ, United States: ACM Press. https://doi.org/10.1145/3323165.3323201","ama":"Alistarh D-A, Nadiradze G, Koval N. Efficiency guarantees for parallel incremental algorithms under relaxed schedulers. In: 31st ACM Symposium on Parallelism in Algorithms and Architectures. ACM Press; 2019:145-154. doi:10.1145/3323165.3323201","short":"D.-A. Alistarh, G. Nadiradze, N. Koval, in:, 31st ACM Symposium on Parallelism in Algorithms and Architectures, ACM Press, 2019, pp. 145–154.","mla":"Alistarh, Dan-Adrian, et al. “Efficiency Guarantees for Parallel Incremental Algorithms under Relaxed Schedulers.” 31st ACM Symposium on Parallelism in Algorithms and Architectures, ACM Press, 2019, pp. 145–54, doi:10.1145/3323165.3323201.","chicago":"Alistarh, Dan-Adrian, Giorgi Nadiradze, and Nikita Koval. “Efficiency Guarantees for Parallel Incremental Algorithms under Relaxed Schedulers.” In 31st ACM Symposium on Parallelism in Algorithms and Architectures, 145–54. ACM Press, 2019. https://doi.org/10.1145/3323165.3323201.","ieee":"D.-A. Alistarh, G. Nadiradze, and N. Koval, “Efficiency guarantees for parallel incremental algorithms under relaxed schedulers,” in 31st ACM Symposium on Parallelism in Algorithms and Architectures, Phoenix, AZ, United States, 2019, pp. 145–154.","ista":"Alistarh D-A, Nadiradze G, Koval N. 2019. Efficiency guarantees for parallel incremental algorithms under relaxed schedulers. 31st ACM Symposium on Parallelism in Algorithms and Architectures. SPAA: Symposium on Parallelism in Algorithms and Architectures, 145–154."},
"conference":{"end_date":"2019-06-24","location":"Phoenix, AZ, United States","name":"SPAA: Symposium on Parallelism in Algorithms and Architectures","start_date":"2019-06-22"},"external_id":{"arxiv":["2003.09363"],"isi":["000507618500018"]},"publication":"31st ACM Symposium on Parallelism in Algorithms and Architectures","project":[{"grant_number":"805223","name":"Elastic Coordination for Scalable Machine Learning","call_identifier":"H2020","_id":"268A44D6-B435-11E9-9278-68D0E5697425"}],"status":"public","publication_status":"published","main_file_link":[{"open_access":"1","url":"https://arxiv.org/abs/2003.09363"}],"oa":1,"month":"06","date_created":"2019-07-24T08:59:36Z","article_processing_charge":"No","doi":"10.1145/3323165.3323201","date_published":"2019-06-01T00:00:00Z","publisher":"ACM Press","ec_funded":1,"_id":"6673","day":"01","type":"conference","related_material":{"record":[{"relation":"dissertation_contains","id":"10429","status":"public"}]},"scopus_import":"1","quality_controlled":"1","date_updated":"2023-09-07T13:31:39Z","oa_version":"Preprint","page":"145-154","year":"2019","publication_identifier":{"isbn":["9781450361842"]}}