{"month":"12","date_created":"2023-07-16T22:01:13Z","publication_status":"published","publisher":"ML Research Press","external_id":{"arxiv":["2102.06004"]},"publication":"Proceedings of Machine Learning Research","acknowledgement":"This paper is a shortened, workshop version of Konstantinov and Lampert (2021),\r\nhttps://arxiv.org/abs/2102.06004. For further results, including an analysis of algorithms achieving the lower bounds from this paper, we refer to the full version.","oa_version":"Preprint","department":[{"_id":"ChLa"}],"date_updated":"2023-09-26T10:44:37Z","type":"conference","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","status":"public","_id":"13241","abstract":[{"text":"Addressing fairness concerns about machine learning models is a crucial step towards their long-term adoption in real-world automated systems. Many approaches for training fair models from data have been developed, and an implicit assumption about such algorithms is that they are able to recover a fair model despite potential historical biases in the data. In this work we show a number of impossibility results indicating that there is no learning algorithm that can recover a fair model when a proportion of the dataset is subject to arbitrary manipulations. Specifically, we prove that there are situations in which an adversary can force any learner to return a biased classifier, with or without degrading accuracy, and that the strength of this bias increases for learning problems with underrepresented protected groups in the data. Our results emphasize the importance of studying further data corruption models of various strengths and of establishing stricter data collection practices for fairness-aware learning.","lang":"eng"}],"oa":1,"year":"2022","author":[{"id":"4B9D76E4-F248-11E8-B48F-1D18A9856A87","full_name":"Konstantinov, Nikola H","last_name":"Konstantinov","first_name":"Nikola H"},{"orcid":"0000-0001-8622-7887","last_name":"Lampert","first_name":"Christoph","id":"40C20FD2-F248-11E8-B48F-1D18A9856A87","full_name":"Lampert, Christoph"}],"day":"01","language":[{"iso":"eng"}],"main_file_link":[{"url":"https://arxiv.org/abs/2102.06004","open_access":"1"}],"page":"59-83","citation":{"mla":"Konstantinov, Nikola H., and Christoph Lampert. “On the Impossibility of Fairness-Aware Learning from Corrupted Data.” Proceedings of Machine Learning Research, vol. 171, ML Research Press, 2022, pp. 59–83.","ista":"Konstantinov NH, Lampert C. 2022. On the impossibility of fairness-aware learning from corrupted data. Proceedings of Machine Learning Research. vol. 171, 59–83.","ieee":"N. H. Konstantinov and C. Lampert, “On the impossibility of fairness-aware learning from corrupted data,” in Proceedings of Machine Learning Research, 2022, vol. 171, pp. 59–83.","ama":"Konstantinov NH, Lampert C. On the impossibility of fairness-aware learning from corrupted data. In: Proceedings of Machine Learning Research. Vol 171. ML Research Press; 2022:59-83.","chicago":"Konstantinov, Nikola H, and Christoph Lampert. “On the Impossibility of Fairness-Aware Learning from Corrupted Data.” In Proceedings of Machine Learning Research, 171:59–83. ML Research Press, 2022.","apa":"Konstantinov, N. H., & Lampert, C. (2022). On the impossibility of fairness-aware learning from corrupted data. In Proceedings of Machine Learning Research (Vol. 171, pp. 59–83). ML Research Press.","short":"N.H. Konstantinov, C. Lampert, in: Proceedings of Machine Learning Research, ML Research Press, 2022, pp. 59–83."},"related_material":{"record":[{"id":"10802","status":"public","relation":"extended_version"}]},"intvolume":" 171","title":"On the impossibility of fairness-aware learning from corrupted data","article_processing_charge":"No","publication_identifier":{"eissn":["2640-3498"]},"date_published":"2022-12-01T00:00:00Z","volume":171,"quality_controlled":"1","scopus_import":"1"}