FLEA: Provably robust fair multisource learning from unreliable training data
Iofinova EB, Konstantinov NH, Lampert C. 2022. FLEA: Provably robust fair multisource learning from unreliable training data. Transactions on Machine Learning Research.
https://openreview.net/forum?id=XsPopigZXV
[Published Version]
Journal Article | Published | English
Corresponding author has ISTA affiliation
Abstract
Fairness-aware learning aims at constructing classifiers that not only make accurate predictions but also do not discriminate against specific groups. It is a fast-growing area of machine learning with far-reaching societal impact. However, existing fair learning methods are vulnerable to accidental or malicious artifacts in the training data, which can cause them to unknowingly produce unfair classifiers. In this work we address the problem of fair learning from unreliable training data in the robust multisource setting, where the available training data comes from multiple sources, a fraction of which might not be representative of the true data distribution. We introduce FLEA, a filtering-based algorithm that identifies and suppresses those data sources that would have a negative impact on fairness or accuracy if they were used for training. As such, FLEA is not a replacement for prior fairness-aware learning methods but rather an augmentation that makes any of them robust against unreliable training data. We demonstrate the effectiveness of our approach through a diverse range of experiments on multiple datasets. Additionally, we prove formally that, given enough data, FLEA protects the learner against corruptions as long as the fraction of affected data sources is less than half. Our source code and documentation are available at https://github.com/ISTAustria-CVML/FLEA.
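The abstract describes FLEA as a filtering step that sits in front of an arbitrary fairness-aware learner: suspicious data sources are removed first, and any existing fair learning method is then trained on the retained data. The sketch below illustrates only that pipeline shape; it is not the authors' FLEA implementation (see the linked repository for the real code), and the scoring function pairwise_suspicion, the keep_fraction parameter, and the downstream learner are hypothetical placeholders.

import numpy as np

def pairwise_suspicion(source_a, source_b):
    # Hypothetical stand-in score for how much two sources disagree.
    # A real filter would compare empirical distributions and fairness
    # statistics; here we only compare positive-label rates.
    _, y_a, _ = source_a
    _, y_b, _ = source_b
    return abs(float(np.mean(y_a)) - float(np.mean(y_b)))

def filter_sources(sources, keep_fraction=0.5):
    # Score each source by its total disagreement with all other sources,
    # then keep the least suspicious ones. This relies on the assumption
    # (as in the paper's theoretical setting) that fewer than half of the
    # sources are corrupted, so the clean majority dominates the comparisons.
    n = len(sources)
    scores = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                scores[i] += pairwise_suspicion(sources[i], sources[j])
    n_keep = max(1, int(np.ceil(keep_fraction * n)))
    keep_idx = np.argsort(scores)[:n_keep]
    return [sources[i] for i in keep_idx]

# Usage: each source is a (features, labels, protected_attribute) triple.
# retained = filter_sources(sources)
# X = np.concatenate([s[0] for s in retained])
# y = np.concatenate([s[1] for s in retained])
# a = np.concatenate([s[2] for s in retained])
# model = some_fairness_aware_learner(X, y, a)  # any fairness-aware method

Because the filtering happens before training, the retained sources can be passed to any fairness-aware learner unchanged, which is what the abstract means by FLEA being an augmentation rather than a replacement.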
Publishing Year
2022
Date Published
2022-12-22
Journal Title
Transactions on Machine Learning Research
Publisher
ML Research Press
Acknowledgement
The authors would like to thank Bernd Prach, Elias Frantar, Alexandra Peste, Mahdi Nikdan, and Peter Súkeník for their helpful feedback. This research was supported by the Scientific Service Units (SSU) of IST Austria through resources provided by Scientific Computing (SciComp). This publication was made possible by an ETH AI Center postdoctoral fellowship granted to Nikola Konstantinov. Eugenia Iofinova was supported in part by the FWF DK VGSCO, grant agreement number W1260-N35.
Cite this
Iofinova EB, Konstantinov NH, Lampert C. FLEA: Provably robust fair multisource learning from unreliable training data. Transactions on Machine Learning Research. 2022.
Iofinova, E. B., Konstantinov, N. H., & Lampert, C. (2022). FLEA: Provably robust fair multisource learning from unreliable training data. Transactions on Machine Learning Research. ML Research Press.
Iofinova, Eugenia B., Nikola H. Konstantinov, and Christoph Lampert. “FLEA: Provably Robust Fair Multisource Learning from Unreliable Training Data.” Transactions on Machine Learning Research. ML Research Press, 2022.
E. B. Iofinova, N. H. Konstantinov, and C. Lampert, “FLEA: Provably robust fair multisource learning from unreliable training data,” Transactions on Machine Learning Research. ML Research Press, 2022.
Iofinova EB, Konstantinov NH, Lampert C. 2022. FLEA: Provably robust fair multisource learning from unreliable training data. Transactions on Machine Learning Research.
Iofinova, Eugenia B., et al. “FLEA: Provably Robust Fair Multisource Learning from Unreliable Training Data.” Transactions on Machine Learning Research, ML Research Press, 2022.
All files available under the following license(s):
Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
Main File(s)
File Name: 2022_TMLR_Iofinova.pdf (1.95 MB)
Access Level: Open Access
Date Uploaded: 2023-02-23
MD5 Checksum: 97c8a8470759cab597abb973ca137a3b
Sources
arXiv 2106.11732