---
res:
  bibo_abstract:
  - "Addressing fairness concerns about machine learning models is a crucial step
    towards their long-term adoption in real-world automated systems. While many approaches
    have been developed for training fair models from data, little is known about
    the robustness of these methods to data corruption. In this work we consider fairness-aware
    learning under worst-case data manipulations. We show that an adversary can in
    some situations force any learner to return an overly biased classifier, regardless
    of the sample size and with or without degrading accuracy, and that the strength
    of the excess bias increases for learning problems with underrepresented protected
    groups in the data. We also prove that our hardness results are tight up to constant
    factors. To this end, we study two natural learning algorithms that optimize for
    both accuracy and fairness and show that these algorithms enjoy guarantees that
    are order-optimal in terms of the corruption ratio and the protected group frequencies
    in the large data limit."
  bibo_authorlist:
  - foaf_Person:
      foaf_givenName: Nikola H
      foaf_name: Konstantinov, Nikola H
      foaf_surname: Konstantinov
      foaf_workInfoHomepage: http://www.librecat.org/personId=4B9D76E4-F248-11E8-B48F-1D18A9856A87
    orcid: 0009-0009-5204-7621
  - foaf_Person:
      foaf_givenName: Christoph
      foaf_name: Lampert, Christoph
      foaf_surname: Lampert
      foaf_workInfoHomepage: http://www.librecat.org/personId=40C20FD2-F248-11E8-B48F-1D18A9856A87
    orcid: 0000-0002-4561-241X
  bibo_volume: 23
  dct_date: 2022^xs_gYear
  dct_isPartOf:
  - http://id.crossref.org/issn/1532-4435
  - http://id.crossref.org/issn/1533-7928
  dct_language: eng
  dct_publisher: ML Research Press
  dct_subject:
  - Fairness
  - robustness
  - data poisoning
  - trustworthy machine learning
  - PAC learning
  dct_title: Fairness-aware PAC learning from corrupted data
...
