<?xml version="1.0" encoding="UTF-8"?>

<modsCollection xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.loc.gov/mods/v3" xsi:schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-3.xsd">
<mods version="3.3">

<genre>article</genre>

<titleInfo><title>Fairness-aware PAC learning from corrupted data</title></titleInfo>


<note type="publicationStatus">published</note>


<note type="qualityControlled">yes</note>

<name type="personal">
  <namePart type="given">Nikola H</namePart>
  <namePart type="family">Konstantinov</namePart>
  <role><roleTerm type="text">author</roleTerm> </role><identifier type="local">4B9D76E4-F248-11E8-B48F-1D18A9856A87</identifier><description xsi:type="identifierDefinition" type="orcid">0009-0009-5204-7621</description></name>
<name type="personal">
  <namePart type="given">Christoph</namePart>
  <namePart type="family">Lampert</namePart>
  <role><roleTerm type="text">author</roleTerm> </role><identifier type="local">40C20FD2-F248-11E8-B48F-1D18A9856A87</identifier><description xsi:type="identifierDefinition" type="orcid">0000-0002-4561-241X</description></name>

<name type="corporate">
  <identifier type="local">ChLa</identifier>
  <role>
    <roleTerm type="text">department</roleTerm>
  </role>
</name>

<abstract lang="eng">Addressing fairness concerns about machine learning models is a crucial step towards their long-term adoption in real-world automated systems. While many approaches have been developed for training fair models from data, little is known about the robustness of these methods to data corruption. In this work we consider fairness-aware learning under worst-case data manipulations. We show that an adversary can in some situations force any learner to return an overly biased classifier, regardless of the sample size and with or without degrading accuracy, and that the strength of the excess bias increases for learning problems with underrepresented protected groups in the data. We also prove that our hardness results are tight up to constant factors. To this end, we study two natural learning algorithms that optimize for both accuracy and fairness and show that these algorithms enjoy guarantees that are order-optimal in terms of the corruption ratio and the protected group frequencies in the large-data limit.</abstract>

<relatedItem type="constituent">
  <location>
    <url displayLabel="2022_JournalMachineLearningResearch_Konstantinov.pdf">https://research-explorer.ista.ac.at/download/10802/11570/2022_JournalMachineLearningResearch_Konstantinov.pdf</url>
  </location>
  <physicalDescription><internetMediaType>application/pdf</internetMediaType></physicalDescription><accessCondition type="restrictionOnAccess">no</accessCondition>
</relatedItem>
<originInfo><publisher>ML Research Press</publisher><dateIssued encoding="w3cdtf">2022</dateIssued>
</originInfo>
<language><languageTerm authority="iso639-2b" type="code">eng</languageTerm>
</language>

<subject><topic>Fairness</topic><topic>robustness</topic><topic>data poisoning</topic><topic>trustworthy machine learning</topic><topic>PAC learning</topic>
</subject>


<relatedItem type="host"><titleInfo><title>Journal of Machine Learning Research</title></titleInfo>
  <identifier type="issn">1532-4435</identifier>
  <identifier type="eIssn">1533-7928</identifier>
<part><detail type="volume"><number>23</number></detail><extent unit="pages">1-60</extent>
</part>
</relatedItem>
<identifier type="arXiv">2102.06004</identifier>
<relatedItem type="Supplementary material">
  <location>
    <url>https://research-explorer.ista.ac.at/record/13241</url>
    <url>https://research-explorer.ista.ac.at/record/10799</url>
  </location>
</relatedItem>

<extension>
<bibliographicCitation>
<ieee>N. H. Konstantinov and C. Lampert, “Fairness-aware PAC learning from corrupted data,” &lt;i&gt;Journal of Machine Learning Research&lt;/i&gt;, vol. 23. ML Research Press, pp. 1–60, 2022.</ieee>
<ama>Konstantinov NH, Lampert C. Fairness-aware PAC learning from corrupted data. &lt;i&gt;Journal of Machine Learning Research&lt;/i&gt;. 2022;23:1-60.</ama>
<ista>Konstantinov NH, Lampert C. 2022. Fairness-aware PAC learning from corrupted data. Journal of Machine Learning Research. 23, 1–60.</ista>
<short>N.H. Konstantinov, C. Lampert, Journal of Machine Learning Research 23 (2022) 1–60.</short>
<mla>Konstantinov, Nikola H., and Christoph Lampert. “Fairness-Aware PAC Learning from Corrupted Data.” &lt;i&gt;Journal of Machine Learning Research&lt;/i&gt;, vol. 23, ML Research Press, 2022, pp. 1–60.</mla>
<chicago>Konstantinov, Nikola H., and Christoph Lampert. “Fairness-Aware PAC Learning from Corrupted Data.” &lt;i&gt;Journal of Machine Learning Research&lt;/i&gt; 23 (2022): 1–60.</chicago>
<apa>Konstantinov, N. H., &amp;#38; Lampert, C. (2022). Fairness-aware PAC learning from corrupted data. &lt;i&gt;Journal of Machine Learning Research&lt;/i&gt;, &lt;i&gt;23&lt;/i&gt;, 1–60.</apa>
</bibliographicCitation>
</extension>
<recordInfo><recordIdentifier>10802</recordIdentifier><recordCreationDate encoding="w3cdtf">2022-02-28T14:05:42Z</recordCreationDate><recordChangeDate encoding="w3cdtf">2026-04-07T14:19:48Z</recordChangeDate>
</recordInfo>
</mods>
</modsCollection>
