{"user_id":"c635000d-4b10-11ee-a964-aac5a93f6ac1","month":"07","corr_author":"1","language":[{"iso":"eng"}],"author":[{"last_name":"Henzinger","orcid":"0000-0002-2985-7724","id":"40876CD8-F248-11E8-B48F-1D18A9856A87","full_name":"Henzinger, Thomas A","first_name":"Thomas A"},{"first_name":"Mahyar","full_name":"Karimi, Mahyar","id":"f1dedef5-2f78-11ee-989a-c4c97bccf506","orcid":"0009-0005-0820-1696","last_name":"Karimi"},{"first_name":"Konstantin","id":"8121a2d0-dc85-11ea-9058-af578f3b4515","full_name":"Kueffner, Konstantin","orcid":"0000-0001-8974-2542","last_name":"Kueffner"},{"last_name":"Mallik","orcid":"0000-0001-9864-7475","full_name":"Mallik, Kaushik","id":"0834ff3c-6d72-11ec-94e0-b5b0a4fb8598","first_name":"Kaushik"}],"alternative_title":["LNCS"],"license":"https://creativecommons.org/licenses/by/4.0/","type":"conference","publisher":"Springer Nature","file":[{"checksum":"ccaf94bf7d658ba012c016e11869b54c","file_name":"2023_LNCS_CAV_HenzingerT.pdf","content_type":"application/pdf","access_level":"open_access","date_updated":"2023-07-31T08:11:20Z","file_size":647760,"file_id":"13327","success":1,"date_created":"2023-07-31T08:11:20Z","relation":"main_file","creator":"dernst"}],"tmp":{"short":"CC BY (4.0)","image":"/images/cc_by.png","name":"Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)","legal_code_url":"https://creativecommons.org/licenses/by/4.0/legalcode"},"conference":{"location":"Paris, France","start_date":"2023-07-17","end_date":"2023-07-22","name":"CAV: Computer Aided Verification"},"date_published":"2023-07-18T00:00:00Z","quality_controlled":"1","title":"Monitoring algorithmic fairness","ddc":["000"],"date_created":"2023-07-25T18:32:40Z","oa_version":"Published Version","acknowledgement":"This work is supported by the European Research Council under Grant No.: ERC-2020-AdG101020093.","project":[{"call_identifier":"H2020","_id":"62781420-2b32-11ec-9570-8d9b63373d4d","name":"Vigilant Algorithmic Monitoring of Software","grant_number":"101020093"}],"_id":"13310","article_processing_charge":"Yes (in subscription journal)","publication_status":"published","has_accepted_license":"1","date_updated":"2024-10-09T21:06:05Z","publication_identifier":{"eisbn":["9783031377037"],"eissn":["1611-3349"],"issn":["0302-9743"],"isbn":["9783031377020"]},"intvolume":" 13965","page":"358–382","volume":13965,"oa":1,"status":"public","doi":"10.1007/978-3-031-37703-7_17","file_date_updated":"2023-07-31T08:11:20Z","day":"18","external_id":{"arxiv":["2305.15979"]},"abstract":[{"lang":"eng","text":"Machine-learned systems are in widespread use for making decisions about humans, and it is important that they are fair, i.e., not biased against individuals based on sensitive attributes. We present runtime verification of algorithmic fairness for systems whose models are unknown, but are assumed to have a Markov chain structure. We introduce a specification language that can model many common algorithmic fairness properties, such as demographic parity, equal opportunity, and social burden. We build monitors that observe a long sequence of events as generated by a given system, and output, after each observation, a quantitative estimate of how fair or biased the system was on that run until that point in time. The estimate is proven to be correct modulo a variable error bound and a given confidence level, where the error bound gets tighter as the observed sequence gets longer. 
  "abstract": [
    {
      "lang": "eng",
      "text": "Machine-learned systems are in widespread use for making decisions about humans, and it is important that they are fair, i.e., not biased against individuals based on sensitive attributes. We present runtime verification of algorithmic fairness for systems whose models are unknown, but are assumed to have a Markov chain structure. We introduce a specification language that can model many common algorithmic fairness properties, such as demographic parity, equal opportunity, and social burden. We build monitors that observe a long sequence of events as generated by a given system, and output, after each observation, a quantitative estimate of how fair or biased the system was on that run until that point in time. The estimate is proven to be correct modulo a variable error bound and a given confidence level, where the error bound gets tighter as the observed sequence gets longer. Our monitors are of two types, and use, respectively, frequentist and Bayesian statistical inference techniques. While the frequentist monitors compute estimates that are objectively correct with respect to the ground truth, the Bayesian monitors compute estimates that are correct subject to a given prior belief about the system’s model. Using a prototype implementation, we show how we can monitor whether a bank is fair in giving loans to applicants from different social backgrounds, and whether a college is fair in admitting students while maintaining a reasonable financial burden on society. Although they exhibit different theoretical complexities in certain cases, in our experiments both frequentist and Bayesian monitors took less than a millisecond to update their verdicts after each observation."
    }
  ],
  "year": "2023",
  "ec_funded": 1,
  "publication": "Computer Aided Verification",
  "department": [{"_id": "GradSch"}, {"_id": "ToHe"}],
  "citation": {
    "apa": "Henzinger, T. A., Karimi, M., Kueffner, K., & Mallik, K. (2023). Monitoring algorithmic fairness. In Computer Aided Verification (Vol. 13965, pp. 358–382). Paris, France: Springer Nature. https://doi.org/10.1007/978-3-031-37703-7_17",
    "short": "T.A. Henzinger, M. Karimi, K. Kueffner, K. Mallik, in: Computer Aided Verification, Springer Nature, 2023, pp. 358–382.",
    "ista": "Henzinger TA, Karimi M, Kueffner K, Mallik K. 2023. Monitoring algorithmic fairness. Computer Aided Verification. CAV: Computer Aided Verification, LNCS, vol. 13965, 358–382.",
    "ieee": "T. A. Henzinger, M. Karimi, K. Kueffner, and K. Mallik, “Monitoring algorithmic fairness,” in Computer Aided Verification, Paris, France, 2023, vol. 13965, pp. 358–382.",
    "chicago": "Henzinger, Thomas A, Mahyar Karimi, Konstantin Kueffner, and Kaushik Mallik. “Monitoring Algorithmic Fairness.” In Computer Aided Verification, 13965:358–382. Springer Nature, 2023. https://doi.org/10.1007/978-3-031-37703-7_17.",
    "mla": "Henzinger, Thomas A., et al. “Monitoring Algorithmic Fairness.” Computer Aided Verification, vol. 13965, Springer Nature, 2023, pp. 358–382, doi:10.1007/978-3-031-37703-7_17.",
    "ama": "Henzinger TA, Karimi M, Kueffner K, Mallik K. Monitoring algorithmic fairness. In: Computer Aided Verification. Vol 13965. Springer Nature; 2023:358–382. doi:10.1007/978-3-031-37703-7_17"
  }
}