---
_id: '13310'
abstract:
- lang: eng
  text: Machine-learned systems are in widespread use for making decisions about humans,
    and it is important that they are fair, i.e., not biased against individuals based
    on sensitive attributes. We present runtime verification of algorithmic fairness
    for systems whose models are unknown, but are assumed to have a Markov chain structure.
    We introduce a specification language that can model many common algorithmic fairness
    properties, such as demographic parity, equal opportunity, and social burden.
    We build monitors that observe a long sequence of events as generated by a given
    system, and output, after each observation, a quantitative estimate of how fair
    or biased the system was on that run until that point in time. The estimate is
    proven to be correct modulo a variable error bound and a given confidence level,
    where the error bound gets tighter as the observed sequence gets longer. Our monitors
    are of two types, and use, respectively, frequentist and Bayesian statistical
    inference techniques. While the frequentist monitors compute estimates that are
    objectively correct with respect to the ground truth, the Bayesian monitors compute
    estimates that are correct subject to a given prior belief about the system’s
    model. Using a prototype implementation, we show how we can monitor if a bank
    is fair in giving loans to applicants from different social backgrounds, and if
    a college is fair in admitting students while maintaining a reasonable financial
    burden on society. Although they exhibit different theoretical complexities
    in certain cases, in our experiments, both frequentist and Bayesian monitors took
    less than a millisecond to update their verdicts after each observation.
acknowledgement: 'This work is supported by the European Research Council under Grant
  No.: ERC-2020-AdG101020093.'
alternative_title:
- LNCS
article_processing_charge: 'Yes (in subscription journal)'
arxiv: 1
author:
- first_name: Thomas A
  full_name: Henzinger, Thomas A
  id: 40876CD8-F248-11E8-B48F-1D18A9856A87
  last_name: Henzinger
  orcid: 0000-0002-2985-7724
- first_name: Mahyar
  full_name: Karimi, Mahyar
  id: 6e5417ba-5355-11ee-ae5a-94c2e510b26b
  last_name: Karimi
  orcid: 0009-0005-0820-1696
- first_name: Konstantin
  full_name: Kueffner, Konstantin
  id: 8121a2d0-dc85-11ea-9058-af578f3b4515
  last_name: Kueffner
  orcid: 0000-0001-8974-2542
- first_name: Kaushik
  full_name: Mallik, Kaushik
  id: 0834ff3c-6d72-11ec-94e0-b5b0a4fb8598
  last_name: Mallik
  orcid: 0000-0001-9864-7475
citation:
  ama: 'Henzinger TA, Karimi M, Kueffner K, Mallik K. Monitoring algorithmic fairness.
    In: <i>Computer Aided Verification</i>. Vol 13965. Springer Nature; 2023:358–382.
    doi:<a href="https://doi.org/10.1007/978-3-031-37703-7_17">10.1007/978-3-031-37703-7_17</a>'
  apa: 'Henzinger, T. A., Karimi, M., Kueffner, K., &#38; Mallik, K. (2023). Monitoring
    algorithmic fairness. In <i>Computer Aided Verification</i> (Vol. 13965, pp. 358–382).
    Paris, France: Springer Nature. <a href="https://doi.org/10.1007/978-3-031-37703-7_17">https://doi.org/10.1007/978-3-031-37703-7_17</a>'
  chicago: Henzinger, Thomas A, Mahyar Karimi, Konstantin Kueffner, and Kaushik Mallik.
    “Monitoring Algorithmic Fairness.” In <i>Computer Aided Verification</i>, 13965:358–382.
    Springer Nature, 2023. <a href="https://doi.org/10.1007/978-3-031-37703-7_17">https://doi.org/10.1007/978-3-031-37703-7_17</a>.
  ieee: T. A. Henzinger, M. Karimi, K. Kueffner, and K. Mallik, “Monitoring algorithmic
    fairness,” in <i>Computer Aided Verification</i>, Paris, France, 2023, vol. 13965,
    pp. 358–382.
  ista: 'Henzinger TA, Karimi M, Kueffner K, Mallik K. 2023. Monitoring algorithmic
    fairness. Computer Aided Verification. CAV: Computer Aided Verification, LNCS,
    vol. 13965, 358–382.'
  mla: Henzinger, Thomas A., et al. “Monitoring Algorithmic Fairness.” <i>Computer
    Aided Verification</i>, vol. 13965, Springer Nature, 2023, pp. 358–382, doi:<a
    href="https://doi.org/10.1007/978-3-031-37703-7_17">10.1007/978-3-031-37703-7_17</a>.
  short: T.A. Henzinger, M. Karimi, K. Kueffner, K. Mallik, in:, Computer Aided Verification,
    Springer Nature, 2023, pp. 358–382.
conference:
  end_date: 2023-07-22
  location: Paris, France
  name: 'CAV: Computer Aided Verification'
  start_date: 2023-07-17
corr_author: '1'
date_created: 2023-07-25T18:32:40Z
date_published: 2023-07-18T00:00:00Z
date_updated: 2026-01-21T07:24:31Z
day: '18'
ddc:
- '000'
department:
- _id: GradSch
- _id: ToHe
doi: 10.1007/978-3-031-37703-7_17
ec_funded: 1
external_id:
  arxiv:
  - '2305.15979'
  isi:
  - '001310804800017'
file:
- access_level: open_access
  checksum: ccaf94bf7d658ba012c016e11869b54c
  content_type: application/pdf
  creator: dernst
  date_created: 2023-07-31T08:11:20Z
  date_updated: 2023-07-31T08:11:20Z
  file_id: '13327'
  file_name: 2023_LNCS_CAV_HenzingerT.pdf
  file_size: 647760
  relation: main_file
  success: 1
file_date_updated: 2023-07-31T08:11:20Z
has_accepted_license: '1'
intvolume: '13965'
isi: 1
language:
- iso: eng
license: https://creativecommons.org/licenses/by/4.0/
month: '07'
oa: 1
oa_version: Published Version
page: 358–382
project:
- _id: 62781420-2b32-11ec-9570-8d9b63373d4d
  call_identifier: H2020
  grant_number: '101020093'
  name: Vigilant Algorithmic Monitoring of Software
publication: Computer Aided Verification
publication_identifier:
  eisbn:
  - '9783031377037'
  eissn:
  - 1611-3349
  isbn:
  - '9783031377020'
  issn:
  - 0302-9743
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
scopus_import: '1'
status: public
title: Monitoring algorithmic fairness
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 13965
year: '2023'
...
