---
OA_place: publisher
_id: '10799'
abstract:
- lang: eng
  text: "Because of the increasing popularity of machine learning methods, it is becoming
    important to understand the impact of learned components on automated decision-making
    systems and to guarantee that their consequences are beneficial to society. In
    other words, it is necessary to ensure that machine learning is sufficiently trustworthy
    to be used in real-world applications. This thesis studies two properties of machine
    learning models that are highly desirable for the sake of reliability: robustness
    and fairness. In the first part of the thesis we study the robustness of learning
    algorithms to training data corruption. Previous work has shown that machine learning
    models are vulnerable to a range of training set issues, varying from label
    noise through systematic biases to worst-case data manipulations. This is an especially
    relevant problem from a present perspective, since modern machine learning methods
    are particularly data hungry and therefore practitioners often have to rely on
    data collected from various external sources, e.g. from the Internet, from app
    users or via crowdsourcing. Naturally, such sources vary greatly in the quality
    and reliability of the data they provide. With these considerations in mind,
    we study the problem of designing machine learning algorithms that are robust
    to corruptions in data coming from multiple sources. We show that, in contrast
    to the case of a single dataset with outliers, successful learning within this
    model is possible both theoretically and practically, even under worst-case data
    corruptions. The second part of this thesis deals with fairness-aware machine
    learning. There are multiple areas where machine learning models have shown promising
    results, but where careful consideration is required in order to avoid discriminatory
    decisions taken by such learned components. Ensuring fairness can be particularly
    challenging, because real-world training datasets are expected to contain various
    forms of historical bias that may affect the learning process. In this thesis
    we show that data corruption can indeed render the problem of achieving fairness
    impossible, by tightly characterizing the theoretical limits of fair learning
    under worst-case data manipulations. However, assuming access to clean data, we
    also show how fairness-aware learning can be made practical in contexts beyond
    binary classification, in particular in the challenging learning-to-rank setting."
alternative_title:
- ISTA Thesis
article_processing_charge: No
author:
- first_name: Nikola H
  full_name: Konstantinov, Nikola H
  id: 4B9D76E4-F248-11E8-B48F-1D18A9856A87
  last_name: Konstantinov
  orcid: 0009-0009-5204-7621
citation:
  ama: Konstantinov NH. Robustness and fairness in machine learning. 2022. doi:<a
    href="https://doi.org/10.15479/at:ista:10799">10.15479/at:ista:10799</a>
  apa: Konstantinov, N. H. (2022). <i>Robustness and fairness in machine learning</i>.
    Institute of Science and Technology Austria. <a href="https://doi.org/10.15479/at:ista:10799">https://doi.org/10.15479/at:ista:10799</a>
  chicago: Konstantinov, Nikola H. “Robustness and Fairness in Machine Learning.”
    Institute of Science and Technology Austria, 2022. <a href="https://doi.org/10.15479/at:ista:10799">https://doi.org/10.15479/at:ista:10799</a>.
  ieee: N. H. Konstantinov, “Robustness and fairness in machine learning,” Institute
    of Science and Technology Austria, 2022.
  ista: Konstantinov NH. 2022. Robustness and fairness in machine learning. Institute
    of Science and Technology Austria.
  mla: Konstantinov, Nikola H. <i>Robustness and Fairness in Machine Learning</i>.
    Institute of Science and Technology Austria, 2022, doi:<a href="https://doi.org/10.15479/at:ista:10799">10.15479/at:ista:10799</a>.
  short: N.H. Konstantinov, Robustness and Fairness in Machine Learning, Institute
    of Science and Technology Austria, 2022.
corr_author: '1'
date_created: 2022-02-28T13:03:49Z
date_published: 2022-03-08T00:00:00Z
date_updated: 2026-04-07T14:19:48Z
day: '08'
ddc:
- '000'
degree_awarded: PhD
department:
- _id: GradSch
- _id: ChLa
doi: 10.15479/at:ista:10799
ec_funded: 1
file:
- access_level: open_access
  checksum: 626bc523ae8822d20e635d0e2d95182e
  content_type: application/pdf
  creator: nkonstan
  date_created: 2022-03-06T11:42:54Z
  date_updated: 2022-03-06T11:42:54Z
  file_id: '10823'
  file_name: thesis.pdf
  file_size: 4204905
  relation: main_file
  success: 1
- access_level: closed
  checksum: e2ca2b88350ac8ea1515b948885cbcb1
  content_type: application/x-zip-compressed
  creator: nkonstan
  date_created: 2022-03-06T11:42:57Z
  date_updated: 2022-03-10T12:11:48Z
  file_id: '10824'
  file_name: thesis.zip
  file_size: 22841103
  relation: source_file
file_date_updated: 2022-03-10T12:11:48Z
has_accepted_license: '1'
keyword:
- robustness
- fairness
- machine learning
- PAC learning
- adversarial learning
language:
- iso: eng
month: '03'
oa: 1
oa_version: Published Version
page: '176'
project:
- _id: 2564DBCA-B435-11E9-9278-68D0E5697425
  call_identifier: H2020
  grant_number: '665385'
  name: International IST Doctoral Program
publication_identifier:
  isbn:
  - 978-3-99078-015-2
  issn:
  - 2663-337X
publication_status: published
publisher: Institute of Science and Technology Austria
related_material:
  record:
  - id: '10802'
    relation: part_of_dissertation
    status: public
  - id: '10803'
    relation: part_of_dissertation
    status: public
  - id: '6590'
    relation: part_of_dissertation
    status: public
  - id: '8724'
    relation: part_of_dissertation
    status: public
status: public
supervisor:
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
title: Robustness and fairness in machine learning
type: dissertation
user_id: ba8df636-2132-11f1-aed0-ed93e2281fdd
year: '2022'
...
