Nikola H Konstantinov
Graduate School
Lampert Group
9 Publications
2022 | Published | Conference Paper | IST-REx-ID: 13241

Konstantinov, Nikola H, and Christoph Lampert. “On the Impossibility of Fairness-Aware Learning from Corrupted Data.” In Proceedings of Machine Learning Research, 171:59–83. ML Research Press, 2022.
2022 | Published | Journal Article | IST-REx-ID: 12495

Iofinova, Eugenia B, Nikola H Konstantinov, and Christoph Lampert. “FLEA: Provably Robust Fair Multisource Learning from Unreliable Training Data.” Transactions on Machine Learning Research. ML Research Press, 2022.
2022 | Published | Journal Article | IST-REx-ID: 10802

Konstantinov, Nikola H, and Christoph Lampert. “Fairness-Aware PAC Learning from Corrupted Data.” Journal of Machine Learning Research. ML Research Press, 2022.
2022 | Published | Thesis | IST-REx-ID: 10799

Konstantinov, Nikola H. “Robustness and Fairness in Machine Learning.” Institute of Science and Technology Austria, 2022. https://doi.org/10.15479/at:ista:10799.
2021 | Draft | Preprint | IST-REx-ID: 10803

Konstantinov, Nikola H, and Christoph Lampert. “Fairness through Regularization for Learning to Rank.” ArXiv, n.d. https://doi.org/10.48550/arXiv.2102.05996.
2020 | Published | Conference Paper | IST-REx-ID: 8724

Konstantinov, Nikola H, Elias Frantar, Dan-Adrian Alistarh, and Christoph Lampert. “On the Sample Complexity of Adversarial Multi-Source PAC Learning.” In Proceedings of the 37th International Conference on Machine Learning, 119:5416–25. ML Research Press, 2020.
2019 | Published | Conference Paper | IST-REx-ID: 6590

Konstantinov, Nikola H, and Christoph Lampert. “Robust Learning from Untrusted Sources.” In Proceedings of the 36th International Conference on Machine Learning, 97:3488–98. ML Research Press, 2019.
2018 | Published | Conference Paper | IST-REx-ID: 5962

Alistarh, Dan-Adrian, Christopher De Sa, and Nikola H Konstantinov. “The Convergence of Stochastic Gradient Descent in Asynchronous Shared Memory.” In Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing - PODC ’18, 169–78. ACM Press, 2018. https://doi.org/10.1145/3212734.3212763.
2018 | Published | Conference Paper | IST-REx-ID: 6589

Alistarh, Dan-Adrian, Torsten Hoefler, Mikael Johansson, Nikola H Konstantinov, Sarit Khirirat, and Cedric Renggli. “The Convergence of Sparsified Gradient Methods.” In Advances in Neural Information Processing Systems 31, Volume 2018:5973–83. Neural Information Processing Systems Foundation, 2018.