---
_id: '13992'
abstract:
- lang: eng
  text: Understanding the chirality of molecular reaction pathways is essential for
    a broad range of fundamental and applied sciences. However, the current ability
    to probe chirality on the time scale of primary processes underlying chemical
    reactions remains very limited. Here, we demonstrate time-resolved photoelectron
    circular dichroism (TRPECD) with ultrashort circularly polarized vacuum-ultraviolet
    (VUV) pulses from a tabletop source. We demonstrate the capabilities of VUV-TRPECD
    by resolving the chirality changes in time during the photodissociation of atomic
    iodine from two chiral molecules. We identify several general key features of
    TRPECD, which include the ability to probe dynamical chirality along the complete
    photochemical reaction path, the sensitivity to the local chirality of the evolving
    scattering potential, and the influence of electron scattering off dissociating
    photofragments. Our results are interpreted by comparison with high-level ab initio
    molecular photoionization calculations of transient PECDs. Our
    experimental and theoretical techniques define a general approach to femtochirality.
article_number: abq2811
article_processing_charge: No
article_type: original
arxiv: 1
author:
- first_name: Vít
  full_name: Svoboda, Vít
  last_name: Svoboda
- first_name: Niraghatam Bhargava
  full_name: Ram, Niraghatam Bhargava
  last_name: Ram
- first_name: Denitsa Rangelova
  full_name: Baykusheva, Denitsa Rangelova
  id: 71b4d059-2a03-11ee-914d-dfa3beed6530
  last_name: Baykusheva
- first_name: Daniel
  full_name: Zindel, Daniel
  last_name: Zindel
- first_name: Max D. J.
  full_name: Waters, Max D. J.
  last_name: Waters
- first_name: Benjamin
  full_name: Spenger, Benjamin
  last_name: Spenger
- first_name: Manuel
  full_name: Ochsner, Manuel
  last_name: Ochsner
- first_name: Holger
  full_name: Herburger, Holger
  last_name: Herburger
- first_name: Jürgen
  full_name: Stohner, Jürgen
  last_name: Stohner
- first_name: Hans Jakob
  full_name: Wörner, Hans Jakob
  last_name: Wörner
citation:
  ama: Svoboda V, Ram NB, Baykusheva DR, et al. Femtosecond photoelectron circular
    dichroism of chemical reactions. <i>Science Advances</i>. 2022;8(28). doi:<a href="https://doi.org/10.1126/sciadv.abq2811">10.1126/sciadv.abq2811</a>
  apa: Svoboda, V., Ram, N. B., Baykusheva, D. R., Zindel, D., Waters, M. D. J., Spenger,
    B., … Wörner, H. J. (2022). Femtosecond photoelectron circular dichroism of chemical
    reactions. <i>Science Advances</i>. American Association for the Advancement of
    Science. <a href="https://doi.org/10.1126/sciadv.abq2811">https://doi.org/10.1126/sciadv.abq2811</a>
  chicago: Svoboda, Vít, Niraghatam Bhargava Ram, Denitsa Rangelova Baykusheva, Daniel
    Zindel, Max D. J. Waters, Benjamin Spenger, Manuel Ochsner, Holger Herburger,
    Jürgen Stohner, and Hans Jakob Wörner. “Femtosecond Photoelectron Circular Dichroism
    of Chemical Reactions.” <i>Science Advances</i>. American Association for the
    Advancement of Science, 2022. <a href="https://doi.org/10.1126/sciadv.abq2811">https://doi.org/10.1126/sciadv.abq2811</a>.
  ieee: V. Svoboda <i>et al.</i>, “Femtosecond photoelectron circular dichroism of
    chemical reactions,” <i>Science Advances</i>, vol. 8, no. 28. American Association
    for the Advancement of Science, 2022.
  ista: Svoboda V, Ram NB, Baykusheva DR, Zindel D, Waters MDJ, Spenger B, Ochsner
    M, Herburger H, Stohner J, Wörner HJ. 2022. Femtosecond photoelectron circular
    dichroism of chemical reactions. Science Advances. 8(28), abq2811.
  mla: Svoboda, Vít, et al. “Femtosecond Photoelectron Circular Dichroism of Chemical
    Reactions.” <i>Science Advances</i>, vol. 8, no. 28, abq2811, American Association
    for the Advancement of Science, 2022, doi:<a href="https://doi.org/10.1126/sciadv.abq2811">10.1126/sciadv.abq2811</a>.
  short: V. Svoboda, N.B. Ram, D.R. Baykusheva, D. Zindel, M.D.J. Waters, B. Spenger,
    M. Ochsner, H. Herburger, J. Stohner, H.J. Wörner, Science Advances 8 (2022).
date_created: 2023-08-09T13:08:04Z
date_published: 2022-07-15T00:00:00Z
date_updated: 2023-08-22T07:24:01Z
day: '15'
doi: 10.1126/sciadv.abq2811
extern: '1'
external_id:
  arxiv:
  - '2206.04099'
  pmid:
  - '35857523'
intvolume: '         8'
issue: '28'
keyword:
- Multidisciplinary
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.1126/sciadv.abq2811
month: '07'
oa: 1
oa_version: Published Version
pmid: 1
publication: Science Advances
publication_identifier:
  eissn:
  - 2375-2548
publication_status: published
publisher: American Association for the Advancement of Science
quality_controlled: '1'
scopus_import: '1'
status: public
title: Femtosecond photoelectron circular dichroism of chemical reactions
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 8
year: '2022'
...
---
_id: '13993'
abstract:
- lang: eng
  text: Photoionization is a process taking place on attosecond time scales. How its
    properties evolve from isolated particles to the condensed phase is an open question
    of both fundamental and practical relevance. Here, we review recent work that
    has advanced the study of photoionization dynamics from atoms to molecules, clusters
    and the liquid phase. The first measurements of molecular photoionization delays
    have revealed the attosecond dynamics of electron emission from a molecular shape
    resonance and their sensitivity to the molecular potential. Using electron-ion
    coincidence spectroscopy, these measurements have been extended from isolated molecules
    to clusters. A continuous increase of the delays with the water-cluster size has
    been observed up to a size of 4-5 molecules, followed by a saturation towards
    larger clusters. Comparison with calculations has revealed a correlation of the
    time delay with the spatial extension of the created electron hole. Using cylindrical
    liquid-microjet techniques, these measurements have also been extended to liquid
    water, revealing a delay relative to isolated water molecules that was very similar
    to that of the largest water clusters studied. Detailed modeling based on Monte-Carlo
    simulations confirmed that these delays are dominated by the contributions of
    the first two solvation shells, which agrees with the results of the cluster measurements.
    These combined results open the perspective of experimentally characterizing the
    delocalization of electronic wave functions in complex systems and studying their
    evolution on attosecond time scales.
article_processing_charge: No
article_type: original
author:
- first_name: Xiaochun
  full_name: Gong, Xiaochun
  last_name: Gong
- first_name: Inga
  full_name: Jordan, Inga
  last_name: Jordan
- first_name: Martin
  full_name: Huppert, Martin
  last_name: Huppert
- first_name: Saijoscha
  full_name: Heck, Saijoscha
  last_name: Heck
- first_name: Denitsa Rangelova
  full_name: Baykusheva, Denitsa Rangelova
  id: 71b4d059-2a03-11ee-914d-dfa3beed6530
  last_name: Baykusheva
- first_name: Denis
  full_name: Jelovina, Denis
  last_name: Jelovina
- first_name: Axel
  full_name: Schild, Axel
  last_name: Schild
- first_name: Hans Jakob
  full_name: Wörner, Hans Jakob
  last_name: Wörner
citation:
  ama: 'Gong X, Jordan I, Huppert M, et al. Attosecond photoionization dynamics: from
    molecules over clusters to the liquid phase. <i>Chimia</i>. 2022;76(6):520-528.
    doi:<a href="https://doi.org/10.2533/chimia.2022.520">10.2533/chimia.2022.520</a>'
  apa: 'Gong, X., Jordan, I., Huppert, M., Heck, S., Baykusheva, D. R., Jelovina,
    D., … Wörner, H. J. (2022). Attosecond photoionization dynamics: from molecules
    over clusters to the liquid phase. <i>Chimia</i>. Swiss Chemical Society. <a href="https://doi.org/10.2533/chimia.2022.520">https://doi.org/10.2533/chimia.2022.520</a>'
  chicago: 'Gong, Xiaochun, Inga Jordan, Martin Huppert, Saijoscha Heck, Denitsa Rangelova
    Baykusheva, Denis Jelovina, Axel Schild, and Hans Jakob Wörner. “Attosecond Photoionization
    Dynamics: From Molecules over Clusters to the Liquid Phase.” <i>Chimia</i>. Swiss
    Chemical Society, 2022. <a href="https://doi.org/10.2533/chimia.2022.520">https://doi.org/10.2533/chimia.2022.520</a>.'
  ieee: 'X. Gong <i>et al.</i>, “Attosecond photoionization dynamics: from molecules
    over clusters to the liquid phase,” <i>Chimia</i>, vol. 76, no. 6. Swiss Chemical
    Society, pp. 520–528, 2022.'
  ista: 'Gong X, Jordan I, Huppert M, Heck S, Baykusheva DR, Jelovina D, Schild A,
    Wörner HJ. 2022. Attosecond photoionization dynamics: from molecules over clusters
    to the liquid phase. Chimia. 76(6), 520–528.'
  mla: 'Gong, Xiaochun, et al. “Attosecond Photoionization Dynamics: From Molecules
    over Clusters to the Liquid Phase.” <i>Chimia</i>, vol. 76, no. 6, Swiss Chemical
    Society, 2022, pp. 520–28, doi:<a href="https://doi.org/10.2533/chimia.2022.520">10.2533/chimia.2022.520</a>.'
  short: X. Gong, I. Jordan, M. Huppert, S. Heck, D.R. Baykusheva, D. Jelovina, A.
    Schild, H.J. Wörner, Chimia 76 (2022) 520–528.
date_created: 2023-08-09T13:08:15Z
date_published: 2022-06-29T00:00:00Z
date_updated: 2023-08-22T07:26:39Z
day: '29'
doi: 10.2533/chimia.2022.520
extern: '1'
intvolume: '        76'
issue: '6'
keyword:
- General Medicine
- General Chemistry
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.2533/chimia.2022.520
month: '06'
oa: 1
oa_version: Published Version
page: 520-528
publication: Chimia
publication_identifier:
  eissn:
  - 2673-2424
  issn:
  - 0009-4293
publication_status: published
publisher: Swiss Chemical Society
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'Attosecond photoionization dynamics: from molecules over clusters to the liquid
  phase'
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 76
year: '2022'
...
---
_id: '13994'
abstract:
- lang: eng
  text: "Ultrafast lasers are an increasingly important tool to control and stabilize
    emergent phases in quantum materials. Among a variety of possible excitation protocols,
    a particularly intriguing route is the direct light engineering of microscopic
    electronic parameters, such as the electron hopping and the local Coulomb repulsion
    (Hubbard \r\nU). In this work, we use time-resolved x-ray absorption spectroscopy
    to demonstrate the light-induced renormalization of the Hubbard U in a cuprate
    superconductor, La1.905Ba0.095CuO4. We show that intense femtosecond laser pulses
    induce a substantial redshift of the upper Hubbard band while leaving the Zhang-Rice
    singlet energy unaffected. By comparing the experimental data to time-dependent
    spectra of single- and three-band Hubbard models, we assign this effect to an
    approximately 140-meV reduction of the on-site Coulomb repulsion on the copper
    sites. Our demonstration of a dynamical Hubbard U renormalization in a copper
    oxide paves the way to a novel strategy for the manipulation of superconductivity
    and magnetism as well as to the realization of other long-range-ordered phases
    in light-driven quantum materials."
article_number: '011013'
article_processing_charge: No
article_type: original
arxiv: 1
author:
- first_name: Denitsa Rangelova
  full_name: Baykusheva, Denitsa Rangelova
  id: 71b4d059-2a03-11ee-914d-dfa3beed6530
  last_name: Baykusheva
- first_name: Hoyoung
  full_name: Jang, Hoyoung
  last_name: Jang
- first_name: Ali A.
  full_name: Husain, Ali A.
  last_name: Husain
- first_name: Sangjun
  full_name: Lee, Sangjun
  last_name: Lee
- first_name: Sophia F. R.
  full_name: TenHuisen, Sophia F. R.
  last_name: TenHuisen
- first_name: Preston
  full_name: Zhou, Preston
  last_name: Zhou
- first_name: Sunwook
  full_name: Park, Sunwook
  last_name: Park
- first_name: Hoon
  full_name: Kim, Hoon
  last_name: Kim
- first_name: Jin-Kwang
  full_name: Kim, Jin-Kwang
  last_name: Kim
- first_name: Hyeong-Do
  full_name: Kim, Hyeong-Do
  last_name: Kim
- first_name: Minseok
  full_name: Kim, Minseok
  last_name: Kim
- first_name: Sang-Youn
  full_name: Park, Sang-Youn
  last_name: Park
- first_name: Peter
  full_name: Abbamonte, Peter
  last_name: Abbamonte
- first_name: B. J.
  full_name: Kim, B. J.
  last_name: Kim
- first_name: G. D.
  full_name: Gu, G. D.
  last_name: Gu
- first_name: Yao
  full_name: Wang, Yao
  last_name: Wang
- first_name: Matteo
  full_name: Mitrano, Matteo
  last_name: Mitrano
citation:
  ama: Baykusheva DR, Jang H, Husain AA, et al. Ultrafast renormalization of the on-site
    Coulomb repulsion in a cuprate superconductor. <i>Physical Review X</i>. 2022;12(1).
    doi:<a href="https://doi.org/10.1103/physrevx.12.011013">10.1103/physrevx.12.011013</a>
  apa: Baykusheva, D. R., Jang, H., Husain, A. A., Lee, S., TenHuisen, S. F. R., Zhou,
    P., … Mitrano, M. (2022). Ultrafast renormalization of the on-site Coulomb repulsion
    in a cuprate superconductor. <i>Physical Review X</i>. American Physical Society.
    <a href="https://doi.org/10.1103/physrevx.12.011013">https://doi.org/10.1103/physrevx.12.011013</a>
  chicago: Baykusheva, Denitsa Rangelova, Hoyoung Jang, Ali A. Husain, Sangjun Lee,
    Sophia F. R. TenHuisen, Preston Zhou, Sunwook Park, et al. “Ultrafast Renormalization
    of the On-Site Coulomb Repulsion in a Cuprate Superconductor.” <i>Physical Review
    X</i>. American Physical Society, 2022. <a href="https://doi.org/10.1103/physrevx.12.011013">https://doi.org/10.1103/physrevx.12.011013</a>.
  ieee: D. R. Baykusheva <i>et al.</i>, “Ultrafast renormalization of the on-site
    Coulomb repulsion in a cuprate superconductor,” <i>Physical Review X</i>, vol.
    12, no. 1. American Physical Society, 2022.
  ista: Baykusheva DR, Jang H, Husain AA, Lee S, TenHuisen SFR, Zhou P, Park S, Kim
    H, Kim J-K, Kim H-D, Kim M, Park S-Y, Abbamonte P, Kim BJ, Gu GD, Wang Y, Mitrano
    M. 2022. Ultrafast renormalization of the on-site Coulomb repulsion in a cuprate
    superconductor. Physical Review X. 12(1), 011013.
  mla: Baykusheva, Denitsa Rangelova, et al. “Ultrafast Renormalization of the On-Site
    Coulomb Repulsion in a Cuprate Superconductor.” <i>Physical Review X</i>, vol.
    12, no. 1, 011013, American Physical Society, 2022, doi:<a href="https://doi.org/10.1103/physrevx.12.011013">10.1103/physrevx.12.011013</a>.
  short: D.R. Baykusheva, H. Jang, A.A. Husain, S. Lee, S.F.R. TenHuisen, P. Zhou,
    S. Park, H. Kim, J.-K. Kim, H.-D. Kim, M. Kim, S.-Y. Park, P. Abbamonte, B.J.
    Kim, G.D. Gu, Y. Wang, M. Mitrano, Physical Review X 12 (2022).
date_created: 2023-08-09T13:08:26Z
date_published: 2022-01-20T00:00:00Z
date_updated: 2024-10-14T12:23:26Z
day: '20'
doi: 10.1103/physrevx.12.011013
extern: '1'
external_id:
  arxiv:
  - '2109.13229'
intvolume: '        12'
issue: '1'
keyword:
- General Physics and Astronomy
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.1103/PhysRevX.12.011013
month: '01'
oa: 1
oa_version: Published Version
publication: Physical Review X
publication_identifier:
  eissn:
  - 2160-3308
publication_status: published
publisher: American Physical Society
quality_controlled: '1'
scopus_import: '1'
status: public
title: Ultrafast renormalization of the on-site Coulomb repulsion in a cuprate superconductor
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 12
year: '2022'
...
---
_id: '14093'
abstract:
- lang: eng
  text: 'We propose a stochastic conditional gradient method (CGM) for minimizing
    convex finite-sum objectives formed as a sum of smooth and non-smooth terms. Existing
    CGM variants for this template either suffer from slow convergence rates, or require
    carefully increasing the batch size over the course of the algorithm’s execution,
    which leads to computing full gradients. In contrast, the proposed method, equipped
    with a stochastic average gradient (SAG) estimator, requires only one sample per
    iteration. Nevertheless, it guarantees fast convergence rates on par with more
    sophisticated variance reduction techniques. In applications we put special emphasis
    on problems with a large number of separable constraints. Such problems are prevalent
    among semidefinite programming (SDP) formulations arising in machine learning
    and theoretical computer science. We provide numerical experiments on matrix completion,
    unsupervised clustering, and sparsest-cut SDPs.'
alternative_title:
- PMLR
article_processing_charge: No
arxiv: 1
author:
- first_name: Gideon
  full_name: Dresdner, Gideon
  last_name: Dresdner
- first_name: Maria-Luiza
  full_name: Vladarean, Maria-Luiza
  last_name: Vladarean
- first_name: Gunnar
  full_name: Rätsch, Gunnar
  last_name: Rätsch
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Volkan
  full_name: Cevher, Volkan
  last_name: Cevher
- first_name: Alp
  full_name: Yurtsever, Alp
  last_name: Yurtsever
citation:
  ama: 'Dresdner G, Vladarean M-L, Rätsch G, Locatello F, Cevher V, Yurtsever A. Faster
    one-sample stochastic conditional gradient method for composite convex minimization.
    In: <i>Proceedings of the 25th International Conference on Artificial Intelligence
    and Statistics</i>. Vol 151. ML Research Press; 2022:8439-8457.'
  apa: 'Dresdner, G., Vladarean, M.-L., Rätsch, G., Locatello, F., Cevher, V., &#38;
    Yurtsever, A. (2022). Faster one-sample stochastic conditional gradient method
    for composite convex minimization. In <i>Proceedings of the 25th International
    Conference on Artificial Intelligence and Statistics</i> (Vol. 151, pp. 8439–8457).
    Virtual: ML Research Press.'
  chicago: Dresdner, Gideon, Maria-Luiza Vladarean, Gunnar Rätsch, Francesco Locatello,
    Volkan Cevher, and Alp Yurtsever. “Faster One-Sample Stochastic Conditional Gradient
    Method for Composite Convex Minimization.” In <i>Proceedings of the 25th International
    Conference on Artificial Intelligence and Statistics</i>, 151:8439–57. ML Research
    Press, 2022.
  ieee: G. Dresdner, M.-L. Vladarean, G. Rätsch, F. Locatello, V. Cevher, and A. Yurtsever,
    “Faster one-sample stochastic conditional gradient method for composite convex
    minimization,” in <i>Proceedings of the 25th International Conference on Artificial
    Intelligence and Statistics</i>, Virtual, 2022, vol. 151, pp. 8439–8457.
  ista: 'Dresdner G, Vladarean M-L, Rätsch G, Locatello F, Cevher V, Yurtsever A.
    2022. Faster one-sample stochastic conditional gradient method for composite
    convex minimization. Proceedings of the 25th International Conference on Artificial
    Intelligence and Statistics. AISTATS: Conference on Artificial Intelligence and
    Statistics, PMLR, vol. 151, 8439–8457.'
  mla: Dresdner, Gideon, et al. “Faster One-Sample Stochastic Conditional Gradient
    Method for Composite Convex Minimization.” <i>Proceedings of the 25th International
    Conference on Artificial Intelligence and Statistics</i>, vol. 151, ML Research
    Press, 2022, pp. 8439–57.
  short: G. Dresdner, M.-L. Vladarean, G. Rätsch, F. Locatello, V. Cevher, A. Yurtsever,
    in:, Proceedings of the 25th International Conference on Artificial Intelligence
    and Statistics, ML Research Press, 2022, pp. 8439–8457.
conference:
  end_date: 2022-03-30
  location: Virtual
  name: 'AISTATS: Conference on Artificial Intelligence and Statistics'
  start_date: 2022-03-28
date_created: 2023-08-21T09:27:43Z
date_published: 2022-04-01T00:00:00Z
date_updated: 2023-09-06T10:28:17Z
day: '01'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2202.13212'
intvolume: '       151'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2202.13212
month: '04'
oa: 1
oa_version: Preprint
page: 8439-8457
publication: Proceedings of the 25th International Conference on Artificial Intelligence
  and Statistics
publication_identifier:
  issn:
  - 2640-3498
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'Faster one-sample stochastic conditional gradient method for composite convex
  minimization'
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 151
year: '2022'
...
---
_id: '14098'
abstract:
- lang: eng
  text: Magnetic fields can drastically change predictions of evolutionary models
    of massive stars via mass-loss quenching, magnetic braking, and efficient angular
    momentum transport, which we aim to quantify in this work. We use the MESA software
    instrument to compute an extensive main-sequence grid of stellar structure and
    evolution models, as well as isochrones, accounting for the effects attributed
    to a surface fossil magnetic field. The grid is densely populated in initial mass
    (3–60 M⊙), surface equatorial magnetic field strength (0–50 kG), and metallicity
    (representative of the Solar neighbourhood and the Magellanic Clouds). We use
    two magnetic braking and two chemical mixing schemes and compare the model predictions
    for slowly rotating, nitrogen-enriched (‘Group 2’) stars with observations in
    the Large Magellanic Cloud. We quantify a range of initial field strengths that
    allow for producing Group 2 stars and find that typical values (up to a few kG)
    lead to solutions. Between the subgrids, we find notable departures in surface
    abundances and evolutionary paths. In our magnetic models, chemical mixing is
    always less efficient compared to non-magnetic models due to the rapid spin-down.
    We identify that quasi-chemically homogeneous main sequence evolution by efficient
    mixing could be prevented by fossil magnetic fields. We recommend comparing this
    grid of evolutionary models with spectropolarimetric and spectroscopic observations
    with the goals of (i) revisiting the derived stellar parameters of known magnetic
    stars, and (ii) observationally constraining the uncertain magnetic braking and
    chemical mixing schemes.
article_processing_charge: No
article_type: original
arxiv: 1
author:
- first_name: Z.
  full_name: Keszthelyi, Z.
  last_name: Keszthelyi
- first_name: A. de
  full_name: Koter, A. de
  last_name: Koter
- first_name: Ylva Louise Linsdotter
  full_name: Götberg, Ylva Louise Linsdotter
  id: d0648d0c-0f64-11ee-a2e0-dd0faa2e4f7d
  last_name: Götberg
  orcid: 0000-0002-6960-6911
- first_name: G.
  full_name: Meynet, G.
  last_name: Meynet
- first_name: S. A.
  full_name: Brands, S. A.
  last_name: Brands
- first_name: V.
  full_name: Petit, V.
  last_name: Petit
- first_name: M.
  full_name: Carrington, M.
  last_name: Carrington
- first_name: A.
  full_name: David-Uraz, A.
  last_name: David-Uraz
- first_name: S. T.
  full_name: Geen, S. T.
  last_name: Geen
- first_name: C.
  full_name: Georgy, C.
  last_name: Georgy
- first_name: R.
  full_name: Hirschi, R.
  last_name: Hirschi
- first_name: J.
  full_name: Puls, J.
  last_name: Puls
- first_name: K. J.
  full_name: Ramalatswa, K. J.
  last_name: Ramalatswa
- first_name: M. E.
  full_name: Shultz, M. E.
  last_name: Shultz
- first_name: A.
  full_name: ud-Doula, A.
  last_name: ud-Doula
citation:
  ama: 'Keszthelyi Z, Koter A de, Götberg YLL, et al. The effects of surface fossil
    magnetic fields on massive star evolution: IV. Grids of models at solar, LMC,
    and SMC metallicities. <i>Monthly Notices of the Royal Astronomical Society</i>.
    2022;517(2):2028-2055. doi:<a href="https://doi.org/10.1093/mnras/stac2598">10.1093/mnras/stac2598</a>'
  apa: 'Keszthelyi, Z., Koter, A. de, Götberg, Y. L. L., Meynet, G., Brands, S. A.,
    Petit, V., … ud-Doula, A. (2022). The effects of surface fossil magnetic
    fields on massive star evolution: IV. Grids of models at solar, LMC, and SMC metallicities.
    <i>Monthly Notices of the Royal Astronomical Society</i>. Oxford Academic. <a
    href="https://doi.org/10.1093/mnras/stac2598">https://doi.org/10.1093/mnras/stac2598</a>'
  chicago: 'Keszthelyi, Z., A. de Koter, Ylva Louise Linsdotter Götberg, G. Meynet,
    S. A. Brands, V. Petit, M. Carrington, et al. “The Effects of Surface Fossil Magnetic
    Fields on Massive Star Evolution: IV. Grids of Models at Solar, LMC, and SMC Metallicities.”
    <i>Monthly Notices of the Royal Astronomical Society</i>. Oxford Academic, 2022.
    <a href="https://doi.org/10.1093/mnras/stac2598">https://doi.org/10.1093/mnras/stac2598</a>.'
  ieee: 'Z. Keszthelyi <i>et al.</i>, “The effects of surface fossil magnetic fields
    on massive star evolution: IV. Grids of models at solar, LMC, and SMC metallicities,”
    <i>Monthly Notices of the Royal Astronomical Society</i>, vol. 517, no. 2. Oxford
    Academic, pp. 2028–2055, 2022.'
  ista: 'Keszthelyi Z, Koter A de, Götberg YLL, Meynet G, Brands SA, Petit V, Carrington
    M, David-Uraz A, Geen ST, Georgy C, Hirschi R, Puls J, Ramalatswa KJ, Shultz
    ME, ud-Doula A. 2022. The effects of surface fossil magnetic fields
    on massive star evolution: IV. Grids of models at solar, LMC, and SMC metallicities.
    Monthly Notices of the Royal Astronomical Society. 517(2), 2028–2055.'
  mla: 'Keszthelyi, Z., et al. “The Effects of Surface Fossil Magnetic Fields on Massive
    Star Evolution: IV. Grids of Models at Solar, LMC, and SMC Metallicities.” <i>Monthly
    Notices of the Royal Astronomical Society</i>, vol. 517, no. 2, Oxford Academic,
    2022, pp. 2028–55, doi:<a href="https://doi.org/10.1093/mnras/stac2598">10.1093/mnras/stac2598</a>.'
  short: Z. Keszthelyi, A. de Koter, Y.L.L. Götberg, G. Meynet, S.A. Brands, V. Petit,
    M. Carrington, A. David-Uraz, S.T. Geen, C. Georgy, R. Hirschi, J. Puls,
    K.J. Ramalatswa, M.E. Shultz, A. ud-Doula, Monthly Notices of the
    Royal Astronomical Society 517 (2022) 2028–2055.
date_created: 2023-08-21T10:11:21Z
date_published: 2022-12-01T00:00:00Z
date_updated: 2023-08-22T13:18:34Z
day: '01'
doi: 10.1093/mnras/stac2598
extern: '1'
external_id:
  arxiv:
  - '2209.06350'
intvolume: '       517'
issue: '2'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.1093/mnras/stac2598
month: '12'
oa: 1
oa_version: Published Version
page: 2028-2055
publication: Monthly Notices of the Royal Astronomical Society
publication_identifier:
  eissn:
  - 1365-2966
  issn:
  - 0035-8711
publication_status: published
publisher: Oxford Academic
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'The effects of surface fossil magnetic fields on massive star evolution: IV.
  Grids of models at solar, LMC, and SMC metallicities'
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 517
year: '2022'
...
---
_id: '14099'
abstract:
- lang: eng
  text: Magnetism can greatly impact the evolution of stars. In some stars with OBA
    spectral types there is direct evidence via the Zeeman effect for stable, large-scale
    magnetospheres, which lead to the spin-down of the stellar surface and reduced
    mass loss. So far, a comprehensive grid of stellar structure and evolution models
    accounting for these effects has been lacking. For this reason, we computed and studied
    models with two magnetic braking and two chemical mixing schemes in three metallicity
    environments with the MESA software instrument. We find notable differences between
    the subgrids, which affect the model predictions and thus the detailed characterisation
    of stars. We are able to quantify the impact of magnetic fields in terms of preventing
    quasi-chemically homogeneous evolution and producing slowly-rotating, nitrogen-enriched
    ("Group 2") stars. Our model grid is fully open access and open source.
article_number: '2211.07060'
article_processing_charge: No
arxiv: 1
author:
- first_name: Z.
  full_name: Keszthelyi, Z.
  last_name: Keszthelyi
- first_name: A. de
  full_name: Koter, A. de
  last_name: Koter
- first_name: Ylva Louise Linsdotter
  full_name: Götberg, Ylva Louise Linsdotter
  id: d0648d0c-0f64-11ee-a2e0-dd0faa2e4f7d
  last_name: Götberg
  orcid: 0000-0002-6960-6911
- first_name: G.
  full_name: Meynet, G.
  last_name: Meynet
- first_name: S. A.
  full_name: Brands, S. A.
  last_name: Brands
- first_name: V.
  full_name: Petit, V.
  last_name: Petit
- first_name: M.
  full_name: Carrington, M.
  last_name: Carrington
- first_name: A.
  full_name: David-Uraz, A.
  last_name: David-Uraz
- first_name: S. T.
  full_name: Geen, S. T.
  last_name: Geen
- first_name: C.
  full_name: Georgy, C.
  last_name: Georgy
- first_name: R.
  full_name: Hirschi, R.
  last_name: Hirschi
- first_name: J.
  full_name: Puls, J.
  last_name: Puls
- first_name: K. J.
  full_name: Ramalatswa, K. J.
  last_name: Ramalatswa
- first_name: M. E.
  full_name: Shultz, M. E.
  last_name: Shultz
- first_name: A.
  full_name: ud-Doula, A.
  last_name: ud-Doula
citation:
  ama: Keszthelyi Z, Koter A de, Götberg YLL, et al. Spin-down and reduced mass loss
    in early-type stars with large-scale magnetic fields. <i>arXiv</i>. doi:<a href="https://doi.org/10.48550/arXiv.2211.07060">10.48550/arXiv.2211.07060</a>
  apa: Keszthelyi, Z., Koter, A. de, Götberg, Y. L. L., Meynet, G., Brands, S. A.,
    Petit, V., … ud-Doula, A. (n.d.). Spin-down and reduced mass loss
    in early-type stars with large-scale magnetic fields. <i>arXiv</i>. <a href="https://doi.org/10.48550/arXiv.2211.07060">https://doi.org/10.48550/arXiv.2211.07060</a>
  chicago: Keszthelyi, Z., A. de Koter, Ylva Louise Linsdotter Götberg, G. Meynet,
    S. A. Brands, V. Petit, M. Carrington, et al. “Spin-down and Reduced Mass Loss
    in Early-Type Stars with Large-Scale Magnetic Fields.” <i>ArXiv</i>, n.d. <a href="https://doi.org/10.48550/arXiv.2211.07060">https://doi.org/10.48550/arXiv.2211.07060</a>.
  ieee: Z. Keszthelyi <i>et al.</i>, “Spin-down and reduced mass loss in early-type
    stars with large-scale magnetic fields,” <i>arXiv</i>.
  ista: Keszthelyi Z, Koter A de, Götberg YLL, Meynet G, Brands SA, Petit V, Carrington
    M, David-Uraz A, Geen ST, Georgy C, Hirschi R, Puls J, Ramalatswa KJ, Shultz
    ME, ud-Doula A. Spin-down and reduced mass loss in early-type stars
    with large-scale magnetic fields. arXiv, 2211.07060.
  mla: Keszthelyi, Z., et al. “Spin-down and Reduced Mass Loss in Early-Type Stars
    with Large-Scale Magnetic Fields.” <i>ArXiv</i>, 2211.07060, doi:<a href="https://doi.org/10.48550/arXiv.2211.07060">10.48550/arXiv.2211.07060</a>.
  short: Z. Keszthelyi, A. de Koter, Y.L.L. Götberg, G. Meynet, S.A. Brands, V. Petit,
    M. Carrington, A. David-Uraz, S.T. Geen, C. Georgy, R. Hirschi, J. Puls,
    K.J. Ramalatswa, M.E. Shultz, A. ud-Doula, ArXiv (n.d.).
date_created: 2023-08-21T10:11:37Z
date_published: 2022-11-14T00:00:00Z
date_updated: 2023-08-22T13:20:15Z
day: '14'
doi: 10.48550/arXiv.2211.07060
extern: '1'
external_id:
  arxiv:
  - '2211.07060'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2211.07060
month: '11'
oa: 1
oa_version: Submitted Version
publication: arXiv
publication_status: submitted
status: public
title: Spin-down and reduced mass loss in early-type stars with large-scale magnetic
  fields
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14106'
abstract:
- lang: eng
  text: "We show that deep networks trained to satisfy demographic parity often do
    so\r\nthrough a form of race or gender awareness, and that the more we force a
    network\r\nto be fair, the more accurately we can recover race or gender from
    the internal state\r\nof the network. Based on this observation, we investigate
    an alternative fairness\r\napproach: we add a second classification head to the
    network to explicitly predict\r\nthe protected attribute (such as race or gender)
    alongside the original task. After\r\ntraining the two-headed network, we enforce
    demographic parity by merging the\r\ntwo heads, creating a network with the same
    architecture as the original network.\r\nWe establish a close relationship between
    existing approaches and our approach\r\nby showing (1) that the decisions of a
    fair classifier are well-approximated by our\r\napproach, and (2) that an unfair
    and optimally accurate classifier can be recovered\r\nfrom a fair classifier and
    our second head predicting the protected attribute. We use\r\nour explicit formulation
    to argue that the existing fairness approaches, just as ours,\r\ndemonstrate disparate
    treatment and that they are likely to be unlawful in a wide\r\nrange of scenarios
    under US law."
alternative_title:
- Advances in Neural Information Processing Systems
article_processing_charge: No
arxiv: 1
author:
- first_name: Michael
  full_name: Lohaus, Michael
  last_name: Lohaus
- first_name: Matthäus
  full_name: Kleindessner, Matthäus
  last_name: Kleindessner
- first_name: Krishnaram
  full_name: Kenthapadi, Krishnaram
  last_name: Kenthapadi
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Chris
  full_name: Russell, Chris
  last_name: Russell
citation:
  ama: 'Lohaus M, Kleindessner M, Kenthapadi K, Locatello F, Russell C. Are two heads
    the same as one? Identifying disparate treatment in fair neural networks. In:
    <i>36th Conference on Neural Information Processing Systems</i>. Vol 35. Neural
    Information Processing Systems Foundation; 2022:16548-16562.'
  apa: 'Lohaus, M., Kleindessner, M., Kenthapadi, K., Locatello, F., &#38; Russell,
    C. (2022). Are two heads the same as one? Identifying disparate treatment in fair
    neural networks. In <i>36th Conference on Neural Information Processing Systems</i>
    (Vol. 35, pp. 16548–16562). New Orleans, LA, United States: Neural Information
    Processing Systems Foundation.'
  chicago: Lohaus, Michael, Matthäus Kleindessner, Krishnaram Kenthapadi, Francesco
    Locatello, and Chris Russell. “Are Two Heads the Same as One? Identifying Disparate
    Treatment in Fair Neural Networks.” In <i>36th Conference on Neural Information
    Processing Systems</i>, 35:16548–62. Neural Information Processing Systems Foundation,
    2022.
  ieee: M. Lohaus, M. Kleindessner, K. Kenthapadi, F. Locatello, and C. Russell, “Are
    two heads the same as one? Identifying disparate treatment in fair neural networks,”
    in <i>36th Conference on Neural Information Processing Systems</i>, New Orleans,
    LA, United States, 2022, vol. 35, pp. 16548–16562.
  ista: 'Lohaus M, Kleindessner M, Kenthapadi K, Locatello F, Russell C. 2022. Are
    two heads the same as one? Identifying disparate treatment in fair neural networks.
    36th Conference on Neural Information Processing Systems. NeurIPS: Neural Information
    Processing Systems, Advances in Neural Information Processing Systems, vol. 35,
    16548–16562.'
  mla: Lohaus, Michael, et al. “Are Two Heads the Same as One? Identifying Disparate
    Treatment in Fair Neural Networks.” <i>36th Conference on Neural Information Processing
    Systems</i>, vol. 35, Neural Information Processing Systems Foundation, 2022,
    pp. 16548–62.
  short: M. Lohaus, M. Kleindessner, K. Kenthapadi, F. Locatello, C. Russell, in:,
    36th Conference on Neural Information Processing Systems, Neural Information Processing
    Systems Foundation, 2022, pp. 16548–16562.
conference:
  end_date: 2022-12-09
  location: New Orleans, LA, United States
  name: 'NeurIPS: Neural Information Processing Systems'
  start_date: 2022-11-28
date_created: 2023-08-21T12:12:42Z
date_published: 2022-12-15T00:00:00Z
date_updated: 2024-10-14T12:27:01Z
day: '15'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2204.04440'
intvolume: '        35'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2204.04440
month: '12'
oa: 1
oa_version: Preprint
page: 16548-16562
publication: 36th Conference on Neural Information Processing Systems
publication_identifier:
  isbn:
  - '9781713871088'
publication_status: published
publisher: Neural Information Processing Systems Foundation
quality_controlled: '1'
scopus_import: '1'
status: public
title: Are two heads the same as one? Identifying disparate treatment in fair neural
  networks
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 35
year: '2022'
...
---
_id: '14107'
abstract:
- lang: eng
  text: "Amodal perception requires inferring the full shape of an object that is
    partially occluded. This task is particularly challenging on two levels: (1) it
    requires more information than what is contained in the instant retina or imaging
    sensor, (2) it is difficult to obtain enough well-annotated amodal labels for
    supervision. To this end, this paper develops a new framework of Self-supervised
    amodal Video object segmentation (SaVos). Our method efficiently leverages the
    visual information of video temporal sequences to infer the amodal mask of objects.
    The key intuition is that the occluded part of an object can be explained away
    if that part is visible in other frames, possibly deformed as long as the deformation
    can be reasonably learned. Accordingly, we derive a novel self-supervised learning
    paradigm that efficiently utilizes the visible object parts as the supervision
    to guide the training on videos. In addition to learning type prior to complete
    masks for known types, SaVos also learns the spatiotemporal prior, which is also
    useful for the amodal task and could generalize to unseen types. The proposed framework
    achieves the state-of-the-art performance on the synthetic amodal segmentation
    benchmark FISHBOWL and the real world benchmark KINS-Video-Car. Further, it lends
    itself well to being transferred to novel distributions using test-time adaptation,
    outperforming existing models even after the transfer to a new distribution."
article_processing_charge: No
arxiv: 1
author:
- first_name: Jian
  full_name: Yao, Jian
  last_name: Yao
- first_name: Yuxin
  full_name: Hong, Yuxin
  last_name: Hong
- first_name: Chiyu
  full_name: Wang, Chiyu
  last_name: Wang
- first_name: Tianjun
  full_name: Xiao, Tianjun
  last_name: Xiao
- first_name: Tong
  full_name: He, Tong
  last_name: He
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: David
  full_name: Wipf, David
  last_name: Wipf
- first_name: Yanwei
  full_name: Fu, Yanwei
  last_name: Fu
- first_name: Zheng
  full_name: Zhang, Zheng
  last_name: Zhang
citation:
  ama: 'Yao J, Hong Y, Wang C, et al. Self-supervised amodal video object segmentation.
    In: <i>36th Conference on Neural Information Processing Systems</i>. ; 2022. doi:<a
    href="https://doi.org/10.48550/arXiv.2210.12733">10.48550/arXiv.2210.12733</a>'
  apa: Yao, J., Hong, Y., Wang, C., Xiao, T., He, T., Locatello, F., … Zhang, Z. (2022).
    Self-supervised amodal video object segmentation. In <i>36th Conference on Neural
    Information Processing Systems</i>. New Orleans, LA, United States. <a href="https://doi.org/10.48550/arXiv.2210.12733">https://doi.org/10.48550/arXiv.2210.12733</a>
  chicago: Yao, Jian, Yuxin Hong, Chiyu Wang, Tianjun Xiao, Tong He, Francesco Locatello,
    David Wipf, Yanwei Fu, and Zheng Zhang. “Self-Supervised Amodal Video Object Segmentation.”
    In <i>36th Conference on Neural Information Processing Systems</i>, 2022. <a href="https://doi.org/10.48550/arXiv.2210.12733">https://doi.org/10.48550/arXiv.2210.12733</a>.
  ieee: J. Yao <i>et al.</i>, “Self-supervised amodal video object segmentation,”
    in <i>36th Conference on Neural Information Processing Systems</i>, New Orleans,
    LA, United States, 2022.
  ista: 'Yao J, Hong Y, Wang C, Xiao T, He T, Locatello F, Wipf D, Fu Y, Zhang Z.
    2022. Self-supervised amodal video object segmentation. 36th Conference on Neural
    Information Processing Systems. NeurIPS: Neural Information Processing Systems.'
  mla: Yao, Jian, et al. “Self-Supervised Amodal Video Object Segmentation.” <i>36th
    Conference on Neural Information Processing Systems</i>, 2022, doi:<a href="https://doi.org/10.48550/arXiv.2210.12733">10.48550/arXiv.2210.12733</a>.
  short: J. Yao, Y. Hong, C. Wang, T. Xiao, T. He, F. Locatello, D. Wipf, Y. Fu, Z.
    Zhang, in:, 36th Conference on Neural Information Processing Systems, 2022.
conference:
  end_date: 2022-12-01
  location: New Orleans, LA, United States
  name: 'NeurIPS: Neural Information Processing Systems'
  start_date: 2022-11-28
date_created: 2023-08-21T12:13:25Z
date_published: 2022-10-23T00:00:00Z
date_updated: 2023-09-11T09:34:17Z
day: '23'
department:
- _id: FrLo
doi: 10.48550/arXiv.2210.12733
extern: '1'
external_id:
  arxiv:
  - '2210.12733'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2210.12733
month: '10'
oa: 1
oa_version: Preprint
publication: 36th Conference on Neural Information Processing Systems
publication_status: published
status: public
title: Self-supervised amodal video object segmentation
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14114'
abstract:
- lang: eng
  text: Algorithmic fairness is frequently motivated in terms of a trade-off in which
    overall performance is decreased so as to improve performance on disadvantaged
    groups where the algorithm would otherwise be less accurate. Contrary to this,
    we find that applying existing fairness approaches to computer vision improves
    fairness by degrading the performance of classifiers across all groups (with increased
    degradation on the best performing groups). Extending the bias-variance decomposition
    for classification to fairness, we theoretically explain why the majority of fairness
    methods designed for low capacity models should not be used in settings involving
    high-capacity models, a scenario common to computer vision. We corroborate this
    analysis with extensive experimental support that shows that many of the fairness
    heuristics used in computer vision also degrade performance on the most disadvantaged
    groups. Building on these insights, we propose an adaptive augmentation strategy
    that, uniquely among all methods tested, improves performance for the disadvantaged
    groups.
article_processing_charge: No
arxiv: 1
author:
- first_name: Dominik
  full_name: Zietlow, Dominik
  last_name: Zietlow
- first_name: Michael
  full_name: Lohaus, Michael
  last_name: Lohaus
- first_name: Guha
  full_name: Balakrishnan, Guha
  last_name: Balakrishnan
- first_name: Matthaus
  full_name: Kleindessner, Matthaus
  last_name: Kleindessner
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Bernhard
  full_name: Scholkopf, Bernhard
  last_name: Scholkopf
- first_name: Chris
  full_name: Russell, Chris
  last_name: Russell
citation:
  ama: 'Zietlow D, Lohaus M, Balakrishnan G, et al. Leveling down in computer vision:
    Pareto inefficiencies in fair deep classifiers. In: <i>2022 IEEE/CVF Conference
    on Computer Vision and Pattern Recognition</i>. Institute of Electrical and Electronics
    Engineers; 2022:10400-10411. doi:<a href="https://doi.org/10.1109/cvpr52688.2022.01016">10.1109/cvpr52688.2022.01016</a>'
  apa: 'Zietlow, D., Lohaus, M., Balakrishnan, G., Kleindessner, M., Locatello, F.,
    Scholkopf, B., &#38; Russell, C. (2022). Leveling down in computer vision: Pareto
    inefficiencies in fair deep classifiers. In <i>2022 IEEE/CVF Conference on Computer
    Vision and Pattern Recognition</i> (pp. 10400–10411). New Orleans, LA, United
    States: Institute of Electrical and Electronics Engineers. <a href="https://doi.org/10.1109/cvpr52688.2022.01016">https://doi.org/10.1109/cvpr52688.2022.01016</a>'
  chicago: 'Zietlow, Dominik, Michael Lohaus, Guha Balakrishnan, Matthaus Kleindessner,
    Francesco Locatello, Bernhard Scholkopf, and Chris Russell. “Leveling down in
    Computer Vision: Pareto Inefficiencies in Fair Deep Classifiers.” In <i>2022 IEEE/CVF
    Conference on Computer Vision and Pattern Recognition</i>, 10400–411. Institute
    of Electrical and Electronics Engineers, 2022. <a href="https://doi.org/10.1109/cvpr52688.2022.01016">https://doi.org/10.1109/cvpr52688.2022.01016</a>.'
  ieee: 'D. Zietlow <i>et al.</i>, “Leveling down in computer vision: Pareto inefficiencies
    in fair deep classifiers,” in <i>2022 IEEE/CVF Conference on Computer Vision and
    Pattern Recognition</i>, New Orleans, LA, United States, 2022, pp. 10400–10411.'
  ista: 'Zietlow D, Lohaus M, Balakrishnan G, Kleindessner M, Locatello F, Scholkopf
    B, Russell C. 2022. Leveling down in computer vision: Pareto inefficiencies in
    fair deep classifiers. 2022 IEEE/CVF Conference on Computer Vision and Pattern
    Recognition. CVPR: Conference on Computer Vision and Pattern Recognition, 10400–10411.'
  mla: 'Zietlow, Dominik, et al. “Leveling down in Computer Vision: Pareto Inefficiencies
    in Fair Deep Classifiers.” <i>2022 IEEE/CVF Conference on Computer Vision and
    Pattern Recognition</i>, Institute of Electrical and Electronics Engineers, 2022,
    pp. 10400–11, doi:<a href="https://doi.org/10.1109/cvpr52688.2022.01016">10.1109/cvpr52688.2022.01016</a>.'
  short: D. Zietlow, M. Lohaus, G. Balakrishnan, M. Kleindessner, F. Locatello, B.
    Scholkopf, C. Russell, in:, 2022 IEEE/CVF Conference on Computer Vision and Pattern
    Recognition, Institute of Electrical and Electronics Engineers, 2022, pp. 10400–10411.
conference:
  end_date: 2022-06-24
  location: New Orleans, LA, United States
  name: 'CVPR: Conference on Computer Vision and Pattern Recognition'
  start_date: 2022-06-18
date_created: 2023-08-21T12:18:00Z
date_published: 2022-07-01T00:00:00Z
date_updated: 2023-09-11T09:19:14Z
day: '01'
department:
- _id: FrLo
doi: 10.1109/cvpr52688.2022.01016
extern: '1'
external_id:
  arxiv:
  - '2203.04913'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2203.04913
month: '07'
oa: 1
oa_version: Preprint
page: 10400-10411
publication: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition
publication_identifier:
  eissn:
  - 2575-7075
  isbn:
  - '9781665469470'
  issn:
  - 1063-6919
publication_status: published
publisher: Institute of Electrical and Electronics Engineers
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'Leveling down in computer vision: Pareto inefficiencies in fair deep classifiers'
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14168'
abstract:
- lang: eng
  text: "Recent work has seen the development of general purpose neural architectures\r\nthat
    can be trained to perform tasks across diverse data modalities. General\r\npurpose
    models typically make few assumptions about the underlying\r\ndata-structure and
    are known to perform well in the large-data regime. At the\r\nsame time, there
    has been growing interest in modular neural architectures that\r\nrepresent the
    data using sparsely interacting modules. These models can be more\r\nrobust out-of-distribution,
    computationally efficient, and capable of\r\nsample-efficient adaptation to new
    data. However, they tend to make\r\ndomain-specific assumptions about the data,
    and present challenges in how\r\nmodule behavior (i.e., parameterization) and
    connectivity (i.e., their layout)\r\ncan be jointly learned. In this work, we
    introduce a general purpose, yet\r\nmodular neural architecture called Neural
    Attentive Circuits (NACs) that\r\njointly learns the parameterization and a sparse
    connectivity of neural modules\r\nwithout using domain knowledge. NACs are best
    understood as the combination of\r\ntwo systems that are jointly trained end-to-end:
    one that determines the module\r\nconfiguration and the other that executes it
    on an input. We demonstrate\r\nqualitatively that NACs learn diverse and meaningful
    module configurations on\r\nthe NLVR2 dataset without additional supervision.
    Quantitatively, we show that\r\nby incorporating modularity in this way, NACs
    improve upon a strong non-modular\r\nbaseline in terms of low-shot adaptation
    on CIFAR and CUBs dataset by about\r\n10%, and OOD robustness on Tiny ImageNet-R
    by about 2.5%. Further, we find that\r\nNACs can achieve an 8x speedup at inference
    time while losing less than 3%\r\nperformance. Finally, we find NACs to yield
    competitive results on diverse data\r\nmodalities spanning point-cloud classification,
    symbolic processing and\r\ntext-classification from ASCII bytes, thereby confirming
    its general purpose\r\nnature."
alternative_title:
- 'Advances in Neural Information Processing Systems'
article_processing_charge: No
arxiv: 1
author:
- first_name: Nasim
  full_name: Rahaman, Nasim
  last_name: Rahaman
- first_name: Martin
  full_name: Weiss, Martin
  last_name: Weiss
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Chris
  full_name: Pal, Chris
  last_name: Pal
- first_name: Yoshua
  full_name: Bengio, Yoshua
  last_name: Bengio
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Li Erran
  full_name: Li, Li Erran
  last_name: Li
- first_name: Nicolas
  full_name: Ballas, Nicolas
  last_name: Ballas
citation:
  ama: 'Rahaman N, Weiss M, Locatello F, et al. Neural attentive circuits. In: <i>36th
    Conference on Neural Information Processing Systems</i>. Vol 35. ; 2022.'
  apa: Rahaman, N., Weiss, M., Locatello, F., Pal, C., Bengio, Y., Schölkopf, B.,
    … Ballas, N. (2022). Neural attentive circuits. In <i>36th Conference on Neural
    Information Processing Systems</i> (Vol. 35). New Orleans, United States.
  chicago: Rahaman, Nasim, Martin Weiss, Francesco Locatello, Chris Pal, Yoshua Bengio,
    Bernhard Schölkopf, Li Erran Li, and Nicolas Ballas. “Neural Attentive Circuits.”
    In <i>36th Conference on Neural Information Processing Systems</i>, Vol. 35, 2022.
  ieee: N. Rahaman <i>et al.</i>, “Neural attentive circuits,” in <i>36th Conference
    on Neural Information Processing Systems</i>, New Orleans, United States, 2022,
    vol. 35.
  ista: 'Rahaman N, Weiss M, Locatello F, Pal C, Bengio Y, Schölkopf B, Li LE, Ballas
    N. 2022. Neural attentive circuits. 36th Conference on Neural Information Processing
    Systems. NeurIPS: Neural Information Processing Systems, Advances in Neural Information
    Processing Systems, vol. 35.'
  mla: Rahaman, Nasim, et al. “Neural Attentive Circuits.” <i>36th Conference on Neural
    Information Processing Systems</i>, vol. 35, 2022.
  short: N. Rahaman, M. Weiss, F. Locatello, C. Pal, Y. Bengio, B. Schölkopf, L.E.
    Li, N. Ballas, in:, 36th Conference on Neural Information Processing Systems,
    2022.
conference:
  end_date: 2022-12-01
  location: New Orleans, United States
  name: 'NeurIPS: Neural Information Processing Systems'
  start_date: 2022-11-29
date_created: 2023-08-22T13:57:27Z
date_published: 2022-10-14T00:00:00Z
date_updated: 2023-09-11T09:29:09Z
day: '14'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2210.08031'
intvolume: '        35'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2210.08031
month: '10'
oa: 1
oa_version: Preprint
publication: 36th Conference on Neural Information Processing Systems
publication_status: published
status: public
title: Neural attentive circuits
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 35
year: '2022'
...
---
_id: '14170'
abstract:
- lang: eng
  text: "The idea behind object-centric representation learning is that natural scenes
    can better be modeled as compositions of objects and their relations as opposed
    to distributed representations. This inductive bias can be injected into neural
    networks to potentially improve systematic generalization and performance of downstream
    tasks in scenes with multiple objects. In this paper, we train state-of-the-art
    unsupervised models on five common multi-object datasets and evaluate segmentation
    metrics and downstream object property prediction. In addition, we study generalization
    and robustness by investigating the settings where either a single object is out
    of distribution -- e.g., having an unseen color, texture, or shape -- or global
    properties of the scene are altered -- e.g., by occlusions, cropping, or increasing
    the number of objects. From our experimental study, we find object-centric representations
    to be useful for downstream tasks and generally robust to most distribution
    shifts affecting objects. However, when the distribution shift affects the input
    in a less structured manner, robustness in terms of segmentation and downstream
    task performance may vary significantly across models and distribution shifts."
alternative_title:
- PMLR
article_processing_charge: No
arxiv: 1
author:
- first_name: Andrea
  full_name: Dittadi, Andrea
  last_name: Dittadi
- first_name: Samuele
  full_name: Papa, Samuele
  last_name: Papa
- first_name: Michele De
  full_name: Vita, Michele De
  last_name: Vita
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Ole
  full_name: Winther, Ole
  last_name: Winther
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
citation:
  ama: 'Dittadi A, Papa S, Vita MD, Schölkopf B, Winther O, Locatello F. Generalization
    and robustness implications in object-centric learning. In: <i>Proceedings of
    the 39th International Conference on Machine Learning</i>. Vol 2022. ML Research
    Press; :5221-5285.'
  apa: 'Dittadi, A., Papa, S., Vita, M. D., Schölkopf, B., Winther, O., &#38; Locatello,
    F. (n.d.). Generalization and robustness implications in object-centric learning.
    In <i>Proceedings of the 39th International Conference on Machine Learning</i>
    (Vol. 2022, pp. 5221–5285). Baltimore, MD, United States: ML Research Press.'
  chicago: Dittadi, Andrea, Samuele Papa, Michele De Vita, Bernhard Schölkopf, Ole
    Winther, and Francesco Locatello. “Generalization and Robustness Implications
    in Object-Centric Learning.” In <i>Proceedings of the 39th International Conference
    on Machine Learning</i>, 2022:5221–85. ML Research Press, n.d.
  ieee: A. Dittadi, S. Papa, M. D. Vita, B. Schölkopf, O. Winther, and F. Locatello,
    “Generalization and robustness implications in object-centric learning,” in <i>Proceedings
    of the 39th International Conference on Machine Learning</i>, Baltimore, MD, United
    States, vol. 2022, pp. 5221–5285.
  ista: Dittadi A, Papa S, Vita MD, Schölkopf B, Winther O, Locatello F. Generalization
    and robustness implications in object-centric learning. Proceedings of the 39th
    International Conference on Machine Learning. International Conference on Machine
    Learning, PMLR, vol. 2022, 5221–5285.
  mla: Dittadi, Andrea, et al. “Generalization and Robustness Implications in Object-Centric
    Learning.” <i>Proceedings of the 39th International Conference on Machine Learning</i>,
    vol. 2022, ML Research Press, pp. 5221–85.
  short: A. Dittadi, S. Papa, M.D. Vita, B. Schölkopf, O. Winther, F. Locatello, in:,
    Proceedings of the 39th International Conference on Machine Learning, ML Research
    Press, n.d., pp. 5221–5285.
conference:
  end_date: 2022-07-23
  location: Baltimore, MD, United States
  name: International Conference on Machine Learning
  start_date: 2022-07-17
date_created: 2023-08-22T13:59:55Z
date_published: 2022-07-22T00:00:00Z
date_updated: 2023-09-11T10:08:14Z
day: '22'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2107.00637'
intvolume: '      2022'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2107.00637
month: '07'
oa: 1
oa_version: Preprint
page: 5221-5285
publication: Proceedings of the 39th International Conference on Machine Learning
publication_status: submitted
publisher: ML Research Press
quality_controlled: '1'
status: public
title: Generalization and robustness implications in object-centric learning
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 2022
year: '2022'
...
---
_id: '14171'
abstract:
- lang: eng
  text: "This paper demonstrates how to recover causal graphs from the score of the\r\ndata
    distribution in non-linear additive (Gaussian) noise models. Using score\r\nmatching
    algorithms as a building block, we show how to design a new generation\r\nof scalable
    causal discovery methods. To showcase our approach, we also propose\r\na new efficient
    method for approximating the score's Jacobian, enabling us to\r\nrecover the causal
    graph. Empirically, we find that the new algorithm, called\r\nSCORE, is competitive
    with state-of-the-art causal discovery methods while\r\nbeing significantly faster."
alternative_title:
- PMLR
article_processing_charge: No
arxiv: 1
author:
- first_name: Paul
  full_name: Rolland, Paul
  last_name: Rolland
- first_name: Volkan
  full_name: Cevher, Volkan
  last_name: Cevher
- first_name: Matthäus
  full_name: Kleindessner, Matthäus
  last_name: Kleindessner
- first_name: Chris
  full_name: Russell, Chris
  last_name: Russell
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Dominik
  full_name: Janzing, Dominik
  last_name: Janzing
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
citation:
  ama: 'Rolland P, Cevher V, Kleindessner M, et al. Score matching enables causal
    discovery of nonlinear additive noise  models. In: <i>Proceedings of the 39th
    International Conference on Machine Learning</i>. Vol 162. ML Research Press;
    2022:18741-18753.'
  apa: 'Rolland, P., Cevher, V., Kleindessner, M., Russell, C., Schölkopf, B., Janzing,
    D., &#38; Locatello, F. (2022). Score matching enables causal discovery of nonlinear
    additive noise  models. In <i>Proceedings of the 39th International Conference
    on Machine Learning</i> (Vol. 162, pp. 18741–18753). Baltimore, MD, United States:
    ML Research Press.'
  chicago: Rolland, Paul, Volkan Cevher, Matthäus Kleindessner, Chris Russell, Bernhard
    Schölkopf, Dominik Janzing, and Francesco Locatello. “Score Matching Enables Causal
    Discovery of Nonlinear Additive Noise  Models.” In <i>Proceedings of the 39th
    International Conference on Machine Learning</i>, 162:18741–53. ML Research Press,
    2022.
  ieee: P. Rolland <i>et al.</i>, “Score matching enables causal discovery of nonlinear
    additive noise  models,” in <i>Proceedings of the 39th International Conference
    on Machine Learning</i>, Baltimore, MD, United States, 2022, vol. 162, pp. 18741–18753.
  ista: Rolland P, Cevher V, Kleindessner M, Russell C, Schölkopf B, Janzing D, Locatello
    F. 2022. Score matching enables causal discovery of nonlinear additive noise 
    models. Proceedings of the 39th International Conference on Machine Learning.
    International Conference on Machine Learning, PMLR, vol. 162, 18741–18753.
  mla: Rolland, Paul, et al. “Score Matching Enables Causal Discovery of Nonlinear
    Additive Noise  Models.” <i>Proceedings of the 39th International Conference on
    Machine Learning</i>, vol. 162, ML Research Press, 2022, pp. 18741–53.
  short: P. Rolland, V. Cevher, M. Kleindessner, C. Russell, B. Schölkopf, D. Janzing,
    F. Locatello, in:, Proceedings of the 39th International Conference on Machine
    Learning, ML Research Press, 2022, pp. 18741–18753.
conference:
  end_date: 2022-07-23
  location: Baltimore, MD, United States
  name: International Conference on Machine Learning
  start_date: 2022-07-17
date_created: 2023-08-22T14:00:18Z
date_published: 2022-07-22T00:00:00Z
date_updated: 2023-09-11T10:14:20Z
day: '22'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2203.04413'
intvolume: '       162'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2203.04413
month: '07'
oa: 1
oa_version: Preprint
page: 18741-18753
publication: Proceedings of the 39th International Conference on Machine Learning
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
status: public
title: Score matching enables causal discovery of nonlinear additive noise  models
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 162
year: '2022'
...
---
_id: '14172'
abstract:
- lang: eng
  text: "An important component for generalization in machine learning is to uncover
    underlying latent factors of variation as well as the mechanism through which
    each factor acts in the world. In this paper, we test whether 17 unsupervised,
    weakly supervised, and fully supervised representation learning approaches correctly
    infer the generative factors of variation in simple datasets (dSprites, Shapes3D,
    MPI3D) from controlled environments, and on our contributed CelebGlow dataset.
    In contrast to prior robustness work that introduces novel factors of variation
    during test time, such as blur or other (un)structured noise, we here recompose,
    interpolate, or extrapolate only existing factors of variation from the training
    data set (e.g., small and medium-sized objects during training and large objects
    during testing). Models\r\nthat learn the correct mechanism should be able to
    generalize to this benchmark. In total, we train and test 2000+ models and observe
    that all of them struggle to learn the underlying mechanism regardless of supervision
    signal and architectural bias. Moreover, the generalization capabilities of all
    tested models drop significantly as we move from artificial datasets towards\r\nmore
    realistic real-world datasets. Despite their inability to identify the correct
    mechanism, the models are quite modular as their ability to infer other in-distribution
    factors remains fairly stable, providing only a single factor is out-of-distribution.
    These results point to an important yet understudied problem of learning mechanistic
    models of observations that can facilitate\r\ngeneralization."
article_processing_charge: No
arxiv: 1
author:
- first_name: Lukas
  full_name: Schott, Lukas
  last_name: Schott
- first_name: Julius von
  full_name: Kügelgen, Julius von
  last_name: Kügelgen
- first_name: Frederik
  full_name: Träuble, Frederik
  last_name: Träuble
- first_name: Peter
  full_name: Gehler, Peter
  last_name: Gehler
- first_name: Chris
  full_name: Russell, Chris
  last_name: Russell
- first_name: Matthias
  full_name: Bethge, Matthias
  last_name: Bethge
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Wieland
  full_name: Brendel, Wieland
  last_name: Brendel
citation:
  ama: 'Schott L, Kügelgen J von, Träuble F, et al. Visual representation learning
    does not generalize strongly within the  same domain. In: <i>10th International
    Conference on Learning Representations</i>. ; 2022.'
  apa: Schott, L., Kügelgen, J. von, Träuble, F., Gehler, P., Russell, C., Bethge,
    M., … Brendel, W. (2022). Visual representation learning does not generalize strongly
    within the  same domain. In <i>10th International Conference on Learning Representations</i>.
    Virtual.
  chicago: Schott, Lukas, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris
    Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, and Wieland
    Brendel. “Visual Representation Learning Does Not Generalize Strongly within the 
    Same Domain.” In <i>10th International Conference on Learning Representations</i>,
    2022.
  ieee: L. Schott <i>et al.</i>, “Visual representation learning does not generalize
    strongly within the  same domain,” in <i>10th International Conference on Learning
    Representations</i>, Virtual, 2022.
  ista: 'Schott L, Kügelgen J von, Träuble F, Gehler P, Russell C, Bethge M, Schölkopf
    B, Locatello F, Brendel W. 2022. Visual representation learning does not generalize
    strongly within the  same domain. 10th International Conference on Learning Representations.
    ICLR: International Conference on Learning Representations.'
  mla: Schott, Lukas, et al. “Visual Representation Learning Does Not Generalize Strongly
    within the  Same Domain.” <i>10th International Conference on Learning Representations</i>,
    2022.
  short: L. Schott, J. von Kügelgen, F. Träuble, P. Gehler, C. Russell, M. Bethge,
    B. Schölkopf, F. Locatello, W. Brendel, in:, 10th International Conference on
    Learning Representations, 2022.
conference:
  end_date: 2022-04-29
  location: Virtual
  name: 'ICLR: International Conference on Learning Representations'
  start_date: 2022-04-25
date_created: 2023-08-22T14:00:50Z
date_published: 2022-04-25T00:00:00Z
date_updated: 2023-09-11T09:40:52Z
day: '25'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2107.08221'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2107.08221
month: '04'
oa: 1
oa_version: Preprint
publication: 10th International Conference on Learning Representations
publication_status: published
quality_controlled: '1'
status: public
title: Visual representation learning does not generalize strongly within the  same
  domain
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14173'
abstract:
- lang: eng
  text: "Since out-of-distribution generalization is a generally ill-posed problem,
    various proxy targets (e.g., calibration, adversarial robustness, algorithmic
    corruptions, invariance across shifts) were studied across different research
    programs resulting in different recommendations. While sharing the same aspirational
    goal, these approaches have never been tested under the same\r\nexperimental conditions
    on real data. In this paper, we take a unified view of previous work, highlighting
    message discrepancies that we address empirically, and providing recommendations
    on how to measure the robustness of a model and how to improve it. To this end,
    we collect 172 publicly available dataset pairs for training and out-of-distribution
    evaluation of accuracy, calibration error, adversarial attacks, environment invariance,
    and synthetic corruptions. We fine-tune over 31k networks, from nine different
    architectures in the many- and\r\nfew-shot setting. Our findings confirm that
    in- and out-of-distribution accuracies tend to increase jointly, but show that
    their relation is largely dataset-dependent, and in general more nuanced and more
    complex than posited by previous, smaller scale studies."
alternative_title:
- Advances in Neural Information Processing Systems
article_processing_charge: No
arxiv: 1
author:
- first_name: Florian
  full_name: Wenzel, Florian
  last_name: Wenzel
- first_name: Andrea
  full_name: Dittadi, Andrea
  last_name: Dittadi
- first_name: Peter Vincent
  full_name: Gehler, Peter Vincent
  last_name: Gehler
- first_name: Carl-Johann
  full_name: Simon-Gabriel, Carl-Johann
  last_name: Simon-Gabriel
- first_name: Max
  full_name: Horn, Max
  last_name: Horn
- first_name: Dominik
  full_name: Zietlow, Dominik
  last_name: Zietlow
- first_name: David
  full_name: Kernert, David
  last_name: Kernert
- first_name: Chris
  full_name: Russell, Chris
  last_name: Russell
- first_name: Thomas
  full_name: Brox, Thomas
  last_name: Brox
- first_name: Bernt
  full_name: Schiele, Bernt
  last_name: Schiele
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
citation:
  ama: 'Wenzel F, Dittadi A, Gehler PV, et al. Assaying out-of-distribution generalization
    in transfer learning. In: <i>36th Conference on Neural Information Processing
    Systems</i>. Vol 35. Neural Information Processing Systems Foundation; 2022:7181-7198.'
  apa: 'Wenzel, F., Dittadi, A., Gehler, P. V., Simon-Gabriel, C.-J.,
    Horn, M., Zietlow, D., … Locatello, F. (2022). Assaying out-of-distribution generalization
    in transfer learning. In <i>36th Conference on Neural Information Processing Systems</i>
    (Vol. 35, pp. 7181–7198). New Orleans, LA, United States: Neural Information Processing
    Systems Foundation.'
  chicago: Wenzel, Florian, Andrea Dittadi, Peter Vincent Gehler, Carl-Johann Simon-Gabriel,
    Max Horn, Dominik Zietlow, David Kernert, et al. “Assaying
    Out-of-Distribution Generalization in Transfer Learning.” In <i>36th Conference
    on Neural Information Processing Systems</i>, 35:7181–98. Neural Information Processing
    Systems Foundation, 2022.
  ieee: F. Wenzel <i>et al.</i>, “Assaying out-of-distribution generalization in transfer
    learning,” in <i>36th Conference on Neural Information Processing Systems</i>,
    New Orleans, LA, United States, 2022, vol. 35, pp. 7181–7198.
  ista: 'Wenzel F, Dittadi A, Gehler PV, Simon-Gabriel C-J, Horn M,
    Zietlow D, Kernert D, Russell C, Brox T, Schiele B, Schölkopf B, Locatello F.
    2022. Assaying out-of-distribution generalization in transfer learning. 36th Conference
    on Neural Information Processing Systems. NeurIPS: Neural Information Processing
    Systems, Advances in Neural Information Processing Systems, vol. 35, 7181–7198.'
  mla: Wenzel, Florian, et al. “Assaying Out-of-Distribution Generalization in Transfer
    Learning.” <i>36th Conference on Neural Information Processing Systems</i>, vol.
    35, Neural Information Processing Systems Foundation, 2022, pp. 7181–98.
  short: F. Wenzel, A. Dittadi, P.V. Gehler, C.-J. Simon-Gabriel,
    M. Horn, D. Zietlow, D. Kernert, C. Russell, T. Brox, B. Schiele, B. Schölkopf,
    F. Locatello, in:, 36th Conference on Neural Information Processing Systems, Neural
    Information Processing Systems Foundation, 2022, pp. 7181–7198.
conference:
  end_date: 2022-12-09
  location: New Orleans, LA, United States
  name: 'NeurIPS: Neural Information Processing Systems'
  start_date: 2022-11-28
date_created: 2023-08-22T14:01:13Z
date_published: 2022-12-15T00:00:00Z
date_updated: 2023-09-06T10:34:43Z
day: '15'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2207.09239'
intvolume: '        35'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2207.09239
month: '12'
oa: 1
oa_version: Preprint
page: 7181-7198
publication: 36th Conference on Neural Information Processing Systems
publication_identifier:
  isbn:
  - '9781713871088'
publication_status: published
publisher: Neural Information Processing Systems Foundation
quality_controlled: '1'
scopus_import: '1'
status: public
title: Assaying out-of-distribution generalization in transfer learning
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 35
year: '2022'
...
---
_id: '14174'
abstract:
- lang: eng
  text: "Building sample-efficient agents that generalize out-of-distribution (OOD)
    in real-world settings remains a fundamental unsolved problem on the path towards
    achieving higher-level cognition. One particularly promising approach is to begin
    with low-dimensional, pretrained representations of our world, which should facilitate
    efficient downstream learning and generalization. By training 240 representations
    and over 10,000 reinforcement learning (RL) policies on a simulated robotic setup,
    we evaluate to what extent different properties of\r\npretrained VAE-based representations
    affect the OOD generalization of downstream agents. We observe that many agents
    are surprisingly robust to realistic distribution shifts, including the challenging
    sim-to-real case. In addition, we find that the generalization performance of
    a simple downstream proxy task reliably predicts the generalization performance
    of our RL agents\r\nunder a wide range of OOD settings. Such proxy tasks can thus
    be used to select pretrained representations that will lead to agents that generalize."
article_processing_charge: No
arxiv: 1
author:
- first_name: Andrea
  full_name: Dittadi, Andrea
  last_name: Dittadi
- first_name: Frederik
  full_name: Träuble, Frederik
  last_name: Träuble
- first_name: Manuel
  full_name: Wüthrich, Manuel
  last_name: Wüthrich
- first_name: Felix
  full_name: Widmaier, Felix
  last_name: Widmaier
- first_name: Peter
  full_name: Gehler, Peter
  last_name: Gehler
- first_name: Ole
  full_name: Winther, Ole
  last_name: Winther
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Olivier
  full_name: Bachem, Olivier
  last_name: Bachem
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Stefan
  full_name: Bauer, Stefan
  last_name: Bauer
citation:
  ama: 'Dittadi A, Träuble F, Wüthrich M, et al. The role of pretrained representations
    for the OOD generalization of  reinforcement learning agents. In: <i>10th International
    Conference on Learning Representations</i>. ; 2022.'
  apa: Dittadi, A., Träuble, F., Wüthrich, M., Widmaier, F., Gehler, P., Winther,
    O., … Bauer, S. (2022). The role of pretrained representations for the OOD generalization
    of  reinforcement learning agents. In <i>10th International Conference on Learning
    Representations</i>. Virtual.
  chicago: Dittadi, Andrea, Frederik Träuble, Manuel Wüthrich, Felix Widmaier, Peter
    Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf,
    and Stefan Bauer. “The Role of Pretrained Representations for the OOD Generalization
    of  Reinforcement Learning Agents.” In <i>10th International Conference on Learning
    Representations</i>, 2022.
  ieee: A. Dittadi <i>et al.</i>, “The role of pretrained representations for the
    OOD generalization of  reinforcement learning agents,” in <i>10th International
    Conference on Learning Representations</i>, Virtual, 2022.
  ista: 'Dittadi A, Träuble F, Wüthrich M, Widmaier F, Gehler P, Winther O, Locatello
    F, Bachem O, Schölkopf B, Bauer S. 2022. The role of pretrained representations
    for the OOD generalization of  reinforcement learning agents. 10th International
    Conference on Learning Representations. ICLR: International Conference on Learning
    Representations.'
  mla: Dittadi, Andrea, et al. “The Role of Pretrained Representations for the OOD
    Generalization of  Reinforcement Learning Agents.” <i>10th International Conference
    on Learning Representations</i>, 2022.
  short: A. Dittadi, F. Träuble, M. Wüthrich, F. Widmaier, P. Gehler, O. Winther,
    F. Locatello, O. Bachem, B. Schölkopf, S. Bauer, in:, 10th International Conference
    on Learning Representations, 2022.
conference:
  end_date: 2022-04-29
  location: Virtual
  name: 'ICLR: International Conference on Learning Representations'
  start_date: 2022-04-25
date_created: 2023-08-22T14:02:13Z
date_published: 2022-04-25T00:00:00Z
date_updated: 2023-09-11T09:48:36Z
day: '25'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2107.05686'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2107.05686
month: '04'
oa: 1
oa_version: Preprint
publication: 10th International Conference on Learning Representations
publication_status: published
quality_controlled: '1'
status: public
title: The role of pretrained representations for the OOD generalization of  reinforcement
  learning agents
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14175'
abstract:
- lang: eng
  text: "Predicting the future trajectory of a moving agent can be easy when the past
    trajectory continues smoothly but is challenging when complex interactions with
    other agents are involved. Recent deep learning approaches for trajectory prediction
    show promising performance and partially attribute this to successful reasoning
    about agent-agent interactions. However, it remains unclear which features such
    black-box models actually learn to use for making predictions. This paper proposes
    a procedure that quantifies the contributions\r\nof different cues to model performance
    based on a variant of Shapley values. Applying this procedure to state-of-the-art
    trajectory prediction methods on standard benchmark datasets shows that they are,
    in fact, unable to reason about interactions. Instead, the past trajectory of
    the target is the only feature used for predicting its future. For a task with
    richer social\r\ninteraction patterns, on the other hand, the tested models do
    pick up such interactions to a certain extent, as quantified by our feature attribution
    method. We discuss the limits of the proposed method and its links to causality."
article_processing_charge: No
arxiv: 1
author:
- first_name: Osama
  full_name: Makansi, Osama
  last_name: Makansi
- first_name: Julius von
  full_name: Kügelgen, Julius von
  last_name: Kügelgen
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Peter
  full_name: Gehler, Peter
  last_name: Gehler
- first_name: Dominik
  full_name: Janzing, Dominik
  last_name: Janzing
- first_name: Thomas
  full_name: Brox, Thomas
  last_name: Brox
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
citation:
  ama: 'Makansi O, Kügelgen J von, Locatello F, et al. You mostly walk alone: Analyzing
    feature attribution in trajectory prediction. In: <i>10th International Conference
    on Learning Representations</i>. ; 2022.'
  apa: 'Makansi, O., Kügelgen, J. von, Locatello, F., Gehler, P., Janzing, D., Brox,
    T., &#38; Schölkopf, B. (2022). You mostly walk alone: Analyzing feature attribution
    in trajectory prediction. In <i>10th International Conference on Learning Representations</i>.
    Virtual.'
  chicago: 'Makansi, Osama, Julius von Kügelgen, Francesco Locatello, Peter Gehler,
    Dominik Janzing, Thomas Brox, and Bernhard Schölkopf. “You Mostly Walk Alone:
    Analyzing Feature Attribution in Trajectory Prediction.” In <i>10th International
    Conference on Learning Representations</i>, 2022.'
  ieee: 'O. Makansi <i>et al.</i>, “You mostly walk alone: Analyzing feature attribution
    in trajectory prediction,” in <i>10th International Conference on Learning Representations</i>,
    Virtual, 2022.'
  ista: 'Makansi O, Kügelgen J von, Locatello F, Gehler P, Janzing D, Brox T, Schölkopf
    B. 2022. You mostly walk alone: Analyzing feature attribution in trajectory prediction.
    10th International Conference on Learning Representations. ICLR: International
    Conference on Learning Representations.'
  mla: 'Makansi, Osama, et al. “You Mostly Walk Alone: Analyzing Feature Attribution
    in Trajectory Prediction.” <i>10th International Conference on Learning Representations</i>,
    2022.'
  short: O. Makansi, J. von Kügelgen, F. Locatello, P. Gehler, D. Janzing, T. Brox,
    B. Schölkopf, in:, 10th International Conference on Learning Representations,
    2022.
conference:
  end_date: 2022-04-29
  location: Virtual
  name: 'ICLR: International Conference on Learning Representations'
  start_date: 2022-04-25
date_created: 2023-08-22T14:02:34Z
date_published: 2022-04-25T00:00:00Z
date_updated: 2023-09-11T09:52:20Z
day: '25'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2110.05304'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2110.05304
month: '04'
oa: 1
oa_version: Preprint
publication: 10th International Conference on Learning Representations
publication_status: published
quality_controlled: '1'
status: public
title: 'You mostly walk alone: Analyzing feature attribution in trajectory prediction'
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14215'
abstract:
- lang: eng
  text: Geospatial Information Systems are used by researchers and Humanitarian Assistance
    and Disaster Response (HADR) practitioners to support a wide variety of important
    applications. However, collaboration between these actors is difficult due to
    the heterogeneous nature of geospatial data modalities (e.g., multi-spectral images
    of various resolutions, timeseries, weather data) and diversity of tasks (e.g.,
    regression of human activity indicators or detecting forest fires). In this work,
    we present a roadmap towards the construction of a general-purpose neural architecture
    (GPNA) with a geospatial inductive bias, pre-trained on large amounts of unlabelled
    earth observation data in a self-supervised manner. We envision how such a model
    may facilitate cooperation between members of the community. We show preliminary
    results on the first step of the roadmap, where we instantiate an architecture
    that can process a wide variety of geospatial data modalities and demonstrate
    that it can achieve competitive performance with domain-specific architectures
    on tasks relating to the U.N.'s Sustainable Development Goals.
article_processing_charge: No
arxiv: 1
author:
- first_name: Nasim
  full_name: Rahaman, Nasim
  last_name: Rahaman
- first_name: Martin
  full_name: Weiss, Martin
  last_name: Weiss
- first_name: Frederik
  full_name: Träuble, Frederik
  last_name: Träuble
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Alexandre
  full_name: Lacoste, Alexandre
  last_name: Lacoste
- first_name: Yoshua
  full_name: Bengio, Yoshua
  last_name: Bengio
- first_name: Chris
  full_name: Pal, Chris
  last_name: Pal
- first_name: Li Erran
  full_name: Li, Li Erran
  last_name: Li
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
citation:
  ama: 'Rahaman N, Weiss M, Träuble F, et al. A general purpose neural architecture
    for geospatial systems. In: <i>36th Conference on Neural Information Processing
    Systems</i>.'
  apa: Rahaman, N., Weiss, M., Träuble, F., Locatello, F., Lacoste, A., Bengio, Y.,
    … Schölkopf, B. (n.d.). A general purpose neural architecture for geospatial systems.
    In <i>36th Conference on Neural Information Processing Systems</i>. New Orleans,
    LA, United States.
  chicago: Rahaman, Nasim, Martin Weiss, Frederik Träuble, Francesco Locatello, Alexandre
    Lacoste, Yoshua Bengio, Chris Pal, Li Erran Li, and Bernhard Schölkopf. “A General
    Purpose Neural Architecture for Geospatial Systems.” In <i>36th Conference on
    Neural Information Processing Systems</i>, n.d.
  ieee: N. Rahaman <i>et al.</i>, “A general purpose neural architecture for geospatial
    systems,” in <i>36th Conference on Neural Information Processing Systems</i>,
    New Orleans, LA, United States.
  ista: 'Rahaman N, Weiss M, Träuble F, Locatello F, Lacoste A, Bengio Y, Pal C, Li
    LE, Schölkopf B. A general purpose neural architecture for geospatial systems.
    36th Conference on Neural Information Processing Systems. NeurIPS: Neural Information
    Processing Systems.'
  mla: Rahaman, Nasim, et al. “A General Purpose Neural Architecture for Geospatial
    Systems.” <i>36th Conference on Neural Information Processing Systems</i>.
  short: N. Rahaman, M. Weiss, F. Träuble, F. Locatello, A. Lacoste, Y. Bengio, C.
    Pal, L.E. Li, B. Schölkopf, in:, 36th Conference on Neural Information Processing
    Systems, n.d.
conference:
  end_date: 2022-12-09
  location: New Orleans, LA, United States
  name: 'NeurIPS: Neural Information Processing Systems'
  start_date: 2022-11-28
date_created: 2023-08-22T14:21:47Z
date_published: 2022-11-04T00:00:00Z
date_updated: 2023-09-13T09:35:59Z
day: '04'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2211.02348'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2211.02348
month: '11'
oa: 1
oa_version: Preprint
publication: 36th Conference on Neural Information Processing Systems
publication_status: submitted
quality_controlled: '1'
status: public
title: A general purpose neural architecture for geospatial systems
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14220'
abstract:
- lang: eng
  text: Although reinforcement learning has seen remarkable progress over the last
    years, solving robust dexterous object-manipulation tasks in multi-object settings
    remains a challenge. In this paper, we focus on models that can learn manipulation
    tasks in fixed multi-object settings and extrapolate this skill zero-shot without
    any drop in performance when the number of objects changes. We consider the generic
    task of bringing a specific cube out of a set to a goal position. We find that
    previous approaches, which primarily leverage attention and graph neural network-based
    architectures, do not generalize their skills when the number of input objects
    changes while scaling as K^2. We propose an alternative plug-and-play module based
    on relational inductive biases to overcome these limitations. Besides exceeding
    performances in their training environment, we show that our approach, which scales
    linearly in K, allows agents to extrapolate and generalize zero-shot to any new
    object number.
article_number: '2201.13388'
article_processing_charge: No
arxiv: 1
author:
- first_name: Davide
  full_name: Mambelli, Davide
  last_name: Mambelli
- first_name: Frederik
  full_name: Träuble, Frederik
  last_name: Träuble
- first_name: Stefan
  full_name: Bauer, Stefan
  last_name: Bauer
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
citation:
  ama: Mambelli D, Träuble F, Bauer S, Schölkopf B, Locatello F. Compositional multi-object
    reinforcement learning with linear relation networks. <i>arXiv</i>. doi:<a href="https://doi.org/10.48550/arXiv.2201.13388">10.48550/arXiv.2201.13388</a>
  apa: Mambelli, D., Träuble, F., Bauer, S., Schölkopf, B., &#38; Locatello, F. (n.d.).
    Compositional multi-object reinforcement learning with linear relation networks.
    <i>arXiv</i>. <a href="https://doi.org/10.48550/arXiv.2201.13388">https://doi.org/10.48550/arXiv.2201.13388</a>
  chicago: Mambelli, Davide, Frederik Träuble, Stefan Bauer, Bernhard Schölkopf, and
    Francesco Locatello. “Compositional Multi-Object Reinforcement Learning with Linear
    Relation Networks.” <i>ArXiv</i>, n.d. <a href="https://doi.org/10.48550/arXiv.2201.13388">https://doi.org/10.48550/arXiv.2201.13388</a>.
  ieee: D. Mambelli, F. Träuble, S. Bauer, B. Schölkopf, and F. Locatello, “Compositional
    multi-object reinforcement learning with linear relation networks,” <i>arXiv</i>.
    .
  ista: Mambelli D, Träuble F, Bauer S, Schölkopf B, Locatello F. Compositional multi-object
    reinforcement learning with linear relation networks. arXiv, 2201.13388.
  mla: Mambelli, Davide, et al. “Compositional Multi-Object Reinforcement Learning
    with Linear Relation Networks.” <i>ArXiv</i>, 2201.13388, doi:<a href="https://doi.org/10.48550/arXiv.2201.13388">10.48550/arXiv.2201.13388</a>.
  short: D. Mambelli, F. Träuble, S. Bauer, B. Schölkopf, F. Locatello, ArXiv (n.d.).
date_created: 2023-08-22T14:23:16Z
date_published: 2022-01-31T00:00:00Z
date_updated: 2024-10-14T12:27:39Z
day: '31'
department:
- _id: FrLo
doi: 10.48550/arXiv.2201.13388
extern: '1'
external_id:
  arxiv:
  - '2201.13388'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2201.13388
month: '01'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: Compositional multi-object reinforcement learning with linear relation networks
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14236'
abstract:
- lang: eng
  text: We show a $(1+\epsilon)$-approximation algorithm for maintaining maximum
    $s$-$t$ flow under $m$ edge insertions in $m^{1/2+o(1)} \epsilon^{-1/2}$ amortized
    update time for directed, unweighted graphs. This constitutes the first sublinear
    dynamic maximum flow algorithm in general sparse graphs with arbitrarily good
    approximation guarantee.
article_number: '2211.09606'
article_processing_charge: No
arxiv: 1
author:
- first_name: Gramoz
  full_name: Goranci, Gramoz
  last_name: Goranci
- first_name: Monika H
  full_name: Henzinger, Monika H
  id: 540c9bbd-f2de-11ec-812d-d04a5be85630
  last_name: Henzinger
  orcid: 0000-0002-5008-6530
citation:
  ama: Goranci G, Henzinger M. Incremental approximate maximum flow in m^{1/2+o(1)} update
    time. <i>arXiv</i>. doi:<a href="https://doi.org/10.48550/arXiv.2211.09606">10.48550/arXiv.2211.09606</a>
  apa: Goranci, G., &#38; Henzinger, M. (n.d.). Incremental approximate maximum flow
    in m^{1/2+o(1)} update time. <i>arXiv</i>. <a href="https://doi.org/10.48550/arXiv.2211.09606">https://doi.org/10.48550/arXiv.2211.09606</a>
  chicago: Goranci, Gramoz, and Monika Henzinger. “Incremental Approximate Maximum
    Flow in m^{1/2+o(1)} Update Time.” <i>ArXiv</i>, n.d. <a href="https://doi.org/10.48550/arXiv.2211.09606">https://doi.org/10.48550/arXiv.2211.09606</a>.
  ieee: G. Goranci and M. Henzinger, “Incremental approximate maximum flow in m^{1/2+o(1)}
    update time,” <i>arXiv</i>. .
  ista: Goranci G, Henzinger M. Incremental approximate maximum flow in m^{1/2+o(1)}
    update time. arXiv, 2211.09606.
  mla: Goranci, Gramoz, and Monika Henzinger. “Incremental Approximate Maximum Flow
    in m^{1/2+o(1)} Update Time.” <i>ArXiv</i>, 2211.09606, doi:<a href="https://doi.org/10.48550/arXiv.2211.09606">10.48550/arXiv.2211.09606</a>.
  short: G. Goranci, M. Henzinger, ArXiv (n.d.).
date_created: 2023-08-25T15:04:29Z
date_published: 2022-11-17T00:00:00Z
date_updated: 2024-11-06T12:01:45Z
day: '17'
doi: 10.48550/arXiv.2211.09606
extern: '1'
external_id:
  arxiv:
  - '2211.09606'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2211.09606
month: '11'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: Incremental approximate maximum flow in m^{1/2+o(1)} update time
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14248'
abstract:
- lang: eng
  text: "Recent work by Forsgård indicates that not every convex lattice polygon arises
    as the characteristic polygon of an affine dimer or, equivalently, an admissible
    oriented line arrangement on the torus in general position. We begin the classification
    of convex lattice polygons arising as characteristic polygons of affine dimers.
    We present several general constructions of new affine dimers from old, and an
    algorithm for finding affine dimers with prescribed polygon.\r\n\r\nWith these
    tools we prove that all lattice triangles, generalised parallelograms, and polygons
    of genus at most two admit an affine dimer."
article_processing_charge: No
article_type: original
arxiv: 1
author:
- first_name: Daniel
  full_name: Holmes, Daniel
  id: 3a443b4c-080d-11ed-979a-feb062bdcee0
  last_name: Holmes
citation:
  ama: Holmes D. Affine dimers from characteristic polygons. <i>PUMP Journal of Undergraduate
    Research</i>. 2022;5:24-51.
  apa: Holmes, D. (2022). Affine dimers from characteristic polygons. <i>PUMP Journal
    of Undergraduate Research</i>. California State University.
  chicago: Holmes, Daniel. “Affine Dimers from Characteristic Polygons.” <i>PUMP Journal
    of Undergraduate Research</i>. California State University, 2022.
  ieee: D. Holmes, “Affine dimers from characteristic polygons,” <i>PUMP Journal of
    Undergraduate Research</i>, vol. 5. California State University, pp. 24–51, 2022.
  ista: Holmes D. 2022. Affine dimers from characteristic polygons. PUMP Journal of
    Undergraduate Research. 5, 24–51.
  mla: Holmes, Daniel. “Affine Dimers from Characteristic Polygons.” <i>PUMP Journal
    of Undergraduate Research</i>, vol. 5, California State University, 2022, pp.
    24–51.
  short: D. Holmes, PUMP Journal of Undergraduate Research 5 (2022) 24–51.
corr_author: '1'
date_created: 2023-08-29T13:08:09Z
date_published: 2022-02-13T00:00:00Z
date_updated: 2024-10-09T21:06:47Z
day: '13'
extern: '1'
external_id:
  arxiv:
  - '2110.01703'
intvolume: '         5'
keyword:
- dimer model
- hyperplane arrangement
- torus
- lattice polygon
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://journals.calstate.edu/pump/article/view/2711
month: '02'
oa: 1
oa_version: Published Version
page: 24-51
publication: PUMP Journal of Undergraduate Research
publication_identifier:
  issn:
  - 2576-3725
publication_status: published
publisher: California State University
quality_controlled: '1'
status: public
title: Affine dimers from characteristic polygons
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 5
year: '2022'
...
