---
_id: '12251'
abstract:
- lang: eng
  text: Amyloid formation is linked to devastating neurodegenerative diseases, motivating
    detailed studies of the mechanisms of amyloid formation. For Aβ, the peptide associated
    with Alzheimer’s disease, the mechanism and rate of aggregation have been established
    for a range of variants and conditions <i>in vitro</i> and
    in bodily fluids. A key outstanding question is how the relative stabilities of
    monomers, fibrils and intermediates affect each step in the fibril formation process.
    By monitoring the kinetics of aggregation of Aβ42 in the presence of urea or
    guanidinium hydrochloride (GuHCl), we here determine the rates of the underlying
    microscopic steps and establish the importance of changes in relative stability
    induced by the presence of denaturant for each individual step. Denaturants shift
    the equilibrium towards the unfolded state of each species. We find that a non-ionic
    denaturant, urea, reduces the overall aggregation rate, and that the effect on
    nucleation is stronger than the effect on elongation. Urea reduces the rate of
    secondary nucleation by decreasing the coverage of fibril surfaces and the rate
    of nucleus formation. It also reduces the rate of primary nucleation, increasing
    its reaction order. The ionic denaturant, GuHCl, accelerates the aggregation at
    low denaturant concentrations and decelerates the aggregation at high denaturant
    concentrations. Below approximately 0.25 M GuHCl, the screening of repulsive electrostatic
    interactions between peptides by the charged denaturant dominates, leading to
    an increased aggregation rate. At higher GuHCl concentrations, the electrostatic
    repulsion is completely screened, and the denaturing effect dominates. The results
    illustrate how the differential effects of denaturants on stability of monomer,
    oligomer and fibril translate to differential effects on microscopic steps, with
    the rate of nucleation being most strongly reduced.
acknowledgement: This work was supported by grants from the Swedish Research Council
  (grant no. 2015-00143) and the European Research Council (grant no. 340890).
article_number: '943355'
article_processing_charge: No
article_type: original
author:
- first_name: Tanja
  full_name: Weiffert, Tanja
  last_name: Weiffert
- first_name: Georg
  full_name: Meisl, Georg
  last_name: Meisl
- first_name: Samo
  full_name: Curk, Samo
  last_name: Curk
- first_name: Risto
  full_name: Cukalevski, Risto
  last_name: Cukalevski
- first_name: Anđela
  full_name: Šarić, Anđela
  id: bf63d406-f056-11eb-b41d-f263a6566d8b
  last_name: Šarić
  orcid: 0000-0002-7854-2139
- first_name: Tuomas P. J.
  full_name: Knowles, Tuomas P. J.
  last_name: Knowles
- first_name: Sara
  full_name: Linse, Sara
  last_name: Linse
citation:
  ama: Weiffert T, Meisl G, Curk S, et al. Influence of denaturants on amyloid β42
    aggregation kinetics. <i>Frontiers in Neuroscience</i>. 2022;16. doi:<a href="https://doi.org/10.3389/fnins.2022.943355">10.3389/fnins.2022.943355</a>
  apa: Weiffert, T., Meisl, G., Curk, S., Cukalevski, R., Šarić, A., Knowles, T. P.
    J., &#38; Linse, S. (2022). Influence of denaturants on amyloid β42 aggregation
    kinetics. <i>Frontiers in Neuroscience</i>. Frontiers Media. <a href="https://doi.org/10.3389/fnins.2022.943355">https://doi.org/10.3389/fnins.2022.943355</a>
  chicago: Weiffert, Tanja, Georg Meisl, Samo Curk, Risto Cukalevski, Anđela Šarić,
    Tuomas P. J. Knowles, and Sara Linse. “Influence of Denaturants on Amyloid β42
    Aggregation Kinetics.” <i>Frontiers in Neuroscience</i>. Frontiers Media, 2022.
    <a href="https://doi.org/10.3389/fnins.2022.943355">https://doi.org/10.3389/fnins.2022.943355</a>.
  ieee: T. Weiffert <i>et al.</i>, “Influence of denaturants on amyloid β42 aggregation
    kinetics,” <i>Frontiers in Neuroscience</i>, vol. 16. Frontiers Media, 2022.
  ista: Weiffert T, Meisl G, Curk S, Cukalevski R, Šarić A, Knowles TPJ, Linse S.
    2022. Influence of denaturants on amyloid β42 aggregation kinetics. Frontiers
    in Neuroscience. 16, 943355.
  mla: Weiffert, Tanja, et al. “Influence of Denaturants on Amyloid β42 Aggregation
    Kinetics.” <i>Frontiers in Neuroscience</i>, vol. 16, 943355, Frontiers Media,
    2022, doi:<a href="https://doi.org/10.3389/fnins.2022.943355">10.3389/fnins.2022.943355</a>.
  short: T. Weiffert, G. Meisl, S. Curk, R. Cukalevski, A. Šarić, T.P.J. Knowles,
    S. Linse, Frontiers in Neuroscience 16 (2022).
date_created: 2023-01-16T09:56:43Z
date_published: 2022-09-20T00:00:00Z
date_updated: 2023-08-04T09:48:56Z
day: '20'
ddc:
- '570'
department:
- _id: AnSa
doi: 10.3389/fnins.2022.943355
external_id:
  isi:
  - '000866287100001'
file:
- access_level: open_access
  checksum: e67d16113ffb4fb4fa38a183d169f210
  content_type: application/pdf
  creator: dernst
  date_created: 2023-01-30T09:15:13Z
  date_updated: 2023-01-30T09:15:13Z
  file_id: '12442'
  file_name: 2022_FrontiersNeuroscience_Weiffert2.pdf
  file_size: 19798610
  relation: main_file
  success: 1
file_date_updated: 2023-01-30T09:15:13Z
has_accepted_license: '1'
intvolume: '16'
isi: 1
keyword:
- General Neuroscience
language:
- iso: eng
license: https://creativecommons.org/licenses/by/4.0/
month: '09'
oa: 1
oa_version: Published Version
publication: Frontiers in Neuroscience
publication_identifier:
  issn:
  - 1662-453X
publication_status: published
publisher: Frontiers Media
quality_controlled: '1'
scopus_import: '1'
status: public
title: Influence of denaturants on amyloid β42 aggregation kinetics
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: journal_article
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 16
year: '2022'
...
---
_id: '12252'
abstract:
- lang: eng
  text: The COVID-19 pandemic not only resulted in a global crisis, but also accelerated
    vaccine development and antibody discovery. Herein we report a development pipeline
    for a synthetic humanized VHH library and its use to isolate nanomolar-range affinity
    VHH binders to the receptor binding domains (RBD) of SARS-CoV-2 variants of concern
    (VoC). Trinucleotide-based randomization of CDRs by Kunkel mutagenesis, followed
    by rolling-circle amplification, yielded a phage display library with a diversity
    of more than 10<sup>11</sup> from a number of electroporation reactions manageable
    by a single person. To explore the most robust and rapid route to affinity improvement,
    we performed affinity maturation by CDR1 and CDR2 shuffling and avidity engineering
    by construction of a multivalent trimeric VHH fusion protein. As a result, the
    H7-Fc and G12x3-Fc binders were developed, with affinities in the nM and pM range,
    respectively. Importantly, these affinities are only weakly influenced by most
    SARS-CoV-2 VoC mutations, and the binders retain moderate binding to BA.4/5. A
    plaque reduction neutralization test (PRNT) against the emerging Omicron BA.1
    variant yielded IC50 values of 100 ng/ml and 9.6 ng/ml for the H7-Fc and G12x3-Fc
    antibodies, respectively. These VHHs could therefore expand the present landscape
    of SARS-CoV-2 neutralizing binders, with therapeutic potential for present and
    future SARS-CoV-2 variants.
acknowledgement: The authors declare that this study received funding from Immunofusion.
  The funder was not involved in the study design, collection, analysis, interpretation
  of data, the writing of this article or the decision to submit it for publication.
article_number: '965446'
article_processing_charge: No
article_type: original
author:
- first_name: Dmitri
  full_name: Dormeshkin, Dmitri
  last_name: Dormeshkin
- first_name: Michail
  full_name: Shapira, Michail
  last_name: Shapira
- first_name: Simon
  full_name: Dubovik, Simon
  last_name: Dubovik
- first_name: Anton
  full_name: Kavaleuski, Anton
  id: 4968f7ad-eb97-11eb-a6c2-8ed382e8912c
  last_name: Kavaleuski
  orcid: 0000-0003-2091-526X
- first_name: Mikalai
  full_name: Katsin, Mikalai
  last_name: Katsin
- first_name: Alexandr
  full_name: Migas, Alexandr
  last_name: Migas
- first_name: Alexander
  full_name: Meleshko, Alexander
  last_name: Meleshko
- first_name: Sergei
  full_name: Semyonov, Sergei
  last_name: Semyonov
citation:
  ama: Dormeshkin D, Shapira M, Dubovik S, et al. Isolation of an escape-resistant
    SARS-CoV-2 neutralizing nanobody from a novel synthetic nanobody library. <i>Frontiers
    in Immunology</i>. 2022;13. doi:<a href="https://doi.org/10.3389/fimmu.2022.965446">10.3389/fimmu.2022.965446</a>
  apa: Dormeshkin, D., Shapira, M., Dubovik, S., Kavaleuski, A., Katsin, M., Migas,
    A., … Semyonov, S. (2022). Isolation of an escape-resistant SARS-CoV-2 neutralizing
    nanobody from a novel synthetic nanobody library. <i>Frontiers in Immunology</i>.
    Frontiers Media. <a href="https://doi.org/10.3389/fimmu.2022.965446">https://doi.org/10.3389/fimmu.2022.965446</a>
  chicago: Dormeshkin, Dmitri, Michail Shapira, Simon Dubovik, Anton Kavaleuski, Mikalai
    Katsin, Alexandr Migas, Alexander Meleshko, and Sergei Semyonov. “Isolation of
    an Escape-Resistant SARS-CoV-2 Neutralizing Nanobody from a Novel Synthetic Nanobody
    Library.” <i>Frontiers in Immunology</i>. Frontiers Media, 2022. <a href="https://doi.org/10.3389/fimmu.2022.965446">https://doi.org/10.3389/fimmu.2022.965446</a>.
  ieee: D. Dormeshkin <i>et al.</i>, “Isolation of an escape-resistant SARS-CoV-2
    neutralizing nanobody from a novel synthetic nanobody library,” <i>Frontiers in
    Immunology</i>, vol. 13. Frontiers Media, 2022.
  ista: Dormeshkin D, Shapira M, Dubovik S, Kavaleuski A, Katsin M, Migas A, Meleshko
    A, Semyonov S. 2022. Isolation of an escape-resistant SARS-CoV-2 neutralizing
    nanobody from a novel synthetic nanobody library. Frontiers in Immunology. 13,
    965446.
  mla: Dormeshkin, Dmitri, et al. “Isolation of an Escape-Resistant SARS-CoV-2 Neutralizing
    Nanobody from a Novel Synthetic Nanobody Library.” <i>Frontiers in Immunology</i>,
    vol. 13, 965446, Frontiers Media, 2022, doi:<a href="https://doi.org/10.3389/fimmu.2022.965446">10.3389/fimmu.2022.965446</a>.
  short: D. Dormeshkin, M. Shapira, S. Dubovik, A. Kavaleuski, M. Katsin, A. Migas,
    A. Meleshko, S. Semyonov, Frontiers in Immunology 13 (2022).
date_created: 2023-01-16T09:56:57Z
date_published: 2022-09-16T00:00:00Z
date_updated: 2025-06-11T13:42:26Z
day: '16'
ddc:
- '570'
department:
- _id: LeSa
doi: 10.3389/fimmu.2022.965446
external_id:
  isi:
  - '000862479100001'
  pmid:
  - '36189235'
file:
- access_level: open_access
  checksum: f8f5d8110710033d0532e7e08bf9dad4
  content_type: application/pdf
  creator: dernst
  date_created: 2023-01-30T09:22:26Z
  date_updated: 2023-01-30T09:22:26Z
  file_id: '12443'
  file_name: 2022_FrontiersImmunology_Dormeshkin.pdf
  file_size: 5695892
  relation: main_file
  success: 1
file_date_updated: 2023-01-30T09:22:26Z
has_accepted_license: '1'
intvolume: '13'
isi: 1
keyword:
- Immunology
- Immunology and Allergy
- COVID-19
- SARS-CoV-2
- synthetic library
- RBD
- neutralization nanobody
- VHH
language:
- iso: eng
month: '09'
oa: 1
oa_version: Published Version
pmid: 1
publication: Frontiers in Immunology
publication_identifier:
  issn:
  - 1664-3224
publication_status: published
publisher: Frontiers Media
quality_controlled: '1'
scopus_import: '1'
status: public
title: Isolation of an escape-resistant SARS-CoV-2 neutralizing nanobody from a novel
  synthetic nanobody library
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 13
year: '2022'
...
---
_id: '12253'
abstract:
- lang: eng
  text: The sculpting of germ layers during gastrulation relies on the coordinated
    migration of progenitor cells, yet the cues controlling these long-range directed
    movements remain largely unknown. While directional migration often relies on
    a chemokine gradient generated from a localized source, we find that zebrafish
    ventrolateral mesoderm is guided by a self-generated gradient of the initially
    uniformly expressed and secreted protein Toddler/ELABELA/Apela. We show that the
    Apelin receptor, which is specifically expressed in mesodermal cells, has a dual
    role during gastrulation, acting as a scavenger receptor to generate a Toddler
    gradient, and as a chemokine receptor to sense this guidance cue. Thus, we uncover
    a single receptor–based self-generated gradient as the enigmatic guidance cue
    that can robustly steer the directional migration of mesoderm through the complex
    and continuously changing environment of the gastrulating embryo.
acknowledgement: 'We thank K. Aumayer and the team of the biooptics facility at the
  Vienna Biocenter, particularly P. Pasierbek and T. Müller, for support with microscopy;
  K. Panser, C. Pribitzer, and the animal facility personnel for taking care of zebrafish;
  M. Binner and A. Bandura for help with genotyping; M. Codina Tobias for help with
  establishing the conditions for the Toddler overexpression compensation experiment;
  T. Lubiana Alves for sharing the code for scRNA-Seq analyses; the Heisenberg laboratory,
  particularly D. Pinheiro, for joint laboratory meetings, discussions on the project,
  and providing the tg(gsc:CAAX-GFP) fish line; the Raz laboratory for providing the
  Lifeact-GFP plasmid; A. Andersen, A. Schier, C.-P. Heisenberg, and E. Tanaka for
  comments on the manuscript; and the entire Pauli laboratory, particularly K. Gert
  and V. Deneke, for valuable discussions and feedback on the manuscript. Funding:
  Work in A.P.’s laboratory has been supported by the IMP, which receives institutional
  funding from Boehringer Ingelheim and the Austrian Research Promotion Agency (Headquarter
  grant FFG-852936), as well as the FWF START program (Y 1031-B28 to A.P.), the Human
  Frontier Science Program (HFSP) Career Development Award (CDA00066/2015 to A.P.)
  and Young Investigator Grant (RGY0079/2020 to A.P.), the SFB RNA-Deco (project number
  F 80 to A.P.), a Whitman Center Fellowship from the Marine Biological Laboratory
  (to A.P.), and EMBO-YIP funds (to A.P.). This work was supported by the European
  Union (European Research Council Starting Grant 851288 to E.H.). For the purpose
  of Open Access, the authors have applied a CC BY public copyright license to any
  Author Accepted Manuscript (AAM) version arising from this submission.'
article_number: eadd2488
article_processing_charge: No
article_type: original
author:
- first_name: Jessica
  full_name: Stock, Jessica
  last_name: Stock
- first_name: Tomas
  full_name: Kazmar, Tomas
  last_name: Kazmar
- first_name: Friederike
  full_name: Schlumm, Friederike
  last_name: Schlumm
- first_name: Edouard B
  full_name: Hannezo, Edouard B
  id: 3A9DB764-F248-11E8-B48F-1D18A9856A87
  last_name: Hannezo
  orcid: 0000-0001-6005-1561
- first_name: Andrea
  full_name: Pauli, Andrea
  last_name: Pauli
citation:
  ama: Stock J, Kazmar T, Schlumm F, Hannezo EB, Pauli A. A self-generated Toddler
    gradient guides mesodermal cell migration. <i>Science Advances</i>. 2022;8(37).
    doi:<a href="https://doi.org/10.1126/sciadv.add2488">10.1126/sciadv.add2488</a>
  apa: Stock, J., Kazmar, T., Schlumm, F., Hannezo, E. B., &#38; Pauli, A. (2022).
    A self-generated Toddler gradient guides mesodermal cell migration. <i>Science
    Advances</i>. American Association for the Advancement of Science. <a href="https://doi.org/10.1126/sciadv.add2488">https://doi.org/10.1126/sciadv.add2488</a>
  chicago: Stock, Jessica, Tomas Kazmar, Friederike Schlumm, Edouard B Hannezo, and
    Andrea Pauli. “A Self-Generated Toddler Gradient Guides Mesodermal Cell Migration.”
    <i>Science Advances</i>. American Association for the Advancement of Science,
    2022. <a href="https://doi.org/10.1126/sciadv.add2488">https://doi.org/10.1126/sciadv.add2488</a>.
  ieee: J. Stock, T. Kazmar, F. Schlumm, E. B. Hannezo, and A. Pauli, “A self-generated
    Toddler gradient guides mesodermal cell migration,” <i>Science Advances</i>, vol.
    8, no. 37. American Association for the Advancement of Science, 2022.
  ista: Stock J, Kazmar T, Schlumm F, Hannezo EB, Pauli A. 2022. A self-generated
    Toddler gradient guides mesodermal cell migration. Science Advances. 8(37), eadd2488.
  mla: Stock, Jessica, et al. “A Self-Generated Toddler Gradient Guides Mesodermal
    Cell Migration.” <i>Science Advances</i>, vol. 8, no. 37, eadd2488, American Association
    for the Advancement of Science, 2022, doi:<a href="https://doi.org/10.1126/sciadv.add2488">10.1126/sciadv.add2488</a>.
  short: J. Stock, T. Kazmar, F. Schlumm, E.B. Hannezo, A. Pauli, Science Advances
    8 (2022).
date_created: 2023-01-16T09:57:10Z
date_published: 2022-09-14T00:00:00Z
date_updated: 2025-04-14T07:52:27Z
day: '14'
ddc:
- '570'
department:
- _id: EdHa
doi: 10.1126/sciadv.add2488
ec_funded: 1
external_id:
  isi:
  - '000888875000009'
  pmid:
  - '36103529'
file:
- access_level: open_access
  checksum: f59cdb824e5d4221045def81f46f6c65
  content_type: application/pdf
  creator: dernst
  date_created: 2023-01-30T09:27:49Z
  date_updated: 2023-01-30T09:27:49Z
  file_id: '12444'
  file_name: 2022_ScienceAdvances_Stock.pdf
  file_size: 1636732
  relation: main_file
  success: 1
file_date_updated: 2023-01-30T09:27:49Z
has_accepted_license: '1'
intvolume: '8'
isi: 1
issue: '37'
language:
- iso: eng
month: '09'
oa: 1
oa_version: Published Version
pmid: 1
project:
- _id: 05943252-7A3F-11EA-A408-12923DDC885E
  call_identifier: H2020
  grant_number: '851288'
  name: Design Principles of Branching Morphogenesis
publication: Science Advances
publication_identifier:
  issn:
  - 2375-2548
publication_status: published
publisher: American Association for the Advancement of Science
quality_controlled: '1'
scopus_import: '1'
status: public
title: A self-generated Toddler gradient guides mesodermal cell migration
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: journal_article
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 8
year: '2022'
...
---
_id: '12259'
abstract:
- lang: eng
  text: 'Theoretical foundations of chaos have been predominantly laid out for finite-dimensional
    dynamical systems, such as the three-body problem in classical mechanics and the
    Lorenz model in dissipative systems. In contrast, many real-world chaotic phenomena,
    e.g., weather, arise in systems with many (formally infinite) degrees of freedom,
    which limits direct quantitative analysis of such systems using chaos theory.
    In the present work, we demonstrate that hydrodynamic pilot-wave systems offer
    a bridge between low- and high-dimensional chaotic phenomena by allowing for a
    systematic study of how the former connects to the latter. Specifically, we present
    experimental results, which show the formation of low-dimensional chaotic attractors
    upon destabilization of regular dynamics and a final transition to high-dimensional
    chaos via the merging of distinct chaotic regions through a crisis bifurcation.
    Moreover, we show that the post-crisis dynamics of the system can be rationalized
    as consecutive scatterings from the nonattracting chaotic sets with lifetimes
    following exponential distributions.'
acknowledgement: 'This work was partially funded by the Institute of Science and Technology
  Austria Interdisciplinary Project Committee Grant “Pilot-Wave Hydrodynamics: Chaos
  and Quantum Analogies.”'
article_number: '093138'
article_processing_charge: No
article_type: original
arxiv: 1
author:
- first_name: George H
  full_name: Choueiri, George H
  id: 448BD5BC-F248-11E8-B48F-1D18A9856A87
  last_name: Choueiri
- first_name: Balachandra
  full_name: Suri, Balachandra
  id: 47A5E706-F248-11E8-B48F-1D18A9856A87
  last_name: Suri
- first_name: Jack
  full_name: Merrin, Jack
  id: 4515C308-F248-11E8-B48F-1D18A9856A87
  last_name: Merrin
  orcid: 0000-0001-5145-4609
- first_name: Maksym
  full_name: Serbyn, Maksym
  id: 47809E7E-F248-11E8-B48F-1D18A9856A87
  last_name: Serbyn
  orcid: 0000-0002-2399-5827
- first_name: Björn
  full_name: Hof, Björn
  id: 3A374330-F248-11E8-B48F-1D18A9856A87
  last_name: Hof
  orcid: 0000-0003-2057-2754
- first_name: Nazmi B
  full_name: Budanur, Nazmi B
  id: 3EA1010E-F248-11E8-B48F-1D18A9856A87
  last_name: Budanur
  orcid: 0000-0003-0423-5010
citation:
  ama: 'Choueiri GH, Suri B, Merrin J, Serbyn M, Hof B, Budanur NB. Crises and chaotic
    scattering in hydrodynamic pilot-wave experiments. <i>Chaos: An Interdisciplinary
    Journal of Nonlinear Science</i>. 2022;32(9). doi:<a href="https://doi.org/10.1063/5.0102904">10.1063/5.0102904</a>'
  apa: 'Choueiri, G. H., Suri, B., Merrin, J., Serbyn, M., Hof, B., &#38; Budanur,
    N. B. (2022). Crises and chaotic scattering in hydrodynamic pilot-wave experiments.
    <i>Chaos: An Interdisciplinary Journal of Nonlinear Science</i>. AIP Publishing.
    <a href="https://doi.org/10.1063/5.0102904">https://doi.org/10.1063/5.0102904</a>'
  chicago: 'Choueiri, George H, Balachandra Suri, Jack Merrin, Maksym Serbyn, Björn
    Hof, and Nazmi B Budanur. “Crises and Chaotic Scattering in Hydrodynamic Pilot-Wave
    Experiments.” <i>Chaos: An Interdisciplinary Journal of Nonlinear Science</i>.
    AIP Publishing, 2022. <a href="https://doi.org/10.1063/5.0102904">https://doi.org/10.1063/5.0102904</a>.'
  ieee: 'G. H. Choueiri, B. Suri, J. Merrin, M. Serbyn, B. Hof, and N. B. Budanur,
    “Crises and chaotic scattering in hydrodynamic pilot-wave experiments,” <i>Chaos:
    An Interdisciplinary Journal of Nonlinear Science</i>, vol. 32, no. 9. AIP Publishing,
    2022.'
  ista: 'Choueiri GH, Suri B, Merrin J, Serbyn M, Hof B, Budanur NB. 2022. Crises
    and chaotic scattering in hydrodynamic pilot-wave experiments. Chaos: An Interdisciplinary
    Journal of Nonlinear Science. 32(9), 093138.'
  mla: 'Choueiri, George H., et al. “Crises and Chaotic Scattering in Hydrodynamic
    Pilot-Wave Experiments.” <i>Chaos: An Interdisciplinary Journal of Nonlinear Science</i>,
    vol. 32, no. 9, 093138, AIP Publishing, 2022, doi:<a href="https://doi.org/10.1063/5.0102904">10.1063/5.0102904</a>.'
  short: 'G.H. Choueiri, B. Suri, J. Merrin, M. Serbyn, B. Hof, N.B. Budanur, Chaos:
    An Interdisciplinary Journal of Nonlinear Science 32 (2022).'
date_created: 2023-01-16T09:58:16Z
date_published: 2022-09-26T00:00:00Z
date_updated: 2025-06-11T13:41:34Z
day: '26'
ddc:
- '530'
department:
- _id: MaSe
- _id: BjHo
- _id: NanoFab
doi: 10.1063/5.0102904
external_id:
  arxiv:
  - '2206.01531'
  isi:
  - '000861009600005'
  pmid:
  - '36182399'
file:
- access_level: open_access
  checksum: 17881eff8b21969359a2dd64620120ba
  content_type: application/pdf
  creator: dernst
  date_created: 2023-01-30T09:41:12Z
  date_updated: 2023-01-30T09:41:12Z
  file_id: '12445'
  file_name: 2022_Chaos_Choueiri.pdf
  file_size: 3209644
  relation: main_file
  success: 1
file_date_updated: 2023-01-30T09:41:12Z
has_accepted_license: '1'
intvolume: '32'
isi: 1
issue: '9'
keyword:
- Applied Mathematics
- General Physics and Astronomy
- Mathematical Physics
- Statistical and Nonlinear Physics
language:
- iso: eng
month: '09'
oa: 1
oa_version: Published Version
pmid: 1
publication: 'Chaos: An Interdisciplinary Journal of Nonlinear Science'
publication_identifier:
  eissn:
  - 1089-7682
  issn:
  - 1054-1500
publication_status: published
publisher: AIP Publishing
quality_controlled: '1'
scopus_import: '1'
status: public
title: Crises and chaotic scattering in hydrodynamic pilot-wave experiments
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 32
year: '2022'
...
---
_id: '14093'
abstract:
- lang: eng
  text: 'We propose a stochastic conditional gradient method (CGM) for minimizing
    convex finite-sum objectives formed as a sum of smooth and non-smooth terms. Existing
    CGM variants for this template either suffer from slow convergence rates or require
    carefully increasing the batch size over the course of the algorithm’s execution,
    which leads to computing full gradients. In contrast, the proposed method, equipped
    with a stochastic average gradient (SAG) estimator, requires only one sample per
    iteration. Nevertheless, it guarantees fast convergence rates on par with more
    sophisticated variance reduction techniques. In applications, we put special emphasis
    on problems with a large number of separable constraints. Such problems are prevalent
    among semidefinite programming (SDP) formulations arising in machine learning
    and theoretical computer science. We provide numerical experiments on matrix completion,
    unsupervised clustering, and sparsest-cut SDPs.'
alternative_title:
- PMLR
article_processing_charge: No
arxiv: 1
author:
- first_name: Gideon
  full_name: Dresdner, Gideon
  last_name: Dresdner
- first_name: Maria-Luiza
  full_name: Vladarean, Maria-Luiza
  last_name: Vladarean
- first_name: Gunnar
  full_name: Rätsch, Gunnar
  last_name: Rätsch
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Volkan
  full_name: Cevher, Volkan
  last_name: Cevher
- first_name: Alp
  full_name: Yurtsever, Alp
  last_name: Yurtsever
citation:
  ama: 'Dresdner G, Vladarean M-L, Rätsch G, Locatello F, Cevher V, Yurtsever A. Faster
    one-sample stochastic conditional gradient method for composite convex minimization.
    In: <i>Proceedings of the 25th International Conference on Artificial Intelligence
    and Statistics</i>. Vol 151. ML Research Press; 2022:8439-8457.'
  apa: 'Dresdner, G., Vladarean, M.-L., Rätsch, G., Locatello, F., Cevher, V., &#38;
    Yurtsever, A. (2022). Faster one-sample stochastic conditional gradient method
    for composite convex minimization. In <i>Proceedings of the 25th International
    Conference on Artificial Intelligence and Statistics</i> (Vol. 151, pp. 8439–8457).
    Virtual: ML Research Press.'
  chicago: Dresdner, Gideon, Maria-Luiza Vladarean, Gunnar Rätsch, Francesco Locatello,
    Volkan Cevher, and Alp Yurtsever. “Faster One-Sample Stochastic Conditional Gradient
    Method for Composite Convex Minimization.” In <i>Proceedings of the 25th International
    Conference on Artificial Intelligence and Statistics</i>, 151:8439–57. ML Research
    Press, 2022.
  ieee: G. Dresdner, M.-L. Vladarean, G. Rätsch, F. Locatello, V. Cevher, and A. Yurtsever,
    “Faster one-sample stochastic conditional gradient method for composite convex
    minimization,” in <i>Proceedings of the 25th International Conference on Artificial
    Intelligence and Statistics</i>, Virtual, 2022, vol. 151, pp. 8439–8457.
  ista: 'Dresdner G, Vladarean M-L, Rätsch G, Locatello F, Cevher V, Yurtsever A.
    2022. Faster one-sample stochastic conditional gradient method for composite convex
    minimization. Proceedings of the 25th International Conference on Artificial
    Intelligence and Statistics. AISTATS: Conference on Artificial Intelligence and
    Statistics, PMLR, vol. 151, 8439–8457.'
  mla: Dresdner, Gideon, et al. “Faster One-Sample Stochastic Conditional Gradient
    Method for Composite Convex Minimization.” <i>Proceedings of the 25th International
    Conference on Artificial Intelligence and Statistics</i>, vol. 151, ML Research
    Press, 2022, pp. 8439–57.
  short: G. Dresdner, M.-L. Vladarean, G. Rätsch, F. Locatello, V. Cevher, A. Yurtsever,
    in:, Proceedings of the 25th International Conference on Artificial Intelligence
    and Statistics, ML Research Press, 2022, pp. 8439–8457.
conference:
  end_date: 2022-03-30
  location: Virtual
  name: 'AISTATS: Conference on Artificial Intelligence and Statistics'
  start_date: 2022-03-28
date_created: 2023-08-21T09:27:43Z
date_published: 2022-04-01T00:00:00Z
date_updated: 2023-09-06T10:28:17Z
day: '01'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2202.13212'
intvolume: '151'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2202.13212
month: '04'
oa: 1
oa_version: Preprint
page: 8439-8457
publication: Proceedings of the 25th International Conference on Artificial Intelligence
  and Statistics
publication_identifier:
  issn:
  - 2640-3498
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
scopus_import: '1'
status: public
title: Faster one-sample stochastic conditional gradient method for composite convex
  minimization
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 151
year: '2022'
...
---
_id: '14106'
abstract:
- lang: eng
  text: 'We show that deep networks trained to satisfy demographic parity often do
    so through a form of race or gender awareness, and that the more we force a network
    to be fair, the more accurately we can recover race or gender from the internal
    state of the network. Based on this observation, we investigate an alternative
    fairness approach: we add a second classification head to the network to explicitly
    predict the protected attribute (such as race or gender) alongside the original
    task. After training the two-headed network, we enforce demographic parity by
    merging the two heads, creating a network with the same architecture as the original
    network. We establish a close relationship between existing approaches and our
    approach by showing (1) that the decisions of a fair classifier are well-approximated
    by our approach, and (2) that an unfair and optimally accurate classifier can
    be recovered from a fair classifier and our second head predicting the protected
    attribute. We use our explicit formulation to argue that the existing fairness
    approaches, just as ours, demonstrate disparate treatment and that they are likely
    to be unlawful in a wide range of scenarios under US law.'
alternative_title:
- Advances in Neural Information Processing Systems
article_processing_charge: No
arxiv: 1
author:
- first_name: Michael
  full_name: Lohaus, Michael
  last_name: Lohaus
- first_name: Matthäus
  full_name: Kleindessner, Matthäus
  last_name: Kleindessner
- first_name: Krishnaram
  full_name: Kenthapadi, Krishnaram
  last_name: Kenthapadi
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Chris
  full_name: Russell, Chris
  last_name: Russell
citation:
  ama: 'Lohaus M, Kleindessner M, Kenthapadi K, Locatello F, Russell C. Are two heads
    the same as one? Identifying disparate treatment in fair neural networks. In:
    <i>36th Conference on Neural Information Processing Systems</i>. Vol 35. Neural
    Information Processing Systems Foundation; 2022:16548-16562.'
  apa: 'Lohaus, M., Kleindessner, M., Kenthapadi, K., Locatello, F., &#38; Russell,
    C. (2022). Are two heads the same as one? Identifying disparate treatment in fair
    neural networks. In <i>36th Conference on Neural Information Processing Systems</i>
    (Vol. 35, pp. 16548–16562). New Orleans, LA, United States: Neural Information
    Processing Systems Foundation.'
  chicago: Lohaus, Michael, Matthäus Kleindessner, Krishnaram Kenthapadi, Francesco
    Locatello, and Chris Russell. “Are Two Heads the Same as One? Identifying Disparate
    Treatment in Fair Neural Networks.” In <i>36th Conference on Neural Information
    Processing Systems</i>, 35:16548–62. Neural Information Processing Systems Foundation,
    2022.
  ieee: M. Lohaus, M. Kleindessner, K. Kenthapadi, F. Locatello, and C. Russell, “Are
    two heads the same as one? Identifying disparate treatment in fair neural networks,”
    in <i>36th Conference on Neural Information Processing Systems</i>, New Orleans,
    LA, United States, 2022, vol. 35, pp. 16548–16562.
  ista: 'Lohaus M, Kleindessner M, Kenthapadi K, Locatello F, Russell C. 2022. Are
    two heads the same as one? Identifying disparate treatment in fair neural networks.
    36th Conference on Neural Information Processing Systems. NeurIPS: Neural Information
    Processing Systems, Advances in Neural Information Processing Systems, vol. 35,
    16548–16562.'
  mla: Lohaus, Michael, et al. “Are Two Heads the Same as One? Identifying Disparate
    Treatment in Fair Neural Networks.” <i>36th Conference on Neural Information Processing
    Systems</i>, vol. 35, Neural Information Processing Systems Foundation, 2022,
    pp. 16548–62.
  short: M. Lohaus, M. Kleindessner, K. Kenthapadi, F. Locatello, C. Russell, in:,
    36th Conference on Neural Information Processing Systems, Neural Information Processing
    Systems Foundation, 2022, pp. 16548–16562.
conference:
  end_date: 2022-12-09
  location: New Orleans, LA, United States
  name: 'NeurIPS: Neural Information Processing Systems'
  start_date: 2022-11-28
date_created: 2023-08-21T12:12:42Z
date_published: 2022-12-15T00:00:00Z
date_updated: 2024-10-14T12:27:01Z
day: '15'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2204.04440'
intvolume: '35'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2204.04440
month: '12'
oa: 1
oa_version: Preprint
page: 16548-16562
publication: 36th Conference on Neural Information Processing Systems
publication_identifier:
  isbn:
  - '9781713871088'
publication_status: published
publisher: Neural Information Processing Systems Foundation
quality_controlled: '1'
scopus_import: '1'
status: public
title: Are two heads the same as one? Identifying disparate treatment in fair neural
  networks
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 35
year: '2022'
...
---
_id: '14107'
abstract:
- lang: eng
  text: "Amodal perception requires inferring the full shape of an object that is
    partially occluded. This task is particularly challenging on two levels: (1) it
    requires more information than what is contained in the instant retina or imaging
    sensor, (2) it is difficult to obtain enough well-annotated amodal labels for
    supervision. To this end, this paper develops a new framework of\r\nSelf-supervised
    amodal Video object segmentation (SaVos). Our method efficiently leverages the
    visual information of video temporal sequences to infer the amodal mask of objects.
    The key intuition is that the occluded part of an object can be explained away
    if that part is visible in other frames, possibly deformed as long as the deformation
    can be reasonably learned.\r\nAccordingly, we derive a novel self-supervised learning
    paradigm that efficiently utilizes the visible object parts as the supervision
    to guide the training on videos. In addition to learning type prior to complete
    masks for known types, SaVos also learns the spatiotemporal prior, which is also
    useful for the amodal task and could generalize to unseen types. The proposed\r\nframework
    achieves the state-of-the-art performance on the synthetic amodal segmentation
    benchmark FISHBOWL and the real world benchmark KINS-Video-Car. Further, it lends
    itself well to being transferred to novel distributions using test-time adaptation,
    outperforming existing models even after the transfer to a new distribution."
article_processing_charge: No
arxiv: 1
author:
- first_name: Jian
  full_name: Yao, Jian
  last_name: Yao
- first_name: Yuxin
  full_name: Hong, Yuxin
  last_name: Hong
- first_name: Chiyu
  full_name: Wang, Chiyu
  last_name: Wang
- first_name: Tianjun
  full_name: Xiao, Tianjun
  last_name: Xiao
- first_name: Tong
  full_name: He, Tong
  last_name: He
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: David
  full_name: Wipf, David
  last_name: Wipf
- first_name: Yanwei
  full_name: Fu, Yanwei
  last_name: Fu
- first_name: Zheng
  full_name: Zhang, Zheng
  last_name: Zhang
citation:
  ama: 'Yao J, Hong Y, Wang C, et al. Self-supervised amodal video object segmentation.
    In: <i>36th Conference on Neural Information Processing Systems</i>. ; 2022. doi:<a
    href="https://doi.org/10.48550/arXiv.2210.12733">10.48550/arXiv.2210.12733</a>'
  apa: Yao, J., Hong, Y., Wang, C., Xiao, T., He, T., Locatello, F., … Zhang, Z. (2022).
    Self-supervised amodal video object segmentation. In <i>36th Conference on Neural
    Information Processing Systems</i>. New Orleans, LA, United States. <a href="https://doi.org/10.48550/arXiv.2210.12733">https://doi.org/10.48550/arXiv.2210.12733</a>
  chicago: Yao, Jian, Yuxin Hong, Chiyu Wang, Tianjun Xiao, Tong He, Francesco Locatello,
    David Wipf, Yanwei Fu, and Zheng Zhang. “Self-Supervised Amodal Video Object Segmentation.”
    In <i>36th Conference on Neural Information Processing Systems</i>, 2022. <a href="https://doi.org/10.48550/arXiv.2210.12733">https://doi.org/10.48550/arXiv.2210.12733</a>.
  ieee: J. Yao <i>et al.</i>, “Self-supervised amodal video object segmentation,”
    in <i>36th Conference on Neural Information Processing Systems</i>, New Orleans,
    LA, United States, 2022.
  ista: 'Yao J, Hong Y, Wang C, Xiao T, He T, Locatello F, Wipf D, Fu Y, Zhang Z.
    2022. Self-supervised amodal video object segmentation. 36th Conference on Neural
    Information Processing Systems. NeurIPS: Neural Information Processing Systems.'
  mla: Yao, Jian, et al. “Self-Supervised Amodal Video Object Segmentation.” <i>36th
    Conference on Neural Information Processing Systems</i>, 2022, doi:<a href="https://doi.org/10.48550/arXiv.2210.12733">10.48550/arXiv.2210.12733</a>.
  short: J. Yao, Y. Hong, C. Wang, T. Xiao, T. He, F. Locatello, D. Wipf, Y. Fu, Z.
    Zhang, in:, 36th Conference on Neural Information Processing Systems, 2022.
conference:
  end_date: 2022-12-01
  location: New Orleans, LA, United States
  name: 'NeurIPS: Neural Information Processing Systems'
  start_date: 2022-11-28
date_created: 2023-08-21T12:13:25Z
date_published: 2022-10-23T00:00:00Z
date_updated: 2023-09-11T09:34:17Z
day: '23'
department:
- _id: FrLo
doi: 10.48550/arXiv.2210.12733
extern: '1'
external_id:
  arxiv:
  - '2210.12733'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2210.12733
month: '10'
oa: 1
oa_version: Preprint
publication: 36th Conference on Neural Information Processing Systems
publication_status: published
status: public
title: Self-supervised amodal video object segmentation
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14114'
abstract:
- lang: eng
  text: Algorithmic fairness is frequently motivated in terms of a trade-off in which
    overall performance is decreased so as to improve performance on disadvantaged
    groups where the algorithm would otherwise be less accurate. Contrary to this,
    we find that applying existing fairness approaches to computer vision improve
    fairness by degrading the performance of classifiers across all groups (with increased
    degradation on the best performing groups). Extending the bias-variance decomposition
    for classification to fairness, we theoretically explain why the majority of fairness
    methods designed for low capacity models should not be used in settings involving
    high-capacity models, a scenario common to computer vision. We corroborate this
    analysis with extensive experimental support that shows that many of the fairness
    heuristics used in computer vision also degrade performance on the most disadvantaged
    groups. Building on these insights, we propose an adaptive augmentation strategy
    that, uniquely, of all methods tested, improves performance for the disadvantaged
    groups.
article_processing_charge: No
arxiv: 1
author:
- first_name: Dominik
  full_name: Zietlow, Dominik
  last_name: Zietlow
- first_name: Michael
  full_name: Lohaus, Michael
  last_name: Lohaus
- first_name: Guha
  full_name: Balakrishnan, Guha
  last_name: Balakrishnan
- first_name: Matthaus
  full_name: Kleindessner, Matthaus
  last_name: Kleindessner
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Bernhard
  full_name: Scholkopf, Bernhard
  last_name: Scholkopf
- first_name: Chris
  full_name: Russell, Chris
  last_name: Russell
citation:
  ama: 'Zietlow D, Lohaus M, Balakrishnan G, et al. Leveling down in computer vision:
    Pareto inefficiencies in fair deep classifiers. In: <i>2022 IEEE/CVF Conference
    on Computer Vision and Pattern Recognition</i>. Institute of Electrical and Electronics
    Engineers; 2022:10400-10411. doi:<a href="https://doi.org/10.1109/cvpr52688.2022.01016">10.1109/cvpr52688.2022.01016</a>'
  apa: 'Zietlow, D., Lohaus, M., Balakrishnan, G., Kleindessner, M., Locatello, F.,
    Scholkopf, B., &#38; Russell, C. (2022). Leveling down in computer vision: Pareto
    inefficiencies in fair deep classifiers. In <i>2022 IEEE/CVF Conference on Computer
    Vision and Pattern Recognition</i> (pp. 10400–10411). New Orleans, LA, United
    States: Institute of Electrical and Electronics Engineers. <a href="https://doi.org/10.1109/cvpr52688.2022.01016">https://doi.org/10.1109/cvpr52688.2022.01016</a>'
  chicago: 'Zietlow, Dominik, Michael Lohaus, Guha Balakrishnan, Matthaus Kleindessner,
    Francesco Locatello, Bernhard Scholkopf, and Chris Russell. “Leveling down in
    Computer Vision: Pareto Inefficiencies in Fair Deep Classifiers.” In <i>2022 IEEE/CVF
    Conference on Computer Vision and Pattern Recognition</i>, 10400–411. Institute
    of Electrical and Electronics Engineers, 2022. <a href="https://doi.org/10.1109/cvpr52688.2022.01016">https://doi.org/10.1109/cvpr52688.2022.01016</a>.'
  ieee: 'D. Zietlow <i>et al.</i>, “Leveling down in computer vision: Pareto inefficiencies
    in fair deep classifiers,” in <i>2022 IEEE/CVF Conference on Computer Vision and
    Pattern Recognition</i>, New Orleans, LA, United States, 2022, pp. 10400–10411.'
  ista: 'Zietlow D, Lohaus M, Balakrishnan G, Kleindessner M, Locatello F, Scholkopf
    B, Russell C. 2022. Leveling down in computer vision: Pareto inefficiencies in
    fair deep classifiers. 2022 IEEE/CVF Conference on Computer Vision and Pattern
    Recognition. CVPR: Conference on Computer Vision and Pattern Recognition, 10400–10411.'
  mla: 'Zietlow, Dominik, et al. “Leveling down in Computer Vision: Pareto Inefficiencies
    in Fair Deep Classifiers.” <i>2022 IEEE/CVF Conference on Computer Vision and
    Pattern Recognition</i>, Institute of Electrical and Electronics Engineers, 2022,
    pp. 10400–11, doi:<a href="https://doi.org/10.1109/cvpr52688.2022.01016">10.1109/cvpr52688.2022.01016</a>.'
  short: D. Zietlow, M. Lohaus, G. Balakrishnan, M. Kleindessner, F. Locatello, B.
    Scholkopf, C. Russell, in:, 2022 IEEE/CVF Conference on Computer Vision and Pattern
    Recognition, Institute of Electrical and Electronics Engineers, 2022, pp. 10400–10411.
conference:
  end_date: 2022-06-24
  location: New Orleans, LA, United States
  name: 'CVPR: Conference on Computer Vision and Pattern Recognition'
  start_date: 2022-06-18
date_created: 2023-08-21T12:18:00Z
date_published: 2022-07-01T00:00:00Z
date_updated: 2023-09-11T09:19:14Z
day: '01'
department:
- _id: FrLo
doi: 10.1109/cvpr52688.2022.01016
extern: '1'
external_id:
  arxiv:
  - '2203.04913'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2203.04913
month: '07'
oa: 1
oa_version: Preprint
page: 10400-10411
publication: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition
publication_identifier:
  eissn:
  - 2575-7075
  isbn:
  - '9781665469470'
  issn:
  - 1063-6919
publication_status: published
publisher: Institute of Electrical and Electronics Engineers
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'Leveling down in computer vision: Pareto inefficiencies in fair deep classifiers'
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14168'
abstract:
- lang: eng
  text: "Recent work has seen the development of general purpose neural architectures\r\nthat
    can be trained to perform tasks across diverse data modalities. General\r\npurpose
    models typically make few assumptions about the underlying\r\ndata-structure and
    are known to perform well in the large-data regime. At the\r\nsame time, there
    has been growing interest in modular neural architectures that\r\nrepresent the
    data using sparsely interacting modules. These models can be more\r\nrobust out-of-distribution,
    computationally efficient, and capable of\r\nsample-efficient adaptation to new
    data. However, they tend to make\r\ndomain-specific assumptions about the data,
    and present challenges in how\r\nmodule behavior (i.e., parameterization) and
    connectivity (i.e., their layout)\r\ncan be jointly learned. In this work, we
    introduce a general purpose, yet\r\nmodular neural architecture called Neural
    Attentive Circuits (NACs) that\r\njointly learns the parameterization and a sparse
    connectivity of neural modules\r\nwithout using domain knowledge. NACs are best
    understood as the combination of\r\ntwo systems that are jointly trained end-to-end:
    one that determines the module\r\nconfiguration and the other that executes it
    on an input. We demonstrate\r\nqualitatively that NACs learn diverse and meaningful
    module configurations on\r\nthe NLVR2 dataset without additional supervision.
    Quantitatively, we show that\r\nby incorporating modularity in this way, NACs
    improve upon a strong non-modular\r\nbaseline in terms of low-shot adaptation
    on CIFAR and CUBs dataset by about\r\n10%, and OOD robustness on Tiny ImageNet-R
    by about 2.5%. Further, we find that\r\nNACs can achieve an 8x speedup at inference
    time while losing less than 3%\r\nperformance. Finally, we find NACs to yield
    competitive results on diverse data\r\nmodalities spanning point-cloud classification,
    symbolic processing and\r\ntext-classification from ASCII bytes, thereby confirming
    its general purpose\r\nnature."
alternative_title:
- Advances in Neural Information Processing Systems
article_processing_charge: No
arxiv: 1
author:
- first_name: Nasim
  full_name: Rahaman, Nasim
  last_name: Rahaman
- first_name: Martin
  full_name: Weiss, Martin
  last_name: Weiss
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Chris
  full_name: Pal, Chris
  last_name: Pal
- first_name: Yoshua
  full_name: Bengio, Yoshua
  last_name: Bengio
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Li Erran
  full_name: Li, Li Erran
  last_name: Li
- first_name: Nicolas
  full_name: Ballas, Nicolas
  last_name: Ballas
citation:
  ama: 'Rahaman N, Weiss M, Locatello F, et al. Neural attentive circuits. In: <i>36th
    Conference on Neural Information Processing Systems</i>. Vol 35. ; 2022.'
  apa: Rahaman, N., Weiss, M., Locatello, F., Pal, C., Bengio, Y., Schölkopf, B.,
    … Ballas, N. (2022). Neural attentive circuits. In <i>36th Conference on Neural
    Information Processing Systems</i> (Vol. 35). New Orleans, United States.
  chicago: Rahaman, Nasim, Martin Weiss, Francesco Locatello, Chris Pal, Yoshua Bengio,
    Bernhard Schölkopf, Li Erran Li, and Nicolas Ballas. “Neural Attentive Circuits.”
    In <i>36th Conference on Neural Information Processing Systems</i>, Vol. 35, 2022.
  ieee: N. Rahaman <i>et al.</i>, “Neural attentive circuits,” in <i>36th Conference
    on Neural Information Processing Systems</i>, New Orleans, United States, 2022,
    vol. 35.
  ista: 'Rahaman N, Weiss M, Locatello F, Pal C, Bengio Y, Schölkopf B, Li LE, Ballas
    N. 2022. Neural attentive circuits. 36th Conference on Neural Information Processing
    Systems. NeurIPS: Neural Information Processing Systems, Advances in Neural Information
    Processing Systems, vol. 35.'
  mla: Rahaman, Nasim, et al. “Neural Attentive Circuits.” <i>36th Conference on Neural
    Information Processing Systems</i>, vol. 35, 2022.
  short: N. Rahaman, M. Weiss, F. Locatello, C. Pal, Y. Bengio, B. Schölkopf, L.E.
    Li, N. Ballas, in:, 36th Conference on Neural Information Processing Systems,
    2022.
conference:
  end_date: 2022-12-01
  location: New Orleans, United States
  name: 'NeurIPS: Neural Information Processing Systems'
  start_date: 2022-11-29
date_created: 2023-08-22T13:57:27Z
date_published: 2022-10-14T00:00:00Z
date_updated: 2023-09-11T09:29:09Z
day: '14'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2210.08031'
intvolume: '35'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2210.08031
month: '10'
oa: 1
oa_version: Preprint
publication: 36th Conference on Neural Information Processing Systems
publication_status: published
status: public
title: Neural attentive circuits
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 35
year: '2022'
...
---
_id: '14170'
abstract:
- lang: eng
  text: "The idea behind object-centric representation learning is that natural scenes
    can better be modeled as compositions of objects and their relations as opposed
    to distributed representations. This inductive bias can be injected into neural
    networks to potentially improve systematic generalization and performance of downstream
    tasks in scenes with multiple objects. In this paper, we train state-of-the-art
    unsupervised models on five common multi-object datasets and evaluate segmentation
    metrics and downstream object property prediction. In addition, we study generalization
    and robustness by investigating the settings where either a single object is out
    of distribution -- e.g., having an unseen color, texture, or shape -- or global
    properties of the scene are altered -- e.g., by occlusions, cropping, or increasing
    the number of objects. From our experimental study, we find object-centric representations
    to be useful for\r\ndownstream tasks and generally robust to most distribution
    shifts affecting objects. However, when the distribution shift affects the input
    in a less structured manner, robustness in terms of segmentation and downstream
    task performance may vary significantly across models and distribution shifts. "
alternative_title:
- PMLR
article_processing_charge: No
arxiv: 1
author:
- first_name: Andrea
  full_name: Dittadi, Andrea
  last_name: Dittadi
- first_name: Samuele
  full_name: Papa, Samuele
  last_name: Papa
- first_name: Michele De
  full_name: Vita, Michele De
  last_name: Vita
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Ole
  full_name: Winther, Ole
  last_name: Winther
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
citation:
  ama: 'Dittadi A, Papa S, Vita MD, Schölkopf B, Winther O, Locatello F. Generalization
    and robustness implications in object-centric learning. In: <i>Proceedings of
    the 39th International Conference on Machine Learning</i>. Vol 2022. ML Research
    Press; :5221-5285.'
  apa: 'Dittadi, A., Papa, S., Vita, M. D., Schölkopf, B., Winther, O., &#38; Locatello,
    F. (n.d.). Generalization and robustness implications in object-centric learning.
    In <i>Proceedings of the 39th International Conference on Machine Learning</i>
    (Vol. 2022, pp. 5221–5285). Baltimore, MD, United States: ML Research Press.'
  chicago: Dittadi, Andrea, Samuele Papa, Michele De Vita, Bernhard Schölkopf, Ole
    Winther, and Francesco Locatello. “Generalization and Robustness Implications
    in Object-Centric Learning.” In <i>Proceedings of the 39th International Conference
    on Machine Learning</i>, 2022:5221–85. ML Research Press, n.d.
  ieee: A. Dittadi, S. Papa, M. D. Vita, B. Schölkopf, O. Winther, and F. Locatello,
    “Generalization and robustness implications in object-centric learning,” in <i>Proceedings
    of the 39th International Conference on Machine Learning</i>, Baltimore, MD, United
    States, vol. 2022, pp. 5221–5285.
  ista: Dittadi A, Papa S, Vita MD, Schölkopf B, Winther O, Locatello F. Generalization
    and robustness implications in object-centric learning. Proceedings of the 39th
    International Conference on Machine Learning. International Conference on Machine
    Learning, PMLR, vol. 2022, 5221–5285.
  mla: Dittadi, Andrea, et al. “Generalization and Robustness Implications in Object-Centric
    Learning.” <i>Proceedings of the 39th International Conference on Machine Learning</i>,
    vol. 2022, ML Research Press, pp. 5221–85.
  short: A. Dittadi, S. Papa, M.D. Vita, B. Schölkopf, O. Winther, F. Locatello, in:,
    Proceedings of the 39th International Conference on Machine Learning, ML Research
    Press, n.d., pp. 5221–5285.
conference:
  end_date: 2022-07-23
  location: Baltimore, MD, United States
  name: International Conference on Machine Learning
  start_date: 2022-07-17
date_created: 2023-08-22T13:59:55Z
date_published: 2022-07-22T00:00:00Z
date_updated: 2023-09-11T10:08:14Z
day: '22'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2107.00637'
intvolume: '2022'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2107.00637
month: '07'
oa: 1
oa_version: Preprint
page: 5221-5285
publication: Proceedings of the 39th International Conference on Machine Learning
publication_status: submitted
publisher: ML Research Press
quality_controlled: '1'
status: public
title: Generalization and robustness implications in object-centric learning
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 2022
year: '2022'
...
---
_id: '14171'
abstract:
- lang: eng
  text: "This paper demonstrates how to recover causal graphs from the score of the\r\ndata
    distribution in non-linear additive (Gaussian) noise models. Using score\r\nmatching
    algorithms as a building block, we show how to design a new generation\r\nof scalable
    causal discovery methods. To showcase our approach, we also propose\r\na new efficient
    method for approximating the score's Jacobian, enabling to\r\nrecover the causal
    graph. Empirically, we find that the new algorithm, called\r\nSCORE, is competitive
    with state-of-the-art causal discovery methods while\r\nbeing significantly faster."
alternative_title:
- PMLR
article_processing_charge: No
arxiv: 1
author:
- first_name: Paul
  full_name: Rolland, Paul
  last_name: Rolland
- first_name: Volkan
  full_name: Cevher, Volkan
  last_name: Cevher
- first_name: Matthäus
  full_name: Kleindessner, Matthäus
  last_name: Kleindessner
- first_name: Chris
  full_name: Russell, Chris
  last_name: Russell
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Dominik
  full_name: Janzing, Dominik
  last_name: Janzing
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
citation:
  ama: 'Rolland P, Cevher V, Kleindessner M, et al. Score matching enables causal
    discovery of nonlinear additive noise models. In: <i>Proceedings of the 39th
    International Conference on Machine Learning</i>. Vol 162. ML Research Press;
    2022:18741-18753.'
  apa: 'Rolland, P., Cevher, V., Kleindessner, M., Russell, C., Schölkopf, B., Janzing,
    D., &#38; Locatello, F. (2022). Score matching enables causal discovery of nonlinear
    additive noise models. In <i>Proceedings of the 39th International Conference
    on Machine Learning</i> (Vol. 162, pp. 18741–18753). Baltimore, MD, United States:
    ML Research Press.'
  chicago: Rolland, Paul, Volkan Cevher, Matthäus Kleindessner, Chris Russell, Bernhard
    Schölkopf, Dominik Janzing, and Francesco Locatello. “Score Matching Enables Causal
    Discovery of Nonlinear Additive Noise Models.” In <i>Proceedings of the 39th International
    Conference on Machine Learning</i>, 162:18741–53. ML Research Press, 2022.
  ieee: P. Rolland <i>et al.</i>, “Score matching enables causal discovery of nonlinear
    additive noise models,” in <i>Proceedings of the 39th International Conference
    on Machine Learning</i>, Baltimore, MD, United States, 2022, vol. 162, pp. 18741–18753.
  ista: Rolland P, Cevher V, Kleindessner M, Russell C, Schölkopf B, Janzing D, Locatello
    F. 2022. Score matching enables causal discovery of nonlinear additive noise
    models. Proceedings of the 39th International Conference on Machine Learning.
    International Conference on Machine Learning, PMLR, vol. 162, 18741–18753.
  mla: Rolland, Paul, et al. “Score Matching Enables Causal Discovery of Nonlinear
    Additive Noise Models.” <i>Proceedings of the 39th International Conference on
    Machine Learning</i>, vol. 162, ML Research Press, 2022, pp. 18741–53.
  short: P. Rolland, V. Cevher, M. Kleindessner, C. Russell, B. Schölkopf, D. Janzing,
    F. Locatello, in:, Proceedings of the 39th International Conference on Machine
    Learning, ML Research Press, 2022, pp. 18741–18753.
conference:
  end_date: 2022-07-23
  location: Baltimore, MD, United States
  name: International Conference on Machine Learning
  start_date: 2022-07-17
date_created: 2023-08-22T14:00:18Z
date_published: 2022-07-22T00:00:00Z
date_updated: 2023-09-11T10:14:20Z
day: '22'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2203.04413'
intvolume: '162'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2203.04413
month: '07'
oa: 1
oa_version: Preprint
page: 18741-18753
publication: Proceedings of the 39th International Conference on Machine Learning
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
status: public
title: Score matching enables causal discovery of nonlinear additive noise models
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 162
year: '2022'
...
---
_id: '14172'
abstract:
- lang: eng
  text: "An important component for generalization in machine learning is to uncover
    underlying latent factors of variation as well as the mechanism through which
    each factor acts in the world. In this paper, we test whether 17 unsupervised,
    weakly supervised, and fully supervised representation learning approaches correctly
    infer the generative factors of variation in simple datasets (dSprites, Shapes3D,
    MPI3D) from controlled environments, and on our contributed CelebGlow dataset.
    In contrast to prior robustness work that introduces novel factors of variation
    during test time, such as blur or other (un)structured noise, we here recompose,
    interpolate, or extrapolate only existing factors of variation from the training
    data set (e.g., small and medium-sized objects during training and large objects
    during testing). Models\r\nthat learn the correct mechanism should be able to
    generalize to this benchmark. In total, we train and test 2000+ models and observe
    that all of them struggle to learn the underlying mechanism regardless of supervision
    signal and architectural bias. Moreover, the generalization capabilities of all
    tested models drop significantly as we move from artificial datasets towards\r\nmore
    realistic real-world datasets. Despite their inability to identify the correct
    mechanism, the models are quite modular as their ability to infer other in-distribution
    factors remains fairly stable, provided only a single factor is out-of-distribution.
    These results point to an important yet understudied problem of learning mechanistic
    models of observations that can facilitate\r\ngeneralization."
article_processing_charge: No
arxiv: 1
author:
- first_name: Lukas
  full_name: Schott, Lukas
  last_name: Schott
- first_name: Julius von
  full_name: Kügelgen, Julius von
  last_name: Kügelgen
- first_name: Frederik
  full_name: Träuble, Frederik
  last_name: Träuble
- first_name: Peter
  full_name: Gehler, Peter
  last_name: Gehler
- first_name: Chris
  full_name: Russell, Chris
  last_name: Russell
- first_name: Matthias
  full_name: Bethge, Matthias
  last_name: Bethge
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Wieland
  full_name: Brendel, Wieland
  last_name: Brendel
citation:
  ama: 'Schott L, Kügelgen J von, Träuble F, et al. Visual representation learning
    does not generalize strongly within the same domain. In: <i>10th International
    Conference on Learning Representations</i>. ; 2022.'
  apa: Schott, L., Kügelgen, J. von, Träuble, F., Gehler, P., Russell, C., Bethge,
    M., … Brendel, W. (2022). Visual representation learning does not generalize strongly
    within the same domain. In <i>10th International Conference on Learning Representations</i>.
    Virtual.
  chicago: Schott, Lukas, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris
    Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, and Wieland
    Brendel. “Visual Representation Learning Does Not Generalize Strongly within the
    Same Domain.” In <i>10th International Conference on Learning Representations</i>,
    2022.
  ieee: L. Schott <i>et al.</i>, “Visual representation learning does not generalize
    strongly within the same domain,” in <i>10th International Conference on Learning
    Representations</i>, Virtual, 2022.
  ista: 'Schott L, Kügelgen J von, Träuble F, Gehler P, Russell C, Bethge M, Schölkopf
    B, Locatello F, Brendel W. 2022. Visual representation learning does not generalize
    strongly within the same domain. 10th International Conference on Learning Representations.
    ICLR: International Conference on Learning Representations.'
  mla: Schott, Lukas, et al. “Visual Representation Learning Does Not Generalize Strongly
    within the Same Domain.” <i>10th International Conference on Learning Representations</i>,
    2022.
  short: L. Schott, J. von Kügelgen, F. Träuble, P. Gehler, C. Russell, M. Bethge,
    B. Schölkopf, F. Locatello, W. Brendel, in:, 10th International Conference on
    Learning Representations, 2022.
conference:
  end_date: 2022-04-29
  location: Virtual
  name: 'ICLR: International Conference on Learning Representations'
  start_date: 2022-04-25
date_created: 2023-08-22T14:00:50Z
date_published: 2022-04-25T00:00:00Z
date_updated: 2023-09-11T09:40:52Z
day: '25'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2107.08221'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2107.08221
month: '04'
oa: 1
oa_version: Preprint
publication: 10th International Conference on Learning Representations
publication_status: published
quality_controlled: '1'
status: public
title: Visual representation learning does not generalize strongly within the same
  domain
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14173'
abstract:
- lang: eng
  text: "Since out-of-distribution generalization is a generally ill-posed problem,
    various proxy targets (e.g., calibration, adversarial robustness, algorithmic
    corruptions, invariance across shifts) were studied across different research
    programs resulting in different recommendations. While sharing the same aspirational
goal, these approaches have never been tested under the same experimental conditions
    on real data. In this paper, we take a unified view of previous work, highlighting
    message discrepancies that we address empirically, and providing recommendations
    on how to measure the robustness of a model and how to improve it. To this end,
    we collect 172 publicly available dataset pairs for training and out-of-distribution
    evaluation of accuracy, calibration error, adversarial attacks, environment invariance,
and synthetic corruptions. We fine-tune over 31k networks from nine different
    architectures in the many- and few-shot setting. Our findings confirm that
    in- and out-of-distribution accuracies tend to increase jointly, but show that
    their relation is largely dataset-dependent, and in general more nuanced and more
    complex than posited by previous, smaller scale studies."
alternative_title:
- Advances in Neural Information Processing Systems
article_processing_charge: No
arxiv: 1
author:
- first_name: Florian
  full_name: Wenzel, Florian
  last_name: Wenzel
- first_name: Andrea
  full_name: Dittadi, Andrea
  last_name: Dittadi
- first_name: Peter Vincent
  full_name: Gehler, Peter Vincent
  last_name: Gehler
- first_name: Carl-Johann
  full_name: Simon-Gabriel, Carl-Johann
  last_name: Simon-Gabriel
- first_name: Max
  full_name: Horn, Max
  last_name: Horn
- first_name: Dominik
  full_name: Zietlow, Dominik
  last_name: Zietlow
- first_name: David
  full_name: Kernert, David
  last_name: Kernert
- first_name: Chris
  full_name: Russell, Chris
  last_name: Russell
- first_name: Thomas
  full_name: Brox, Thomas
  last_name: Brox
- first_name: Bernt
  full_name: Schiele, Bernt
  last_name: Schiele
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
citation:
  ama: 'Wenzel F, Dittadi A, Gehler PV, et al. Assaying out-of-distribution generalization
    in transfer learning. In: <i>36th Conference on Neural Information Processing
    Systems</i>. Vol 35. Neural Information Processing Systems Foundation; 2022:7181-7198.'
apa: 'Wenzel, F., Dittadi, A., Gehler, P. V., Simon-Gabriel, C.-J.,
    Horn, M., Zietlow, D., … Locatello, F. (2022). Assaying out-of-distribution generalization
    in transfer learning. In <i>36th Conference on Neural Information Processing Systems</i>
    (Vol. 35, pp. 7181–7198). New Orleans, LA, United States: Neural Information Processing
    Systems Foundation.'
chicago: Wenzel, Florian, Andrea Dittadi, Peter Vincent Gehler, Carl-Johann Simon-Gabriel,
    Max Horn, Dominik Zietlow, David Kernert, et al. “Assaying
    Out-of-Distribution Generalization in Transfer Learning.” In <i>36th Conference
    on Neural Information Processing Systems</i>, 35:7181–98. Neural Information Processing
    Systems Foundation, 2022.
  ieee: F. Wenzel <i>et al.</i>, “Assaying out-of-distribution generalization in transfer
    learning,” in <i>36th Conference on Neural Information Processing Systems</i>,
    New Orleans, LA, United States, 2022, vol. 35, pp. 7181–7198.
ista: 'Wenzel F, Dittadi A, Gehler PV, Simon-Gabriel C-J, Horn M,
    Zietlow D, Kernert D, Russell C, Brox T, Schiele B, Schölkopf B, Locatello F.
    2022. Assaying out-of-distribution generalization in transfer learning. 36th Conference
    on Neural Information Processing Systems. NeurIPS: Neural Information Processing
    Systems, Advances in Neural Information Processing Systems, vol. 35, 7181–7198.'
  mla: Wenzel, Florian, et al. “Assaying Out-of-Distribution Generalization in Transfer
    Learning.” <i>36th Conference on Neural Information Processing Systems</i>, vol.
    35, Neural Information Processing Systems Foundation, 2022, pp. 7181–98.
short: F. Wenzel, A. Dittadi, P.V. Gehler, C.-J. Simon-Gabriel,
    M. Horn, D. Zietlow, D. Kernert, C. Russell, T. Brox, B. Schiele, B. Schölkopf,
    F. Locatello, in:, 36th Conference on Neural Information Processing Systems, Neural
    Information Processing Systems Foundation, 2022, pp. 7181–7198.
conference:
  end_date: 2022-12-09
  location: New Orleans, LA, United States
  name: 'NeurIPS: Neural Information Processing Systems'
  start_date: 2022-11-28
date_created: 2023-08-22T14:01:13Z
date_published: 2022-12-15T00:00:00Z
date_updated: 2023-09-06T10:34:43Z
day: '15'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2207.09239'
intvolume: '35'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2207.09239
month: '12'
oa: 1
oa_version: Preprint
page: 7181-7198
publication: 36th Conference on Neural Information Processing Systems
publication_identifier:
  isbn:
  - '9781713871088'
publication_status: published
publisher: Neural Information Processing Systems Foundation
quality_controlled: '1'
scopus_import: '1'
status: public
title: Assaying out-of-distribution generalization in transfer learning
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 35
year: '2022'
...
---
_id: '14174'
abstract:
- lang: eng
  text: "Building sample-efficient agents that generalize out-of-distribution (OOD)
    in real-world settings remains a fundamental unsolved problem on the path towards
    achieving higher-level cognition. One particularly promising approach is to begin
    with low-dimensional, pretrained representations of our world, which should facilitate
    efficient downstream learning and generalization. By training 240 representations
    and over 10,000 reinforcement learning (RL) policies on a simulated robotic setup,
we evaluate to what extent different properties of pretrained VAE-based representations
    affect the OOD generalization of downstream agents. We observe that many agents
    are surprisingly robust to realistic distribution shifts, including the challenging
    sim-to-real case. In addition, we find that the generalization performance of
    a simple downstream proxy task reliably predicts the generalization performance
of our RL agents under a wide range of OOD settings. Such proxy tasks can thus
    be used to select pretrained representations that will lead to agents that generalize."
article_processing_charge: No
arxiv: 1
author:
- first_name: Andrea
  full_name: Dittadi, Andrea
  last_name: Dittadi
- first_name: Frederik
  full_name: Träuble, Frederik
  last_name: Träuble
- first_name: Manuel
  full_name: Wüthrich, Manuel
  last_name: Wüthrich
- first_name: Felix
  full_name: Widmaier, Felix
  last_name: Widmaier
- first_name: Peter
  full_name: Gehler, Peter
  last_name: Gehler
- first_name: Ole
  full_name: Winther, Ole
  last_name: Winther
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Olivier
  full_name: Bachem, Olivier
  last_name: Bachem
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Stefan
  full_name: Bauer, Stefan
  last_name: Bauer
citation:
  ama: 'Dittadi A, Träuble F, Wüthrich M, et al. The role of pretrained representations
for the OOD generalization of reinforcement learning agents. In: <i>10th International
    Conference on Learning Representations</i>; 2022.'
  apa: Dittadi, A., Träuble, F., Wüthrich, M., Widmaier, F., Gehler, P., Winther,
    O., … Bauer, S. (2022). The role of pretrained representations for the OOD generalization
of reinforcement learning agents. In <i>10th International Conference on Learning
    Representations</i>. Virtual.
  chicago: Dittadi, Andrea, Frederik Träuble, Manuel Wüthrich, Felix Widmaier, Peter
    Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf,
    and Stefan Bauer. “The Role of Pretrained Representations for the OOD Generalization
of Reinforcement Learning Agents.” In <i>10th International Conference on Learning
    Representations</i>, 2022.
  ieee: A. Dittadi <i>et al.</i>, “The role of pretrained representations for the
OOD generalization of reinforcement learning agents,” in <i>10th International
    Conference on Learning Representations</i>, Virtual, 2022.
  ista: 'Dittadi A, Träuble F, Wüthrich M, Widmaier F, Gehler P, Winther O, Locatello
    F, Bachem O, Schölkopf B, Bauer S. 2022. The role of pretrained representations
for the OOD generalization of reinforcement learning agents. 10th International
    Conference on Learning Representations. ICLR: International Conference on Learning
    Representations.'
  mla: Dittadi, Andrea, et al. “The Role of Pretrained Representations for the OOD
Generalization of Reinforcement Learning Agents.” <i>10th International Conference
    on Learning Representations</i>, 2022.
  short: A. Dittadi, F. Träuble, M. Wüthrich, F. Widmaier, P. Gehler, O. Winther,
    F. Locatello, O. Bachem, B. Schölkopf, S. Bauer, in:, 10th International Conference
    on Learning Representations, 2022.
conference:
  end_date: 2022-04-29
  location: Virtual
  name: 'ICLR: International Conference on Learning Representations'
  start_date: 2022-04-25
date_created: 2023-08-22T14:02:13Z
date_published: 2022-04-25T00:00:00Z
date_updated: 2023-09-11T09:48:36Z
day: '25'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2107.05686'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2107.05686
month: '04'
oa: 1
oa_version: Preprint
publication: 10th International Conference on Learning Representations
publication_status: published
quality_controlled: '1'
status: public
title: The role of pretrained representations for the OOD generalization of reinforcement
  learning agents
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14175'
abstract:
- lang: eng
  text: "Predicting the future trajectory of a moving agent can be easy when the past
    trajectory continues smoothly but is challenging when complex interactions with
    other agents are involved. Recent deep learning approaches for trajectory prediction
    show promising performance and partially attribute this to successful reasoning
    about agent-agent interactions. However, it remains unclear which features such
    black-box models actually learn to use for making predictions. This paper proposes
a procedure that quantifies the contributions of different cues to model performance
    based on a variant of Shapley values. Applying this procedure to state-of-the-art
    trajectory prediction methods on standard benchmark datasets shows that they are,
    in fact, unable to reason about interactions. Instead, the past trajectory of
    the target is the only feature used for predicting its future. For a task with
richer social interaction patterns, on the other hand, the tested models do
    pick up such interactions to a certain extent, as quantified by our feature attribution
    method. We discuss the limits of the proposed method and its links to causality."
article_processing_charge: No
arxiv: 1
author:
- first_name: Osama
  full_name: Makansi, Osama
  last_name: Makansi
- first_name: Julius von
  full_name: Kügelgen, Julius von
  last_name: Kügelgen
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Peter
  full_name: Gehler, Peter
  last_name: Gehler
- first_name: Dominik
  full_name: Janzing, Dominik
  last_name: Janzing
- first_name: Thomas
  full_name: Brox, Thomas
  last_name: Brox
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
citation:
  ama: 'Makansi O, Kügelgen J von, Locatello F, et al. You mostly walk alone: Analyzing
    feature attribution in trajectory prediction. In: <i>10th International Conference
on Learning Representations</i>; 2022.'
  apa: 'Makansi, O., Kügelgen, J. von, Locatello, F., Gehler, P., Janzing, D., Brox,
    T., &#38; Schölkopf, B. (2022). You mostly walk alone: Analyzing feature attribution
    in trajectory prediction. In <i>10th International Conference on Learning Representations</i>.
    Virtual.'
  chicago: 'Makansi, Osama, Julius von Kügelgen, Francesco Locatello, Peter Gehler,
    Dominik Janzing, Thomas Brox, and Bernhard Schölkopf. “You Mostly Walk Alone:
    Analyzing Feature Attribution in Trajectory Prediction.” In <i>10th International
    Conference on Learning Representations</i>, 2022.'
  ieee: 'O. Makansi <i>et al.</i>, “You mostly walk alone: Analyzing feature attribution
    in trajectory prediction,” in <i>10th International Conference on Learning Representations</i>,
    Virtual, 2022.'
  ista: 'Makansi O, Kügelgen J von, Locatello F, Gehler P, Janzing D, Brox T, Schölkopf
    B. 2022. You mostly walk alone: Analyzing feature attribution in trajectory prediction.
    10th International Conference on Learning Representations. ICLR: International
    Conference on Learning Representations.'
  mla: 'Makansi, Osama, et al. “You Mostly Walk Alone: Analyzing Feature Attribution
    in Trajectory Prediction.” <i>10th International Conference on Learning Representations</i>,
    2022.'
  short: O. Makansi, J. von Kügelgen, F. Locatello, P. Gehler, D. Janzing, T. Brox,
    B. Schölkopf, in:, 10th International Conference on Learning Representations,
    2022.
conference:
  end_date: 2022-04-29
  location: Virtual
  name: 'ICLR: International Conference on Learning Representations'
  start_date: 2022-04-25
date_created: 2023-08-22T14:02:34Z
date_published: 2022-04-25T00:00:00Z
date_updated: 2023-09-11T09:52:20Z
day: '25'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2110.05304'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2110.05304
month: '04'
oa: 1
oa_version: Preprint
publication: 10th International Conference on Learning Representations
publication_status: published
quality_controlled: '1'
status: public
title: 'You mostly walk alone: Analyzing feature attribution in trajectory prediction'
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14215'
abstract:
- lang: eng
  text: Geospatial Information Systems are used by researchers and Humanitarian Assistance
    and Disaster Response (HADR) practitioners to support a wide variety of important
    applications. However, collaboration between these actors is difficult due to
    the heterogeneous nature of geospatial data modalities (e.g., multi-spectral images
    of various resolutions, timeseries, weather data) and diversity of tasks (e.g.,
    regression of human activity indicators or detecting forest fires). In this work,
    we present a roadmap towards the construction of a general-purpose neural architecture
    (GPNA) with a geospatial inductive bias, pre-trained on large amounts of unlabelled
    earth observation data in a self-supervised manner. We envision how such a model
    may facilitate cooperation between members of the community. We show preliminary
    results on the first step of the roadmap, where we instantiate an architecture
    that can process a wide variety of geospatial data modalities and demonstrate
    that it can achieve competitive performance with domain-specific architectures
    on tasks relating to the U.N.'s Sustainable Development Goals.
article_processing_charge: No
arxiv: 1
author:
- first_name: Nasim
  full_name: Rahaman, Nasim
  last_name: Rahaman
- first_name: Martin
  full_name: Weiss, Martin
  last_name: Weiss
- first_name: Frederik
  full_name: Träuble, Frederik
  last_name: Träuble
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Alexandre
  full_name: Lacoste, Alexandre
  last_name: Lacoste
- first_name: Yoshua
  full_name: Bengio, Yoshua
  last_name: Bengio
- first_name: Chris
  full_name: Pal, Chris
  last_name: Pal
- first_name: Li Erran
  full_name: Li, Li Erran
  last_name: Li
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
citation:
  ama: 'Rahaman N, Weiss M, Träuble F, et al. A general purpose neural architecture
    for geospatial systems. In: <i>36th Conference on Neural Information Processing
    Systems</i>.'
  apa: Rahaman, N., Weiss, M., Träuble, F., Locatello, F., Lacoste, A., Bengio, Y.,
    … Schölkopf, B. (n.d.). A general purpose neural architecture for geospatial systems.
    In <i>36th Conference on Neural Information Processing Systems</i>. New Orleans,
    LA, United States.
  chicago: Rahaman, Nasim, Martin Weiss, Frederik Träuble, Francesco Locatello, Alexandre
    Lacoste, Yoshua Bengio, Chris Pal, Li Erran Li, and Bernhard Schölkopf. “A General
    Purpose Neural Architecture for Geospatial Systems.” In <i>36th Conference on
    Neural Information Processing Systems</i>, n.d.
  ieee: N. Rahaman <i>et al.</i>, “A general purpose neural architecture for geospatial
    systems,” in <i>36th Conference on Neural Information Processing Systems</i>,
    New Orleans, LA, United States.
  ista: 'Rahaman N, Weiss M, Träuble F, Locatello F, Lacoste A, Bengio Y, Pal C, Li
    LE, Schölkopf B. A general purpose neural architecture for geospatial systems.
    36th Conference on Neural Information Processing Systems. NeurIPS: Neural Information
    Processing Systems.'
  mla: Rahaman, Nasim, et al. “A General Purpose Neural Architecture for Geospatial
    Systems.” <i>36th Conference on Neural Information Processing Systems</i>.
  short: N. Rahaman, M. Weiss, F. Träuble, F. Locatello, A. Lacoste, Y. Bengio, C.
    Pal, L.E. Li, B. Schölkopf, in:, 36th Conference on Neural Information Processing
    Systems, n.d.
conference:
  end_date: 2022-12-09
  location: New Orleans, LA, United States
  name: 'NeurIPS: Neural Information Processing Systems'
  start_date: 2022-11-28
date_created: 2023-08-22T14:21:47Z
date_published: 2022-11-04T00:00:00Z
date_updated: 2023-09-13T09:35:59Z
day: '04'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2211.02348'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2211.02348
month: '11'
oa: 1
oa_version: Preprint
publication: 36th Conference on Neural Information Processing Systems
publication_status: submitted
quality_controlled: '1'
status: public
title: A general purpose neural architecture for geospatial systems
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14220'
abstract:
- lang: eng
  text: Although reinforcement learning has seen remarkable progress over the last
    years, solving robust dexterous object-manipulation tasks in multi-object settings
    remains a challenge. In this paper, we focus on models that can learn manipulation
    tasks in fixed multi-object settings and extrapolate this skill zero-shot without
    any drop in performance when the number of objects changes. We consider the generic
    task of bringing a specific cube out of a set to a goal position. We find that
    previous approaches, which primarily leverage attention and graph neural network-based
    architectures, do not generalize their skills when the number of input objects
changes while scaling as K². We propose an alternative plug-and-play module based
    on relational inductive biases to overcome these limitations. Besides exceeding
performance in their training environment, we show that our approach, which scales
    linearly in K, allows agents to extrapolate and generalize zero-shot to any new
    object number.
article_number: '2201.13388'
article_processing_charge: No
arxiv: 1
author:
- first_name: Davide
  full_name: Mambelli, Davide
  last_name: Mambelli
- first_name: Frederik
  full_name: Träuble, Frederik
  last_name: Träuble
- first_name: Stefan
  full_name: Bauer, Stefan
  last_name: Bauer
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
citation:
  ama: Mambelli D, Träuble F, Bauer S, Schölkopf B, Locatello F. Compositional multi-object
    reinforcement learning with linear relation networks. <i>arXiv</i>. doi:<a href="https://doi.org/10.48550/arXiv.2201.13388">10.48550/arXiv.2201.13388</a>
  apa: Mambelli, D., Träuble, F., Bauer, S., Schölkopf, B., &#38; Locatello, F. (n.d.).
    Compositional multi-object reinforcement learning with linear relation networks.
    <i>arXiv</i>. <a href="https://doi.org/10.48550/arXiv.2201.13388">https://doi.org/10.48550/arXiv.2201.13388</a>
  chicago: Mambelli, Davide, Frederik Träuble, Stefan Bauer, Bernhard Schölkopf, and
    Francesco Locatello. “Compositional Multi-Object Reinforcement Learning with Linear
    Relation Networks.” <i>ArXiv</i>, n.d. <a href="https://doi.org/10.48550/arXiv.2201.13388">https://doi.org/10.48550/arXiv.2201.13388</a>.
  ieee: D. Mambelli, F. Träuble, S. Bauer, B. Schölkopf, and F. Locatello, “Compositional
multi-object reinforcement learning with linear relation networks,” <i>arXiv</i>.
  ista: Mambelli D, Träuble F, Bauer S, Schölkopf B, Locatello F. Compositional multi-object
    reinforcement learning with linear relation networks. arXiv, 2201.13388.
  mla: Mambelli, Davide, et al. “Compositional Multi-Object Reinforcement Learning
    with Linear Relation Networks.” <i>ArXiv</i>, 2201.13388, doi:<a href="https://doi.org/10.48550/arXiv.2201.13388">10.48550/arXiv.2201.13388</a>.
  short: D. Mambelli, F. Träuble, S. Bauer, B. Schölkopf, F. Locatello, ArXiv (n.d.).
date_created: 2023-08-22T14:23:16Z
date_published: 2022-01-31T00:00:00Z
date_updated: 2024-10-14T12:27:39Z
day: '31'
department:
- _id: FrLo
doi: 10.48550/arXiv.2201.13388
extern: '1'
external_id:
  arxiv:
  - '2201.13388'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2201.13388
month: '01'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: Compositional multi-object reinforcement learning with linear relation networks
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2022'
...
---
_id: '14355'
abstract:
- lang: eng
  text: 'Purpose: The mediator (MED) multisubunit-complex modulates the activity of
    the transcriptional machinery, and genetic defects in different MED subunits (17,
    20, 27) have been implicated in neurologic diseases. In this study, we identified
    a recurrent homozygous variant in MED11 (c.325C>T; p.Arg109Ter) in 7 affected
    individuals from 5 unrelated families. Methods: To investigate the genetic cause
of the disease, exome or genome sequencing was performed in 5 unrelated families
    identified via different research networks and Matchmaker Exchange. Deep clinical
    and brain imaging evaluations were performed by clinical pediatric neurologists
    and neuroradiologists. The functional effect of the candidate variant on both
    MED11 RNA and protein was assessed using reverse transcriptase polymerase chain
    reaction and western blotting using fibroblast cell lines derived from 1 affected
    individual and controls and through computational approaches. Knockouts in zebrafish
    were generated using clustered regularly interspaced short palindromic repeats/Cas9.
    Results: The disease was characterized by microcephaly, profound neurodevelopmental
    impairment, exaggerated startle response, myoclonic seizures, progressive widespread
    neurodegeneration, and premature death. Functional studies on patient-derived
    fibroblasts did not show a loss of protein function but rather disruption of the
    C-terminal of MED11, likely impairing binding to other MED subunits. A zebrafish
    knockout model recapitulates key clinical phenotypes. Conclusion: Loss of the
    C-terminal of MED subunit 11 may affect its binding efficiency to other MED subunits,
    thus implicating the MED-complex stability in brain development and neurodegeneration.'
article_processing_charge: No
article_type: original
author:
- first_name: Elisa
  full_name: Cali, Elisa
  last_name: Cali
- first_name: Sheng-Jia
  full_name: Lin, Sheng-Jia
  last_name: Lin
- first_name: Clarissa
  full_name: Rocca, Clarissa
  last_name: Rocca
- first_name: Yavuz
  full_name: Sahin, Yavuz
  last_name: Sahin
- first_name: Aisha
  full_name: Al Shamsi, Aisha
  last_name: Al Shamsi
- first_name: Salima
  full_name: El Chehadeh, Salima
  last_name: El Chehadeh
- first_name: Myriam
  full_name: Chaabouni, Myriam
  last_name: Chaabouni
- first_name: Kshitij
  full_name: Mankad, Kshitij
  last_name: Mankad
- first_name: Evangelia
  full_name: Galanaki, Evangelia
  last_name: Galanaki
- first_name: Stephanie
  full_name: Efthymiou, Stephanie
  last_name: Efthymiou
- first_name: Sniya
  full_name: Sudhakar, Sniya
  last_name: Sudhakar
- first_name: Alkyoni
  full_name: Athanasiou-Fragkouli, Alkyoni
  last_name: Athanasiou-Fragkouli
- first_name: Tamer
  full_name: Celik, Tamer
  last_name: Celik
- first_name: Nejat
  full_name: Narli, Nejat
  last_name: Narli
- first_name: Sebastiano
  full_name: Bianca, Sebastiano
  last_name: Bianca
- first_name: David
  full_name: Murphy, David
  last_name: Murphy
- first_name: Francisco Martins De Carvalho
  full_name: Moreira, Francisco Martins De Carvalho
  last_name: Moreira
- first_name: Andrea
  full_name: Accogli, Andrea
  last_name: Accogli
- first_name: Cassidy
  full_name: Petree, Cassidy
  last_name: Petree
- first_name: Kevin
  full_name: Huang, Kevin
  id: 3b3d2888-1ff6-11ee-9fa6-8f209ca91fe3
  last_name: Huang
  orcid: 0000-0002-2512-7812
- first_name: Kamel
  full_name: Monastiri, Kamel
  last_name: Monastiri
- first_name: Masoud
  full_name: Edizadeh, Masoud
  last_name: Edizadeh
- first_name: Rosaria
  full_name: Nardello, Rosaria
  last_name: Nardello
- first_name: Marzia
  full_name: Ognibene, Marzia
  last_name: Ognibene
- first_name: Patrizia
  full_name: De Marco, Patrizia
  last_name: De Marco
- first_name: Martino
  full_name: Ruggieri, Martino
  last_name: Ruggieri
- first_name: Federico
  full_name: Zara, Federico
  last_name: Zara
- first_name: Pasquale
  full_name: Striano, Pasquale
  last_name: Striano
- first_name: Yavuz
  full_name: Sahin, Yavuz
  last_name: Sahin
- first_name: Lihadh
  full_name: Al-Gazali, Lihadh
  last_name: Al-Gazali
- first_name: Marie Therese Abi
  full_name: Warde, Marie Therese Abi
  last_name: Warde
- first_name: Benedicte
  full_name: Gerard, Benedicte
  last_name: Gerard
- first_name: Giovanni
  full_name: Zifarelli, Giovanni
  last_name: Zifarelli
- first_name: Christian
  full_name: Beetz, Christian
  last_name: Beetz
- first_name: Sara
  full_name: Fortuna, Sara
  last_name: Fortuna
- first_name: Miguel
  full_name: Soler, Miguel
  last_name: Soler
- first_name: Enza Maria
  full_name: Valente, Enza Maria
  last_name: Valente
- first_name: Gaurav
  full_name: Varshney, Gaurav
  last_name: Varshney
- first_name: Reza
  full_name: Maroofian, Reza
  last_name: Maroofian
- first_name: Vincenzo
  full_name: Salpietro, Vincenzo
  last_name: Salpietro
- first_name: Henry
  full_name: Houlden, Henry
  last_name: Houlden
- first_name: SYNaPS Study
  full_name: Grp, SYNaPS Study
  last_name: Grp
citation:
  ama: Cali E, Lin S-J, Rocca C, et al. A homozygous MED11 C-terminal variant causes
    a lethal neurodegenerative disease. <i>Genetics in Medicine</i>. 2022;24(10):2194-2203.
    doi:<a href="https://doi.org/10.1016/j.gim.2022.07.013">10.1016/j.gim.2022.07.013</a>
  apa: Cali, E., Lin, S.-J., Rocca, C., Sahin, Y., Al Shamsi, A., El Chehadeh, S.,
    … Grp, Syn. S. (2022). A homozygous MED11 C-terminal variant causes a lethal neurodegenerative
    disease. <i>Genetics in Medicine</i>. Elsevier. <a href="https://doi.org/10.1016/j.gim.2022.07.013">https://doi.org/10.1016/j.gim.2022.07.013</a>
  chicago: Cali, Elisa, Sheng-Jia Lin, Clarissa Rocca, Yavuz Sahin, Aisha Al Shamsi,
    Salima El Chehadeh, Myriam Chaabouni, et al. “A Homozygous MED11 C-Terminal Variant
    Causes a Lethal Neurodegenerative Disease.” <i>Genetics in Medicine</i>. Elsevier,
    2022. <a href="https://doi.org/10.1016/j.gim.2022.07.013">https://doi.org/10.1016/j.gim.2022.07.013</a>.
  ieee: E. Cali <i>et al.</i>, “A homozygous MED11 C-terminal variant causes a lethal
    neurodegenerative disease,” <i>Genetics in Medicine</i>, vol. 24, no. 10. Elsevier,
    pp. 2194–2203, 2022.
  ista: Cali E, Lin S-J, Rocca C, Sahin Y, Al Shamsi A, El Chehadeh S, Chaabouni M,
    Mankad K, Galanaki E, Efthymiou S, Sudhakar S, Athanasiou-Fragkouli A, Celik T,
    Narli N, Bianca S, Murphy D, Moreira FMDC, Accogli A, Petree C, Huang K, Monastiri
    K, Edizadeh M, Nardello R, Ognibene M, De Marco P, Ruggieri M, Zara F, Striano
    P, Sahin Y, Al-Gazali L, Warde MTA, Gerard B, Zifarelli G, Beetz C, Fortuna S,
    Soler M, Valente EM, Varshney G, Maroofian R, Salpietro V, Houlden H, Grp SynS.
    2022. A homozygous MED11 C-terminal variant causes a lethal neurodegenerative
    disease. Genetics in Medicine. 24(10), 2194–2203.
  mla: Cali, Elisa, et al. “A Homozygous MED11 C-Terminal Variant Causes a Lethal
    Neurodegenerative Disease.” <i>Genetics in Medicine</i>, vol. 24, no. 10, Elsevier,
    2022, pp. 2194–203, doi:<a href="https://doi.org/10.1016/j.gim.2022.07.013">10.1016/j.gim.2022.07.013</a>.
  short: E. Cali, S.-J. Lin, C. Rocca, Y. Sahin, A. Al Shamsi, S. El Chehadeh, M.
    Chaabouni, K. Mankad, E. Galanaki, S. Efthymiou, S. Sudhakar, A. Athanasiou-Fragkouli,
    T. Celik, N. Narli, S. Bianca, D. Murphy, F.M.D.C. Moreira, A. Accogli, C. Petree,
    K. Huang, K. Monastiri, M. Edizadeh, R. Nardello, M. Ognibene, P. De Marco, M.
    Ruggieri, F. Zara, P. Striano, Y. Sahin, L. Al-Gazali, M.T.A. Warde, B. Gerard,
    G. Zifarelli, C. Beetz, S. Fortuna, M. Soler, E.M. Valente, G. Varshney, R. Maroofian,
    V. Salpietro, H. Houlden, Syn.S. Grp, Genetics in Medicine 24 (2022) 2194–2203.
date_created: 2023-09-20T20:57:18Z
date_published: 2022-10-01T00:00:00Z
date_updated: 2023-09-25T08:57:07Z
day: '01'
ddc:
- '570'
department:
- _id: GradSch
doi: 10.1016/j.gim.2022.07.013
extern: '1'
file:
- access_level: open_access
  checksum: 8117175a89129eb5022d81ffe7625f9f
  content_type: application/pdf
  creator: dernst
  date_created: 2023-09-25T08:56:06Z
  date_updated: 2023-09-25T08:56:06Z
  file_id: '14371'
  file_name: 2022_GeneticsMedicine_Calin.pdf
  file_size: 1434037
  relation: main_file
  success: 1
file_date_updated: 2023-09-25T08:56:06Z
has_accepted_license: '1'
intvolume: '24'
issue: '10'
keyword:
- Human mediator complex
- MED11
- MEDopathies
language:
- iso: eng
month: '10'
oa: 1
oa_version: Published Version
page: 2194-2203
publication: Genetics in Medicine
publication_identifier:
  issn:
  - 1098-3600
publication_status: published
publisher: Elsevier
quality_controlled: '1'
scopus_import: '1'
status: public
title: A homozygous MED11 C-terminal variant causes a lethal neurodegenerative disease
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 24
year: '2022'
...
---
_id: '14381'
abstract:
- lang: eng
  text: Expander graphs (sparse but highly connected graphs) have, since their inception,
    been the source of deep links between Mathematics and Computer Science as well
    as applications to other areas. In recent years, a fascinating theory of high-dimensional
    expanders has begun to emerge, which is still in a formative stage but has nonetheless
    already led to a number of striking results. Unlike for graphs, in higher dimensions
    there is a rich array of non-equivalent notions of expansion (coboundary expansion,
    cosystolic expansion, topological expansion, spectral expansion, etc.), with different
    strengths and applications. In this talk, we will survey this landscape of high-dimensional
    expansion, with a focus on two main results. First, we will present Gromov’s Topological
    Overlap Theorem, which asserts that coboundary expansion (a quantitative version
    of vanishing mod 2 cohomology) implies topological expansion (roughly, the property
    that for every map from a simplicial complex to a manifold of the same dimension,
    the images of a positive fraction of the simplices have a point in common). Second,
    we will outline a construction of bounded degree 2-dimensional topological expanders,
    due to Kaufman, Kazhdan, and Lubotzky.
article_processing_charge: No
article_type: original
author:
- first_name: Uli
  full_name: Wagner, Uli
  id: 36690CA2-F248-11E8-B48F-1D18A9856A87
  last_name: Wagner
  orcid: 0000-0002-1494-0568
citation:
  ama: Wagner U. High-dimensional expanders (after Gromov, Kaufman, Kazhdan, Lubotzky,
    and others). <i>Bulletin de la Societe Mathematique de France</i>. 2022;438:281-294.
    doi:<a href="https://doi.org/10.24033/ast.1188">10.24033/ast.1188</a>
  apa: Wagner, U. (2022). High-dimensional expanders (after Gromov, Kaufman, Kazhdan,
    Lubotzky, and others). <i>Bulletin de La Societe Mathematique de France</i>. Societe
    Mathematique de France. <a href="https://doi.org/10.24033/ast.1188">https://doi.org/10.24033/ast.1188</a>
  chicago: Wagner, Uli. “High-Dimensional Expanders (after Gromov, Kaufman, Kazhdan,
    Lubotzky, and Others).” <i>Bulletin de La Societe Mathematique de France</i>.
    Societe Mathematique de France, 2022. <a href="https://doi.org/10.24033/ast.1188">https://doi.org/10.24033/ast.1188</a>.
  ieee: U. Wagner, “High-dimensional expanders (after Gromov, Kaufman, Kazhdan, Lubotzky,
    and others),” <i>Bulletin de la Societe Mathematique de France</i>, vol. 438.
    Societe Mathematique de France, pp. 281–294, 2022.
  ista: Wagner U. 2022. High-dimensional expanders (after Gromov, Kaufman, Kazhdan,
    Lubotzky, and others). Bulletin de la Societe Mathematique de France. 438, 281–294.
  mla: Wagner, Uli. “High-Dimensional Expanders (after Gromov, Kaufman, Kazhdan, Lubotzky,
    and Others).” <i>Bulletin de La Societe Mathematique de France</i>, vol. 438,
    Societe Mathematique de France, 2022, pp. 281–94, doi:<a href="https://doi.org/10.24033/ast.1188">10.24033/ast.1188</a>.
  short: U. Wagner, Bulletin de La Societe Mathematique de France 438 (2022) 281–294.
corr_author: '1'
date_created: 2023-10-01T22:01:14Z
date_published: 2022-01-01T00:00:00Z
date_updated: 2025-09-10T09:55:10Z
day: '01'
department:
- _id: UlWa
doi: 10.24033/ast.1188
external_id:
  isi:
  - '000958364400007'
intvolume: '438'
isi: 1
language:
- iso: eng
month: '01'
oa_version: None
page: 281-294
publication: Bulletin de la Societe Mathematique de France
publication_identifier:
  eissn:
  - 2102-622X
  issn:
  - 0037-9484
publication_status: published
publisher: Societe Mathematique de France
quality_controlled: '1'
scopus_import: '1'
status: public
title: High-dimensional expanders (after Gromov, Kaufman, Kazhdan, Lubotzky, and others)
type: journal_article
user_id: 317138e5-6ab7-11ef-aa6d-ffef3953e345
volume: 438
year: '2022'
...
---
_id: '14437'
abstract:
- lang: eng
  text: Future LEDs could be based on lead halide perovskites. A breakthrough in preparing
    device-compatible solids composed of nanoscale perovskite crystals overcomes a
    long-standing hurdle in making blue perovskite LEDs.
article_processing_charge: No
article_type: letter_note
author:
- first_name: Hendrik
  full_name: Utzat, Hendrik
  last_name: Utzat
- first_name: Maria
  full_name: Ibáñez, Maria
  id: 43C61214-F248-11E8-B48F-1D18A9856A87
  last_name: Ibáñez
  orcid: 0000-0001-5013-2843
citation:
  ama: Utzat H, Ibáñez M. Molecular engineering enables bright blue LEDs. <i>Nature</i>.
    2022;612(7941):638-639. doi:<a href="https://doi.org/10.1038/d41586-022-04447-0">10.1038/d41586-022-04447-0</a>
  apa: Utzat, H., &#38; Ibáñez, M. (2022). Molecular engineering enables bright blue
    LEDs. <i>Nature</i>. Springer Nature. <a href="https://doi.org/10.1038/d41586-022-04447-0">https://doi.org/10.1038/d41586-022-04447-0</a>
  chicago: Utzat, Hendrik, and Maria Ibáñez. “Molecular Engineering Enables Bright
    Blue LEDs.” <i>Nature</i>. Springer Nature, 2022. <a href="https://doi.org/10.1038/d41586-022-04447-0">https://doi.org/10.1038/d41586-022-04447-0</a>.
  ieee: H. Utzat and M. Ibáñez, “Molecular engineering enables bright blue LEDs,”
    <i>Nature</i>, vol. 612, no. 7941. Springer Nature, pp. 638–639, 2022.
  ista: Utzat H, Ibáñez M. 2022. Molecular engineering enables bright blue LEDs. Nature.
    612(7941), 638–639.
  mla: Utzat, Hendrik, and Maria Ibáñez. “Molecular Engineering Enables Bright Blue
    LEDs.” <i>Nature</i>, vol. 612, no. 7941, Springer Nature, 2022, pp. 638–39, doi:<a
    href="https://doi.org/10.1038/d41586-022-04447-0">10.1038/d41586-022-04447-0</a>.
  short: H. Utzat, M. Ibáñez, Nature 612 (2022) 638–639.
corr_author: '1'
date_created: 2023-10-17T11:14:43Z
date_published: 2022-12-21T00:00:00Z
date_updated: 2025-09-10T09:55:51Z
day: '21'
department:
- _id: MaIb
doi: 10.1038/d41586-022-04447-0
external_id:
  isi:
  - '000934065100010'
  pmid:
  - '36543947'
intvolume: '612'
isi: 1
issue: '7941'
keyword:
- Multidisciplinary
language:
- iso: eng
month: '12'
oa_version: None
page: 638-639
pmid: 1
publication: Nature
publication_identifier:
  eissn:
  - 1476-4687
  issn:
  - 0028-0836
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
scopus_import: '1'
status: public
title: Molecular engineering enables bright blue LEDs
type: journal_article
user_id: 317138e5-6ab7-11ef-aa6d-ffef3953e345
volume: 612
year: '2022'
...
