---
_id: '14921'
abstract:
- lang: eng
  text: Neural collapse (NC) refers to the surprising structure of the last layer
    of deep neural networks in the terminal phase of gradient descent training. Recently,
    an increasing amount of experimental evidence has pointed to the propagation of
    NC to earlier layers of neural networks. However, while the NC in the last layer
    is well studied theoretically, much less is known about its multi-layered counterpart
    - deep neural collapse (DNC). In particular, existing work focuses either on linear
    layers or only on the last two layers at the price of an extra assumption. Our
    paper fills this gap by generalizing the established analytical framework for
    NC - the unconstrained features model - to multiple non-linear layers. Our key
    technical contribution is to show that, in a deep unconstrained features model,
    the unique global optimum for binary classification exhibits all the properties
    typical of DNC. This explains the existing experimental evidence of DNC. We also
    empirically show that (i) by optimizing deep unconstrained features models via
    gradient descent, the resulting solution agrees well with our theory, and (ii)
    trained networks recover the unconstrained features suitable for the occurrence
    of DNC, thus supporting the validity of this modeling principle.
acknowledgement: M. M. is partially supported by the 2019 Lopez-Loreta Prize. The
  authors would like to thank Eugenia Iofinova, Bernd Prach and Simone Bombari for
  valuable feedback on the manuscript.
alternative_title:
- NeurIPS
article_processing_charge: No
arxiv: 1
author:
- first_name: Peter
  full_name: Súkeník, Peter
  id: d64d6a8d-eb8e-11eb-b029-96fd216dec3c
  last_name: Súkeník
- first_name: Marco
  full_name: Mondelli, Marco
  id: 27EB676C-8706-11E9-9510-7717E6697425
  last_name: Mondelli
  orcid: 0000-0002-3242-7020
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Súkeník P, Mondelli M, Lampert C. Deep neural collapse is provably optimal
    for the deep unconstrained features model. In: <i>37th Annual Conference on Neural
    Information Processing Systems</i>. ; 2023.'
  apa: Súkeník, P., Mondelli, M., &#38; Lampert, C. (2023). Deep neural collapse is
    provably optimal for the deep unconstrained features model. In <i>37th Annual
    Conference on Neural Information Processing Systems</i>. New Orleans, LA, United
    States.
  chicago: Súkeník, Peter, Marco Mondelli, and Christoph Lampert. “Deep Neural Collapse
    Is Provably Optimal for the Deep Unconstrained Features Model.” In <i>37th Annual
    Conference on Neural Information Processing Systems</i>, 2023.
  ieee: P. Súkeník, M. Mondelli, and C. Lampert, “Deep neural collapse is provably
    optimal for the deep unconstrained features model,” in <i>37th Annual Conference
    on Neural Information Processing Systems</i>, New Orleans, LA, United States,
    2023.
  ista: 'Súkeník P, Mondelli M, Lampert C. 2023. Deep neural collapse is provably
    optimal for the deep unconstrained features model. 37th Annual Conference on Neural
    Information Processing Systems. NeurIPS: Neural Information Processing Systems,
    NeurIPS.'
  mla: Súkeník, Peter, et al. “Deep Neural Collapse Is Provably Optimal for the Deep
    Unconstrained Features Model.” <i>37th Annual Conference on Neural Information
    Processing Systems</i>, 2023.
  short: P. Súkeník, M. Mondelli, C. Lampert, in:, 37th Annual Conference on Neural
    Information Processing Systems, 2023.
conference:
  end_date: 2023-12-16
  location: New Orleans, LA, United States
  name: 'NeurIPS: Neural Information Processing Systems'
  start_date: 2023-12-10
corr_author: '1'
date_created: 2024-02-02T11:17:41Z
date_published: 2023-12-15T00:00:00Z
date_updated: 2025-04-15T07:50:16Z
day: '15'
department:
- _id: MaMo
- _id: ChLa
external_id:
  arxiv:
  - '2305.13165'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2305.13165
month: '12'
oa: 1
oa_version: Preprint
project:
- _id: 059876FA-7A3F-11EA-A408-12923DDC885E
  name: Prix Lopez-Loretta 2019 - Marco Mondelli
publication: 37th Annual Conference on Neural Information Processing Systems
publication_status: published
quality_controlled: '1'
status: public
title: Deep neural collapse is provably optimal for the deep unconstrained features
  model
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14922'
abstract:
- lang: eng
  text: 'We propose a novel approach to concentration for non-independent random variables.
    The main idea is to ``pretend'''' that the random variables are independent and
    pay a multiplicative price measuring how far they are from actually being independent.
    This price is encapsulated in the Hellinger integral between the joint and the
    product of the marginals, which is then upper bounded leveraging tensorisation
    properties. Our bounds represent a natural generalisation of concentration inequalities
    in the presence of dependence: we recover exactly the classical bounds (McDiarmid''s
    inequality) when the random variables are independent. Furthermore, in a ``large
    deviations'''' regime, we obtain the same decay in the probability as for the
    independent case, even when the random variables display non-trivial dependencies.
    To show this, we consider a number of applications of interest. First, we provide
    a bound for Markov chains with finite state space. Then, we consider the Simple
    Symmetric Random Walk, which is a non-contracting Markov chain, and a non-Markovian
    setting in which the stochastic process depends on its entire past. To conclude,
    we propose an application to Markov Chain Monte Carlo methods, where our approach
    leads to an improved lower bound on the minimum burn-in period required to reach
    a certain accuracy. In all of these settings, we provide a regime of parameters
    in which our bound fares better than what the state of the art can provide.'
acknowledgement: The authors are partially supported by the 2019 Lopez-Loreta Prize.
  They would also like to thank Professor Jan Maas for providing valuable suggestions
  and comments on an early version of the work.
article_processing_charge: No
arxiv: 1
author:
- first_name: Amedeo Roberto
  full_name: Esposito, Amedeo Roberto
  id: 9583e921-e1ad-11ec-9862-cef099626dc9
  last_name: Esposito
- first_name: Marco
  full_name: Mondelli, Marco
  id: 27EB676C-8706-11E9-9510-7717E6697425
  last_name: Mondelli
  orcid: 0000-0002-3242-7020
citation:
  ama: 'Esposito AR, Mondelli M. Concentration without independence via information
    measures. In: <i>Proceedings of 2023 IEEE International Symposium on Information
    Theory</i>. IEEE; 2023:400-405. doi:<a href="https://doi.org/10.1109/isit54713.2023.10206899">10.1109/isit54713.2023.10206899</a>'
  apa: 'Esposito, A. R., &#38; Mondelli, M. (2023). Concentration without independence
    via information measures. In <i>Proceedings of 2023 IEEE International Symposium
    on Information Theory</i> (pp. 400–405). Taipei, Taiwan: IEEE. <a href="https://doi.org/10.1109/isit54713.2023.10206899">https://doi.org/10.1109/isit54713.2023.10206899</a>'
  chicago: Esposito, Amedeo Roberto, and Marco Mondelli. “Concentration without Independence
    via Information Measures.” In <i>Proceedings of 2023 IEEE International Symposium
    on Information Theory</i>, 400–405. IEEE, 2023. <a href="https://doi.org/10.1109/isit54713.2023.10206899">https://doi.org/10.1109/isit54713.2023.10206899</a>.
  ieee: A. R. Esposito and M. Mondelli, “Concentration without independence via information
    measures,” in <i>Proceedings of 2023 IEEE International Symposium on Information
    Theory</i>, Taipei, Taiwan, 2023, pp. 400–405.
  ista: 'Esposito AR, Mondelli M. 2023. Concentration without independence via information
    measures. Proceedings of 2023 IEEE International Symposium on Information Theory.
    ISIT: International Symposium on Information Theory, 400–405.'
  mla: Esposito, Amedeo Roberto, and Marco Mondelli. “Concentration without Independence
    via Information Measures.” <i>Proceedings of 2023 IEEE International Symposium
    on Information Theory</i>, IEEE, 2023, pp. 400–05, doi:<a href="https://doi.org/10.1109/isit54713.2023.10206899">10.1109/isit54713.2023.10206899</a>.
  short: A.R. Esposito, M. Mondelli, in:, Proceedings of 2023 IEEE International Symposium
    on Information Theory, IEEE, 2023, pp. 400–405.
conference:
  end_date: 2023-06-30
  location: Taipei, Taiwan
  name: 'ISIT: International Symposium on Information Theory'
  start_date: 2023-06-25
corr_author: '1'
date_created: 2024-02-02T11:18:40Z
date_published: 2023-06-30T00:00:00Z
date_updated: 2025-09-04T13:06:52Z
day: '30'
department:
- _id: MaMo
doi: 10.1109/isit54713.2023.10206899
external_id:
  arxiv:
  - '2303.07245'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2303.07245
month: '06'
oa: 1
oa_version: Preprint
page: 400-405
project:
- _id: 059876FA-7A3F-11EA-A408-12923DDC885E
  name: Prix Lopez-Loretta 2019 - Marco Mondelli
publication: Proceedings of 2023 IEEE International Symposium on Information Theory
publication_identifier:
  eisbn:
  - '9781665475549'
  eissn:
  - 2157-8117
publication_status: published
publisher: IEEE
quality_controlled: '1'
related_material:
  record:
  - id: '15172'
    relation: later_version
    status: public
scopus_import: '1'
status: public
title: Concentration without independence via information measures
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14923'
abstract:
- lang: eng
  text: We study the performance of a Bayesian statistician who estimates a rank-one
    signal corrupted by non-symmetric rotationally invariant noise with a generic
    distribution of singular values. As the signal-to-noise ratio and the noise structure
    are unknown, a Gaussian setup is incorrectly assumed. We derive the exact analytic
    expression for the error of the mismatched Bayes estimator and also provide the
    analysis of an approximate message passing (AMP) algorithm. The first result exploits
    the asymptotic behavior of spherical integrals for rectangular matrices and of
    low-rank matrix perturbations; the second one relies on the design and analysis
    of an auxiliary AMP. The numerical experiments show that there is a performance
    gap between the AMP and Bayes estimators, which is due to the incorrect estimation
    of the signal norm.
article_processing_charge: No
arxiv: 1
author:
- first_name: Teng
  full_name: Fu, Teng
  last_name: Fu
- first_name: YuHao
  full_name: Liu, YuHao
  last_name: Liu
- first_name: Jean
  full_name: Barbier, Jean
  last_name: Barbier
- first_name: Marco
  full_name: Mondelli, Marco
  id: 27EB676C-8706-11E9-9510-7717E6697425
  last_name: Mondelli
  orcid: 0000-0002-3242-7020
- first_name: ShanSuo
  full_name: Liang, ShanSuo
  last_name: Liang
- first_name: TianQi
  full_name: Hou, TianQi
  last_name: Hou
citation:
  ama: 'Fu T, Liu Y, Barbier J, Mondelli M, Liang S, Hou T. Mismatched estimation
    of non-symmetric rank-one matrices corrupted by structured noise. In: <i>Proceedings
    of 2023 IEEE International Symposium on Information Theory</i>. IEEE; 2023:1178-1183.
    doi:<a href="https://doi.org/10.1109/isit54713.2023.10206671">10.1109/isit54713.2023.10206671</a>'
  apa: 'Fu, T., Liu, Y., Barbier, J., Mondelli, M., Liang, S., &#38; Hou, T. (2023).
    Mismatched estimation of non-symmetric rank-one matrices corrupted by structured
    noise. In <i>Proceedings of 2023 IEEE International Symposium on Information Theory</i>
    (pp. 1178–1183). Taipei, Taiwan: IEEE. <a href="https://doi.org/10.1109/isit54713.2023.10206671">https://doi.org/10.1109/isit54713.2023.10206671</a>'
  chicago: Fu, Teng, YuHao Liu, Jean Barbier, Marco Mondelli, ShanSuo Liang, and TianQi
    Hou. “Mismatched Estimation of Non-Symmetric Rank-One Matrices Corrupted by Structured
    Noise.” In <i>Proceedings of 2023 IEEE International Symposium on Information
    Theory</i>, 1178–83. IEEE, 2023. <a href="https://doi.org/10.1109/isit54713.2023.10206671">https://doi.org/10.1109/isit54713.2023.10206671</a>.
  ieee: T. Fu, Y. Liu, J. Barbier, M. Mondelli, S. Liang, and T. Hou, “Mismatched
    estimation of non-symmetric rank-one matrices corrupted by structured noise,”
    in <i>Proceedings of 2023 IEEE International Symposium on Information Theory</i>,
    Taipei, Taiwan, 2023, pp. 1178–1183.
  ista: 'Fu T, Liu Y, Barbier J, Mondelli M, Liang S, Hou T. 2023. Mismatched estimation
    of non-symmetric rank-one matrices corrupted by structured noise. Proceedings
    of 2023 IEEE International Symposium on Information Theory. ISIT: International
    Symposium on Information Theory, 1178–1183.'
  mla: Fu, Teng, et al. “Mismatched Estimation of Non-Symmetric Rank-One Matrices
    Corrupted by Structured Noise.” <i>Proceedings of 2023 IEEE International Symposium
    on Information Theory</i>, IEEE, 2023, pp. 1178–83, doi:<a href="https://doi.org/10.1109/isit54713.2023.10206671">10.1109/isit54713.2023.10206671</a>.
  short: T. Fu, Y. Liu, J. Barbier, M. Mondelli, S. Liang, T. Hou, in:, Proceedings
    of 2023 IEEE International Symposium on Information Theory, IEEE, 2023, pp. 1178–1183.
conference:
  end_date: 2023-06-30
  location: Taipei, Taiwan
  name: 'ISIT: International Symposium on Information Theory'
  start_date: 2023-06-25
corr_author: '1'
date_created: 2024-02-02T11:20:39Z
date_published: 2023-06-30T00:00:00Z
date_updated: 2025-07-10T11:51:04Z
day: '30'
department:
- _id: MaMo
doi: 10.1109/isit54713.2023.10206671
external_id:
  arxiv:
  - '2302.03306'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2302.03306
month: '06'
oa: 1
oa_version: Preprint
page: 1178-1183
publication: Proceedings of 2023 IEEE International Symposium on Information Theory
publication_identifier:
  eissn:
  - 2157-8117
  isbn:
  - '9781665475549'
publication_status: published
publisher: IEEE
quality_controlled: '1'
scopus_import: '1'
status: public
title: Mismatched estimation of non-symmetric rank-one matrices corrupted by structured
  noise
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14924'
abstract:
- lang: eng
  text: "The stochastic heavy ball method (SHB), also known as stochastic gradient
    descent (SGD) with Polyak's momentum, is widely used in training neural networks.
    However, despite the remarkable success of this algorithm in practice, its theoretical
    characterization remains limited. In this paper, we focus on neural networks with
    two and three layers and provide a rigorous understanding of the properties of
    the solutions found by SHB: \\emph{(i)} stability after dropping out part of the
    neurons, \\emph{(ii)} connectivity along a low-loss path, and \\emph{(iii)} convergence
    to the global optimum.\r\nTo achieve this goal, we take a mean-field view and
    relate the SHB dynamics to a certain partial differential equation in the limit
    of large network widths. This mean-field perspective has inspired a recent line
    of work focusing on SGD while, in contrast, our paper considers an algorithm with
    momentum. More specifically, after proving existence and uniqueness of the limit
    differential equations, we show convergence to the global optimum and give a quantitative
    bound between the mean-field limit and the SHB dynamics of a finite-width network.
    Armed with this last bound, we are able to establish the dropout-stability and
    connectivity of SHB solutions."
acknowledgement: D. Wu and M. Mondelli are partially supported by the 2019 Lopez-Loreta
  Prize. V. Kungurtsev was supported by the OP VVV project CZ.02.1.01/0.0/0.0/16_019/0000765
  "Research Center for Informatics".
alternative_title:
- TMLR
article_processing_charge: No
arxiv: 1
author:
- first_name: Diyuan
  full_name: Wu, Diyuan
  id: 1a5914c2-896a-11ed-bdf8-fb80621a0635
  last_name: Wu
- first_name: Vyacheslav
  full_name: Kungurtsev, Vyacheslav
  last_name: Kungurtsev
- first_name: Marco
  full_name: Mondelli, Marco
  id: 27EB676C-8706-11E9-9510-7717E6697425
  last_name: Mondelli
  orcid: 0000-0002-3242-7020
citation:
  ama: 'Wu D, Kungurtsev V, Mondelli M. Mean-field analysis for heavy ball methods:
    Dropout-stability, connectivity, and global convergence. In: <i>Transactions on
    Machine Learning Research</i>. ML Research Press; 2023.'
  apa: 'Wu, D., Kungurtsev, V., &#38; Mondelli, M. (2023). Mean-field analysis for
    heavy ball methods: Dropout-stability, connectivity, and global convergence. In
    <i>Transactions on Machine Learning Research</i>. ML Research Press.'
  chicago: 'Wu, Diyuan, Vyacheslav Kungurtsev, and Marco Mondelli. “Mean-Field Analysis
    for Heavy Ball Methods: Dropout-Stability, Connectivity, and Global Convergence.”
    In <i>Transactions on Machine Learning Research</i>. ML Research Press, 2023.'
  ieee: 'D. Wu, V. Kungurtsev, and M. Mondelli, “Mean-field analysis for heavy ball
    methods: Dropout-stability, connectivity, and global convergence,” in <i>Transactions
    on Machine Learning Research</i>, 2023.'
  ista: 'Wu D, Kungurtsev V, Mondelli M. 2023. Mean-field analysis for heavy ball
    methods: Dropout-stability, connectivity, and global convergence. Transactions
    on Machine Learning Research, TMLR.'
  mla: 'Wu, Diyuan, et al. “Mean-Field Analysis for Heavy Ball Methods: Dropout-Stability,
    Connectivity, and Global Convergence.” <i>Transactions on Machine Learning Research</i>,
    ML Research Press, 2023.'
  short: D. Wu, V. Kungurtsev, M. Mondelli, in:, Transactions on Machine Learning
    Research, ML Research Press, 2023.
corr_author: '1'
date_created: 2024-02-02T11:21:56Z
date_published: 2023-02-28T00:00:00Z
date_updated: 2025-04-15T07:50:17Z
day: '28'
department:
- _id: MaMo
external_id:
  arxiv:
  - '2210.06819'
has_accepted_license: '1'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2210.06819
month: '02'
oa: 1
oa_version: Published Version
project:
- _id: 059876FA-7A3F-11EA-A408-12923DDC885E
  name: Prix Lopez-Loretta 2019 - Marco Mondelli
publication: Transactions on Machine Learning Research
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
status: public
title: 'Mean-field analysis for heavy ball methods: Dropout-stability, connectivity,
  and global convergence'
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14948'
abstract:
- lang: eng
  text: "The extraction of modular object-centric representations for downstream tasks\r\nis
    an emerging area of research. Learning grounded representations of objects\r\nthat
    are guaranteed to be stable and invariant promises robust performance\r\nacross
    different tasks and environments. Slot Attention (SA) learns\r\nobject-centric
    representations by assigning objects to \\textit{slots}, but\r\npresupposes a
    \\textit{single} distribution from which all slots are randomly\r\ninitialised.
    This results in an inability to learn \\textit{specialized} slots\r\nwhich bind
    to specific object types and remain invariant to identity-preserving\r\nchanges
    in object appearance. To address this, we present\r\n\\emph{\\textsc{Co}nditional
    \\textsc{S}lot \\textsc{A}ttention} (\\textsc{CoSA})\r\nusing a novel concept
    of \\emph{Grounded Slot Dictionary} (GSD) inspired by\r\nvector quantization.
    Our proposed GSD comprises (i) canonical object-level\r\nproperty vectors and
    (ii) parametric Gaussian distributions, which define a\r\nprior over the slots.
    We demonstrate the benefits of our method in multiple\r\ndownstream tasks such
    as scene generation, composition, and task adaptation,\r\nwhilst remaining competitive
    with SA in popular object discovery benchmarks."
acknowledgement: "This work was supported by UKRI (grant agreement no. EP/S023356/1),
  in the UKRI\r\nCentre for Doctoral Training in Safe and Trusted AI via A. Kori."
article_number: '2307.09437'
article_processing_charge: No
arxiv: 1
author:
- first_name: Avinash
  full_name: Kori, Avinash
  last_name: Kori
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Fabio De Sousa
  full_name: Ribeiro, Fabio De Sousa
  last_name: Ribeiro
- first_name: Francesca
  full_name: Toni, Francesca
  last_name: Toni
- first_name: Ben
  full_name: Glocker, Ben
  last_name: Glocker
citation:
  ama: Kori A, Locatello F, Ribeiro FDS, Toni F, Glocker B. Grounded object centric
    learning. <i>arXiv</i>. doi:<a href="https://doi.org/10.48550/arXiv.2307.09437">10.48550/arXiv.2307.09437</a>
  apa: Kori, A., Locatello, F., Ribeiro, F. D. S., Toni, F., &#38; Glocker, B. (n.d.).
    Grounded object centric learning. <i>arXiv</i>. <a href="https://doi.org/10.48550/arXiv.2307.09437">https://doi.org/10.48550/arXiv.2307.09437</a>
  chicago: Kori, Avinash, Francesco Locatello, Fabio De Sousa Ribeiro, Francesca Toni,
    and Ben Glocker. “Grounded Object Centric Learning.” <i>ArXiv</i>, n.d. <a href="https://doi.org/10.48550/arXiv.2307.09437">https://doi.org/10.48550/arXiv.2307.09437</a>.
  ieee: A. Kori, F. Locatello, F. D. S. Ribeiro, F. Toni, and B. Glocker, “Grounded
    object centric learning,” <i>arXiv</i>.
  ista: Kori A, Locatello F, Ribeiro FDS, Toni F, Glocker B. Grounded object centric
    learning. arXiv, 2307.09437.
  mla: Kori, Avinash, et al. “Grounded Object Centric Learning.” <i>ArXiv</i>, 2307.09437,
    doi:<a href="https://doi.org/10.48550/arXiv.2307.09437">10.48550/arXiv.2307.09437</a>.
  short: A. Kori, F. Locatello, F.D.S. Ribeiro, F. Toni, B. Glocker, ArXiv (n.d.).
date_created: 2024-02-07T14:47:04Z
date_published: 2023-07-18T00:00:00Z
date_updated: 2024-02-12T08:13:12Z
day: '18'
department:
- _id: FrLo
doi: 10.48550/arXiv.2307.09437
external_id:
  arxiv:
  - '2307.09437'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2307.09437
month: '07'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: Grounded object centric learning
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14949'
abstract:
- lang: eng
  text: Many approaches have been proposed to use diffusion models to augment training
    datasets for downstream tasks, such as classification. However, diffusion models
    are themselves trained on large datasets, often with noisy annotations, and it
    remains an open question to which extent these models contribute to downstream
    classification performance. In particular, it remains unclear if they generalize
    enough to improve over directly using the additional data of their pre-training
    process for augmentation. We systematically evaluate a range of existing methods
    to generate images from diffusion models and study new extensions to assess their
    benefit for data augmentation. Personalizing diffusion models towards the target
    data outperforms simpler prompting strategies. However, using the pre-training
    data of the diffusion model alone, via a simple nearest-neighbor retrieval procedure,
    leads to even stronger downstream performance. Our study explores the potential
    of diffusion models in generating new training data, and surprisingly finds that
    these sophisticated models are not yet able to beat a simple and strong image
    retrieval baseline on simple downstream vision tasks.
acknowledgement: The authors would like to thank Varad Gunjal and Vishaal Udandarao.
  MFB thanks the International Max Planck Research School for Intelligent Systems
  (IMPRS-IS).
alternative_title:
- TMLR
article_processing_charge: No
article_type: original
author:
- first_name: Max
  full_name: Burg, Max
  last_name: Burg
- first_name: Florian
  full_name: Wenzel, Florian
  last_name: Wenzel
- first_name: Dominik
  full_name: Zietlow, Dominik
  last_name: Zietlow
- first_name: Max
  full_name: Horn, Max
  last_name: Horn
- first_name: Osama
  full_name: Makansi, Osama
  last_name: Makansi
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Chris
  full_name: Russell, Chris
  last_name: Russell
citation:
  ama: Burg M, Wenzel F, Zietlow D, et al. Image retrieval outperforms diffusion models
    on data augmentation. <i>Journal of Machine Learning Research</i>. 2023.
  apa: Burg, M., Wenzel, F., Zietlow, D., Horn, M., Makansi, O., Locatello, F., &#38;
    Russell, C. (2023). Image retrieval outperforms diffusion models on data augmentation.
    <i>Journal of Machine Learning Research</i>. ML Research Press.
  chicago: Burg, Max, Florian Wenzel, Dominik Zietlow, Max Horn, Osama Makansi, Francesco
    Locatello, and Chris Russell. “Image Retrieval Outperforms Diffusion Models on
    Data Augmentation.” <i>Journal of Machine Learning Research</i>. ML Research Press,
    2023.
  ieee: M. Burg <i>et al.</i>, “Image retrieval outperforms diffusion models on data
    augmentation,” <i>Journal of Machine Learning Research</i>. ML Research Press,
    2023.
  ista: Burg M, Wenzel F, Zietlow D, Horn M, Makansi O, Locatello F, Russell C. 2023.
    Image retrieval outperforms diffusion models on data augmentation. Journal of
    Machine Learning Research.
  mla: Burg, Max, et al. “Image Retrieval Outperforms Diffusion Models on Data Augmentation.”
    <i>Journal of Machine Learning Research</i>, ML Research Press, 2023.
  short: M. Burg, F. Wenzel, D. Zietlow, M. Horn, O. Makansi, F. Locatello, C. Russell,
    Journal of Machine Learning Research (2023).
date_created: 2024-02-07T14:57:39Z
date_published: 2023-12-10T00:00:00Z
date_updated: 2024-02-12T08:30:21Z
day: '10'
ddc:
- '000'
department:
- _id: FrLo
file:
- access_level: open_access
  checksum: af87ddea7908923426365347b9c87ba7
  content_type: application/pdf
  creator: ptazenko
  date_created: 2024-02-07T14:57:32Z
  date_updated: 2024-02-07T14:57:32Z
  file_id: '14950'
  file_name: Burg_et_al_2023_Image_retrieval_outperforms.pdf
  file_size: 27325153
  relation: main_file
file_date_updated: 2024-02-07T14:57:32Z
has_accepted_license: '1'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://openreview.net/forum?id=xflYdGZMpv
month: '12'
oa: 1
oa_version: Published Version
publication: Journal of Machine Learning Research
publication_identifier:
  eissn:
  - 2835-8856
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
status: public
title: Image retrieval outperforms diffusion models on data augmentation
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14952'
abstract:
- lang: eng
  text: "While different neural models often exhibit latent spaces that are alike
    when exposed to semantically related data, this intrinsic similarity is not always
    immediately discernible. Towards a better understanding of this phenomenon, our
    work shows how representations learned from these neural modules can be translated
    between different pre-trained networks via simpler transformations than previously
    thought. An advantage of this approach is the ability to\r\nestimate these transformations
    using standard, well-understood algebraic procedures that have closed-form solutions.
    Our method directly estimates a transformation between two given latent spaces,
    thereby enabling effective stitching of encoders and decoders without additional
    training. We extensively validate the adaptability of this translation procedure
    in different\r\nexperimental settings: across various trainings, domains, architectures
    (e.g., ResNet, CNN, ViT), and in multiple downstream tasks (classification, reconstruction).
    Notably, we show how it is possible to zero-shot stitch text encoders and vision
    decoders, or vice-versa, yielding surprisingly good classification performance
    in this multimodal setting."
acknowledgement: "This work is supported by the ERC grant no.802554 (SPECGEO), PRIN
  2020 project no.2020TA3K9N (LEGO.AI), and PNRR MUR project PE0000013-FAIR. Francesco\r\nLocatello
  did not contribute to this work at Amazon."
article_number: '2311.00664'
article_processing_charge: No
arxiv: 1
author:
- first_name: Valentino
  full_name: Maiorca, Valentino
  last_name: Maiorca
- first_name: Luca
  full_name: Moschella, Luca
  last_name: Moschella
- first_name: Antonio
  full_name: Norelli, Antonio
  last_name: Norelli
- first_name: Marco
  full_name: Fumero, Marco
  last_name: Fumero
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Emanuele
  full_name: Rodolà, Emanuele
  last_name: Rodolà
citation:
  ama: Maiorca V, Moschella L, Norelli A, Fumero M, Locatello F, Rodolà E. Latent
    space translation via semantic alignment. <i>arXiv</i>. doi:<a href="https://doi.org/10.48550/arXiv.2311.00664">10.48550/arXiv.2311.00664</a>
  apa: Maiorca, V., Moschella, L., Norelli, A., Fumero, M., Locatello, F., &#38; Rodolà,
    E. (n.d.). Latent space translation via semantic alignment. <i>arXiv</i>. <a href="https://doi.org/10.48550/arXiv.2311.00664">https://doi.org/10.48550/arXiv.2311.00664</a>
  chicago: Maiorca, Valentino, Luca Moschella, Antonio Norelli, Marco Fumero, Francesco
    Locatello, and Emanuele Rodolà. “Latent Space Translation via Semantic Alignment.”
    <i>ArXiv</i>, n.d. <a href="https://doi.org/10.48550/arXiv.2311.00664">https://doi.org/10.48550/arXiv.2311.00664</a>.
  ieee: V. Maiorca, L. Moschella, A. Norelli, M. Fumero, F. Locatello, and E. Rodolà,
    “Latent space translation via semantic alignment,” <i>arXiv</i>.
  ista: Maiorca V, Moschella L, Norelli A, Fumero M, Locatello F, Rodolà E. Latent
    space translation via semantic alignment. arXiv, 2311.00664.
  mla: Maiorca, Valentino, et al. “Latent Space Translation via Semantic Alignment.”
    <i>ArXiv</i>, 2311.00664, doi:<a href="https://doi.org/10.48550/arXiv.2311.00664">10.48550/arXiv.2311.00664</a>.
  short: V. Maiorca, L. Moschella, A. Norelli, M. Fumero, F. Locatello, E. Rodolà,
    ArXiv (n.d.).
date_created: 2024-02-07T15:08:55Z
date_published: 2023-11-01T00:00:00Z
date_updated: 2024-02-12T09:40:23Z
day: '01'
department:
- _id: FrLo
doi: 10.48550/arXiv.2311.00664
external_id:
  arxiv:
  - '2311.00664'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2311.00664
month: '11'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: Latent space translation via semantic alignment
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14953'
abstract:
- lang: eng
  text: This paper provides statistical sample complexity bounds for score-matching
    and its applications in causal discovery. We demonstrate that accurate estimation
    of the score function is achievable by training a standard deep ReLU neural network
    using stochastic gradient descent. We establish bounds on the error rate of recovering
    causal relationships using the score-matching-based causal discovery method of
    Rolland et al. [2022], assuming a sufficiently good estimation of the score function.
    Finally, we analyze the upper bound of score-matching estimation within the score-based
    generative modeling, which has been applied for causal discovery but is also of
    independent interest within the domain of generative models.
acknowledgement: 'We are thankful to the reviewers for providing constructive feedback
  and Kun Zhang and Dominik Janzing for helpful discussion on the special case of
  deterministic children. This work was supported by Hasler Foundation Program: Hasler
  Responsible AI (project number 21043). This work was supported by the Swiss National
  Science Foundation (SNSF) under grant number 200021_205011. Francesco Locatello
  did not contribute to this work at Amazon.'
article_number: '2310.18123'
article_processing_charge: No
arxiv: 1
author:
- first_name: Zhenyu
  full_name: Zhu, Zhenyu
  last_name: Zhu
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Volkan
  full_name: Cevher, Volkan
  last_name: Cevher
citation:
  ama: 'Zhu Z, Locatello F, Cevher V. Sample complexity bounds for score-matching:
    Causal discovery and generative modeling. <i>arXiv</i>. doi:<a href="https://doi.org/10.48550/arXiv.2310.18123">10.48550/arXiv.2310.18123</a>'
  apa: 'Zhu, Z., Locatello, F., &#38; Cevher, V. (n.d.). Sample complexity bounds
    for score-matching: Causal discovery and generative modeling. <i>arXiv</i>. <a
    href="https://doi.org/10.48550/arXiv.2310.18123">https://doi.org/10.48550/arXiv.2310.18123</a>'
  chicago: 'Zhu, Zhenyu, Francesco Locatello, and Volkan Cevher. “Sample Complexity
    Bounds for Score-Matching: Causal Discovery and Generative Modeling.” <i>ArXiv</i>,
    n.d. <a href="https://doi.org/10.48550/arXiv.2310.18123">https://doi.org/10.48550/arXiv.2310.18123</a>.'
  ieee: 'Z. Zhu, F. Locatello, and V. Cevher, “Sample complexity bounds for score-matching:
    Causal discovery and generative modeling,” <i>arXiv</i>.'
  ista: 'Zhu Z, Locatello F, Cevher V. Sample complexity bounds for score-matching:
    Causal discovery and generative modeling. arXiv, 2310.18123.'
  mla: 'Zhu, Zhenyu, et al. “Sample Complexity Bounds for Score-Matching: Causal Discovery
    and Generative Modeling.” <i>ArXiv</i>, 2310.18123, doi:<a href="https://doi.org/10.48550/arXiv.2310.18123">10.48550/arXiv.2310.18123</a>.'
  short: Z. Zhu, F. Locatello, V. Cevher, ArXiv (n.d.).
date_created: 2024-02-07T15:11:11Z
date_published: 2023-10-27T00:00:00Z
date_updated: 2024-02-12T09:45:58Z
day: '27'
department:
- _id: FrLo
doi: 10.48550/arXiv.2310.18123
external_id:
  arxiv:
  - '2310.18123'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2310.18123
month: '10'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: 'Sample complexity bounds for score-matching: Causal discovery and generative
  modeling'
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14954'
abstract:
- lang: eng
  text: "When domain knowledge is limited and experimentation is restricted by ethical,
    financial, or time constraints, practitioners turn to observational causal discovery
    methods to recover the causal structure, exploiting the statistical properties
    of their data. Because causal discovery without further assumptions is an ill-posed
    problem, each algorithm comes with its own set of usually untestable assumptions,
    some of which are hard to meet in real datasets. Motivated by these considerations,
    this paper extensively benchmarks the empirical performance of recent causal discovery
    methods on observational i.i.d. data generated under different background conditions,
    allowing for violations of the critical assumptions required by each selected
    approach. Our experimental findings show that score matching-based methods demonstrate
    surprising
    performance in the false positive and false negative rate of the inferred graph
    in these challenging scenarios, and we provide theoretical insights into their
    performance. This work is also the first effort to benchmark the stability of
    causal discovery algorithms with respect to the values of their hyperparameters.
    Finally, we hope this paper will set a new standard for the evaluation of causal
    discovery methods and can serve as an accessible entry point for practitioners
    interested in the field, highlighting the empirical implications of different
    algorithm choices."
acknowledgement: "We thank Kun Zhang and Carl-Johann Simon-Gabriel for the insightful
  discussions. This work has been supported by AFOSR, grant n. FA8655-20-1-7035.
  FM is supported by Programma Operativo Nazionale ricerca e innovazione 2014-2020.
  FM partially contributed to this work during an internship at Amazon Web Services
  with FL. FL partially contributed while at AWS."
article_number: '2310.13387'
article_processing_charge: No
arxiv: 1
author:
- first_name: Francesco
  full_name: Montagna, Francesco
  last_name: Montagna
- first_name: Atalanti A.
  full_name: Mastakouri, Atalanti A.
  last_name: Mastakouri
- first_name: Elias
  full_name: Eulig, Elias
  last_name: Eulig
- first_name: Nicoletta
  full_name: Noceti, Nicoletta
  last_name: Noceti
- first_name: Lorenzo
  full_name: Rosasco, Lorenzo
  last_name: Rosasco
- first_name: Dominik
  full_name: Janzing, Dominik
  last_name: Janzing
- first_name: Bryon
  full_name: Aragam, Bryon
  last_name: Aragam
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
citation:
  ama: Montagna F, Mastakouri AA, Eulig E, et al. Assumption violations in causal
    discovery and the robustness of score matching. <i>arXiv</i>. doi:<a href="https://doi.org/10.48550/arXiv.2310.13387">10.48550/arXiv.2310.13387</a>
  apa: Montagna, F., Mastakouri, A. A., Eulig, E., Noceti, N., Rosasco, L., Janzing,
    D., … Locatello, F. (n.d.). Assumption violations in causal discovery and the
    robustness of score matching. <i>arXiv</i>. <a href="https://doi.org/10.48550/arXiv.2310.13387">https://doi.org/10.48550/arXiv.2310.13387</a>
  chicago: Montagna, Francesco, Atalanti A. Mastakouri, Elias Eulig, Nicoletta Noceti,
    Lorenzo Rosasco, Dominik Janzing, Bryon Aragam, and Francesco Locatello. “Assumption
    Violations in Causal Discovery and the Robustness of Score Matching.” <i>ArXiv</i>,
    n.d. <a href="https://doi.org/10.48550/arXiv.2310.13387">https://doi.org/10.48550/arXiv.2310.13387</a>.
  ieee: F. Montagna <i>et al.</i>, “Assumption violations in causal discovery and
    the robustness of score matching,” <i>arXiv</i>.
  ista: Montagna F, Mastakouri AA, Eulig E, Noceti N, Rosasco L, Janzing D, Aragam
    B, Locatello F. Assumption violations in causal discovery and the robustness of
    score matching. arXiv, 2310.13387.
  mla: Montagna, Francesco, et al. “Assumption Violations in Causal Discovery and
    the Robustness of Score Matching.” <i>ArXiv</i>, 2310.13387, doi:<a href="https://doi.org/10.48550/arXiv.2310.13387">10.48550/arXiv.2310.13387</a>.
  short: F. Montagna, A.A. Mastakouri, E. Eulig, N. Noceti, L. Rosasco, D. Janzing,
    B. Aragam, F. Locatello, ArXiv (n.d.).
date_created: 2024-02-07T15:11:56Z
date_published: 2023-10-20T00:00:00Z
date_updated: 2024-02-12T09:51:15Z
day: '20'
department:
- _id: FrLo
doi: 10.48550/arXiv.2310.13387
external_id:
  arxiv:
  - '2310.13387'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2310.13387
month: '10'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: Assumption violations in causal discovery and the robustness of score matching
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
OA_place: repository
OA_type: green
_id: '14958'
abstract:
- lang: eng
  text: Causal representation learning (CRL) aims at identifying high-level causal
    variables from low-level data, e.g. images. Current methods usually assume that
    all causal variables are captured in the high-dimensional observations. In this
    work, we focus on learning causal representations from data under partial observability,
    i.e., when some of the causal variables are not observed in the measurements,
    and the set of masked variables changes across the different samples. We introduce
    some initial theoretical results for identifying causal variables under partial
    observability by exploiting a sparsity regularizer, focusing in particular on
    the linear and piecewise linear mixing function case. We provide a theorem that
    allows us to identify the causal variables up to permutation and element-wise
    linear transformations in the linear case and a lemma that allows us to identify
    causal variables up to linear transformation in the piecewise case. Finally, we
    provide a conjecture that would allow us to identify the causal variables up to
    permutation and element-wise linear transformations also in the piecewise linear
    case. We test the theorem and conjecture on simulated data, showing the effectiveness
    of our method.
acknowledgement: "This work was initiated at the Second Bellairs Workshop on Causality
  held at the Bellairs Research Institute, January 6–13, 2022; we thank all workshop
  participants for providing a stimulating research environment. The research of DX
  and SM was supported by the Air Force Office of Scientific Research under award
  number FA8655-22-1-7155. Any opinions, findings, and conclusions or recommendations
  expressed in this material are those of the author(s) and do not necessarily reflect
  the views of the United States Air Force. We also thank SURF for the support in
  using the Dutch National Supercomputer Snellius. DY was supported by an Amazon fellowship
  and the International Max Planck Research School for Intelligent Systems (IMPRS-IS).
  Work done outside of Amazon. SL was supported by an IVADO excellence PhD scholarship
  and by Samsung Electronics Co., Ldt. JvK acknowledges support from the German Federal
  Ministry of Education and Research (BMBF) through the Tübingen AI Center (FKZ:
  01IS18039B)."
article_number: '54'
article_processing_charge: No
author:
- first_name: Danru
  full_name: Xu, Danru
  last_name: Xu
- first_name: Dingling
  full_name: Yao, Dingling
  id: d3e02e50-48a8-11ee-8f62-c108061797fa
  last_name: Yao
- first_name: Sebastien
  full_name: Lachapelle, Sebastien
  last_name: Lachapelle
- first_name: Perouz
  full_name: Taslakian, Perouz
  last_name: Taslakian
- first_name: Julius
  full_name: von Kügelgen, Julius
  last_name: von Kügelgen
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Sara
  full_name: Magliacane, Sara
  last_name: Magliacane
citation:
  ama: 'Xu D, Yao D, Lachapelle S, et al. A sparsity principle for partially observable
    causal representation learning. In: <i>Causal Representation Learning Workshop
    at NeurIPS 2023</i>. OpenReview; 2023.'
  apa: 'Xu, D., Yao, D., Lachapelle, S., Taslakian, P., von Kügelgen, J., Locatello,
    F., &#38; Magliacane, S. (2023). A sparsity principle for partially observable
    causal representation learning. In <i>Causal Representation Learning Workshop
    at NeurIPS 2023</i>. New Orleans, LA, United States: OpenReview.'
  chicago: Xu, Danru, Dingling Yao, Sebastien Lachapelle, Perouz Taslakian, Julius
    von Kügelgen, Francesco Locatello, and Sara Magliacane. “A Sparsity Principle
    for Partially Observable Causal Representation Learning.” In <i>Causal Representation
    Learning Workshop at NeurIPS 2023</i>. OpenReview, 2023.
  ieee: D. Xu <i>et al.</i>, “A sparsity principle for partially observable causal
    representation learning,” in <i>Causal Representation Learning Workshop at NeurIPS
    2023</i>, New Orleans, LA, United States, 2023.
  ista: 'Xu D, Yao D, Lachapelle S, Taslakian P, von Kügelgen J, Locatello F, Magliacane
    S. 2023. A sparsity principle for partially observable causal representation learning.
    Causal Representation Learning Workshop at NeurIPS 2023. CRL: Causal Representation
    Learning Workshop at NeurIPS, 54.'
  mla: Xu, Danru, et al. “A Sparsity Principle for Partially Observable Causal Representation
    Learning.” <i>Causal Representation Learning Workshop at NeurIPS 2023</i>, 54,
    OpenReview, 2023.
  short: D. Xu, D. Yao, S. Lachapelle, P. Taslakian, J. von Kügelgen, F. Locatello,
    S. Magliacane, in:, Causal Representation Learning Workshop at NeurIPS 2023, OpenReview,
    2023.
conference:
  end_date: 2023-12-15
  location: New Orleans, LA, United States
  name: 'CRL: Causal Representation Learning Workshop at NeurIPS'
  start_date: 2023-12-15
date_created: 2024-02-07T15:17:51Z
date_published: 2023-12-05T00:00:00Z
date_updated: 2025-02-04T12:37:34Z
day: '05'
ddc:
- '000'
department:
- _id: FrLo
file:
- access_level: open_access
  checksum: 484efc27bda75ed6666044989695d9b6
  content_type: application/pdf
  creator: dernst
  date_created: 2024-02-13T08:50:53Z
  date_updated: 2024-02-13T08:50:53Z
  file_id: '14982'
  file_name: 2023_CRL_Xu.pdf
  file_size: 552357
  relation: main_file
  success: 1
file_date_updated: 2024-02-13T08:50:53Z
has_accepted_license: '1'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://openreview.net/forum?id=Whr6uobelR
month: '12'
oa: 1
oa_version: Published Version
publication: Causal Representation Learning Workshop at NeurIPS 2023
publication_status: published
publisher: OpenReview
quality_controlled: '1'
status: public
title: A sparsity principle for partially observable causal representation learning
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14961'
abstract:
- lang: eng
  text: "The use of simulated data in the field of causal discovery is ubiquitous
    due to the scarcity of annotated real data. Recently, Reisach et al., 2021 highlighted
    the emergence of patterns in simulated linear data, which displays increasing
    marginal variance in the causal direction. As an ablation in their experiments,
    Montagna et al., 2023 found that similar patterns may emerge in nonlinear models
    for the variance of the score vector $\\nabla \\log p_{\\mathbf{X}}$, and introduced
    the ScoreSort algorithm. In this work, we formally define and characterize this
    score-sortability pattern of nonlinear additive noise models. We find that it
    defines a class of identifiable (bivariate) causal models overlapping with nonlinear
    additive noise models. We\r\ntheoretically demonstrate the advantages of ScoreSort
    in terms of statistical efficiency compared to prior state-of-the-art score matching-based
    methods and empirically show the score-sortability of the most common synthetic
    benchmarks in the literature. Our findings highlight (1) the lack of diversity in
    the data as an important limitation in the evaluation of nonlinear causal discovery
    approaches, (2) the importance of thoroughly testing different settings within
    a problem class, and (3) the importance of analyzing statistical properties in
    causal discovery, where research is often limited to defining identifiability
    conditions of the model."
article_number: '2310.14246'
article_processing_charge: No
arxiv: 1
author:
- first_name: Francesco
  full_name: Montagna, Francesco
  last_name: Montagna
- first_name: Nicoletta
  full_name: Noceti, Nicoletta
  last_name: Noceti
- first_name: Lorenzo
  full_name: Rosasco, Lorenzo
  last_name: Rosasco
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
citation:
  ama: Montagna F, Noceti N, Rosasco L, Locatello F. Shortcuts for causal discovery
    of nonlinear models by score matching. <i>arXiv</i>. doi:<a href="https://doi.org/10.48550/arXiv.2310.14246">10.48550/arXiv.2310.14246</a>
  apa: Montagna, F., Noceti, N., Rosasco, L., &#38; Locatello, F. (n.d.). Shortcuts
    for causal discovery of nonlinear models by score matching. <i>arXiv</i>. <a href="https://doi.org/10.48550/arXiv.2310.14246">https://doi.org/10.48550/arXiv.2310.14246</a>
  chicago: Montagna, Francesco, Nicoletta Noceti, Lorenzo Rosasco, and Francesco Locatello.
    “Shortcuts for Causal Discovery of Nonlinear Models by Score Matching.” <i>ArXiv</i>,
    n.d. <a href="https://doi.org/10.48550/arXiv.2310.14246">https://doi.org/10.48550/arXiv.2310.14246</a>.
  ieee: F. Montagna, N. Noceti, L. Rosasco, and F. Locatello, “Shortcuts for causal
    discovery of nonlinear models by score matching,” <i>arXiv</i>.
  ista: Montagna F, Noceti N, Rosasco L, Locatello F. Shortcuts for causal discovery
    of nonlinear models by score matching. arXiv, 2310.14246.
  mla: Montagna, Francesco, et al. “Shortcuts for Causal Discovery of Nonlinear Models
    by Score Matching.” <i>ArXiv</i>, 2310.14246, doi:<a href="https://doi.org/10.48550/arXiv.2310.14246">10.48550/arXiv.2310.14246</a>.
  short: F. Montagna, N. Noceti, L. Rosasco, F. Locatello, ArXiv (n.d.).
corr_author: '1'
date_created: 2024-02-08T15:31:46Z
date_published: 2023-10-22T00:00:00Z
date_updated: 2024-10-09T21:08:10Z
day: '22'
department:
- _id: FrLo
doi: 10.48550/arXiv.2310.14246
external_id:
  arxiv:
  - '2310.14246'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2310.14246
month: '10'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: Shortcuts for causal discovery of nonlinear models by score matching
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14962'
abstract:
- lang: eng
  text: "In this paper, we show that recent advances in video representation learning\r\nand
    pre-trained vision-language models allow for substantial improvements in\r\nself-supervised
    video object localization. We propose a method that first\r\nlocalizes objects
    in videos via a slot attention approach and then assigns text\r\nto the obtained
    slots. The latter is achieved by an unsupervised way to read\r\nlocalized semantic
    information from the pre-trained CLIP model. The resulting\r\nvideo object localization
    is entirely unsupervised apart from the implicit\r\nannotation contained in CLIP,
    and it is effectively the first unsupervised\r\napproach that yields good results
    on regular video benchmarks."
article_number: '2309.09858'
article_processing_charge: No
arxiv: 1
author:
- first_name: Ke
  full_name: Fan, Ke
  last_name: Fan
- first_name: Zechen
  full_name: Bai, Zechen
  last_name: Bai
- first_name: Tianjun
  full_name: Xiao, Tianjun
  last_name: Xiao
- first_name: Dominik
  full_name: Zietlow, Dominik
  last_name: Zietlow
- first_name: Max
  full_name: Horn, Max
  last_name: Horn
- first_name: Zixu
  full_name: Zhao, Zixu
  last_name: Zhao
- first_name: Carl-Johann
  full_name: Simon-Gabriel, Carl-Johann
  last_name: Simon-Gabriel
- first_name: Mike Zheng
  full_name: Shou, Mike Zheng
  last_name: Shou
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Bernt
  full_name: Schiele, Bernt
  last_name: Schiele
- first_name: Thomas
  full_name: Brox, Thomas
  last_name: Brox
- first_name: Zheng
  full_name: Zhang, Zheng
  last_name: Zhang
- first_name: Yanwei
  full_name: Fu, Yanwei
  last_name: Fu
- first_name: Tong
  full_name: He, Tong
  last_name: He
citation:
  ama: Fan K, Bai Z, Xiao T, et al. Unsupervised open-vocabulary object localization
    in videos. <i>arXiv</i>. doi:<a href="https://doi.org/10.48550/arXiv.2309.09858">10.48550/arXiv.2309.09858</a>
  apa: Fan, K., Bai, Z., Xiao, T., Zietlow, D., Horn, M., Zhao, Z., … He, T. (n.d.).
    Unsupervised open-vocabulary object localization in videos. <i>arXiv</i>. <a href="https://doi.org/10.48550/arXiv.2309.09858">https://doi.org/10.48550/arXiv.2309.09858</a>
  chicago: Fan, Ke, Zechen Bai, Tianjun Xiao, Dominik Zietlow, Max Horn, Zixu Zhao,
    Carl-Johann Simon-Gabriel, et al. “Unsupervised Open-Vocabulary
    Object Localization in Videos.” <i>ArXiv</i>, n.d. <a href="https://doi.org/10.48550/arXiv.2309.09858">https://doi.org/10.48550/arXiv.2309.09858</a>.
  ieee: K. Fan <i>et al.</i>, “Unsupervised open-vocabulary object localization in
    videos,” <i>arXiv</i>.
  ista: Fan K, Bai Z, Xiao T, Zietlow D, Horn M, Zhao Z, Simon-Gabriel C-J, Shou
    MZ, Locatello F, Schiele B, Brox T, Zhang Z, Fu Y, He T. Unsupervised open-vocabulary
    object localization in videos. arXiv, 2309.09858.
  mla: Fan, Ke, et al. “Unsupervised Open-Vocabulary Object Localization in Videos.”
    <i>ArXiv</i>, 2309.09858, doi:<a href="https://doi.org/10.48550/arXiv.2309.09858">10.48550/arXiv.2309.09858</a>.
  short: K. Fan, Z. Bai, T. Xiao, D. Zietlow, M. Horn, Z. Zhao, C.-J. Simon-Gabriel,
    M.Z. Shou, F. Locatello, B. Schiele, T. Brox, Z. Zhang, Y. Fu, T. He, ArXiv (n.d.).
date_created: 2024-02-08T15:33:39Z
date_published: 2023-09-18T00:00:00Z
date_updated: 2024-02-12T10:12:22Z
day: '18'
department:
- _id: FrLo
doi: 10.48550/arXiv.2309.09858
extern: '1'
external_id:
  arxiv:
  - '2309.09858'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2309.09858
month: '09'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: Unsupervised open-vocabulary object localization in videos
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14963'
abstract:
- lang: eng
  text: "Unsupervised object-centric learning methods allow the partitioning of scenes\r\ninto
    entities without additional localization information and are excellent\r\ncandidates
    for reducing the annotation burden of multiple-object tracking (MOT)\r\npipelines.
    Unfortunately, they lack two key properties: objects are often split\r\ninto parts
    and are not consistently tracked over time. In fact,\r\nstate-of-the-art models
    achieve pixel-level accuracy and temporal consistency\r\nby relying on supervised
    object detection with additional ID labels for the\r\nassociation through time.
    This paper proposes a video object-centric model for\r\nMOT. It consists of an
    index-merge module that adapts the object-centric slots\r\ninto detection outputs
    and an object memory module that builds complete object\r\nprototypes to handle
    occlusions. Benefited from object-centric learning, we\r\nonly require sparse
    detection labels (0%-6.25%) for object localization and\r\nfeature binding. Relying
    on our self-supervised\r\nExpectation-Maximization-inspired loss for object association,
    our approach\r\nrequires no ID labels. Our experiments significantly narrow the
    gap between the\r\nexisting object-centric model and the fully supervised state-of-the-art
    and\r\noutperform several unsupervised trackers."
article_number: '2309.00233'
article_processing_charge: No
arxiv: 1
author:
- first_name: Zixu
  full_name: Zhao, Zixu
  last_name: Zhao
- first_name: Jiaze
  full_name: Wang, Jiaze
  last_name: Wang
- first_name: Max
  full_name: Horn, Max
  last_name: Horn
- first_name: Yizhuo
  full_name: Ding, Yizhuo
  last_name: Ding
- first_name: Tong
  full_name: He, Tong
  last_name: He
- first_name: Zechen
  full_name: Bai, Zechen
  last_name: Bai
- first_name: Dominik
  full_name: Zietlow, Dominik
  last_name: Zietlow
- first_name: Carl-Johann
  full_name: Simon-Gabriel, Carl-Johann
  last_name: Simon-Gabriel
- first_name: Bing
  full_name: Shuai, Bing
  last_name: Shuai
- first_name: Zhuowen
  full_name: Tu, Zhuowen
  last_name: Tu
- first_name: Thomas
  full_name: Brox, Thomas
  last_name: Brox
- first_name: Bernt
  full_name: Schiele, Bernt
  last_name: Schiele
- first_name: Yanwei
  full_name: Fu, Yanwei
  last_name: Fu
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Zheng
  full_name: Zhang, Zheng
  last_name: Zhang
- first_name: Tianjun
  full_name: Xiao, Tianjun
  last_name: Xiao
citation:
  ama: Zhao Z, Wang J, Horn M, et al. Object-centric multiple object tracking. <i>arXiv</i>.
    doi:<a href="https://doi.org/10.48550/arXiv.2309.00233">10.48550/arXiv.2309.00233</a>
  apa: Zhao, Z., Wang, J., Horn, M., Ding, Y., He, T., Bai, Z., … Xiao, T. (n.d.).
    Object-centric multiple object tracking. <i>arXiv</i>. <a href="https://doi.org/10.48550/arXiv.2309.00233">https://doi.org/10.48550/arXiv.2309.00233</a>
  chicago: Zhao, Zixu, Jiaze Wang, Max Horn, Yizhuo Ding, Tong He, Zechen Bai, Dominik
    Zietlow, et al. “Object-Centric Multiple Object Tracking.” <i>ArXiv</i>, n.d.
    <a href="https://doi.org/10.48550/arXiv.2309.00233">https://doi.org/10.48550/arXiv.2309.00233</a>.
  ieee: Z. Zhao <i>et al.</i>, “Object-centric multiple object tracking,” <i>arXiv</i>.
  ista: Zhao Z, Wang J, Horn M, Ding Y, He T, Bai Z, Zietlow D, Simon-Gabriel C-J,
    Shuai B, Tu Z, Brox T, Schiele B, Fu Y, Locatello F, Zhang Z, Xiao T. Object-centric
    multiple object tracking. arXiv, 2309.00233.
  mla: Zhao, Zixu, et al. “Object-Centric Multiple Object Tracking.” <i>ArXiv</i>,
    2309.00233, doi:<a href="https://doi.org/10.48550/arXiv.2309.00233">10.48550/arXiv.2309.00233</a>.
  short: Z. Zhao, J. Wang, M. Horn, Y. Ding, T. He, Z. Bai, D. Zietlow, C.-J. Simon-Gabriel,
    B. Shuai, Z. Tu, T. Brox, B. Schiele, Y. Fu, F. Locatello, Z. Zhang, T. Xiao,
    ArXiv (n.d.).
date_created: 2024-02-08T15:34:43Z
date_published: 2023-09-01T00:00:00Z
date_updated: 2024-02-12T10:16:21Z
day: '01'
department:
- _id: FrLo
doi: 10.48550/arXiv.2309.00233
extern: '1'
external_id:
  arxiv:
  - '2309.00233'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2309.00233
month: '09'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: Object-centric multiple object tracking
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
OA_place: repository
_id: '14965'
abstract:
- lang: eng
  text: 'A method of determining a correspondence between a first biological property
    of a cell and one or more further biological properties of cells is provided.
    The first biological property and the further biological properties are determined
    by different analysis techniques and each are contained in a respective one of
    a plurality of sets of biological properties. The method includes the steps of:
    converting the plurality of sets of biological properties into corresponding representations
    in a representation format which is invariant to the technologies used to derive
    the biological properties; determining, in said representation format, a representation
    from each of the converted sets of further biological properties which most closely
    matches the first representation of the first biological property; and re-converting
    the determined representations from the representation format back to the biological
    properties associated with the determined representations and thereby determining
    a correspondence between the first biological property and each of the further
    biological properties.'
applicant:
- ETH Zürich
application_date: 2021-04-21
application_number: PCT/EP2021/060318
article_processing_charge: No
author:
- first_name: Joanna
  full_name: Ficek, Joanna
  last_name: Ficek
- first_name: Kjong-Van
  full_name: Lehmann, Kjong-Van
  last_name: Lehmann
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Gunnar
  full_name: Raetsch, Gunnar
  last_name: Raetsch
- first_name: Stefan
  full_name: Stark, Stefan
  last_name: Stark
citation:
  ama: Ficek J, Lehmann K-V, Locatello F, Raetsch G, Stark S. Methods of determining
    correspondences between biological properties of cells. 2023.
  apa: Ficek, J., Lehmann, K.-V., Locatello, F., Raetsch, G., &#38; Stark, S. (2023).
    Methods of determining correspondences between biological properties of cells.
  chicago: Ficek, Joanna, Kjong-Van Lehmann, Francesco Locatello, Gunnar Raetsch,
    and Stefan Stark. “Methods of Determining Correspondences between Biological Properties
    of Cells,” 2023.
  ieee: J. Ficek, K.-V. Lehmann, F. Locatello, G. Raetsch, and S. Stark, “Methods
    of determining correspondences between biological properties of cells.” 2023.
  ista: Ficek J, Lehmann K-V, Locatello F, Raetsch G, Stark S. 2023. Methods of determining
    correspondences between biological properties of cells.
  mla: Ficek, Joanna, et al. <i>Methods of Determining Correspondences between Biological
    Properties of Cells</i>. 2023.
  short: J. Ficek, K.-V. Lehmann, F. Locatello, G. Raetsch, S. Stark, (2023).
date_created: 2024-02-08T15:52:21Z
date_published: 2023-05-25T00:00:00Z
date_updated: 2025-01-29T10:53:48Z
day: '25'
ddc:
- '540'
department:
- _id: FrLo
extern: '1'
file:
- access_level: open_access
  checksum: 55ed444b176b48e4fb4d609ea895de36
  content_type: application/pdf
  creator: ptazenko
  date_created: 2024-02-08T15:41:51Z
  date_updated: 2024-02-08T15:41:51Z
  file_id: '14966'
  file_name: Patent_FrLo_US20230162818A1.pdf
  file_size: 2893462
  relation: main_file
  success: 1
file_date_updated: 2024-02-08T15:41:51Z
has_accepted_license: '1'
ipc: C12Q1/68 ; G06V10/82 ; G06V20/69 ; G16B40/30
ipn: US20230162818A1
month: '05'
oa: 1
oa_version: Published Version
page: '9'
publication_date: 2023-05-25
status: public
title: Methods of determining correspondences between biological properties of cells
type: patent
user_id: 8b945eb4-e2f2-11eb-945a-df72226e66a9
year: '2023'
...
---
_id: '14974'
abstract:
- lang: eng
  text: "The field of machine learning and AI has witnessed remarkable breakthroughs
    with the emergence of LLMs, which have also sparked a lively debate in the causal
    community. As researchers in this field, we are interested in exploring how LLMs
    relate to causality research, and how we can leverage the technology to advance
    it. In the second conference of Causal Learning and Reasoning (CLeaR), 2023, we
    held a round table discussion to gather and integrate the diverse perspectives
    of the CLeaR community on this topic. There is a general consensus that LLMs
    are not yet capable of causal reasoning at the current stage but have a lot of
    potential with publicly available information as of CLeaR 2023. Enhancing causal
    machine learning is vital not only for its own sake but also to help LLMs improve
    their performance, especially regarding trustworthiness. In this document, we
    present both the summary and the raw outcome of the round table discussion. We
    acknowledge that with the progress of both fields, the opportunities and impact
    may rapidly change. We will repeat the same exercise in CLeaR 2024 to document
    the evolution."
article_processing_charge: No
author:
- first_name: Cheng
  full_name: Zhang, Cheng
  last_name: Zhang
- first_name: Dominik
  full_name: Janzing, Dominik
  last_name: Janzing
- first_name: Mihaela
  full_name: van der Schaar, Mihaela
  last_name: van der Schaar
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Peter
  full_name: Spirtes, Peter
  last_name: Spirtes
- first_name: Kun
  full_name: Zhang, Kun
  last_name: Zhang
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Caroline
  full_name: Uhler, Caroline
  id: 49ADD78E-F248-11E8-B48F-1D18A9856A87
  last_name: Uhler
  orcid: 0000-0002-7008-0216
citation:
  ama: 'Zhang C, Janzing D, van der Schaar M, et al. Causality in the time of LLMs:
    Round table discussion results of CLeaR 2023. In: <i>2nd Conference on Causal
    Learning and Reasoning</i>.'
  apa: 'Zhang, C., Janzing, D., van der Schaar, M., Locatello, F., Spirtes, P., Zhang,
    K., … Uhler, C. (n.d.). Causality in the time of LLMs: Round table discussion
    results of CLeaR 2023. In <i>2nd Conference on Causal Learning and Reasoning</i>.
    Tübingen, Germany.'
  chicago: 'Zhang, Cheng, Dominik Janzing, Mihaela  van der Schaar, Francesco Locatello,
    Peter Spirtes, Kun Zhang, Bernhard Schölkopf, and Caroline Uhler. “Causality in
    the Time of LLMs: Round Table Discussion Results of CLeaR 2023.” In <i>2nd Conference
    on Causal Learning and Reasoning</i>, n.d.'
  ieee: 'C. Zhang <i>et al.</i>, “Causality in the time of LLMs: Round table discussion
    results of CLeaR 2023,” in <i>2nd Conference on Causal Learning and Reasoning</i>,
    Tübingen, Germany.'
  ista: 'Zhang C, Janzing D, van der Schaar M, Locatello F, Spirtes P, Zhang K, Schölkopf
    B, Uhler C. Causality in the time of LLMs: Round table discussion results of CLeaR
    2023. 2nd Conference on Causal Learning and Reasoning. CLeaR: Conference on Causal
    Learning and Reasoning.'
  mla: 'Zhang, Cheng, et al. “Causality in the Time of LLMs: Round Table Discussion
    Results of CLeaR 2023.” <i>2nd Conference on Causal Learning and Reasoning</i>.'
  short: C. Zhang, D. Janzing, M. van der Schaar, F. Locatello, P. Spirtes, K. Zhang,
    B. Schölkopf, C. Uhler, in:, 2nd Conference on Causal Learning and Reasoning,
    n.d.
conference:
  end_date: 2023-04-14
  location: Tübingen, Germany
  name: 'CLeaR: Conference on Causal Learning and Reasoning'
  start_date: 2023-04-11
date_created: 2024-02-08T16:03:18Z
date_published: 2023-05-01T00:00:00Z
date_updated: 2025-08-05T11:19:37Z
day: '01'
ddc:
- '000'
department:
- _id: FrLo
extern: '1'
file:
- access_level: open_access
  checksum: 105ff58e55de866ce76967f3a95e82f7
  content_type: application/pdf
  creator: ptazenko
  date_created: 2024-02-08T16:03:08Z
  date_updated: 2024-02-08T16:03:08Z
  file_id: '14975'
  file_name: CLeaR23_roundtable_discussion.pdf
  file_size: 215629
  relation: main_file
file_date_updated: 2024-02-08T16:03:08Z
has_accepted_license: '1'
language:
- iso: eng
month: '05'
oa: 1
oa_version: Submitted Version
publication: 2nd Conference on Causal Learning and Reasoning
publication_status: submitted
quality_controlled: '1'
status: public
title: 'Causality in the time of LLMs: Round table discussion results of CLeaR 2023'
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14985'
abstract:
- lang: eng
  text: Lead sulfide (PbS) presents large potential in thermoelectric application
    due to its earth-abundant S element. However, its inferior average ZT (ZTave)
    value makes PbS less competitive with its analogs PbTe and PbSe. To promote its
    thermoelectric performance, this study implements strategies of continuous Se
    alloying and Cu interstitial doping to synergistically tune thermal and electrical
    transport properties in n-type PbS. First, the lattice parameter of 5.93 Å in
    PbS is linearly expanded to 6.03 Å in PbS0.5Se0.5 with increasing Se alloying
    content. This expanded lattice in Se-alloyed PbS not only intensifies phonon scattering
    but also facilitates the formation of Cu interstitials. Based on the PbS0.6Se0.4
    content with the minimal lattice thermal conductivity, Cu interstitials are introduced
    to improve the electron density, thus boosting the peak power factor, from 3.88 μW cm−1 K−2
    in PbS0.6Se0.4 to 20.58 μW cm−1 K−2 in PbS0.6Se0.4−1%Cu. Meanwhile, the lattice
    thermal conductivity in PbS0.6Se0.4−x%Cu (x = 0–2) is further suppressed due to
    the strong strain field caused by Cu interstitials. Finally, with the lowered
    thermal conductivity and high electrical transport properties, a peak ZT ~1.1
    and ZTave ~0.82 can be achieved in PbS0.6Se0.4−1%Cu at 300–773 K, which outperforms
    previously reported n-type PbS.
acknowledgement: 'The authors would like to acknowledge the strong support of microstructure
  observation from the Center for High Pressure Science and Technology Advanced Research
  (HPSTAR). We acknowledge the financial support from the National Natural Science Foundation
  of China: 52172236, the Fundamental Research Funds for the Central Universities: xtr042021007,
  the Top Young Talents Programme of Xi''an Jiaotong University, and the National Science
  Fund for Distinguished Young Scholars: 51925101.'
article_processing_charge: Yes
article_type: original
author:
- first_name: Zhengtao
  full_name: Liu, Zhengtao
  last_name: Liu
- first_name: Tao
  full_name: Hong, Tao
  last_name: Hong
- first_name: Liqing
  full_name: Xu, Liqing
  last_name: Xu
- first_name: Sining
  full_name: Wang, Sining
  last_name: Wang
- first_name: Xiang
  full_name: Gao, Xiang
  last_name: Gao
- first_name: Cheng
  full_name: Chang, Cheng
  id: 9E331C2E-9F27-11E9-AE48-5033E6697425
  last_name: Chang
  orcid: 0000-0002-9515-4277
- first_name: Xiangdong
  full_name: Ding, Xiangdong
  last_name: Ding
- first_name: Yu
  full_name: Xiao, Yu
  last_name: Xiao
- first_name: Li‐Dong
  full_name: Zhao, Li‐Dong
  last_name: Zhao
citation:
  ama: Liu Z, Hong T, Xu L, et al. Lattice expansion enables interstitial doping to
    achieve a high average ZT in n‐type PbS. <i>Interdisciplinary Materials</i>. 2023;2(1):161-170.
    doi:<a href="https://doi.org/10.1002/idm2.12056">10.1002/idm2.12056</a>
  apa: Liu, Z., Hong, T., Xu, L., Wang, S., Gao, X., Chang, C., … Zhao, L. (2023).
    Lattice expansion enables interstitial doping to achieve a high average ZT in
    n‐type PbS. <i>Interdisciplinary Materials</i>. Wiley. <a href="https://doi.org/10.1002/idm2.12056">https://doi.org/10.1002/idm2.12056</a>
  chicago: Liu, Zhengtao, Tao Hong, Liqing Xu, Sining Wang, Xiang Gao, Cheng Chang,
    Xiangdong Ding, Yu Xiao, and Li‐Dong Zhao. “Lattice Expansion Enables Interstitial
    Doping to Achieve a High Average ZT in N‐type PbS.” <i>Interdisciplinary Materials</i>.
    Wiley, 2023. <a href="https://doi.org/10.1002/idm2.12056">https://doi.org/10.1002/idm2.12056</a>.
  ieee: Z. Liu <i>et al.</i>, “Lattice expansion enables interstitial doping to achieve
    a high average ZT in n‐type PbS,” <i>Interdisciplinary Materials</i>, vol. 2,
    no. 1. Wiley, pp. 161–170, 2023.
  ista: Liu Z, Hong T, Xu L, Wang S, Gao X, Chang C, Ding X, Xiao Y, Zhao L. 2023.
    Lattice expansion enables interstitial doping to achieve a high average ZT in
    n‐type PbS. Interdisciplinary Materials. 2(1), 161–170.
  mla: Liu, Zhengtao, et al. “Lattice Expansion Enables Interstitial Doping to Achieve
    a High Average ZT in N‐type PbS.” <i>Interdisciplinary Materials</i>, vol. 2,
    no. 1, Wiley, 2023, pp. 161–70, doi:<a href="https://doi.org/10.1002/idm2.12056">10.1002/idm2.12056</a>.
  short: Z. Liu, T. Hong, L. Xu, S. Wang, X. Gao, C. Chang, X. Ding, Y. Xiao, L. Zhao,
    Interdisciplinary Materials 2 (2023) 161–170.
date_created: 2024-02-14T12:12:17Z
date_published: 2023-01-01T00:00:00Z
date_updated: 2024-02-19T10:01:26Z
day: '01'
ddc:
- '540'
department:
- _id: MaIb
doi: 10.1002/idm2.12056
file:
- access_level: open_access
  checksum: 7b5e8210ef1434feb173022c6dbbee0c
  content_type: application/pdf
  creator: dernst
  date_created: 2024-02-19T09:58:32Z
  date_updated: 2024-02-19T09:58:32Z
  file_id: '15015'
  file_name: 2023_InterdiscMaterials_Liu.pdf
  file_size: 4675941
  relation: main_file
  success: 1
file_date_updated: 2024-02-19T09:58:32Z
has_accepted_license: '1'
intvolume: '2'
issue: '1'
language:
- iso: eng
month: '01'
oa: 1
oa_version: Published Version
page: 161-170
publication: Interdisciplinary Materials
publication_identifier:
  eissn:
  - 2767-441X
publication_status: published
publisher: Wiley
quality_controlled: '1'
status: public
title: Lattice expansion enables interstitial doping to achieve a high average ZT
  in n‐type PbS
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 2
year: '2023'
...
---
_id: '14989'
abstract:
- lang: eng
  text: "Encryption alone is not enough for secure end-to-end encrypted messaging:
    a server must also honestly serve public keys to users. Key transparency has been
    presented as an efficient\r\nsolution for detecting (and hence deterring) a server
    that attempts to dishonestly serve keys. Key transparency involves two major components:
    (1) a username to public key mapping, stored and cryptographically committed to
    by the server, and (2) an out-of-band consistency protocol for serving short commitments
    to users. In the setting of real-world deployments and supporting production scale,
    new challenges must be considered for both of these components. We enumerate these
    challenges and provide solutions to address them. In particular, we design and
    implement a memory-optimized and privacy-preserving verifiable data structure
    for committing to the username to public key store.\r\nTo make this implementation
    viable for production, we also integrate support for persistent and distributed
    storage. We also propose a future-facing solution, termed “compaction”, as\r\na
    mechanism for mitigating practical issues that arise from dealing with infinitely
    growing server data structures. Finally, we implement a consensusless solution
    that achieves the minimum requirements for a service that consistently distributes
    commitments for a transparency application, providing a much more efficient protocol
    for distributing small and consistent\r\ncommitments to users. This culminates
    in our production-grade implementation of a key transparency system (Parakeet)
    which we have open-sourced, along with a demonstration of feasibility through
    our benchmarks."
acknowledgement: This work is supported by the Novi team at Meta and funded in part
  by IC3 industry partners and NSF grant 1943499.
article_processing_charge: No
author:
- first_name: Harjasleen
  full_name: Malvai, Harjasleen
  last_name: Malvai
- first_name: Eleftherios
  full_name: Kokoris Kogias, Eleftherios
  id: f5983044-d7ef-11ea-ac6d-fd1430a26d30
  last_name: Kokoris Kogias
- first_name: Alberto
  full_name: Sonnino, Alberto
  last_name: Sonnino
- first_name: Esha
  full_name: Ghosh, Esha
  last_name: Ghosh
- first_name: Ercan
  full_name: Oztürk, Ercan
  last_name: Oztürk
- first_name: Kevin
  full_name: Lewi, Kevin
  last_name: Lewi
- first_name: Sean
  full_name: Lawlor, Sean
  last_name: Lawlor
citation:
  ama: 'Malvai H, Kokoris Kogias E, Sonnino A, et al. Parakeet: Practical key transparency
    for end-to-end encrypted messaging. In: <i>Proceedings of the 2023 Network and
    Distributed System Security Symposium</i>. Internet Society; 2023. doi:<a href="https://doi.org/10.14722/ndss.2023.24545">10.14722/ndss.2023.24545</a>'
  apa: 'Malvai, H., Kokoris Kogias, E., Sonnino, A., Ghosh, E., Oztürk, E., Lewi,
    K., &#38; Lawlor, S. (2023). Parakeet: Practical key transparency for end-to-end
    encrypted messaging. In <i>Proceedings of the 2023 Network and Distributed System
    Security Symposium</i>. San Diego, CA, United States: Internet Society. <a href="https://doi.org/10.14722/ndss.2023.24545">https://doi.org/10.14722/ndss.2023.24545</a>'
  chicago: 'Malvai, Harjasleen, Eleftherios Kokoris Kogias, Alberto Sonnino, Esha
    Ghosh, Ercan Oztürk, Kevin Lewi, and Sean Lawlor. “Parakeet: Practical Key Transparency
    for End-to-End Encrypted Messaging.” In <i>Proceedings of the 2023 Network and
    Distributed System Security Symposium</i>. Internet Society, 2023. <a href="https://doi.org/10.14722/ndss.2023.24545">https://doi.org/10.14722/ndss.2023.24545</a>.'
  ieee: 'H. Malvai <i>et al.</i>, “Parakeet: Practical key transparency for end-to-end
    encrypted messaging,” in <i>Proceedings of the 2023 Network and Distributed System
    Security Symposium</i>, San Diego, CA, United States, 2023.'
  ista: 'Malvai H, Kokoris Kogias E, Sonnino A, Ghosh E, Oztürk E, Lewi K, Lawlor
    S. 2023. Parakeet: Practical key transparency for end-to-end encrypted messaging.
    Proceedings of the 2023 Network and Distributed System Security Symposium. NDSS:
    Network and Distributed Systems Security.'
  mla: 'Malvai, Harjasleen, et al. “Parakeet: Practical Key Transparency for End-to-End
    Encrypted Messaging.” <i>Proceedings of the 2023 Network and Distributed System
    Security Symposium</i>, Internet Society, 2023, doi:<a href="https://doi.org/10.14722/ndss.2023.24545">10.14722/ndss.2023.24545</a>.'
  short: H. Malvai, E. Kokoris Kogias, A. Sonnino, E. Ghosh, E. Oztürk, K. Lewi, S.
    Lawlor, in:, Proceedings of the 2023 Network and Distributed System Security Symposium,
    Internet Society, 2023.
conference:
  end_date: 2023-03-03
  location: San Diego, CA, United States
  name: 'NDSS: Network and Distributed Systems Security'
  start_date: 2023-02-27
date_created: 2024-02-14T14:20:40Z
date_published: 2023-03-01T00:00:00Z
date_updated: 2024-10-21T06:01:37Z
day: '01'
department:
- _id: ElKo
doi: 10.14722/ndss.2023.24545
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://eprint.iacr.org/2023/081
month: '03'
oa: 1
oa_version: Published Version
publication: Proceedings of the 2023 Network and Distributed System Security Symposium
publication_identifier:
  isbn:
  - '1891562835'
publication_status: published
publisher: Internet Society
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'Parakeet: Practical key transparency for end-to-end encrypted messaging'
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14990'
abstract:
- lang: eng
  text: The software artefact to evaluate the approximation of stationary distributions
    implementation.
article_processing_charge: No
author:
- first_name: Tobias
  full_name: Meggendorfer, Tobias
  id: b21b0c15-30a2-11eb-80dc-f13ca25802e1
  last_name: Meggendorfer
  orcid: 0000-0002-1712-2165
citation:
  ama: 'Meggendorfer T. Artefact for: Correct Approximation of Stationary Distributions.
    2023. doi:<a href="https://doi.org/10.5281/ZENODO.7548214">10.5281/ZENODO.7548214</a>'
  apa: 'Meggendorfer, T. (2023). Artefact for: Correct Approximation of Stationary
    Distributions. Zenodo. <a href="https://doi.org/10.5281/ZENODO.7548214">https://doi.org/10.5281/ZENODO.7548214</a>'
  chicago: 'Meggendorfer, Tobias. “Artefact for: Correct Approximation of Stationary
    Distributions.” Zenodo, 2023. <a href="https://doi.org/10.5281/ZENODO.7548214">https://doi.org/10.5281/ZENODO.7548214</a>.'
  ieee: 'T. Meggendorfer, “Artefact for: Correct Approximation of Stationary Distributions.”
    Zenodo, 2023.'
  ista: 'Meggendorfer T. 2023. Artefact for: Correct Approximation of Stationary Distributions,
    Zenodo, <a href="https://doi.org/10.5281/ZENODO.7548214">10.5281/ZENODO.7548214</a>.'
  mla: 'Meggendorfer, Tobias. <i>Artefact for: Correct Approximation of Stationary
    Distributions</i>. Zenodo, 2023, doi:<a href="https://doi.org/10.5281/ZENODO.7548214">10.5281/ZENODO.7548214</a>.'
  short: T. Meggendorfer, (2023).
corr_author: '1'
date_created: 2024-02-14T14:27:06Z
date_published: 2023-01-18T00:00:00Z
date_updated: 2025-09-09T12:28:12Z
day: '18'
ddc:
- '000'
department:
- _id: KrCh
doi: 10.5281/ZENODO.7548214
has_accepted_license: '1'
main_file_link:
- open_access: '1'
  url: https://doi.org/10.5281/zenodo.7548214
month: '01'
oa: 1
oa_version: Published Version
publisher: Zenodo
related_material:
  record:
  - id: '13139'
    relation: used_in_publication
    status: public
status: public
title: 'Artefact for: Correct Approximation of Stationary Distributions'
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: research_data_reference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14991'
abstract:
- lang: eng
  text: This repository contains the data, scripts, WRF codes and files required to
    reproduce the results of the manuscript "Assessing Memory in Convection Schemes
    Using Idealized Tests" submitted to the Journal of Advances in Modeling Earth
    Systems (JAMES).
article_processing_charge: No
author:
- first_name: Yi-Ling
  full_name: Hwong, Yi-Ling
  id: 1217aa61-4dd1-11ec-9ac3-f2ba3f17ee22
  last_name: Hwong
  orcid: 0000-0001-9281-3479
- first_name: Maxime
  full_name: Colin, Maxime
  last_name: Colin
- first_name: Philipp
  full_name: Aglas, Philipp
  id: 02eace56-97fc-11ee-b81a-f0939ca85a77
  last_name: Aglas
- first_name: Caroline J
  full_name: Muller, Caroline J
  id: f978ccb0-3f7f-11eb-b193-b0e2bd13182b
  last_name: Muller
  orcid: 0000-0001-5836-5350
- first_name: Steven C.
  full_name: Sherwood, Steven C.
  last_name: Sherwood
citation:
  ama: Hwong Y-L, Colin M, Aglas P, Muller CJ, Sherwood SC. Data-assessing memory
    in convection schemes using idealized tests. 2023. doi:<a href="https://doi.org/10.5281/ZENODO.7757041">10.5281/ZENODO.7757041</a>
  apa: Hwong, Y.-L., Colin, M., Aglas, P., Muller, C. J., &#38; Sherwood, S. C. (2023).
    Data-assessing memory in convection schemes using idealized tests. Zenodo. <a
    href="https://doi.org/10.5281/ZENODO.7757041">https://doi.org/10.5281/ZENODO.7757041</a>
  chicago: Hwong, Yi-Ling, Maxime Colin, Philipp Aglas, Caroline J Muller, and Steven
    C. Sherwood. “Data-Assessing Memory in Convection Schemes Using Idealized Tests.”
    Zenodo, 2023. <a href="https://doi.org/10.5281/ZENODO.7757041">https://doi.org/10.5281/ZENODO.7757041</a>.
  ieee: Y.-L. Hwong, M. Colin, P. Aglas, C. J. Muller, and S. C. Sherwood, “Data-assessing
    memory in convection schemes using idealized tests.” Zenodo, 2023.
  ista: Hwong Y-L, Colin M, Aglas P, Muller CJ, Sherwood SC. 2023. Data-assessing
    memory in convection schemes using idealized tests, Zenodo, <a href="https://doi.org/10.5281/ZENODO.7757041">10.5281/ZENODO.7757041</a>.
  mla: Hwong, Yi-Ling, et al. <i>Data-Assessing Memory in Convection Schemes Using
    Idealized Tests</i>. Zenodo, 2023, doi:<a href="https://doi.org/10.5281/ZENODO.7757041">10.5281/ZENODO.7757041</a>.
  short: Y.-L. Hwong, M. Colin, P. Aglas, C.J. Muller, S.C. Sherwood, (2023).
corr_author: '1'
date_created: 2024-02-14T14:37:57Z
date_published: 2023-06-23T00:00:00Z
date_updated: 2025-09-09T13:35:40Z
day: '23'
ddc:
- '550'
department:
- _id: CaMu
doi: 10.5281/ZENODO.7757041
ec_funded: 1
has_accepted_license: '1'
main_file_link:
- open_access: '1'
  url: https://doi.org/10.5281/zenodo.7757041
month: '06'
oa: 1
oa_version: Published Version
project:
- _id: fc2ed2f7-9c52-11eb-aca3-c01059dda49c
  call_identifier: H2020
  grant_number: '101034413'
  name: 'IST-BRIDGE: International postdoctoral program'
publisher: Zenodo
related_material:
  record:
  - id: '14654'
    relation: used_in_publication
    status: public
status: public
title: Data-assessing memory in convection schemes using idealized tests
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: research_data_reference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14992'
abstract:
- lang: eng
  text: In this chapter we first review the Levy–Lieb functional, which gives the
    lowest kinetic and interaction energy that can be reached with all possible quantum
    states having a given density. We discuss two possible convex generalizations
    of this functional, corresponding to using mixed canonical and grand-canonical
    states, respectively. We present some recent works about the local density approximation,
    in which the functionals get replaced by purely local functionals constructed
    using the uniform electron gas energy per unit volume. We then review the known
    upper and lower bounds on the Levy–Lieb functionals. We start with the kinetic
    energy alone, then turn to the classical interaction alone, before we are able
    to put everything together. A later section is devoted to the Hohenberg–Kohn theorem
    and the role of many-body unique continuation in its proof.
alternative_title:
- Mathematics and Molecular Modeling
article_processing_charge: No
arxiv: 1
author:
- first_name: Mathieu
  full_name: Lewin, Mathieu
  last_name: Lewin
- first_name: Elliott H.
  full_name: Lieb, Elliott H.
  last_name: Lieb
- first_name: Robert
  full_name: Seiringer, Robert
  id: 4AFD0470-F248-11E8-B48F-1D18A9856A87
  last_name: Seiringer
  orcid: 0000-0002-6781-0521
citation:
  ama: 'Lewin M, Lieb EH, Seiringer R. Universal Functionals in Density Functional
    Theory. In: Cances E, Friesecke G, eds. <i>Density Functional Theory</i>. 1st
    ed. MAMOMO. Springer; 2023:115-182. doi:<a href="https://doi.org/10.1007/978-3-031-22340-2_3">10.1007/978-3-031-22340-2_3</a>'
  apa: Lewin, M., Lieb, E. H., &#38; Seiringer, R. (2023). Universal Functionals in
    Density Functional Theory. In E. Cances &#38; G. Friesecke (Eds.), <i>Density
    Functional Theory</i> (1st ed., pp. 115–182). Springer. <a href="https://doi.org/10.1007/978-3-031-22340-2_3">https://doi.org/10.1007/978-3-031-22340-2_3</a>
  chicago: Lewin, Mathieu, Elliott H. Lieb, and Robert Seiringer. “Universal Functionals
    in Density Functional Theory.” In <i>Density Functional Theory</i>, edited by
    Eric Cances and Gero Friesecke, 1st ed., 115–82. MAMOMO. Springer, 2023. <a href="https://doi.org/10.1007/978-3-031-22340-2_3">https://doi.org/10.1007/978-3-031-22340-2_3</a>.
  ieee: M. Lewin, E. H. Lieb, and R. Seiringer, “Universal Functionals in Density
    Functional Theory,” in <i>Density Functional Theory</i>, 1st ed., E. Cances and
    G. Friesecke, Eds. Springer, 2023, pp. 115–182.
  ista: 'Lewin M, Lieb EH, Seiringer R. 2023. Universal Functionals in Density Functional
    Theory. In: Density Functional Theory. Mathematics and Molecular Modeling, 115–182.'
  mla: Lewin, Mathieu, et al. “Universal Functionals in Density Functional Theory.”
    <i>Density Functional Theory</i>, edited by Eric Cances and Gero Friesecke, 1st
    ed., Springer, 2023, pp. 115–82, doi:<a href="https://doi.org/10.1007/978-3-031-22340-2_3">10.1007/978-3-031-22340-2_3</a>.
  short: M. Lewin, E.H. Lieb, R. Seiringer, in:, E. Cances, G. Friesecke (Eds.), Density
    Functional Theory, 1st ed., Springer, 2023, pp. 115–182.
date_created: 2024-02-14T14:44:33Z
date_published: 2023-07-19T00:00:00Z
date_updated: 2024-02-20T08:33:06Z
day: '19'
department:
- _id: RoSe
doi: 10.1007/978-3-031-22340-2_3
edition: '1'
editor:
- first_name: Eric
  full_name: Cances, Eric
  last_name: Cances
- first_name: Gero
  full_name: Friesecke, Gero
  last_name: Friesecke
external_id:
  arxiv:
  - '1912.10424'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.1912.10424
month: '07'
oa: 1
oa_version: Preprint
page: 115-182
publication: Density Functional Theory
publication_identifier:
  eisbn:
  - '9783031223402'
  isbn:
  - '9783031223396'
  issn:
  - 3005-0286
publication_status: published
publisher: Springer
quality_controlled: '1'
series_title: MAMOMO
status: public
title: Universal Functionals in Density Functional Theory
type: book_chapter
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
