---
_id: '14182'
abstract:
- lang: eng
  text: 'When machine learning systems meet real world applications, accuracy is
    only one of several requirements. In this paper, we assay a complementary
    perspective originating from the increasing availability of pre-trained and
    regularly improving state-of-the-art models. While new improved models develop
    at a fast pace, downstream tasks vary more slowly or stay constant. Assume
    that we have a large unlabelled data set for which we want to maintain accurate
    predictions. Whenever a new and presumably better ML model becomes available,
    we encounter two problems: (i) given a limited budget, which data points should
    be re-evaluated using the new model?; and (ii) if the new predictions differ
    from the current ones, should we update? Problem (i) is about compute cost,
    which matters for very large data sets and models. Problem (ii) is about
    maintaining consistency of the predictions, which can be highly relevant for
    downstream applications; our demand is to avoid negative flips, i.e., changing
    correct to incorrect predictions. In this paper, we formalize the Prediction
    Update Problem and present an efficient probabilistic approach as an answer
    to the above questions. In extensive experiments on standard classification
    benchmark data sets, we show that our method outperforms alternative strategies
    along key metrics for backward-compatible prediction updates.'
article_processing_charge: No
arxiv: 1
author:
- first_name: Frederik
  full_name: Träuble, Frederik
  last_name: Träuble
- first_name: Julius von
  full_name: Kügelgen, Julius von
  last_name: Kügelgen
- first_name: Matthäus
  full_name: Kleindessner, Matthäus
  last_name: Kleindessner
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Peter
  full_name: Gehler, Peter
  last_name: Gehler
citation:
  ama: 'Träuble F, Kügelgen J von, Kleindessner M, Locatello F, Schölkopf B, Gehler
    P. Backward-compatible prediction updates: A probabilistic approach. In: <i>35th
    Conference on Neural Information Processing Systems</i>. Vol 34. ; 2021:116-128.'
  apa: 'Träuble, F., Kügelgen, J. von, Kleindessner, M., Locatello, F., Schölkopf,
    B., &#38; Gehler, P. (2021). Backward-compatible prediction updates: A probabilistic
    approach. In <i>35th Conference on Neural Information Processing Systems</i> (Vol.
    34, pp. 116–128). Virtual.'
  chicago: 'Träuble, Frederik, Julius von Kügelgen, Matthäus Kleindessner, Francesco
    Locatello, Bernhard Schölkopf, and Peter Gehler. “Backward-Compatible Prediction
    Updates: A Probabilistic Approach.” In <i>35th Conference on Neural Information
    Processing Systems</i>, 34:116–28, 2021.'
  ieee: 'F. Träuble, J. von Kügelgen, M. Kleindessner, F. Locatello, B. Schölkopf,
    and P. Gehler, “Backward-compatible prediction updates: A probabilistic approach,”
    in <i>35th Conference on Neural Information Processing Systems</i>, Virtual, 2021,
    vol. 34, pp. 116–128.'
  ista: 'Träuble F, Kügelgen J von, Kleindessner M, Locatello F, Schölkopf B, Gehler
    P. 2021. Backward-compatible prediction updates: A probabilistic approach. 35th
    Conference on Neural Information Processing Systems. NeurIPS: Neural Information
    Processing Systems vol. 34, 116–128.'
  mla: 'Träuble, Frederik, et al. “Backward-Compatible Prediction Updates: A Probabilistic
    Approach.” <i>35th Conference on Neural Information Processing Systems</i>, vol.
    34, 2021, pp. 116–28.'
  short: F. Träuble, J. von Kügelgen, M. Kleindessner, F. Locatello, B. Schölkopf,
    P. Gehler, in:, 35th Conference on Neural Information Processing Systems, 2021,
    pp. 116–128.
conference:
  end_date: 2021-12-10
  location: Virtual
  name: 'NeurIPS: Neural Information Processing Systems'
  start_date: 2021-12-07
date_created: 2023-08-22T14:05:41Z
date_published: 2021-07-02T00:00:00Z
date_updated: 2023-09-11T11:31:59Z
day: '02'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2107.01057'
intvolume: '34'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2107.01057
month: '07'
oa: 1
oa_version: Preprint
page: 116-128
publication: 35th Conference on Neural Information Processing Systems
publication_identifier:
  isbn:
  - '9781713845393'
publication_status: published
quality_controlled: '1'
status: public
title: 'Backward-compatible prediction updates: A probabilistic approach'
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 34
year: '2021'
...
---
OA_place: repository
_id: '14185'
abstract:
- lang: eng
  text: A method involves receiving a perceptual representation including a plurality
    of feature vectors, and initializing a plurality of slot vectors represented by
    a neural network memory unit. Each respective slot vector is configured to represent
    a corresponding entity in the perceptual representation. The method also involves
    determining an attention matrix based on a product of the plurality of feature
    vectors transformed by a key function and the plurality of slot vectors transformed
    by a query function. Each respective value of a plurality of values along each
    respective dimension of the attention matrix is normalized with respect to the
    plurality of values. The method additionally involves determining an update matrix
    based on the plurality of feature vectors transformed by a value function and
    the attention matrix, and updating the plurality of slot vectors based on the
    update matrix by way of the neural network memory unit.
applicant:
- Google LLC
application_date: 2020-07-13
application_number: '16/927,018'
article_processing_charge: No
arxiv: 1
author:
- first_name: Dirk
  full_name: Weissenborn, Dirk
  last_name: Weissenborn
- first_name: Jakob
  full_name: Uszkoreit, Jakob
  last_name: Uszkoreit
- first_name: Thomas
  full_name: Unterthiner, Thomas
  last_name: Unterthiner
- first_name: Aravindh
  full_name: Mahendran, Aravindh
  last_name: Mahendran
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Thomas
  full_name: Kipf, Thomas
  last_name: Kipf
- first_name: Georg
  full_name: Heigold, Georg
  last_name: Heigold
- first_name: Alexey
  full_name: Dosovitskiy, Alexey
  last_name: Dosovitskiy
citation:
  ama: Weissenborn D, Uszkoreit J, Unterthiner T, et al. Object-centric learning with
    slot attention. 2021.
  apa: Weissenborn, D., Uszkoreit, J., Unterthiner, T., Mahendran, A., Locatello,
    F., Kipf, T., … Dosovitskiy, A. (2021). Object-centric learning with slot attention.
  chicago: Weissenborn, Dirk, Jakob Uszkoreit, Thomas Unterthiner, Aravindh Mahendran,
    Francesco Locatello, Thomas Kipf, Georg Heigold, and Alexey Dosovitskiy. “Object-Centric
    Learning with Slot Attention,” 2021.
  ieee: D. Weissenborn <i>et al.</i>, “Object-centric learning with slot attention.”
    2021.
  ista: Weissenborn D, Uszkoreit J, Unterthiner T, Mahendran A, Locatello F, Kipf
    T, Heigold G, Dosovitskiy A. 2021. Object-centric learning with slot attention.
  mla: Weissenborn, Dirk, et al. <i>Object-Centric Learning with Slot Attention</i>.
    2021.
  short: D. Weissenborn, J. Uszkoreit, T. Unterthiner, A. Mahendran, F. Locatello,
    T. Kipf, G. Heigold, A. Dosovitskiy, (2021).
date_created: 2023-08-22T14:07:06Z
date_published: 2021-12-09T00:00:00Z
date_updated: 2025-01-31T11:35:46Z
day: '09'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2006.15055'
ipc: G06N 3/063 ; G06N 3/08 ; G06F 17/16
ipn: US20210383199A1
main_file_link:
- open_access: '1'
  url: https://patents.google.com/patent/US20210383199A1/en
month: '12'
oa: 1
oa_version: Published Version
publication_date: 2021-12-09
status: public
title: Object-centric learning with slot attention
type: patent
user_id: 8b945eb4-e2f2-11eb-945a-df72226e66a9
year: '2021'
...
---
_id: '14221'
abstract:
- lang: eng
  text: 'The world is structured in countless ways. It may be prudent to enforce corresponding
    structural properties to a learning algorithm''s solution, such as incorporating
    prior beliefs, natural constraints, or causal structures. Doing so may translate
    to faster, more accurate, and more flexible models, which may directly relate
    to real-world impact. In this dissertation, we consider two different research
    areas that concern structuring a learning algorithm''s solution: when the structure
    is known and when it has to be discovered.'
article_number: '2111.13693'
article_processing_charge: No
arxiv: 1
author:
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
citation:
  ama: Locatello F. Enforcing and discovering structure in machine learning. <i>arXiv</i>.
    doi:<a href="https://doi.org/10.48550/arXiv.2111.13693">10.48550/arXiv.2111.13693</a>
  apa: Locatello, F. (n.d.). Enforcing and discovering structure in machine learning.
    <i>arXiv</i>. <a href="https://doi.org/10.48550/arXiv.2111.13693">https://doi.org/10.48550/arXiv.2111.13693</a>
  chicago: Locatello, Francesco. “Enforcing and Discovering Structure in Machine Learning.”
    <i>ArXiv</i>, n.d. <a href="https://doi.org/10.48550/arXiv.2111.13693">https://doi.org/10.48550/arXiv.2111.13693</a>.
  ieee: F. Locatello, “Enforcing and discovering structure in machine learning,” <i>arXiv</i>.
  ista: Locatello F. Enforcing and discovering structure in machine learning. arXiv,
    2111.13693.
  mla: Locatello, Francesco. “Enforcing and Discovering Structure in Machine Learning.”
    <i>ArXiv</i>, 2111.13693, doi:<a href="https://doi.org/10.48550/arXiv.2111.13693">10.48550/arXiv.2111.13693</a>.
  short: F. Locatello, ArXiv (n.d.).
date_created: 2023-08-22T14:23:35Z
date_published: 2021-11-26T00:00:00Z
date_updated: 2024-10-14T12:27:49Z
day: '26'
department:
- _id: FrLo
doi: 10.48550/arXiv.2111.13693
extern: '1'
external_id:
  arxiv:
  - '2111.13693'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2111.13693
month: '11'
oa: 1
oa_version: Preprint
publication: arXiv
publication_status: submitted
status: public
title: Enforcing and discovering structure in machine learning
type: preprint
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2021'
...
---
_id: '14332'
abstract:
- lang: eng
  text: Learning data representations that are useful for various downstream tasks
    is a cornerstone of artificial intelligence. While existing methods are typically
    evaluated on downstream tasks such as classification or generative image quality,
    we propose to assess representations through their usefulness in downstream control
    tasks, such as reaching or pushing objects. By training over 10,000 reinforcement
    learning policies, we extensively evaluate to what extent different representation
    properties affect out-of-distribution (OOD) generalization. Finally, we demonstrate
    zero-shot transfer of these policies from simulation to the real world, without
    any domain randomization or fine-tuning. This paper aims to establish the first
    systematic characterization of the usefulness of learned representations for real-world
    OOD downstream tasks.
article_processing_charge: No
author:
- first_name: Frederik
  full_name: Träuble, Frederik
  last_name: Träuble
- first_name: Andrea
  full_name: Dittadi, Andrea
  last_name: Dittadi
- first_name: Manuel
  full_name: Wuthrich, Manuel
  last_name: Wuthrich
- first_name: Felix
  full_name: Widmaier, Felix
  last_name: Widmaier
- first_name: Peter Vincent
  full_name: Gehler, Peter Vincent
  last_name: Gehler
- first_name: Ole
  full_name: Winther, Ole
  last_name: Winther
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Olivier
  full_name: Bachem, Olivier
  last_name: Bachem
- first_name: Bernhard
  full_name: Schölkopf, Bernhard
  last_name: Schölkopf
- first_name: Stefan
  full_name: Bauer, Stefan
  last_name: Bauer
citation:
  ama: 'Träuble F, Dittadi A, Wuthrich M, et al. Representation learning for out-of-distribution
    generalization in reinforcement learning. In: <i>ICML 2021 Workshop on Unsupervised
    Reinforcement Learning</i>. ; 2021.'
  apa: Träuble, F., Dittadi, A., Wuthrich, M., Widmaier, F., Gehler, P. V., Winther,
    O., … Bauer, S. (2021). Representation learning for out-of-distribution generalization
    in reinforcement learning. In <i>ICML 2021 Workshop on Unsupervised Reinforcement
    Learning</i>. Virtual.
  chicago: Träuble, Frederik, Andrea Dittadi, Manuel Wuthrich, Felix Widmaier, Peter
    Vincent Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf,
    and Stefan Bauer. “Representation Learning for Out-of-Distribution Generalization
    in Reinforcement Learning.” In <i>ICML 2021 Workshop on Unsupervised Reinforcement
    Learning</i>, 2021.
  ieee: F. Träuble <i>et al.</i>, “Representation learning for out-of-distribution
    generalization in reinforcement learning,” in <i>ICML 2021 Workshop on Unsupervised
    Reinforcement Learning</i>, Virtual, 2021.
  ista: 'Träuble F, Dittadi A, Wuthrich M, Widmaier F, Gehler PV, Winther O, Locatello
    F, Bachem O, Schölkopf B, Bauer S. 2021. Representation learning for out-of-distribution
    generalization in reinforcement learning. ICML 2021 Workshop on Unsupervised Reinforcement
    Learning. ICML: International Conference on Machine Learning.'
  mla: Träuble, Frederik, et al. “Representation Learning for Out-of-Distribution
    Generalization in Reinforcement Learning.” <i>ICML 2021 Workshop on Unsupervised
    Reinforcement Learning</i>, 2021.
  short: F. Träuble, A. Dittadi, M. Wuthrich, F. Widmaier, P.V. Gehler, O. Winther,
    F. Locatello, O. Bachem, B. Schölkopf, S. Bauer, in:, ICML 2021 Workshop on Unsupervised
    Reinforcement Learning, 2021.
conference:
  end_date: 2021-07-23
  location: Virtual
  name: 'ICML: International Conference on Machine Learning'
  start_date: 2021-07-23
date_created: 2023-09-13T12:43:14Z
date_published: 2021-07-23T00:00:00Z
date_updated: 2023-09-13T12:44:00Z
day: '23'
department:
- _id: FrLo
extern: '1'
language:
- iso: eng
month: '07'
oa_version: None
publication: ICML 2021 Workshop on Unsupervised Reinforcement Learning
publication_status: published
quality_controlled: '1'
status: public
title: Representation learning for out-of-distribution generalization in reinforcement
  learning
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2021'
...
---
OA_type: closed access
_id: '17876'
abstract:
- lang: eng
  text: The scanning tunneling microscope-based break-junction (STM-BJ) technique
    is the most common method used to study the electronic properties of single-molecule
    junctions. It relies on repeatedly forming and rupturing a Au contact in an environment
    of the target molecules. The probability of junction formation is typically very
    high (∼70–95%), prompting questions relating to how the nanoscale structure of
    the Au electrode before the metal point contact ruptures alters junction formation.
    Here we analyze conductance traces measured with the STM-BJ setup by combining
    correlation analysis and multiple machine learning tools, including gradient-boosted
    trees and neural networks. We show that two key features describing the Au–Au
    contact prior to rupture determine the extent of contact relaxation (snapback)
    and the probability of junction formation. Importantly, our data strongly indicate
    that molecular junctions are formed prior to the rupture of the Au–Au contact,
    explaining the high probability of junction formation observed in room-temperature
    solution measurements.
article_processing_charge: No
article_type: original
author:
- first_name: Tianren
  full_name: Fu, Tianren
  last_name: Fu
- first_name: Kathleen
  full_name: Frommer, Kathleen
  last_name: Frommer
- first_name: Colin
  full_name: Nuckolls, Colin
  last_name: Nuckolls
- first_name: Latha
  full_name: Venkataraman, Latha
  id: 9ebb78a5-cc0d-11ee-8322-fae086a32caf
  last_name: Venkataraman
  orcid: 0000-0002-6957-6089
citation:
  ama: Fu T, Frommer K, Nuckolls C, Venkataraman L. Single-molecule junction formation
    in break-junction measurements. <i>The Journal of Physical Chemistry Letters</i>.
    2021;12(44):10802-10807. doi:<a href="https://doi.org/10.1021/acs.jpclett.1c03160">10.1021/acs.jpclett.1c03160</a>
  apa: Fu, T., Frommer, K., Nuckolls, C., &#38; Venkataraman, L. (2021). Single-molecule
    junction formation in break-junction measurements. <i>The Journal of Physical
    Chemistry Letters</i>. American Chemical Society. <a href="https://doi.org/10.1021/acs.jpclett.1c03160">https://doi.org/10.1021/acs.jpclett.1c03160</a>
  chicago: Fu, Tianren, Kathleen Frommer, Colin Nuckolls, and Latha Venkataraman.
    “Single-Molecule Junction Formation in Break-Junction Measurements.” <i>The Journal
    of Physical Chemistry Letters</i>. American Chemical Society, 2021. <a href="https://doi.org/10.1021/acs.jpclett.1c03160">https://doi.org/10.1021/acs.jpclett.1c03160</a>.
  ieee: T. Fu, K. Frommer, C. Nuckolls, and L. Venkataraman, “Single-molecule junction
    formation in break-junction measurements,” <i>The Journal of Physical Chemistry
    Letters</i>, vol. 12, no. 44. American Chemical Society, pp. 10802–10807, 2021.
  ista: Fu T, Frommer K, Nuckolls C, Venkataraman L. 2021. Single-molecule junction
    formation in break-junction measurements. The Journal of Physical Chemistry Letters.
    12(44), 10802–10807.
  mla: Fu, Tianren, et al. “Single-Molecule Junction Formation in Break-Junction Measurements.”
    <i>The Journal of Physical Chemistry Letters</i>, vol. 12, no. 44, American Chemical
    Society, 2021, pp. 10802–07, doi:<a href="https://doi.org/10.1021/acs.jpclett.1c03160">10.1021/acs.jpclett.1c03160</a>.
  short: T. Fu, K. Frommer, C. Nuckolls, L. Venkataraman, The Journal of Physical
    Chemistry Letters 12 (2021) 10802–10807.
date_created: 2024-09-06T13:10:30Z
date_published: 2021-11-01T00:00:00Z
date_updated: 2024-12-10T10:03:05Z
day: '01'
doi: 10.1021/acs.jpclett.1c03160
extern: '1'
external_id:
  pmid:
  - '34723548'
intvolume: '12'
issue: '44'
language:
- iso: eng
month: '11'
oa_version: None
page: 10802-10807
pmid: 1
publication: The Journal of Physical Chemistry Letters
publication_identifier:
  issn:
  - 1948-7185
publication_status: published
publisher: American Chemical Society
quality_controlled: '1'
scopus_import: '1'
status: public
title: Single-molecule junction formation in break-junction measurements
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 12
year: '2021'
...
---
OA_type: closed access
_id: '17877'
abstract:
- lang: eng
  text: Chemical reactions that occur at nanostructured electrodes have garnered widespread
    interest because of their potential applications in fields including nanotechnology,
    green chemistry and fundamental physical organic chemistry. Much of our present
    understanding of these reactions comes from probes that interrogate ensembles
    of molecules undergoing various stages of the transformation concurrently. Exquisite
    control over single-molecule reactivity lets us construct new molecules and further
    our understanding of nanoscale chemical phenomena. We can study single molecules
    using instruments such as the scanning tunnelling microscope, which can additionally
    be part of a mechanically controlled break junction. These are unique tools that
    can offer a high level of detail. They probe the electronic conductance of individual
    molecules and catalyse chemical reactions by establishing environments with reactive
    metal sites on nanoscale electrodes. This Review describes how chemical reactions
    involving bond cleavage and formation can be triggered at nanoscale electrodes
    and studied one molecule at a time.
article_processing_charge: No
article_type: review
author:
- first_name: Ilana
  full_name: Stone, Ilana
  last_name: Stone
- first_name: Rachel L.
  full_name: Starr, Rachel L.
  last_name: Starr
- first_name: Yaping
  full_name: Zang, Yaping
  last_name: Zang
- first_name: Colin
  full_name: Nuckolls, Colin
  last_name: Nuckolls
- first_name: Michael L.
  full_name: Steigerwald, Michael L.
  last_name: Steigerwald
- first_name: Tristan H.
  full_name: Lambert, Tristan H.
  last_name: Lambert
- first_name: Xavier
  full_name: Roy, Xavier
  last_name: Roy
- first_name: Latha
  full_name: Venkataraman, Latha
  id: 9ebb78a5-cc0d-11ee-8322-fae086a32caf
  last_name: Venkataraman
  orcid: 0000-0002-6957-6089
citation:
  ama: Stone I, Starr RL, Zang Y, et al. A single-molecule blueprint for synthesis.
    <i>Nature Reviews Chemistry</i>. 2021;5(10):695-710. doi:<a href="https://doi.org/10.1038/s41570-021-00316-y">10.1038/s41570-021-00316-y</a>
  apa: Stone, I., Starr, R. L., Zang, Y., Nuckolls, C., Steigerwald, M. L., Lambert,
    T. H., … Venkataraman, L. (2021). A single-molecule blueprint for synthesis. <i>Nature
    Reviews Chemistry</i>. Springer Nature. <a href="https://doi.org/10.1038/s41570-021-00316-y">https://doi.org/10.1038/s41570-021-00316-y</a>
  chicago: Stone, Ilana, Rachel L. Starr, Yaping Zang, Colin Nuckolls, Michael L.
    Steigerwald, Tristan H. Lambert, Xavier Roy, and Latha Venkataraman. “A Single-Molecule
    Blueprint for Synthesis.” <i>Nature Reviews Chemistry</i>. Springer Nature, 2021.
    <a href="https://doi.org/10.1038/s41570-021-00316-y">https://doi.org/10.1038/s41570-021-00316-y</a>.
  ieee: I. Stone <i>et al.</i>, “A single-molecule blueprint for synthesis,” <i>Nature
    Reviews Chemistry</i>, vol. 5, no. 10. Springer Nature, pp. 695–710, 2021.
  ista: Stone I, Starr RL, Zang Y, Nuckolls C, Steigerwald ML, Lambert TH, Roy X,
    Venkataraman L. 2021. A single-molecule blueprint for synthesis. Nature Reviews
    Chemistry. 5(10), 695–710.
  mla: Stone, Ilana, et al. “A Single-Molecule Blueprint for Synthesis.” <i>Nature
    Reviews Chemistry</i>, vol. 5, no. 10, Springer Nature, 2021, pp. 695–710, doi:<a
    href="https://doi.org/10.1038/s41570-021-00316-y">10.1038/s41570-021-00316-y</a>.
  short: I. Stone, R.L. Starr, Y. Zang, C. Nuckolls, M.L. Steigerwald, T.H. Lambert,
    X. Roy, L. Venkataraman, Nature Reviews Chemistry 5 (2021) 695–710.
date_created: 2024-09-06T13:11:31Z
date_published: 2021-08-25T00:00:00Z
date_updated: 2024-12-10T10:05:51Z
day: '25'
doi: 10.1038/s41570-021-00316-y
extern: '1'
external_id:
  pmid:
  - '37118183'
intvolume: '5'
issue: '10'
language:
- iso: eng
month: '08'
oa_version: None
page: 695-710
pmid: 1
publication: Nature Reviews Chemistry
publication_identifier:
  issn:
  - 2397-3358
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
scopus_import: '1'
status: public
title: A single-molecule blueprint for synthesis
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 5
year: '2021'
...
---
DOAJ_listed: '1'
OA_place: publisher
OA_type: gold
_id: '17899'
abstract:
- lang: eng
  text: Designing highly insulating sub-nanometer molecules is difficult because tunneling
    conductance increases exponentially with decreasing molecular length. This challenge
    is further enhanced by the fact that most molecules cannot achieve full conductance
    suppression with destructive quantum interference. Here, we present results for
    a series of small saturated heterocyclic alkanes where we show that conductance
    is suppressed due to destructive interference. Using the STM-BJ technique and
    density functional theory calculations, we confirm that their single-molecule
    junction conductance is lower than analogous alkanes of similar length. We rationalize
    the suppression of conductance in the junctions through analysis of the computed
    ballistic current density. We find there are highly symmetric ring currents, which
    reverse direction at the antiresonance in the Landauer transmission near the Fermi
    energy. This pattern has not been seen in earlier studies of larger bicyclic systems
    exhibiting interference effects and constitutes clear-cut evidence of destructive
    σ-interference. The finding of heterocyclic alkanes with destructive quantum interference
    charts a pathway for chemical design of short molecular insulators using organic
    molecules.
article_processing_charge: Yes
article_type: original
author:
- first_name: Boyuan
  full_name: Zhang, Boyuan
  last_name: Zhang
- first_name: Marc H.
  full_name: Garner, Marc H.
  last_name: Garner
- first_name: Liang
  full_name: Li, Liang
  last_name: Li
- first_name: Luis M.
  full_name: Campos, Luis M.
  last_name: Campos
- first_name: Gemma C.
  full_name: Solomon, Gemma C.
  last_name: Solomon
- first_name: Latha
  full_name: Venkataraman, Latha
  id: 9ebb78a5-cc0d-11ee-8322-fae086a32caf
  last_name: Venkataraman
  orcid: 0000-0002-6957-6089
citation:
  ama: 'Zhang B, Garner MH, Li L, Campos LM, Solomon GC, Venkataraman L. Destructive
    quantum interference in heterocyclic alkanes: The search for ultra-short molecular
    insulators. <i>Chemical Science</i>. 2021;12(30):10299-10305. doi:<a href="https://doi.org/10.1039/d1sc02287c">10.1039/d1sc02287c</a>'
  apa: 'Zhang, B., Garner, M. H., Li, L., Campos, L. M., Solomon, G. C., &#38; Venkataraman,
    L. (2021). Destructive quantum interference in heterocyclic alkanes: The search
    for ultra-short molecular insulators. <i>Chemical Science</i>. Royal Society of
    Chemistry. <a href="https://doi.org/10.1039/d1sc02287c">https://doi.org/10.1039/d1sc02287c</a>'
  chicago: 'Zhang, Boyuan, Marc H. Garner, Liang Li, Luis M. Campos, Gemma C. Solomon,
    and Latha Venkataraman. “Destructive Quantum Interference in Heterocyclic Alkanes:
    The Search for Ultra-Short Molecular Insulators.” <i>Chemical Science</i>. Royal
    Society of Chemistry, 2021. <a href="https://doi.org/10.1039/d1sc02287c">https://doi.org/10.1039/d1sc02287c</a>.'
  ieee: 'B. Zhang, M. H. Garner, L. Li, L. M. Campos, G. C. Solomon, and L. Venkataraman,
    “Destructive quantum interference in heterocyclic alkanes: The search for ultra-short
    molecular insulators,” <i>Chemical Science</i>, vol. 12, no. 30. Royal Society
    of Chemistry, pp. 10299–10305, 2021.'
  ista: 'Zhang B, Garner MH, Li L, Campos LM, Solomon GC, Venkataraman L. 2021. Destructive
    quantum interference in heterocyclic alkanes: The search for ultra-short molecular
    insulators. Chemical Science. 12(30), 10299–10305.'
  mla: 'Zhang, Boyuan, et al. “Destructive Quantum Interference in Heterocyclic Alkanes:
    The Search for Ultra-Short Molecular Insulators.” <i>Chemical Science</i>, vol.
    12, no. 30, Royal Society of Chemistry, 2021, pp. 10299–305, doi:<a href="https://doi.org/10.1039/d1sc02287c">10.1039/d1sc02287c</a>.'
  short: B. Zhang, M.H. Garner, L. Li, L.M. Campos, G.C. Solomon, L. Venkataraman,
    Chemical Science 12 (2021) 10299–10305.
date_created: 2024-09-09T06:42:54Z
date_published: 2021-06-30T00:00:00Z
date_updated: 2024-12-10T10:11:16Z
day: '30'
doi: 10.1039/d1sc02287c
extern: '1'
external_id:
  pmid:
  - '34476051'
intvolume: '12'
issue: '30'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.1039/D1SC02287C
month: '06'
oa: 1
oa_version: Published Version
page: 10299-10305
pmid: 1
publication: Chemical Science
publication_identifier:
  eissn:
  - 2041-6539
  issn:
  - 2041-6520
publication_status: published
publisher: Royal Society of Chemistry
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'Destructive quantum interference in heterocyclic alkanes: The search for ultra-short
  molecular insulators'
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 12
year: '2021'
...
---
OA_type: closed access
_id: '17900'
abstract:
- lang: eng
  text: To rival the performance of modern integrated circuits, single-molecule devices
    must be designed to exhibit extremely nonlinear current–voltage (I–V) characteristics.
    A common approach is to design molecular backbones where destructive quantum interference
    (QI) between the highest occupied molecular orbital (HOMO) and the lowest unoccupied
    molecular orbital (LUMO) produces a nonlinear energy-dependent tunnelling probability
    near the electrode Fermi energy (EF). However, tuning such systems is not
    straightforward, as aligning the frontier orbitals to EF is hard to control.
    Here, we instead create a molecular system where constructive QI between the HOMO
    and LUMO is suppressed and destructive QI between the HOMO and strongly coupled
    occupied orbitals of opposite phase is enhanced. We use a series of fluorene oligomers
    containing a central benzothiadiazole unit to demonstrate that this strategy
    can be used to create highly nonlinear single-molecule circuits. Notably, we are
    able to reproducibly modulate the conductance of a 6-nm molecule by a factor of
    more than 10^4.
article_processing_charge: No
author:
- first_name: Julia E.
  full_name: Greenwald, Julia E.
  last_name: Greenwald
- first_name: Joseph
  full_name: Cameron, Joseph
  last_name: Cameron
- first_name: Neil J.
  full_name: Findlay, Neil J.
  last_name: Findlay
- first_name: Tianren
  full_name: Fu, Tianren
  last_name: Fu
- first_name: Suman
  full_name: Gunasekaran, Suman
  last_name: Gunasekaran
- first_name: Peter J.
  full_name: Skabara, Peter J.
  last_name: Skabara
- first_name: Latha
  full_name: Venkataraman, Latha
  id: 9ebb78a5-cc0d-11ee-8322-fae086a32caf
  last_name: Venkataraman
  orcid: 0000-0002-6957-6089
citation:
  ama: Greenwald JE, Cameron J, Findlay NJ, et al. Highly nonlinear transport across
    single-molecule junctions via destructive quantum interference. <i>Nature Nanotechnology</i>.
    2021;16(3):313-317. doi:<a href="https://doi.org/10.1038/s41565-020-00807-x">10.1038/s41565-020-00807-x</a>
  apa: Greenwald, J. E., Cameron, J., Findlay, N. J., Fu, T., Gunasekaran, S., Skabara,
    P. J., &#38; Venkataraman, L. (2021). Highly nonlinear transport across single-molecule
    junctions via destructive quantum interference. <i>Nature Nanotechnology</i>.
    Springer Nature. <a href="https://doi.org/10.1038/s41565-020-00807-x">https://doi.org/10.1038/s41565-020-00807-x</a>
  chicago: Greenwald, Julia E., Joseph Cameron, Neil J. Findlay, Tianren Fu, Suman
    Gunasekaran, Peter J. Skabara, and Latha Venkataraman. “Highly Nonlinear Transport
    across Single-Molecule Junctions via Destructive Quantum Interference.” <i>Nature
    Nanotechnology</i>. Springer Nature, 2021. <a href="https://doi.org/10.1038/s41565-020-00807-x">https://doi.org/10.1038/s41565-020-00807-x</a>.
  ieee: J. E. Greenwald <i>et al.</i>, “Highly nonlinear transport across single-molecule
    junctions via destructive quantum interference,” <i>Nature Nanotechnology</i>,
    vol. 16, no. 3. Springer Nature, pp. 313–317, 2021.
  ista: Greenwald JE, Cameron J, Findlay NJ, Fu T, Gunasekaran S, Skabara PJ, Venkataraman
    L. 2021. Highly nonlinear transport across single-molecule junctions via destructive
    quantum interference. Nature Nanotechnology. 16(3), 313–317.
  mla: Greenwald, Julia E., et al. “Highly Nonlinear Transport across Single-Molecule
    Junctions via Destructive Quantum Interference.” <i>Nature Nanotechnology</i>,
    vol. 16, no. 3, Springer Nature, 2021, pp. 313–17, doi:<a href="https://doi.org/10.1038/s41565-020-00807-x">10.1038/s41565-020-00807-x</a>.
  short: J.E. Greenwald, J. Cameron, N.J. Findlay, T. Fu, S. Gunasekaran, P.J. Skabara,
    L. Venkataraman, Nature Nanotechnology 16 (2021) 313–317.
date_created: 2024-09-09T06:43:51Z
date_published: 2021-03-01T00:00:00Z
date_updated: 2024-12-10T10:20:32Z
day: '01'
doi: 10.1038/s41565-020-00807-x
extern: '1'
external_id:
  pmid:
  - '33288949'
intvolume: '16'
issue: '3'
language:
- iso: eng
month: '03'
oa_version: None
page: 313-317
pmid: 1
publication: Nature Nanotechnology
publication_identifier:
  eissn:
  - 1748-3395
  issn:
  - 1748-3387
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
scopus_import: '1'
status: public
title: Highly nonlinear transport across single-molecule junctions via destructive
  quantum interference
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 16
year: '2021'
...
---
OA_type: closed access
_id: '17901'
abstract:
- lang: eng
  text: A 1,1,2,2-tetrakis(4-aminophenyl)ethene with three paths of π-conjugation,
    linear-cis, linear-trans and a cross-conjugation, has been prepared. The molecule
    is able to bind to gold electrodes forming molecular junctions for single-molecule
    conductance measurements. Only two regimes of conduction are found experimentally.
    The modelling of the conductance allows us to assign them to through-bond transmission
    in the linear case, while the cross-conjugated channel is further assisted by
    through-space transmission, partially alleviating the destructive quantum interference.
article_processing_charge: No
article_type: original
author:
- first_name: Samara
  full_name: Medina Rivero, Samara
  last_name: Medina Rivero
- first_name: Paloma
  full_name: García Arroyo, Paloma
  last_name: García Arroyo
- first_name: Liang
  full_name: Li, Liang
  last_name: Li
- first_name: Suman
  full_name: Gunasekaran, Suman
  last_name: Gunasekaran
- first_name: Thijs
  full_name: Stuyver, Thijs
  last_name: Stuyver
- first_name: María José
  full_name: Mancheño, María José
  last_name: Mancheño
- first_name: Mercedes
  full_name: Alonso, Mercedes
  last_name: Alonso
- first_name: Latha
  full_name: Venkataraman, Latha
  id: 9ebb78a5-cc0d-11ee-8322-fae086a32caf
  last_name: Venkataraman
  orcid: 0000-0002-6957-6089
- first_name: José L.
  full_name: Segura, José L.
  last_name: Segura
- first_name: Juan
  full_name: Casado, Juan
  last_name: Casado
citation:
  ama: Medina Rivero S, García Arroyo P, Li L, et al. Single-molecule conductance
    in a unique cross-conjugated tetra(aminoaryl)ethene. <i>Chemical Communications</i>.
    2021;57(5):591-594. doi:<a href="https://doi.org/10.1039/d0cc07124b">10.1039/d0cc07124b</a>
  apa: Medina Rivero, S., García Arroyo, P., Li, L., Gunasekaran, S., Stuyver, T.,
    Mancheño, M. J., … Casado, J. (2021). Single-molecule conductance in a unique
    cross-conjugated tetra(aminoaryl)ethene. <i>Chemical Communications</i>. Royal
    Society of Chemistry. <a href="https://doi.org/10.1039/d0cc07124b">https://doi.org/10.1039/d0cc07124b</a>
  chicago: Medina Rivero, Samara, Paloma García Arroyo, Liang Li, Suman Gunasekaran,
    Thijs Stuyver, María José Mancheño, Mercedes Alonso, Latha Venkataraman, José
    L. Segura, and Juan Casado. “Single-Molecule Conductance in a Unique Cross-Conjugated
    Tetra(Aminoaryl)Ethene.” <i>Chemical Communications</i>. Royal Society of Chemistry,
    2021. <a href="https://doi.org/10.1039/d0cc07124b">https://doi.org/10.1039/d0cc07124b</a>.
  ieee: S. Medina Rivero <i>et al.</i>, “Single-molecule conductance in a unique cross-conjugated
    tetra(aminoaryl)ethene,” <i>Chemical Communications</i>, vol. 57, no. 5. Royal
    Society of Chemistry, pp. 591–594, 2021.
  ista: Medina Rivero S, García Arroyo P, Li L, Gunasekaran S, Stuyver T, Mancheño
    MJ, Alonso M, Venkataraman L, Segura JL, Casado J. 2021. Single-molecule conductance
    in a unique cross-conjugated tetra(aminoaryl)ethene. Chemical Communications.
    57(5), 591–594.
  mla: Medina Rivero, Samara, et al. “Single-Molecule Conductance in a Unique Cross-Conjugated
    Tetra(Aminoaryl)Ethene.” <i>Chemical Communications</i>, vol. 57, no. 5, Royal
    Society of Chemistry, 2021, pp. 591–94, doi:<a href="https://doi.org/10.1039/d0cc07124b">10.1039/d0cc07124b</a>.
  short: S. Medina Rivero, P. García Arroyo, L. Li, S. Gunasekaran, T. Stuyver, M.J.
    Mancheño, M. Alonso, L. Venkataraman, J.L. Segura, J. Casado, Chemical Communications
    57 (2021) 591–594.
date_created: 2024-09-09T06:44:58Z
date_published: 2021-05-01T00:00:00Z
date_updated: 2024-12-10T10:23:49Z
day: '01'
doi: 10.1039/d0cc07124b
extern: '1'
external_id:
  pmid:
  - '33325935'
intvolume: '57'
issue: '5'
language:
- iso: eng
month: '05'
oa_version: None
page: 591-594
pmid: 1
publication: Chemical Communications
publication_identifier:
  eissn:
  - 1364-548X
  issn:
  - 1359-7345
publication_status: published
publisher: Royal Society of Chemistry
quality_controlled: '1'
scopus_import: '1'
status: public
title: Single-molecule conductance in a unique cross-conjugated tetra(aminoaryl)ethene
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 57
year: '2021'
...
---
_id: '18192'
abstract:
- lang: eng
  text: Current quantum simulation experiments are starting to explore nonequilibrium
    many-body dynamics in previously inaccessible regimes in terms of system sizes
    and timescales. Therefore, the question emerges as to which observables are best
    suited to study the dynamics in such quantum many-body systems. Using machine
    learning techniques, we investigate the dynamics and, in particular, the thermalization
    behavior of an interacting quantum system that undergoes a nonequilibrium phase
    transition from an ergodic to a many-body localized phase. We employ supervised
    and unsupervised training methods to distinguish nonequilibrium from equilibrium
    data, using the network performance as a probe for the thermalization behavior
    of the system. We test our methods with experimental snapshots of ultracold atoms
    taken with a quantum gas microscope. Our results provide a path to analyze highly
    entangled large-scale quantum states for system sizes where numerical calculations
    of conventional observables become challenging.
article_number: '150504'
article_processing_charge: No
article_type: original
arxiv: 1
author:
- first_name: A.
  full_name: Bohrdt, A.
  last_name: Bohrdt
- first_name: S.
  full_name: Kim, S.
  last_name: Kim
- first_name: A.
  full_name: Lukin, A.
  last_name: Lukin
- first_name: M.
  full_name: Rispoli, M.
  last_name: Rispoli
- first_name: R.
  full_name: Schittko, R.
  last_name: Schittko
- first_name: M.
  full_name: Knap, M.
  last_name: Knap
- first_name: M.
  full_name: Greiner, M.
  last_name: Greiner
- first_name: Julian
  full_name: Leonard, Julian
  id: b75b3f45-7995-11ef-9bfd-9a9cd02c3577
  last_name: Leonard
citation:
  ama: Bohrdt A, Kim S, Lukin A, et al. Analyzing nonequilibrium quantum states through
    snapshots with artificial neural networks. <i>Physical Review Letters</i>. 2021;127(15).
    doi:<a href="https://doi.org/10.1103/physrevlett.127.150504">10.1103/physrevlett.127.150504</a>
  apa: Bohrdt, A., Kim, S., Lukin, A., Rispoli, M., Schittko, R., Knap, M., … Leonard,
    J. (2021). Analyzing nonequilibrium quantum states through snapshots with artificial
    neural networks. <i>Physical Review Letters</i>. American Physical Society. <a
    href="https://doi.org/10.1103/physrevlett.127.150504">https://doi.org/10.1103/physrevlett.127.150504</a>
  chicago: Bohrdt, A., S. Kim, A. Lukin, M. Rispoli, R. Schittko, M. Knap, M. Greiner,
    and Julian Leonard. “Analyzing Nonequilibrium Quantum States through Snapshots
    with Artificial Neural Networks.” <i>Physical Review Letters</i>. American Physical
    Society, 2021. <a href="https://doi.org/10.1103/physrevlett.127.150504">https://doi.org/10.1103/physrevlett.127.150504</a>.
  ieee: A. Bohrdt <i>et al.</i>, “Analyzing nonequilibrium quantum states through
    snapshots with artificial neural networks,” <i>Physical Review Letters</i>, vol.
    127, no. 15. American Physical Society, 2021.
  ista: Bohrdt A, Kim S, Lukin A, Rispoli M, Schittko R, Knap M, Greiner M, Leonard
    J. 2021. Analyzing nonequilibrium quantum states through snapshots with artificial
    neural networks. Physical Review Letters. 127(15), 150504.
  mla: Bohrdt, A., et al. “Analyzing Nonequilibrium Quantum States through Snapshots
    with Artificial Neural Networks.” <i>Physical Review Letters</i>, vol. 127, no.
    15, 150504, American Physical Society, 2021, doi:<a href="https://doi.org/10.1103/physrevlett.127.150504">10.1103/physrevlett.127.150504</a>.
  short: A. Bohrdt, S. Kim, A. Lukin, M. Rispoli, R. Schittko, M. Knap, M. Greiner,
    J. Leonard, Physical Review Letters 127 (2021).
date_created: 2024-10-07T11:47:11Z
date_published: 2021-10-08T00:00:00Z
date_updated: 2024-10-08T09:58:03Z
day: '08'
doi: 10.1103/physrevlett.127.150504
extern: '1'
external_id:
  arxiv:
  - '2012.11586'
intvolume: '127'
issue: '15'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2012.11586
month: '10'
oa: 1
oa_version: Preprint
publication: Physical Review Letters
publication_identifier:
  issn:
  - 0031-9007
  - 1079-7114
publication_status: published
publisher: American Physical Society
quality_controlled: '1'
scopus_import: '1'
status: public
title: Analyzing nonequilibrium quantum states through snapshots with artificial neural
  networks
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 127
year: '2021'
...
---
_id: '18193'
abstract:
- lang: eng
  text: "Topological states of matter, such as fractional quantum Hall states, are
    an active field of research due to their exotic excitations. In particular, ultracold
    atoms in optical lattices provide a highly controllable and adaptable platform
    to study such new types of quantum matter. However, finding a clear route to realize
    non-Abelian quantum Hall states in these systems remains challenging. Here we
    use the density-matrix renormalization-group (DMRG) method to study the Hofstadter-Bose-Hubbard
    model at filling factor \U0001D708=1 and find strong indications that at \U0001D6FC=1/6
    magnetic flux quanta per plaquette the ground state is a lattice analog of the
    continuum non-Abelian Pfaffian. We study the on-site correlations of the ground
    state, which indicate its paired nature at \U0001D708=1, and find an incompressible
    state characterized by a charge gap in the bulk. We argue that the emergence of
    a charge density wave on thin cylinders and the behavior of the two- and three-particle
    correlation functions at short distances provide evidence for the state being
    closely related to the continuum Pfaffian. The signatures discussed in this letter
    are accessible in current cold atom experiments and we show that the Pfaffian-like
    state is readily realizable in few-body systems using adiabatic preparation schemes."
article_number: L161101
article_processing_charge: No
article_type: letter_note
arxiv: 1
author:
- first_name: F. A.
  full_name: Palm, F. A.
  last_name: Palm
- first_name: M.
  full_name: Buser, M.
  last_name: Buser
- first_name: Julian
  full_name: Leonard, Julian
  id: b75b3f45-7995-11ef-9bfd-9a9cd02c3577
  last_name: Leonard
- first_name: M.
  full_name: Aidelsburger, M.
  last_name: Aidelsburger
- first_name: U.
  full_name: Schollwöck, U.
  last_name: Schollwöck
- first_name: F.
  full_name: Grusdt, F.
  last_name: Grusdt
citation:
  ama: Palm FA, Buser M, Leonard J, Aidelsburger M, Schollwöck U, Grusdt F. Bosonic
    Pfaffian state in the Hofstadter-Bose-Hubbard model. <i>Physical Review B</i>.
    2021;103(16). doi:<a href="https://doi.org/10.1103/physrevb.103.l161101">10.1103/physrevb.103.l161101</a>
  apa: Palm, F. A., Buser, M., Leonard, J., Aidelsburger, M., Schollwöck, U., &#38;
    Grusdt, F. (2021). Bosonic Pfaffian state in the Hofstadter-Bose-Hubbard model.
    <i>Physical Review B</i>. American Physical Society. <a href="https://doi.org/10.1103/physrevb.103.l161101">https://doi.org/10.1103/physrevb.103.l161101</a>
  chicago: Palm, F. A., M. Buser, Julian Leonard, M. Aidelsburger, U. Schollwöck,
    and F. Grusdt. “Bosonic Pfaffian State in the Hofstadter-Bose-Hubbard Model.”
    <i>Physical Review B</i>. American Physical Society, 2021. <a href="https://doi.org/10.1103/physrevb.103.l161101">https://doi.org/10.1103/physrevb.103.l161101</a>.
  ieee: F. A. Palm, M. Buser, J. Leonard, M. Aidelsburger, U. Schollwöck, and F. Grusdt,
    “Bosonic Pfaffian state in the Hofstadter-Bose-Hubbard model,” <i>Physical Review
    B</i>, vol. 103, no. 16. American Physical Society, 2021.
  ista: Palm FA, Buser M, Leonard J, Aidelsburger M, Schollwöck U, Grusdt F. 2021.
    Bosonic Pfaffian state in the Hofstadter-Bose-Hubbard model. Physical Review B.
    103(16), L161101.
  mla: Palm, F. A., et al. “Bosonic Pfaffian State in the Hofstadter-Bose-Hubbard
    Model.” <i>Physical Review B</i>, vol. 103, no. 16, L161101, American Physical
    Society, 2021, doi:<a href="https://doi.org/10.1103/physrevb.103.l161101">10.1103/physrevb.103.l161101</a>.
  short: F.A. Palm, M. Buser, J. Leonard, M. Aidelsburger, U. Schollwöck, F. Grusdt,
    Physical Review B 103 (2021).
date_created: 2024-10-07T11:47:51Z
date_published: 2021-04-15T00:00:00Z
date_updated: 2024-10-08T09:55:46Z
day: '15'
doi: 10.1103/physrevb.103.l161101
extern: '1'
external_id:
  arxiv:
  - '2011.02477'
intvolume: '103'
issue: '16'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2011.02477
month: '04'
oa: 1
oa_version: Preprint
publication: Physical Review B
publication_identifier:
  eissn:
  - 2469-9969
  issn:
  - 2469-9950
publication_status: published
publisher: American Physical Society
quality_controlled: '1'
scopus_import: '1'
status: public
title: Bosonic Pfaffian state in the Hofstadter-Bose-Hubbard model
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 103
year: '2021'
...
---
OA_place: repository
OA_type: green
_id: '18233'
abstract:
- lang: eng
  text: Neural network quantization enables the deployment of large models on resource-constrained
    devices. Current post-training quantization methods fall short in terms of accuracy
    for INT4 (or lower) but provide reasonable accuracy for INT8 (or above). In this
    work, we study the effect of quantization on the structure of the loss landscape.
    We show that the structure is flat and separable for mild quantization, enabling
    straightforward post-training quantization methods to achieve good results. We
    show that with more aggressive quantization, the loss landscape becomes highly
    non-separable with steep curvature, making the selection of quantization parameters
    more challenging. Armed with this understanding, we design a method that quantizes
    the layer parameters jointly, enabling significant accuracy improvement over current
    post-training quantization methods. Reference implementation is available at https://github.com/ynahshan/nn-quantization-pytorch/tree/master/lapq.
article_processing_charge: No
article_type: original
arxiv: 1
author:
- first_name: Yury
  full_name: Nahshan, Yury
  last_name: Nahshan
- first_name: Brian
  full_name: Chmiel, Brian
  last_name: Chmiel
- first_name: Chaim
  full_name: Baskin, Chaim
  last_name: Baskin
- first_name: Evgenii
  full_name: Zheltonozhskii, Evgenii
  last_name: Zheltonozhskii
- first_name: Ron
  full_name: Banner, Ron
  last_name: Banner
- first_name: Alexander
  full_name: Bronstein, Alexander
  id: 58f3726e-7cba-11ef-ad8b-e6e8cb3904e6
  last_name: Bronstein
  orcid: 0000-0001-9699-8730
- first_name: Avi
  full_name: Mendelson, Avi
  last_name: Mendelson
citation:
  ama: Nahshan Y, Chmiel B, Baskin C, et al. Loss aware post-training quantization.
    <i>Machine Learning</i>. 2021;110(11-12):3245-3262. doi:<a href="https://doi.org/10.1007/s10994-021-06053-z">10.1007/s10994-021-06053-z</a>
  apa: Nahshan, Y., Chmiel, B., Baskin, C., Zheltonozhskii, E., Banner, R., Bronstein,
    A. M., &#38; Mendelson, A. (2021). Loss aware post-training quantization. <i>Machine
    Learning</i>. Springer Nature. <a href="https://doi.org/10.1007/s10994-021-06053-z">https://doi.org/10.1007/s10994-021-06053-z</a>
  chicago: Nahshan, Yury, Brian Chmiel, Chaim Baskin, Evgenii Zheltonozhskii, Ron
    Banner, Alex M. Bronstein, and Avi Mendelson. “Loss Aware Post-Training Quantization.”
    <i>Machine Learning</i>. Springer Nature, 2021. <a href="https://doi.org/10.1007/s10994-021-06053-z">https://doi.org/10.1007/s10994-021-06053-z</a>.
  ieee: Y. Nahshan <i>et al.</i>, “Loss aware post-training quantization,” <i>Machine
    Learning</i>, vol. 110, no. 11–12. Springer Nature, pp. 3245–3262, 2021.
  ista: Nahshan Y, Chmiel B, Baskin C, Zheltonozhskii E, Banner R, Bronstein AM, Mendelson
    A. 2021. Loss aware post-training quantization. Machine Learning. 110(11–12),
    3245–3262.
  mla: Nahshan, Yury, et al. “Loss Aware Post-Training Quantization.” <i>Machine Learning</i>,
    vol. 110, no. 11–12, Springer Nature, 2021, pp. 3245–62, doi:<a href="https://doi.org/10.1007/s10994-021-06053-z">10.1007/s10994-021-06053-z</a>.
  short: Y. Nahshan, B. Chmiel, C. Baskin, E. Zheltonozhskii, R. Banner, A.M. Bronstein,
    A. Mendelson, Machine Learning 110 (2021) 3245–3262.
date_created: 2024-10-08T12:57:05Z
date_published: 2021-12-01T00:00:00Z
date_updated: 2024-10-15T07:33:28Z
day: '01'
doi: 10.1007/s10994-021-06053-z
extern: '1'
external_id:
  arxiv:
  - '1911.07190'
intvolume: '110'
issue: '11-12'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.1911.07190
month: '12'
oa: 1
oa_version: Preprint
page: 3245-3262
publication: Machine Learning
publication_identifier:
  eissn:
  - 1573-0565
  issn:
  - 0885-6125
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
related_material:
  link:
  - relation: software
    url: https://github.com/ynahshan/nn-quantization-pytorch/tree/master/lapq
scopus_import: '1'
status: public
title: Loss aware post-training quantization
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 110
year: '2021'
...
---
DOAJ_listed: '1'
OA_place: publisher
OA_type: gold
_id: '18234'
abstract:
- lang: eng
  text: Convolutional Neural Networks (CNNs) are very popular in many fields including
    computer vision, speech recognition, natural language processing, etc. Though
    deep learning leads to groundbreaking performance in those domains, the networks
    used are very computationally demanding and are far from being able to perform
    in real-time applications even on a GPU, which is not power efficient and therefore
    does not suit low power systems such as mobile devices. To overcome this challenge,
    some solutions have been proposed for quantizing the weights and activations of
    these networks, which accelerate the runtime significantly. Yet, this acceleration
    comes at the cost of a larger error unless spatial adjustments are carried out.
    The method proposed in this work trains quantized neural networks by noise injection
    and a learned clamping, which improve accuracy. This leads to state-of-the-art
    results on various regression and classification tasks, e.g., ImageNet classification
    with architectures such as ResNet-18/34/50 with as low as 3 bit weights and activations.
    We implement the proposed solution on an FPGA to demonstrate its applicability
    for low-power real-time applications. The quantization code will become publicly
    available upon acceptance.
article_number: '2144'
article_processing_charge: No
article_type: original
arxiv: 1
author:
- first_name: Chaim
  full_name: Baskin, Chaim
  last_name: Baskin
- first_name: Evgenii
  full_name: Zheltonozhkii, Evgenii
  last_name: Zheltonozhkii
- first_name: Tal
  full_name: Rozen, Tal
  last_name: Rozen
- first_name: Natan
  full_name: Liss, Natan
  last_name: Liss
- first_name: Yoav
  full_name: Chai, Yoav
  last_name: Chai
- first_name: Eli
  full_name: Schwartz, Eli
  last_name: Schwartz
- first_name: Raja
  full_name: Giryes, Raja
  last_name: Giryes
- first_name: Alexander
  full_name: Bronstein, Alexander
  id: 58f3726e-7cba-11ef-ad8b-e6e8cb3904e6
  last_name: Bronstein
  orcid: 0000-0001-9699-8730
- first_name: Avi
  full_name: Mendelson, Avi
  last_name: Mendelson
citation:
  ama: 'Baskin C, Zheltonozhkii E, Rozen T, et al. NICE: Noise Injection and Clamping
    Estimation for neural network quantization. <i>Mathematics</i>. 2021;9(17). doi:<a
    href="https://doi.org/10.3390/math9172144">10.3390/math9172144</a>'
  apa: 'Baskin, C., Zheltonozhkii, E., Rozen, T., Liss, N., Chai, Y., Schwartz, E.,
    … Mendelson, A. (2021). NICE: Noise Injection and Clamping Estimation for neural
    network quantization. <i>Mathematics</i>. MDPI. <a href="https://doi.org/10.3390/math9172144">https://doi.org/10.3390/math9172144</a>'
  chicago: 'Baskin, Chaim, Evgenii Zheltonozhkii, Tal Rozen, Natan Liss, Yoav Chai,
    Eli Schwartz, Raja Giryes, Alex M. Bronstein, and Avi Mendelson. “NICE: Noise
    Injection and Clamping Estimation for Neural Network Quantization.” <i>Mathematics</i>.
    MDPI, 2021. <a href="https://doi.org/10.3390/math9172144">https://doi.org/10.3390/math9172144</a>.'
  ieee: 'C. Baskin <i>et al.</i>, “NICE: Noise Injection and Clamping Estimation for
    neural network quantization,” <i>Mathematics</i>, vol. 9, no. 17. MDPI, 2021.'
  ista: 'Baskin C, Zheltonozhkii E, Rozen T, Liss N, Chai Y, Schwartz E, Giryes R,
    Bronstein AM, Mendelson A. 2021. NICE: Noise Injection and Clamping Estimation
    for neural network quantization. Mathematics. 9(17), 2144.'
  mla: 'Baskin, Chaim, et al. “NICE: Noise Injection and Clamping Estimation for Neural
    Network Quantization.” <i>Mathematics</i>, vol. 9, no. 17, 2144, MDPI, 2021, doi:<a
    href="https://doi.org/10.3390/math9172144">10.3390/math9172144</a>.'
  short: C. Baskin, E. Zheltonozhkii, T. Rozen, N. Liss, Y. Chai, E. Schwartz, R.
    Giryes, A.M. Bronstein, A. Mendelson, Mathematics 9 (2021).
date_created: 2024-10-08T12:57:24Z
date_published: 2021-09-02T00:00:00Z
date_updated: 2024-10-15T07:37:39Z
day: '02'
doi: 10.3390/math9172144
extern: '1'
external_id:
  arxiv:
  - '1810.00162'
intvolume: '9'
issue: '17'
language:
- iso: eng
month: '09'
oa_version: Published Version
publication: Mathematics
publication_identifier:
  issn:
  - 2227-7390
publication_status: published
publisher: MDPI
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'NICE: Noise Injection and Clamping Estimation for neural network quantization'
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 9
year: '2021'
...
---
OA_place: repository
OA_type: green
_id: '18235'
abstract:
- lang: eng
  text: 'Recently, great progress has been made in the field of Few-Shot Learning
    (FSL). While many different methods have been proposed, one of the key factors
    leading to higher FSL performance is surprisingly simple. It is the backbone network
    architecture used to embed the images of the few-shot tasks. While first works
    on FSL resorted to small architectures with just a few convolution layers, recent
    works show that large architectures pre-trained on the training portion of FSL
    datasets produce strong features that are more easily transferable to novel few-shot
    tasks, thus yielding significant gains for the methods using them. Despite these observations,
    little to no work has been done towards finding the right backbone for FSL. In
    this paper we propose MetAdapt that not only meta-searches for an optimized architecture
    for FSL using Network Architecture Search (NAS), but also results in a model that
    can adaptively ‘re-wire’ itself predicting the better architecture for a given
    novel few-shot task. Using the proposed approach we observe strong results on
    two popular few-shot benchmarks: miniImageNet and FC100.'
article_processing_charge: No
article_type: original
arxiv: 1
author:
- first_name: Sivan
  full_name: Doveh, Sivan
  last_name: Doveh
- first_name: Eli
  full_name: Schwartz, Eli
  last_name: Schwartz
- first_name: Chao
  full_name: Xue, Chao
  last_name: Xue
- first_name: Rogerio
  full_name: Feris, Rogerio
  last_name: Feris
- first_name: Alexander
  full_name: Bronstein, Alexander
  id: 58f3726e-7cba-11ef-ad8b-e6e8cb3904e6
  last_name: Bronstein
  orcid: 0000-0001-9699-8730
- first_name: Raja
  full_name: Giryes, Raja
  last_name: Giryes
- first_name: Leonid
  full_name: Karlinsky, Leonid
  last_name: Karlinsky
citation:
  ama: 'Doveh S, Schwartz E, Xue C, et al. MetAdapt: Meta-learned task-adaptive architecture
    for few-shot classification. <i>Pattern Recognition Letters</i>. 2021;149:130-136.
    doi:<a href="https://doi.org/10.1016/j.patrec.2021.05.010">10.1016/j.patrec.2021.05.010</a>'
  apa: 'Doveh, S., Schwartz, E., Xue, C., Feris, R., Bronstein, A. M., Giryes, R.,
    &#38; Karlinsky, L. (2021). MetAdapt: Meta-learned task-adaptive architecture
    for few-shot classification. <i>Pattern Recognition Letters</i>. Elsevier. <a
    href="https://doi.org/10.1016/j.patrec.2021.05.010">https://doi.org/10.1016/j.patrec.2021.05.010</a>'
  chicago: 'Doveh, Sivan, Eli Schwartz, Chao Xue, Rogerio Feris, Alex M. Bronstein,
    Raja Giryes, and Leonid Karlinsky. “MetAdapt: Meta-Learned Task-Adaptive Architecture
    for Few-Shot Classification.” <i>Pattern Recognition Letters</i>. Elsevier, 2021.
    <a href="https://doi.org/10.1016/j.patrec.2021.05.010">https://doi.org/10.1016/j.patrec.2021.05.010</a>.'
  ieee: 'S. Doveh <i>et al.</i>, “MetAdapt: Meta-learned task-adaptive architecture
    for few-shot classification,” <i>Pattern Recognition Letters</i>, vol. 149. Elsevier,
    pp. 130–136, 2021.'
  ista: 'Doveh S, Schwartz E, Xue C, Feris R, Bronstein AM, Giryes R, Karlinsky L.
    2021. MetAdapt: Meta-learned task-adaptive architecture for few-shot classification.
    Pattern Recognition Letters. 149, 130–136.'
  mla: 'Doveh, Sivan, et al. “MetAdapt: Meta-Learned Task-Adaptive Architecture for
    Few-Shot Classification.” <i>Pattern Recognition Letters</i>, vol. 149, Elsevier,
    2021, pp. 130–36, doi:<a href="https://doi.org/10.1016/j.patrec.2021.05.010">10.1016/j.patrec.2021.05.010</a>.'
  short: S. Doveh, E. Schwartz, C. Xue, R. Feris, A.M. Bronstein, R. Giryes, L. Karlinsky,
    Pattern Recognition Letters 149 (2021) 130–136.
date_created: 2024-10-08T12:57:53Z
date_published: 2021-09-01T00:00:00Z
date_updated: 2024-10-15T07:39:56Z
day: '01'
doi: 10.1016/j.patrec.2021.05.010
extern: '1'
external_id:
  arxiv:
  - '1912.00412'
intvolume: '149'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.1912.00412
month: '09'
oa: 1
oa_version: Preprint
page: 130-136
publication: Pattern Recognition Letters
publication_identifier:
  issn:
  - 0167-8655
publication_status: published
publisher: Elsevier
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'MetAdapt: Meta-learned task-adaptive architecture for few-shot classification'
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 149
year: '2021'
...
---
OA_place: publisher
OA_type: free access
_id: '18236'
abstract:
- lang: eng
  text: 'Despite their great promise, artificial intelligence (AI) systems have yet
    to become ubiquitous in the daily practice of medicine largely due to several
    crucial unmet needs of healthcare practitioners. These include lack of explanations
    in clinically meaningful terms, handling the presence of unknown medical conditions,
    and transparency regarding the system’s limitations, both in terms of statistical
    performance as well as recognizing situations for which the system’s predictions
    are irrelevant. We articulate these unmet clinical needs as machine-learning (ML)
    problems and systematically address them with cutting-edge ML techniques. We focus
    on electrocardiogram (ECG) analysis as an example domain in which AI has great
    potential and tackle two challenging tasks: the detection of a heterogeneous mix
    of known and unknown arrhythmias from ECG and the identification of underlying
    cardio-pathology from segments annotated as normal sinus rhythm recorded in patients
    with an intermittent arrhythmia. We validate our methods by simulating a screening
    for arrhythmias in a large-scale population while adhering to statistical significance
    requirements. Specifically, our system 1) visualizes the relative importance of
    each part of an ECG segment for the final model decision; 2) upholds specified
    statistical constraints on its out-of-sample performance and provides uncertainty
    estimation for its predictions; 3) handles inputs containing unknown rhythm types;
    and 4) handles data from unseen patients while also flagging cases in which the
    model’s outputs are not usable for a specific patient. This work represents a
    significant step toward overcoming the limitations currently impeding the integration
    of AI into clinical practice in cardiology and medicine in general.'
article_number: e2020620118
article_processing_charge: No
article_type: original
author:
- first_name: Yonatan
  full_name: Elul, Yonatan
  last_name: Elul
- first_name: Aviv A.
  full_name: Rosenberg, Aviv A.
  last_name: Rosenberg
- first_name: Assaf
  full_name: Schuster, Assaf
  last_name: Schuster
- first_name: Alexander
  full_name: Bronstein, Alexander
  id: 58f3726e-7cba-11ef-ad8b-e6e8cb3904e6
  last_name: Bronstein
  orcid: 0000-0001-9699-8730
- first_name: Yael
  full_name: Yaniv, Yael
  last_name: Yaniv
citation:
  ama: Elul Y, Rosenberg AA, Schuster A, Bronstein AM, Yaniv Y. Meeting the unmet
    needs of clinicians from AI systems showcased for cardiology with deep-learning–based
    ECG analysis. <i>Proceedings of the National Academy of Sciences</i>. 2021;118(24).
    doi:<a href="https://doi.org/10.1073/pnas.2020620118">10.1073/pnas.2020620118</a>
  apa: Elul, Y., Rosenberg, A. A., Schuster, A., Bronstein, A. M., &#38; Yaniv, Y.
    (2021). Meeting the unmet needs of clinicians from AI systems showcased for cardiology
    with deep-learning–based ECG analysis. <i>Proceedings of the National Academy
    of Sciences</i>. National Academy of Sciences. <a href="https://doi.org/10.1073/pnas.2020620118">https://doi.org/10.1073/pnas.2020620118</a>
  chicago: Elul, Yonatan, Aviv A. Rosenberg, Assaf Schuster, Alex M. Bronstein, and
    Yael Yaniv. “Meeting the Unmet Needs of Clinicians from AI Systems Showcased for
    Cardiology with Deep-Learning–Based ECG Analysis.” <i>Proceedings of the National
    Academy of Sciences</i>. National Academy of Sciences, 2021. <a href="https://doi.org/10.1073/pnas.2020620118">https://doi.org/10.1073/pnas.2020620118</a>.
  ieee: Y. Elul, A. A. Rosenberg, A. Schuster, A. M. Bronstein, and Y. Yaniv, “Meeting
    the unmet needs of clinicians from AI systems showcased for cardiology with deep-learning–based
    ECG analysis,” <i>Proceedings of the National Academy of Sciences</i>, vol. 118,
    no. 24. National Academy of Sciences, 2021.
  ista: Elul Y, Rosenberg AA, Schuster A, Bronstein AM, Yaniv Y. 2021. Meeting the
    unmet needs of clinicians from AI systems showcased for cardiology with deep-learning–based
    ECG analysis. Proceedings of the National Academy of Sciences. 118(24), e2020620118.
  mla: Elul, Yonatan, et al. “Meeting the Unmet Needs of Clinicians from AI Systems
    Showcased for Cardiology with Deep-Learning–Based ECG Analysis.” <i>Proceedings
    of the National Academy of Sciences</i>, vol. 118, no. 24, e2020620118, National
    Academy of Sciences, 2021, doi:<a href="https://doi.org/10.1073/pnas.2020620118">10.1073/pnas.2020620118</a>.
  short: Y. Elul, A.A. Rosenberg, A. Schuster, A.M. Bronstein, Y. Yaniv, Proceedings
    of the National Academy of Sciences 118 (2021).
date_created: 2024-10-08T12:58:09Z
date_published: 2021-06-07T00:00:00Z
date_updated: 2024-10-15T07:43:01Z
day: '07'
doi: 10.1073/pnas.2020620118
extern: '1'
external_id:
  pmid:
  - '34099565'
intvolume: '118'
issue: '24'
language:
- iso: eng
month: '06'
oa_version: Published Version
pmid: 1
publication: Proceedings of the National Academy of Sciences
publication_identifier:
  eissn:
  - 1091-6490
  issn:
  - 0027-8424
publication_status: published
publisher: National Academy of Sciences
quality_controlled: '1'
scopus_import: '1'
status: public
title: Meeting the unmet needs of clinicians from AI systems showcased for cardiology
  with deep-learning–based ECG analysis
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 118
year: '2021'
...
---
OA_place: repository
OA_type: green
_id: '18237'
abstract:
- lang: eng
  text: We present a novel method for neural network quantization. Our method, named
    UNIQ, emulates a non-uniform k-quantile quantizer and adapts the model to perform
    well with quantized weights by injecting noise to the weights at training time.
    As a by-product of injecting noise to weights, we find that activations can also
    be quantized to as low as 8-bit with only a minor accuracy degradation. Our non-uniform
    quantization approach provides a novel alternative to the existing uniform quantization
    techniques for neural networks. We further propose a novel complexity metric of
    number of bit operations performed (BOPs), and we show that this metric has a
    linear relation with logic utilization and power. We suggest evaluating the trade-off
    of accuracy vs. complexity (BOPs). The proposed method, when evaluated on ResNet18/34/50
    and MobileNet on ImageNet, outperforms the prior state of the art both in the
    low-complexity regime and the high accuracy regime. We demonstrate the practical
    applicability of this approach, by implementing our non-uniformly quantized CNN
    on FPGA.
article_processing_charge: No
article_type: original
arxiv: 1
author:
- first_name: Chaim
  full_name: Baskin, Chaim
  last_name: Baskin
- first_name: Natan
  full_name: Liss, Natan
  last_name: Liss
- first_name: Eli
  full_name: Schwartz, Eli
  last_name: Schwartz
- first_name: Evgenii
  full_name: Zheltonozhskii, Evgenii
  last_name: Zheltonozhskii
- first_name: Raja
  full_name: Giryes, Raja
  last_name: Giryes
- first_name: Alexander
  full_name: Bronstein, Alexander
  id: 58f3726e-7cba-11ef-ad8b-e6e8cb3904e6
  last_name: Bronstein
  orcid: 0000-0001-9699-8730
- first_name: Avi
  full_name: Mendelson, Avi
  last_name: Mendelson
citation:
  ama: 'Baskin C, Liss N, Schwartz E, et al. UNIQ: Uniform Noise Injection for Non-Uniform
    Quantization of neural networks. <i>ACM Transactions on Computer Systems</i>.
    2021;37(1-4):1-15. doi:<a href="https://doi.org/10.1145/3444943">10.1145/3444943</a>'
  apa: 'Baskin, C., Liss, N., Schwartz, E., Zheltonozhskii, E., Giryes, R., Bronstein,
    A. M., &#38; Mendelson, A. (2021). UNIQ: Uniform Noise Injection for Non-Uniform
    Quantization of neural networks. <i>ACM Transactions on Computer Systems</i>.
    Association for Computing Machinery. <a href="https://doi.org/10.1145/3444943">https://doi.org/10.1145/3444943</a>'
  chicago: 'Baskin, Chaim, Natan Liss, Eli Schwartz, Evgenii Zheltonozhskii, Raja
    Giryes, Alex M. Bronstein, and Avi Mendelson. “UNIQ: Uniform Noise Injection for
    Non-Uniform Quantization of Neural Networks.” <i>ACM Transactions on Computer
    Systems</i>. Association for Computing Machinery, 2021. <a href="https://doi.org/10.1145/3444943">https://doi.org/10.1145/3444943</a>.'
  ieee: 'C. Baskin <i>et al.</i>, “UNIQ: Uniform Noise Injection for Non-Uniform Quantization
    of neural networks,” <i>ACM Transactions on Computer Systems</i>, vol. 37, no.
    1–4. Association for Computing Machinery, pp. 1–15, 2021.'
  ista: 'Baskin C, Liss N, Schwartz E, Zheltonozhskii E, Giryes R, Bronstein AM, Mendelson
    A. 2021. UNIQ: Uniform Noise Injection for Non-Uniform Quantization of neural
    networks. ACM Transactions on Computer Systems. 37(1–4), 1–15.'
  mla: 'Baskin, Chaim, et al. “UNIQ: Uniform Noise Injection for Non-Uniform Quantization
    of Neural Networks.” <i>ACM Transactions on Computer Systems</i>, vol. 37, no.
    1–4, Association for Computing Machinery, 2021, pp. 1–15, doi:<a href="https://doi.org/10.1145/3444943">10.1145/3444943</a>.'
  short: C. Baskin, N. Liss, E. Schwartz, E. Zheltonozhskii, R. Giryes, A.M. Bronstein,
    A. Mendelson, ACM Transactions on Computer Systems 37 (2021) 1–15.
date_created: 2024-10-08T12:58:26Z
date_published: 2021-03-26T00:00:00Z
date_updated: 2024-10-15T07:47:22Z
day: '26'
doi: 10.1145/3444943
extern: '1'
external_id:
  arxiv:
  - '1804.10969'
intvolume: '37'
issue: '1-4'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.1804.10969
month: '03'
oa: 1
oa_version: Preprint
page: 1-15
publication: ACM Transactions on Computer Systems
publication_identifier:
  eissn:
  - 1557-7333
  issn:
  - 0734-2071
publication_status: published
publisher: Association for Computing Machinery
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'UNIQ: Uniform Noise Injection for Non-Uniform Quantization of neural networks'
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 37
year: '2021'
...
---
OA_place: publisher
OA_type: gold
_id: '18238'
abstract:
- lang: eng
  text: The demand for running NNs in embedded environments has increased significantly
    in recent years due to the success of convolutional neural network (CNN) approaches
    in various tasks, including image recognition and generation.
    The task of achieving high accuracy on resource-restricted devices, however, is
    still considered to be challenging, which is mainly due to the vast number of
    design parameters that need to be balanced. While the quantization of CNN parameters
    leads to a reduction of power and area, it can also generate unexpected changes
    in the balance between communication and computation. This change is hard to evaluate,
    and the lack of balance may lead to lower utilization of either memory bandwidth
    or computational resources, thereby reducing performance. This paper introduces
    a hardware performance analysis framework for identifying bottlenecks in the early
    stages of CNN hardware design. We demonstrate how the proposed method can help
    in evaluating different architecture alternatives of resource-restricted CNN accelerators
    (e.g., part of real-time embedded systems) early in design stages and, thus, prevent
    making design mistakes.
article_number: '717'
article_processing_charge: No
article_type: original
author:
- first_name: Alex
  full_name: Karbachevsky, Alex
  last_name: Karbachevsky
- first_name: Chaim
  full_name: Baskin, Chaim
  last_name: Baskin
- first_name: Evgenii
  full_name: Zheltonozhskii, Evgenii
  last_name: Zheltonozhskii
- first_name: Yevgeny
  full_name: Yermolin, Yevgeny
  last_name: Yermolin
- first_name: Freddy
  full_name: Gabbay, Freddy
  last_name: Gabbay
- first_name: Alexander
  full_name: Bronstein, Alexander
  id: 58f3726e-7cba-11ef-ad8b-e6e8cb3904e6
  last_name: Bronstein
  orcid: 0000-0001-9699-8730
- first_name: Avi
  full_name: Mendelson, Avi
  last_name: Mendelson
citation:
  ama: Karbachevsky A, Baskin C, Zheltonozhskii E, et al. Early-stage neural network
    hardware performance analysis. <i>Sustainability</i>. 2021;13(2). doi:<a href="https://doi.org/10.3390/su13020717">10.3390/su13020717</a>
  apa: Karbachevsky, A., Baskin, C., Zheltonozhskii, E., Yermolin, Y., Gabbay, F.,
    Bronstein, A. M., &#38; Mendelson, A. (2021). Early-stage neural network hardware
    performance analysis. <i>Sustainability</i>. MDPI. <a href="https://doi.org/10.3390/su13020717">https://doi.org/10.3390/su13020717</a>
  chicago: Karbachevsky, Alex, Chaim Baskin, Evgenii Zheltonozhskii, Yevgeny Yermolin,
    Freddy Gabbay, Alex M. Bronstein, and Avi Mendelson. “Early-Stage Neural Network
    Hardware Performance Analysis.” <i>Sustainability</i>. MDPI, 2021. <a href="https://doi.org/10.3390/su13020717">https://doi.org/10.3390/su13020717</a>.
  ieee: A. Karbachevsky <i>et al.</i>, “Early-stage neural network hardware performance
    analysis,” <i>Sustainability</i>, vol. 13, no. 2. MDPI, 2021.
  ista: Karbachevsky A, Baskin C, Zheltonozhskii E, Yermolin Y, Gabbay F, Bronstein
    AM, Mendelson A. 2021. Early-stage neural network hardware performance analysis.
    Sustainability. 13(2), 717.
  mla: Karbachevsky, Alex, et al. “Early-Stage Neural Network Hardware Performance
    Analysis.” <i>Sustainability</i>, vol. 13, no. 2, 717, MDPI, 2021, doi:<a href="https://doi.org/10.3390/su13020717">10.3390/su13020717</a>.
  short: A. Karbachevsky, C. Baskin, E. Zheltonozhskii, Y. Yermolin, F. Gabbay, A.M.
    Bronstein, A. Mendelson, Sustainability 13 (2021).
date_created: 2024-10-08T12:58:47Z
date_published: 2021-01-13T00:00:00Z
date_updated: 2024-10-15T08:17:49Z
day: '13'
doi: 10.3390/su13020717
extern: '1'
intvolume: '13'
issue: '2'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.3390/su13020717
month: '01'
oa: 1
oa_version: Published Version
publication: Sustainability
publication_identifier:
  issn:
  - 2071-1050
publication_status: published
publisher: MDPI
quality_controlled: '1'
scopus_import: '1'
status: public
title: Early-stage neural network hardware performance analysis
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 13
year: '2021'
...
---
OA_place: repository
OA_type: green
_id: '18239'
abstract:
- lang: eng
  text: Nowadays, there is an abundance of data involving images and surrounding free-form
    text weakly corresponding to those images. Weakly Supervised phrase-Grounding
    (WSG) deals with the task of using this data to learn to localize (or to ground)
    arbitrary text phrases in images without any additional annotations. However,
    most recent SotA methods for WSG assume an existence of a pre-trained object detector,
    relying on it to produce the ROIs for localization. In this work, we focus on
    the task of Detector-Free WSG (DF-WSG) to solve WSG without relying on a pre-trained
    detector. The key idea behind our proposed Grounding by Separation (GbS) method
    is synthesizing ‘text to image-regions’ associations by random alpha-blending
    of arbitrary image pairs and using the corresponding texts of the pair as conditions
    to recover the alpha map from the blended image via a segmentation network. At
    test time, this allows using the query phrase as a condition for a non-blended
    query image, thus interpreting the test image as a composition of a region corresponding
    to the phrase and the complement region. Our GbS shows an 8.5% accuracy improvement
    over previous DF-WSG SotA, for a range of benchmarks including Flickr30K, Visual
    Genome, and ReferIt, as well as a complementary improvement (above 7%) over the
    detector-based approaches for WSG.
article_processing_charge: No
arxiv: 1
author:
- first_name: Assaf
  full_name: Arbelle, Assaf
  last_name: Arbelle
- first_name: Sivan
  full_name: Doveh, Sivan
  last_name: Doveh
- first_name: Amit
  full_name: Alfassy, Amit
  last_name: Alfassy
- first_name: Joseph
  full_name: Shtok, Joseph
  last_name: Shtok
- first_name: Guy
  full_name: Lev, Guy
  last_name: Lev
- first_name: Eli
  full_name: Schwartz, Eli
  last_name: Schwartz
- first_name: Hilde
  full_name: Kuehne, Hilde
  last_name: Kuehne
- first_name: Hila Barak
  full_name: Levi, Hila Barak
  last_name: Levi
- first_name: Prasanna
  full_name: Sattigeri, Prasanna
  last_name: Sattigeri
- first_name: Rameswar
  full_name: Panda, Rameswar
  last_name: Panda
- first_name: Chun-Fu
  full_name: Chen, Chun-Fu
  last_name: Chen
- first_name: Alexander
  full_name: Bronstein, Alexander
  id: 58f3726e-7cba-11ef-ad8b-e6e8cb3904e6
  last_name: Bronstein
  orcid: 0000-0001-9699-8730
- first_name: Kate
  full_name: Saenko, Kate
  last_name: Saenko
- first_name: Shimon
  full_name: Ullman, Shimon
  last_name: Ullman
- first_name: Raja
  full_name: Giryes, Raja
  last_name: Giryes
- first_name: Rogerio
  full_name: Feris, Rogerio
  last_name: Feris
- first_name: Leonid
  full_name: Karlinsky, Leonid
  last_name: Karlinsky
citation:
  ama: 'Arbelle A, Doveh S, Alfassy A, et al. Detector-free weakly supervised grounding
    by separation. In: <i>IEEE/CVF International Conference on Computer Vision</i>.
    Vol 15. Institute of Electrical and Electronics Engineers; 2021. doi:<a href="https://doi.org/10.1109/iccv48922.2021.00182">10.1109/iccv48922.2021.00182</a>'
  apa: 'Arbelle, A., Doveh, S., Alfassy, A., Shtok, J., Lev, G., Schwartz, E., … Karlinsky,
    L. (2021). Detector-free weakly supervised grounding by separation. In <i>IEEE/CVF
    International Conference on Computer Vision</i> (Vol. 15). Montreal, Canada: Institute
    of Electrical and Electronics Engineers. <a href="https://doi.org/10.1109/iccv48922.2021.00182">https://doi.org/10.1109/iccv48922.2021.00182</a>'
  chicago: Arbelle, Assaf, Sivan Doveh, Amit Alfassy, Joseph Shtok, Guy Lev, Eli Schwartz,
    Hilde Kuehne, et al. “Detector-Free Weakly Supervised Grounding by Separation.”
    In <i>IEEE/CVF International Conference on Computer Vision</i>, Vol. 15. Institute
    of Electrical and Electronics Engineers, 2021. <a href="https://doi.org/10.1109/iccv48922.2021.00182">https://doi.org/10.1109/iccv48922.2021.00182</a>.
  ieee: A. Arbelle <i>et al.</i>, “Detector-free weakly supervised grounding by separation,”
    in <i>IEEE/CVF International Conference on Computer Vision</i>, Montreal, Canada,
    2021, vol. 15.
  ista: 'Arbelle A, Doveh S, Alfassy A, Shtok J, Lev G, Schwartz E, Kuehne H, Levi
    HB, Sattigeri P, Panda R, Chen C-F, Bronstein AM, Saenko K, Ullman S, Giryes R,
    Feris R, Karlinsky L. 2021. Detector-free weakly supervised grounding by separation.
    IEEE/CVF International Conference on Computer Vision. ICCV: International Conference
    on Computer Vision vol. 15.'
  mla: Arbelle, Assaf, et al. “Detector-Free Weakly Supervised Grounding by Separation.”
    <i>IEEE/CVF International Conference on Computer Vision</i>, vol. 15, Institute
    of Electrical and Electronics Engineers, 2021, doi:<a href="https://doi.org/10.1109/iccv48922.2021.00182">10.1109/iccv48922.2021.00182</a>.
  short: A. Arbelle, S. Doveh, A. Alfassy, J. Shtok, G. Lev, E. Schwartz, H. Kuehne,
    H.B. Levi, P. Sattigeri, R. Panda, C.-F. Chen, A.M. Bronstein, K. Saenko, S. Ullman,
    R. Giryes, R. Feris, L. Karlinsky, in:, IEEE/CVF International Conference on Computer
    Vision, Institute of Electrical and Electronics Engineers, 2021.
conference:
  end_date: 2021-10-17
  location: Montreal, Canada
  name: 'ICCV: International Conference on Computer Vision'
  start_date: 2021-10-10
date_created: 2024-10-08T13:02:34Z
date_published: 2021-10-20T00:00:00Z
date_updated: 2024-10-15T08:22:47Z
day: '20'
doi: 10.1109/iccv48922.2021.00182
extern: '1'
external_id:
  arxiv:
  - '2104.09829'
intvolume: '15'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2104.09829
month: '10'
oa: 1
oa_version: Preprint
publication: IEEE/CVF International Conference on Computer Vision
publication_identifier:
  eisbn:
  - '9781665428125'
publication_status: published
publisher: Institute of Electrical and Electronics Engineers
quality_controlled: '1'
scopus_import: '1'
status: public
title: Detector-free weakly supervised grounding by separation
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 15
year: '2021'
...
---
OA_place: repository
OA_type: green
_id: '18240'
abstract:
- lang: eng
  text: Mechanical image stabilization using actuated gimbals enables capturing long-exposure
    shots without suffering from blur due to camera motion. These devices, however,
    are often physically cumbersome and expensive, limiting their widespread use.
    In this work, we propose to digitally emulate a mechanically stabilized system
    from the input of a fast unstabilized camera. To exploit the trade-off between
    motion blur at long exposures and low SNR at short exposures, we train a CNN that
    estimates a sharp high-SNR image by aggregating a burst of noisy short-exposure
    frames, related by unknown motion. We further suggest learning the burst’s exposure
    times in an end-to-end manner, thus balancing the noise and blur across the frames.
    We demonstrate this method’s advantage over the traditional approach of deblurring
    a single image or denoising a fixed-exposure burst on both synthetic and real
    data.
article_processing_charge: No
arxiv: 1
author:
- first_name: Omer
  full_name: Dahary, Omer
  last_name: Dahary
- first_name: Matan
  full_name: Jacoby, Matan
  last_name: Jacoby
- first_name: Alexander
  full_name: Bronstein, Alexander
  id: 58f3726e-7cba-11ef-ad8b-e6e8cb3904e6
  last_name: Bronstein
  orcid: 0000-0001-9699-8730
citation:
  ama: 'Dahary O, Jacoby M, Bronstein AM. Digital gimbal: End-to-end deep image stabilization
    with learnable exposure times. In: <i>IEEE/CVF Conference on Computer Vision and
    Pattern Recognition</i>. Vol 38. Institute of Electrical and Electronics Engineers;
    2021. doi:<a href="https://doi.org/10.1109/cvpr46437.2021.01176">10.1109/cvpr46437.2021.01176</a>'
  apa: 'Dahary, O., Jacoby, M., &#38; Bronstein, A. M. (2021). Digital gimbal: End-to-end
    deep image stabilization with learnable exposure times. In <i>IEEE/CVF Conference
    on Computer Vision and Pattern Recognition</i> (Vol. 38). Nashville, TN, United
    States: Institute of Electrical and Electronics Engineers. <a href="https://doi.org/10.1109/cvpr46437.2021.01176">https://doi.org/10.1109/cvpr46437.2021.01176</a>'
  chicago: 'Dahary, Omer, Matan Jacoby, and Alex M. Bronstein. “Digital Gimbal: End-to-End
    Deep Image Stabilization with Learnable Exposure Times.” In <i>IEEE/CVF Conference
    on Computer Vision and Pattern Recognition</i>, Vol. 38. Institute of Electrical
    and Electronics Engineers, 2021. <a href="https://doi.org/10.1109/cvpr46437.2021.01176">https://doi.org/10.1109/cvpr46437.2021.01176</a>.'
  ieee: 'O. Dahary, M. Jacoby, and A. M. Bronstein, “Digital gimbal: End-to-end deep
    image stabilization with learnable exposure times,” in <i>IEEE/CVF Conference
    on Computer Vision and Pattern Recognition</i>, Nashville, TN, United States,
    2021, vol. 38.'
  ista: 'Dahary O, Jacoby M, Bronstein AM. 2021. Digital gimbal: End-to-end deep image
    stabilization with learnable exposure times. IEEE/CVF Conference on Computer Vision
    and Pattern Recognition. CVPR: Conference on Computer Vision and Pattern Recognition
    vol. 38.'
  mla: 'Dahary, Omer, et al. “Digital Gimbal: End-to-End Deep Image Stabilization
    with Learnable Exposure Times.” <i>IEEE/CVF Conference on Computer Vision and
    Pattern Recognition</i>, vol. 38, Institute of Electrical and Electronics Engineers,
    2021, doi:<a href="https://doi.org/10.1109/cvpr46437.2021.01176">10.1109/cvpr46437.2021.01176</a>.'
  short: O. Dahary, M. Jacoby, A.M. Bronstein, in:, IEEE/CVF Conference on Computer
    Vision and Pattern Recognition, Institute of Electrical and Electronics Engineers,
    2021.
conference:
  end_date: 2021-06-25
  location: Nashville, TN, United States
  name: 'CVPR: Conference on Computer Vision and Pattern Recognition'
  start_date: 2021-06-20
date_created: 2024-10-08T13:02:53Z
date_published: 2021-06-30T00:00:00Z
date_updated: 2024-10-15T08:42:37Z
day: '30'
doi: 10.1109/cvpr46437.2021.01176
extern: '1'
external_id:
  arxiv:
  - '2012.04515'
intvolume: '38'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2012.04515
month: '06'
oa: 1
oa_version: Preprint
publication: IEEE/CVF Conference on Computer Vision and Pattern Recognition
publication_identifier:
  eisbn:
  - '9781665445092'
publication_status: published
publisher: Institute of Electrical and Electronics Engineers
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'Digital gimbal: End-to-end deep image stabilization with learnable exposure
  times'
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 38
year: '2021'
...
---
OA_place: repository
OA_type: green
_id: '18241'
abstract:
- lang: eng
  text: Multiple-input multiple-output (MIMO) radar is one of the leading depth sensing
    modalities. However, the usage of multiple receive channels leads to relatively
    high costs and prevents the penetration of MIMOs in many areas such as the automotive
    industry. Over the last years, a few studies have concentrated on designing reduced
    measurement schemes and image reconstruction schemes for MIMO radars; however,
    these problems have so far been addressed separately. On the other hand, recent
    works in optical computational imaging have demonstrated growing success of simultaneous
    learning-based design of the acquisition and reconstruction schemes, manifesting
    significant improvement in the reconstruction quality. Inspired by these successes,
    in this work, we propose to learn MIMO acquisition parameters in the form of receive
    (Rx) antenna element locations jointly with an image neural-network-based reconstruction.
    To this end, we propose an algorithm for training the combined acquisition-reconstruction
    pipeline end-to-end in a differentiable way. We demonstrate the significance of
    using our learned acquisition parameters with and without the neural-network reconstruction.
    Code and datasets will be released upon publication.
article_processing_charge: No
arxiv: 1
author:
- first_name: Tomer
  full_name: Weiss, Tomer
  last_name: Weiss
- first_name: Nissim
  full_name: Peretz, Nissim
  last_name: Peretz
- first_name: Sanketh
  full_name: Vedula, Sanketh
  last_name: Vedula
- first_name: Arie
  full_name: Feuer, Arie
  last_name: Feuer
- first_name: Alexander
  full_name: Bronstein, Alexander
  id: 58f3726e-7cba-11ef-ad8b-e6e8cb3904e6
  last_name: Bronstein
  orcid: 0000-0001-9699-8730
citation:
  ama: 'Weiss T, Peretz N, Vedula S, Feuer A, Bronstein AM. Joint optimization of
    system design and reconstruction in MIMO radar imaging. In: <i>31st International
    Workshop on Machine Learning for Signal Processing</i>. Vol 4. Institute of Electrical
    and Electronics Engineers; 2021. doi:<a href="https://doi.org/10.1109/mlsp52302.2021.9596168">10.1109/mlsp52302.2021.9596168</a>'
  apa: 'Weiss, T., Peretz, N., Vedula, S., Feuer, A., &#38; Bronstein, A. M. (2021).
    Joint optimization of system design and reconstruction in MIMO radar imaging.
    In <i>31st International Workshop on Machine Learning for Signal Processing</i>
    (Vol. 4). Gold Coast, Australia: Institute of Electrical and Electronics Engineers.
    <a href="https://doi.org/10.1109/mlsp52302.2021.9596168">https://doi.org/10.1109/mlsp52302.2021.9596168</a>'
  chicago: Weiss, Tomer, Nissim Peretz, Sanketh Vedula, Arie Feuer, and Alex M. Bronstein.
    “Joint Optimization of System Design and Reconstruction in MIMO Radar Imaging.”
    In <i>31st International Workshop on Machine Learning for Signal Processing</i>,
    Vol. 4. Institute of Electrical and Electronics Engineers, 2021. <a href="https://doi.org/10.1109/mlsp52302.2021.9596168">https://doi.org/10.1109/mlsp52302.2021.9596168</a>.
  ieee: T. Weiss, N. Peretz, S. Vedula, A. Feuer, and A. M. Bronstein, “Joint optimization
    of system design and reconstruction in MIMO radar imaging,” in <i>31st International
    Workshop on Machine Learning for Signal Processing</i>, Gold Coast, Australia,
    2021, vol. 4.
  ista: 'Weiss T, Peretz N, Vedula S, Feuer A, Bronstein AM. 2021. Joint optimization
    of system design and reconstruction in MIMO radar imaging. 31st International
    Workshop on Machine Learning for Signal Processing. MLSP: Machine Learning for
    Signal Processing vol. 4.'
  mla: Weiss, Tomer, et al. “Joint Optimization of System Design and Reconstruction
    in MIMO Radar Imaging.” <i>31st International Workshop on Machine Learning for
    Signal Processing</i>, vol. 4, Institute of Electrical and Electronics Engineers,
    2021, doi:<a href="https://doi.org/10.1109/mlsp52302.2021.9596168">10.1109/mlsp52302.2021.9596168</a>.
  short: T. Weiss, N. Peretz, S. Vedula, A. Feuer, A.M. Bronstein, in:, 31st International
    Workshop on Machine Learning for Signal Processing, Institute of Electrical and
    Electronics Engineers, 2021.
conference:
  end_date: 2021-10-28
  location: Gold Coast, Australia
  name: 'MLSP: Machine Learning for Signal Processing'
  start_date: 2021-10-25
date_created: 2024-10-08T13:03:09Z
date_published: 2021-10-01T00:00:00Z
date_updated: 2024-10-16T09:41:11Z
day: '01'
doi: 10.1109/mlsp52302.2021.9596168
extern: '1'
external_id:
  arxiv:
  - '2110.03218'
intvolume: '4'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2110.03218
month: '10'
oa: 1
oa_version: Preprint
publication: 31st International Workshop on Machine Learning for Signal Processing
publication_identifier:
  eisbn:
  - '9781728163383'
publication_status: published
publisher: Institute of Electrical and Electronics Engineers
quality_controlled: '1'
scopus_import: '1'
status: public
title: Joint optimization of system design and reconstruction in MIMO radar imaging
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 4
year: '2021'
...
