---
_id: '14326'
abstract:
- lang: eng
  text: "Learning object-centric representations of complex scenes is a promising
    step towards enabling efficient abstract reasoning from low-level perceptual features.
    Yet, most deep learning approaches learn distributed representations that do not
    capture the compositional properties of natural scenes. In this paper, we present
    the Slot Attention module, an architectural component that interfaces with perceptual
    representations such as the output of a convolutional neural network and produces
    a set of task-dependent abstract representations which we call slots. These slots
    are exchangeable and can bind to any object in the input by specializing through
    a competitive procedure over multiple rounds of attention. We empirically demonstrate
    that Slot Attention can extract object-centric representations that enable generalization
    to unseen compositions when trained on unsupervised object discovery and supervised
    property prediction tasks."
alternative_title:
- Advances in Neural Information Processing Systems
article_processing_charge: No
arxiv: 1
author:
- first_name: Francesco
  full_name: Locatello, Francesco
  id: 26cfd52f-2483-11ee-8040-88983bcc06d4
  last_name: Locatello
  orcid: 0000-0002-4850-0683
- first_name: Dirk
  full_name: Weissenborn, Dirk
  last_name: Weissenborn
- first_name: Thomas
  full_name: Unterthiner, Thomas
  last_name: Unterthiner
- first_name: Aravindh
  full_name: Mahendran, Aravindh
  last_name: Mahendran
- first_name: Georg
  full_name: Heigold, Georg
  last_name: Heigold
- first_name: Jakob
  full_name: Uszkoreit, Jakob
  last_name: Uszkoreit
- first_name: Alexey
  full_name: Dosovitskiy, Alexey
  last_name: Dosovitskiy
- first_name: Thomas
  full_name: Kipf, Thomas
  last_name: Kipf
citation:
  ama: 'Locatello F, Weissenborn D, Unterthiner T, et al. Object-centric learning
    with slot attention. In: <i>34th International Conference on Neural Information
    Processing Systems</i>. Vol 33. Neural Information Processing Systems Foundation;
    2020:11525-11538.'
  apa: 'Locatello, F., Weissenborn, D., Unterthiner, T., Mahendran, A., Heigold, G.,
    Uszkoreit, J., … Kipf, T. (2020). Object-centric learning with slot attention.
    In <i>34th International Conference on Neural Information Processing Systems</i>
    (Vol. 33, pp. 11525–11538). Virtual: Neural Information Processing Systems Foundation.'
  chicago: Locatello, Francesco, Dirk Weissenborn, Thomas Unterthiner, Aravindh Mahendran,
    Georg Heigold, Jakob Uszkoreit, Alexey Dosovitskiy, and Thomas Kipf. “Object-Centric
    Learning with Slot Attention.” In <i>34th International Conference on Neural Information
    Processing Systems</i>, 33:11525–38. Neural Information Processing Systems Foundation,
    2020.
  ieee: F. Locatello <i>et al.</i>, “Object-centric learning with slot attention,”
    in <i>34th International Conference on Neural Information Processing Systems</i>,
    Virtual, 2020, vol. 33, pp. 11525–11538.
  ista: 'Locatello F, Weissenborn D, Unterthiner T, Mahendran A, Heigold G, Uszkoreit
    J, Dosovitskiy A, Kipf T. 2020. Object-centric learning with slot attention. 34th
    International Conference on Neural Information Processing Systems. NeurIPS: Neural
    Information Processing Systems, Advances in Neural Information Processing Systems,
    vol. 33, 11525–11538.'
  mla: Locatello, Francesco, et al. “Object-Centric Learning with Slot Attention.”
    <i>34th International Conference on Neural Information Processing Systems</i>,
    vol. 33, Neural Information Processing Systems Foundation, 2020, pp. 11525–38.
  short: F. Locatello, D. Weissenborn, T. Unterthiner, A. Mahendran, G. Heigold, J.
    Uszkoreit, A. Dosovitskiy, T. Kipf, in:, 34th International Conference on Neural
    Information Processing Systems, Neural Information Processing Systems Foundation,
    2020, pp. 11525–11538.
conference:
  end_date: 2020-12-12
  location: Virtual
  name: 'NeurIPS: Neural Information Processing Systems'
  start_date: 2020-12-06
date_created: 2023-09-13T12:03:46Z
date_published: 2020-12-20T00:00:00Z
date_updated: 2025-07-10T11:50:47Z
day: '20'
department:
- _id: FrLo
extern: '1'
external_id:
  arxiv:
  - '2006.15055'
intvolume: '33'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2006.15055
month: '12'
oa: 1
oa_version: Preprint
page: 11525-11538
publication: 34th International Conference on Neural Information Processing Systems
publication_identifier:
  eissn:
  - 1049-5258
  isbn:
  - '9781713829546'
publication_status: published
publisher: Neural Information Processing Systems Foundation
quality_controlled: '1'
status: public
title: Object-centric learning with slot attention
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 33
year: '2020'
...
---
_id: '15086'
abstract:
- lang: eng
  text: "Many communication-efficient variants of SGD use gradient quantization schemes.
    These schemes are often heuristic and fixed over the course of training. We empirically
    observe that the statistics of gradients of deep models change during the training.
    Motivated by this observation, we introduce two adaptive quantization schemes,
    ALQ and AMQ. In both schemes, processors update their compression schemes in parallel
    by efficiently computing sufficient statistics of a parametric distribution. We
    improve the validation accuracy by almost 2% on CIFAR-10 and 1% on ImageNet in
    challenging low-cost communication setups. Our adaptive methods are also significantly
    more robust to the choice of hyperparameters."
acknowledgement: "The authors would like to thank Blair Bilodeau, David Fleet, Mufan
  Li, and Jeffrey Negrea for\r\nhelpful discussions. FF was supported by OGS Scholarship.
  DA and IM were supported the\r\nEuropean Research Council (ERC) under the European
  Union’s Horizon 2020 research and innovation\r\nprogramme (grant agreement No 805223
  ScaleML). DMR was supported by an NSERC Discovery\r\nGrant. ARK was supported by
  NSERC Postdoctoral Fellowship. Resources used in preparing this research were provided,
  in part, by the Province of Ontario, the Government of Canada through CIFAR, and
  companies sponsoring the Vector Institute."
alternative_title:
- NeurIPS
article_processing_charge: No
arxiv: 1
author:
- first_name: Fartash
  full_name: Faghri, Fartash
  last_name: Faghri
- first_name: Iman
  full_name: Tabrizian, Iman
  last_name: Tabrizian
- first_name: Ilia
  full_name: Markov, Ilia
  id: D0CF4148-C985-11E9-8066-0BDEE5697425
  last_name: Markov
- first_name: Dan-Adrian
  full_name: Alistarh, Dan-Adrian
  id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
  last_name: Alistarh
  orcid: 0000-0003-3650-940X
- first_name: Daniel
  full_name: Roy, Daniel
  last_name: Roy
- first_name: Ali
  full_name: Ramezani-Kebrya, Ali
  last_name: Ramezani-Kebrya
citation:
  ama: 'Faghri F, Tabrizian I, Markov I, Alistarh D-A, Roy D, Ramezani-Kebrya A. Adaptive
    gradient quantization for data-parallel SGD. In: <i>Advances in Neural Information
    Processing Systems</i>. Vol 33. Neural Information Processing Systems Foundation;
    2020.'
  apa: 'Faghri, F., Tabrizian, I., Markov, I., Alistarh, D.-A., Roy, D., &#38; Ramezani-Kebrya,
    A. (2020). Adaptive gradient quantization for data-parallel SGD. In <i>Advances
    in Neural Information Processing Systems</i> (Vol. 33). Vancouver, Canada: Neural
    Information Processing Systems Foundation.'
  chicago: Faghri, Fartash, Iman Tabrizian, Ilia Markov, Dan-Adrian Alistarh, Daniel
    Roy, and Ali Ramezani-Kebrya. “Adaptive Gradient Quantization for Data-Parallel SGD.”
    In <i>Advances in Neural Information Processing Systems</i>, Vol. 33. Neural Information
    Processing Systems Foundation, 2020.
  ieee: F. Faghri, I. Tabrizian, I. Markov, D.-A. Alistarh, D. Roy, and A. Ramezani-Kebrya,
    “Adaptive gradient quantization for data-parallel SGD,” in <i>Advances in Neural
    Information Processing Systems</i>, Vancouver, Canada, 2020, vol. 33.
  ista: 'Faghri F, Tabrizian I, Markov I, Alistarh D-A, Roy D, Ramezani-Kebrya A.
    2020. Adaptive gradient quantization for data-parallel SGD. Advances in Neural
    Information Processing Systems. NeurIPS: Neural Information Processing Systems,
    NeurIPS, vol. 33.'
  mla: Faghri, Fartash, et al. “Adaptive Gradient Quantization for Data-Parallel SGD.”
    <i>Advances in Neural Information Processing Systems</i>, vol. 33, Neural Information
    Processing Systems Foundation, 2020.
  short: F. Faghri, I. Tabrizian, I. Markov, D.-A. Alistarh, D. Roy, A. Ramezani-Kebrya,
    in:, Advances in Neural Information Processing Systems, Neural Information Processing
    Systems Foundation, 2020.
conference:
  end_date: 2020-12-12
  location: Vancouver, Canada
  name: 'NeurIPS: Neural Information Processing Systems'
  start_date: 2020-12-06
date_created: 2024-03-06T08:35:58Z
date_published: 2020-12-10T00:00:00Z
date_updated: 2025-04-14T07:49:16Z
day: '10'
department:
- _id: DaAl
ec_funded: 1
external_id:
  arxiv:
  - '2010.12460'
intvolume: '33'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2010.12460
month: '12'
oa: 1
oa_version: Preprint
project:
- _id: 268A44D6-B435-11E9-9278-68D0E5697425
  call_identifier: H2020
  grant_number: '805223'
  name: Elastic Coordination for Scalable Machine Learning
publication: Advances in Neural Information Processing Systems
publication_identifier:
  isbn:
  - '9781713829546'
publication_status: published
publisher: Neural Information Processing Systems Foundation
quality_controlled: '1'
status: public
title: Adaptive gradient quantization for data-parallel SGD
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 33
year: '2020'
...
---
_id: '8188'
abstract:
- lang: eng
  text: "A natural approach to generative modeling of videos is to represent them
    as a composition of moving objects. Recent works model a set of 2D sprites over
    a slowly-varying background, but without considering the underlying 3D scene that\r\ngives
    rise to them. We instead propose to model a video as the view seen while moving
    through a scene with multiple 3D objects and a 3D background. Our model is trained
    from monocular videos without any supervision, yet learns to\r\ngenerate coherent
    3D scenes containing several moving objects. We conduct detailed experiments on
    two datasets, going beyond the visual complexity supported by state-of-the-art
    generative approaches. We evaluate our method on\r\ndepth-prediction and 3D object
    detection---tasks which cannot be addressed by those earlier works---and show
    it out-performs them even on 2D instance segmentation and tracking."
acknowledged_ssus:
- _id: ScienComp
acknowledgement: "This research was supported by the Scientific Service Units (SSU)
  of IST Austria through resources\r\nprovided by Scientific Computing (SciComp).
  PH is employed part-time by Blackford Analysis, but\r\nthey did not support this
  project in any way."
alternative_title:
- Advances in Neural Information Processing Systems
article_processing_charge: No
arxiv: 1
author:
- first_name: Paul M
  full_name: Henderson, Paul M
  id: 13C09E74-18D9-11E9-8878-32CFE5697425
  last_name: Henderson
  orcid: 0000-0002-5198-7445
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Henderson PM, Lampert C. Unsupervised object-centric video generation and
    decomposition in 3D. In: <i>34th Conference on Neural Information Processing Systems</i>.
    Vol 33. Neural Information Processing Systems Foundation; 2020:3106–3117.'
  apa: 'Henderson, P. M., &#38; Lampert, C. (2020). Unsupervised object-centric video
    generation and decomposition in 3D. In <i>34th Conference on Neural Information
    Processing Systems</i> (Vol. 33, pp. 3106–3117). Vancouver, Canada: Neural Information
    Processing Systems Foundation.'
  chicago: Henderson, Paul M, and Christoph Lampert. “Unsupervised Object-Centric
    Video Generation and Decomposition in 3D.” In <i>34th Conference on Neural Information
    Processing Systems</i>, 33:3106–3117. Neural Information Processing Systems Foundation,
    2020.
  ieee: P. M. Henderson and C. Lampert, “Unsupervised object-centric video generation
    and decomposition in 3D,” in <i>34th Conference on Neural Information Processing
    Systems</i>, Vancouver, Canada, 2020, vol. 33, pp. 3106–3117.
  ista: 'Henderson PM, Lampert C. 2020. Unsupervised object-centric video generation
    and decomposition in 3D. 34th Conference on Neural Information Processing Systems.
    NeurIPS: Neural Information Processing Systems, Advances in Neural Information
    Processing Systems, vol. 33, 3106–3117.'
  mla: Henderson, Paul M., and Christoph Lampert. “Unsupervised Object-Centric Video
    Generation and Decomposition in 3D.” <i>34th Conference on Neural Information
    Processing Systems</i>, vol. 33, Neural Information Processing Systems Foundation,
    2020, pp. 3106–3117.
  short: P.M. Henderson, C. Lampert, in:, 34th Conference on Neural Information Processing
    Systems, Neural Information Processing Systems Foundation, 2020, pp. 3106–3117.
conference:
  end_date: 2020-12-12
  location: Vancouver, Canada
  name: 'NeurIPS: Neural Information Processing Systems'
  start_date: 2020-12-06
corr_author: '1'
date_created: 2020-07-31T16:59:19Z
date_published: 2020-07-07T00:00:00Z
date_updated: 2025-05-14T11:26:57Z
day: '07'
department:
- _id: ChLa
external_id:
  arxiv:
  - '2007.06705'
intvolume: '33'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2007.06705
month: '07'
oa: 1
oa_version: Preprint
page: 3106-3117
publication: 34th Conference on Neural Information Processing Systems
publication_identifier:
  isbn:
  - '9781713829546'
publication_status: published
publisher: Neural Information Processing Systems Foundation
quality_controlled: '1'
status: public
title: Unsupervised object-centric video generation and decomposition in 3D
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 33
year: '2020'
...
---
_id: '9631'
abstract:
- lang: eng
  text: The ability to leverage large-scale hardware parallelism has been one of the
    key enablers of the accelerated recent progress in machine learning. Consequently,
    there has been considerable effort invested into developing efficient parallel
    variants of classic machine learning algorithms. However, despite the wealth of
    knowledge on parallelization, some classic machine learning algorithms often prove
    hard to parallelize efficiently while maintaining convergence. In this paper,
    we focus on efficient parallel algorithms for the key machine learning task of
    inference on graphical models, in particular on the fundamental belief propagation
    algorithm. We address the challenge of efficiently parallelizing this classic
    paradigm by showing how to leverage scalable relaxed schedulers in this context.
    We present an extensive empirical study, showing that our approach outperforms
    previous parallel belief propagation implementations both in terms of scalability
    and in terms of wall-clock convergence time, on a range of practical applications.
acknowledgement: "We thank Marco Mondelli for discussions related to LDPC decoding,
  and Giorgi Nadiradze for discussions on analysis of relaxed schedulers. This project
  has received funding from the European Research Council (ERC) under the European
  Union’s Horizon 2020 research and innovation programme (grant agreement No 805223 ScaleML)."
alternative_title:
- Advances in Neural Information Processing Systems
article_processing_charge: No
arxiv: 1
author:
- first_name: Vitaly
  full_name: Aksenov, Vitaly
  last_name: Aksenov
- first_name: Dan-Adrian
  full_name: Alistarh, Dan-Adrian
  id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
  last_name: Alistarh
  orcid: 0000-0003-3650-940X
- first_name: Janne
  full_name: Korhonen, Janne
  id: C5402D42-15BC-11E9-A202-CA2BE6697425
  last_name: Korhonen
citation:
  ama: 'Aksenov V, Alistarh D-A, Korhonen J. Scalable belief propagation via relaxed
    scheduling. In: Vol 33. Neural Information Processing Systems Foundation; 2020:22361-22372.'
  apa: 'Aksenov, V., Alistarh, D.-A., &#38; Korhonen, J. (2020). Scalable belief propagation
    via relaxed scheduling (Vol. 33, pp. 22361–22372). Presented at the NeurIPS: Conference
    on Neural Information Processing Systems, Vancouver, Canada: Neural Information
    Processing Systems Foundation.'
  chicago: Aksenov, Vitaly, Dan-Adrian Alistarh, and Janne Korhonen. “Scalable Belief
    Propagation via Relaxed Scheduling,” 33:22361–72. Neural Information Processing
    Systems Foundation, 2020.
  ieee: 'V. Aksenov, D.-A. Alistarh, and J. Korhonen, “Scalable belief propagation
    via relaxed scheduling,” presented at the NeurIPS: Conference on Neural Information
    Processing Systems, Vancouver, Canada, 2020, vol. 33, pp. 22361–22372.'
  ista: 'Aksenov V, Alistarh D-A, Korhonen J. 2020. Scalable belief propagation via
    relaxed scheduling. NeurIPS: Conference on Neural Information Processing Systems,
    Advances in Neural Information Processing Systems, vol. 33, 22361–22372.'
  mla: Aksenov, Vitaly, et al. <i>Scalable Belief Propagation via Relaxed Scheduling</i>.
    Vol. 33, Neural Information Processing Systems Foundation, 2020, pp. 22361–72.
  short: V. Aksenov, D.-A. Alistarh, J. Korhonen, in:, Neural Information Processing
    Systems Foundation, 2020, pp. 22361–22372.
conference:
  end_date: 2020-12-12
  location: Vancouver, Canada
  name: 'NeurIPS: Conference on Neural Information Processing Systems'
  start_date: 2020-12-06
corr_author: '1'
date_created: 2021-07-04T22:01:26Z
date_published: 2020-12-06T00:00:00Z
date_updated: 2025-05-14T11:27:33Z
day: '06'
department:
- _id: DaAl
ec_funded: 1
external_id:
  arxiv:
  - '2002.11505'
intvolume: '33'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://proceedings.neurips.cc/paper/2020/hash/fdb2c3bab9d0701c4a050a4d8d782c7f-Abstract.html
month: '12'
oa: 1
oa_version: Published Version
page: 22361-22372
project:
- _id: 268A44D6-B435-11E9-9278-68D0E5697425
  call_identifier: H2020
  grant_number: '805223'
  name: Elastic Coordination for Scalable Machine Learning
publication_identifier:
  isbn:
  - '9781713829546'
  issn:
  - 1049-5258
publication_status: published
publisher: Neural Information Processing Systems Foundation
quality_controlled: '1'
scopus_import: '1'
status: public
title: Scalable belief propagation via relaxed scheduling
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 33
year: '2020'
...
---
_id: '9632'
abstract:
- lang: eng
  text: "Second-order information, in the form of Hessian- or Inverse-Hessian-vector
    products, is a fundamental tool for solving optimization problems. Recently, there
    has been significant interest in utilizing this information in the context of
    deep\r\nneural networks; however, relatively little is known about the quality
    of existing approximations in this context. Our work examines this question, identifies
    issues with existing approaches, and proposes a method called WoodFisher to compute
    a faithful and efficient estimate of the inverse Hessian. Our main application
    is to neural network compression, where we build on the classic Optimal Brain
    Damage/Surgeon framework. We demonstrate that WoodFisher significantly outperforms
    popular state-of-the-art methods for oneshot pruning. Further, even when iterative,
    gradual pruning is allowed, our method results in a gain in test accuracy over
    the state-of-the-art approaches, for standard image classification datasets such
    as ImageNet ILSVRC. We examine how our method can be extended to take into account
    first-order information, as well as\r\nillustrate its ability to automatically
    set layer-wise pruning thresholds and perform compression in the limited-data
    regime. The code is available at the following link, https://github.com/IST-DASLab/WoodFisher."
acknowledgement: This project has received funding from the European Research Council
  (ERC) under the European Union’s Horizon 2020 research and innovation programme
  (grant agreement No 805223 ScaleML). Also, we would like to thank Alexander Shevchenko,
  Alexandra Peste, and other members of the group for fruitful discussions.
alternative_title:
- Advances in Neural Information Processing Systems
article_processing_charge: No
arxiv: 1
author:
- first_name: Sidak Pal
  full_name: Singh, Sidak Pal
  id: DD138E24-D89D-11E9-9DC0-DEF6E5697425
  last_name: Singh
- first_name: Dan-Adrian
  full_name: Alistarh, Dan-Adrian
  id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
  last_name: Alistarh
  orcid: 0000-0003-3650-940X
citation:
  ama: 'Singh SP, Alistarh D-A. WoodFisher: Efficient second-order approximation for
    neural network compression. In: Vol 33. Neural Information Processing Systems
    Foundation; 2020:18098-18109.'
  apa: 'Singh, S. P., &#38; Alistarh, D.-A. (2020). WoodFisher: Efficient second-order
    approximation for neural network compression (Vol. 33, pp. 18098–18109). Presented
    at the NeurIPS: Conference on Neural Information Processing Systems, Vancouver,
    Canada: Neural Information Processing Systems Foundation.'
  chicago: 'Singh, Sidak Pal, and Dan-Adrian Alistarh. “WoodFisher: Efficient Second-Order
    Approximation for Neural Network Compression,” 33:18098–109. Neural Information
    Processing Systems Foundation, 2020.'
  ieee: 'S. P. Singh and D.-A. Alistarh, “WoodFisher: Efficient second-order approximation
    for neural network compression,” presented at the NeurIPS: Conference on Neural
    Information Processing Systems, Vancouver, Canada, 2020, vol. 33, pp. 18098–18109.'
  ista: 'Singh SP, Alistarh D-A. 2020. WoodFisher: Efficient second-order approximation
    for neural network compression. NeurIPS: Conference on Neural Information Processing
    Systems, Advances in Neural Information Processing Systems, vol. 33, 18098–18109.'
  mla: 'Singh, Sidak Pal, and Dan-Adrian Alistarh. <i>WoodFisher: Efficient Second-Order
    Approximation for Neural Network Compression</i>. Vol. 33, Neural Information
    Processing Systems Foundation, 2020, pp. 18098–109.'
  short: S.P. Singh, D.-A. Alistarh, in:, Neural Information Processing Systems Foundation,
    2020, pp. 18098–18109.
conference:
  end_date: 2020-12-12
  location: Vancouver, Canada
  name: 'NeurIPS: Conference on Neural Information Processing Systems'
  start_date: 2020-12-06
corr_author: '1'
date_created: 2021-07-04T22:01:26Z
date_published: 2020-12-06T00:00:00Z
date_updated: 2025-05-14T11:27:23Z
day: '06'
department:
- _id: DaAl
- _id: ToHe
ec_funded: 1
external_id:
  arxiv:
  - '2004.14340'
intvolume: '33'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://proceedings.neurips.cc/paper/2020/hash/d1ff1ec86b62cd5f3903ff19c3a326b2-Abstract.html
month: '12'
oa: 1
oa_version: Published Version
page: 18098-18109
project:
- _id: 268A44D6-B435-11E9-9278-68D0E5697425
  call_identifier: H2020
  grant_number: '805223'
  name: Elastic Coordination for Scalable Machine Learning
publication_identifier:
  isbn:
  - '9781713829546'
  issn:
  - 1049-5258
publication_status: published
publisher: Neural Information Processing Systems Foundation
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'WoodFisher: Efficient second-order approximation for neural network compression'
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 33
year: '2020'
...
