---
_id: '12128'
abstract:
- lang: eng
  text: We introduce a machine-learning (ML) framework for high-throughput benchmarking
    of diverse representations of chemical systems against datasets of materials and
    molecules. The guiding principle underlying the benchmarking approach is to evaluate
    raw descriptor performance by limiting model complexity to simple regression schemes
    while enforcing best ML practices, allowing for unbiased hyperparameter optimization,
    and assessing learning progress through learning curves along a series of synchronized
    train-test splits. The resulting models are intended as baselines that can inform
    future method development, in addition to indicating how easily a given dataset
    can be learnt. Through a comparative analysis of the training outcome across a
    diverse set of physicochemical, topological and geometric representations, we
    glean insight into the relative merits of these representations as well as their
    interrelatedness.
acknowledgement: 'C P acknowledges funding from Astex through the Sustaining Innovation
  Program under the Milner Consortium. B C acknowledges resources provided by the
  Cambridge Tier-2 system operated by the University of Cambridge Research Computing
  Service funded by EPSRC Tier-2 capital Grant EP/P020259/1. F A F acknowledges funding
  from the Swiss National Science Foundation (Grant No. P2BSP2_191736).'
article_number: '040501'
article_processing_charge: No
article_type: original
author:
- first_name: Carl
  full_name: Poelking, Carl
  last_name: Poelking
- first_name: Felix A
  full_name: Faber, Felix A
  last_name: Faber
- first_name: Bingqing
  full_name: Cheng, Bingqing
  id: cbe3cda4-d82c-11eb-8dc7-8ff94289fcc9
  last_name: Cheng
  orcid: 0000-0002-3584-9632
citation:
  ama: 'Poelking C, Faber FA, Cheng B. BenchML: An extensible pipelining framework
    for benchmarking representations of materials and molecules at scale. <i>Machine
    Learning: Science and Technology</i>. 2022;3(4). doi:<a href="https://doi.org/10.1088/2632-2153/ac4d11">10.1088/2632-2153/ac4d11</a>'
  apa: 'Poelking, C., Faber, F. A., &#38; Cheng, B. (2022). BenchML: An extensible
    pipelining framework for benchmarking representations of materials and molecules
    at scale. <i>Machine Learning: Science and Technology</i>. IOP Publishing. <a
    href="https://doi.org/10.1088/2632-2153/ac4d11">https://doi.org/10.1088/2632-2153/ac4d11</a>'
  chicago: 'Poelking, Carl, Felix A Faber, and Bingqing Cheng. “BenchML: An Extensible
    Pipelining Framework for Benchmarking Representations of Materials and Molecules
    at Scale.” <i>Machine Learning: Science and Technology</i>. IOP Publishing, 2022.
    <a href="https://doi.org/10.1088/2632-2153/ac4d11">https://doi.org/10.1088/2632-2153/ac4d11</a>.'
  ieee: 'C. Poelking, F. A. Faber, and B. Cheng, “BenchML: An extensible pipelining
    framework for benchmarking representations of materials and molecules at scale,”
    <i>Machine Learning: Science and Technology</i>, vol. 3, no. 4. IOP Publishing,
    2022.'
  ista: 'Poelking C, Faber FA, Cheng B. 2022. BenchML: An extensible pipelining framework
    for benchmarking representations of materials and molecules at scale. Machine
    Learning: Science and Technology. 3(4), 040501.'
  mla: 'Poelking, Carl, et al. “BenchML: An Extensible Pipelining Framework for Benchmarking
    Representations of Materials and Molecules at Scale.” <i>Machine Learning: Science
    and Technology</i>, vol. 3, no. 4, 040501, IOP Publishing, 2022, doi:<a href="https://doi.org/10.1088/2632-2153/ac4d11">10.1088/2632-2153/ac4d11</a>.'
  short: 'C. Poelking, F.A. Faber, B. Cheng, Machine Learning: Science and Technology
    3 (2022).'
corr_author: '1'
date_created: 2023-01-12T12:02:21Z
date_published: 2022-11-17T00:00:00Z
date_updated: 2024-10-09T21:03:32Z
day: '17'
ddc:
- '000'
department:
- _id: BiCh
doi: 10.1088/2632-2153/ac4d11
external_id:
  isi:
  - '000886534000001'
file:
- access_level: open_access
  checksum: 8930d4ad6ed9b47358c6f1a68666adb6
  content_type: application/pdf
  creator: dernst
  date_created: 2023-01-23T10:42:04Z
  date_updated: 2023-01-23T10:42:04Z
  file_id: '12343'
  file_name: 2022_MachLearning_Poelking.pdf
  file_size: 13814559
  relation: main_file
  success: 1
file_date_updated: 2023-01-23T10:42:04Z
has_accepted_license: '1'
intvolume: '3'
isi: 1
issue: '4'
keyword:
- Artificial Intelligence
- Human-Computer Interaction
- Software
language:
- iso: eng
month: '11'
oa: 1
oa_version: Published Version
publication: 'Machine Learning: Science and Technology'
publication_identifier:
  issn:
  - 2632-2153
publication_status: published
publisher: IOP Publishing
quality_controlled: '1'
related_material:
  link:
  - relation: software
    url: https://github.com/capoe/benchml
scopus_import: '1'
status: public
title: 'BenchML: An extensible pipelining framework for benchmarking representations
  of materials and molecules at scale'
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: journal_article
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 3
year: '2022'
...
---
_id: '12147'
abstract:
- lang: eng
  text: Continuous-time neural networks are a class of machine learning systems that
    can tackle representation learning on spatiotemporal decision-making tasks. These
    models are typically represented by continuous differential equations. However,
    their expressive power when they are deployed on computers is bottlenecked by
    numerical differential equation solvers. This limitation has notably slowed down
    the scaling and understanding of numerous natural physical phenomena such as the
    dynamics of nervous systems. Ideally, we would circumvent this bottleneck by solving
    the given dynamical system in closed form. This is known to be intractable in
    general. Here, we show that it is possible to closely approximate the interaction
    between neurons and synapses—the building blocks of natural and artificial neural
    networks—constructed by liquid time-constant networks efficiently in closed form.
    To this end, we compute a tightly bounded approximation of the solution of an
    integral appearing in liquid time-constant dynamics that has had no known closed-form
    solution so far. This closed-form solution impacts the design of continuous-time
    and continuous-depth neural models. For instance, since time appears explicitly
    in closed form, the formulation relaxes the need for complex numerical solvers.
    Consequently, we obtain models that are between one and five orders of magnitude
    faster in training and inference compared with differential equation-based counterparts.
    More importantly, in contrast to ordinary differential equation-based continuous
    networks, closed-form networks can scale remarkably well compared with other deep
    learning instances. Lastly, as these models are derived from liquid networks,
    they show good performance in time-series modelling compared with advanced recurrent
    neural network models.
acknowledgement: This research was supported in part by the AI2050 program at Schmidt
  Futures (grant G-22-63172), the Boeing Company, and the United States Air Force
  Research Laboratory and the United States Air Force Artificial Intelligence Accelerator
  and was accomplished under cooperative agreement number FA8750-19-2-1000. The views
  and conclusions contained in this document are those of the authors and should not
  be interpreted as representing the official policies, either expressed or implied,
  of the United States Air Force or the U.S. Government. The U.S. Government is authorized
  to reproduce and distribute reprints for Government purposes, notwithstanding any
  copyright notation herein. This work was further supported by The Boeing Company
  and Office of Naval Research grant N00014-18-1-2830. M.T. is supported by the Poul
  Due Jensen Foundation, grant 883901. M.L. was supported in part by the Austrian
  Science Fund under grant Z211-N23 (Wittgenstein Award). A.A. was supported by the
  National Science Foundation Graduate Research Fellowship Program. We thank T.-H.
  Wang, P. Kao, M. Chahine, W. Xiao, X. Li, L. Yin and Y. Ben for useful suggestions
  and for testing of CfC models to confirm the results across other domains.
article_processing_charge: No
article_type: original
arxiv: 1
author:
- first_name: Ramin
  full_name: Hasani, Ramin
  last_name: Hasani
- first_name: Mathias
  full_name: Lechner, Mathias
  id: 3DC22916-F248-11E8-B48F-1D18A9856A87
  last_name: Lechner
- first_name: Alexander
  full_name: Amini, Alexander
  last_name: Amini
- first_name: Lucas
  full_name: Liebenwein, Lucas
  last_name: Liebenwein
- first_name: Aaron
  full_name: Ray, Aaron
  last_name: Ray
- first_name: Max
  full_name: Tschaikowski, Max
  last_name: Tschaikowski
- first_name: Gerald
  full_name: Teschl, Gerald
  last_name: Teschl
- first_name: Daniela
  full_name: Rus, Daniela
  last_name: Rus
citation:
  ama: Hasani R, Lechner M, Amini A, et al. Closed-form continuous-time neural networks.
    <i>Nature Machine Intelligence</i>. 2022;4(11):992-1003. doi:<a href="https://doi.org/10.1038/s42256-022-00556-7">10.1038/s42256-022-00556-7</a>
  apa: Hasani, R., Lechner, M., Amini, A., Liebenwein, L., Ray, A., Tschaikowski,
    M., … Rus, D. (2022). Closed-form continuous-time neural networks. <i>Nature Machine
    Intelligence</i>. Springer Nature. <a href="https://doi.org/10.1038/s42256-022-00556-7">https://doi.org/10.1038/s42256-022-00556-7</a>
  chicago: Hasani, Ramin, Mathias Lechner, Alexander Amini, Lucas Liebenwein, Aaron
    Ray, Max Tschaikowski, Gerald Teschl, and Daniela Rus. “Closed-Form Continuous-Time
    Neural Networks.” <i>Nature Machine Intelligence</i>. Springer Nature, 2022. <a
    href="https://doi.org/10.1038/s42256-022-00556-7">https://doi.org/10.1038/s42256-022-00556-7</a>.
  ieee: R. Hasani <i>et al.</i>, “Closed-form continuous-time neural networks,” <i>Nature
    Machine Intelligence</i>, vol. 4, no. 11. Springer Nature, pp. 992–1003, 2022.
  ista: Hasani R, Lechner M, Amini A, Liebenwein L, Ray A, Tschaikowski M, Teschl
    G, Rus D. 2022. Closed-form continuous-time neural networks. Nature Machine Intelligence.
    4(11), 992–1003.
  mla: Hasani, Ramin, et al. “Closed-Form Continuous-Time Neural Networks.” <i>Nature
    Machine Intelligence</i>, vol. 4, no. 11, Springer Nature, 2022, pp. 992–1003,
    doi:<a href="https://doi.org/10.1038/s42256-022-00556-7">10.1038/s42256-022-00556-7</a>.
  short: R. Hasani, M. Lechner, A. Amini, L. Liebenwein, A. Ray, M. Tschaikowski,
    G. Teschl, D. Rus, Nature Machine Intelligence 4 (2022) 992–1003.
date_created: 2023-01-12T12:07:21Z
date_published: 2022-11-15T00:00:00Z
date_updated: 2025-04-15T06:26:02Z
day: '15'
ddc:
- '000'
department:
- _id: ToHe
doi: 10.1038/s42256-022-00556-7
external_id:
  arxiv:
  - '2106.13898'
  isi:
  - '000884215600003'
file:
- access_level: open_access
  checksum: b4789122ce04bfb4ac042390f59aaa8b
  content_type: application/pdf
  creator: dernst
  date_created: 2023-01-24T09:49:44Z
  date_updated: 2023-01-24T09:49:44Z
  file_id: '12355'
  file_name: 2022_NatureMachineIntelligence_Hasani.pdf
  file_size: 3259553
  relation: main_file
  success: 1
file_date_updated: 2023-01-24T09:49:44Z
has_accepted_license: '1'
intvolume: '4'
isi: 1
issue: '11'
keyword:
- Artificial Intelligence
- Computer Networks and Communications
- Computer Vision and Pattern Recognition
- Human-Computer Interaction
- Software
language:
- iso: eng
month: '11'
oa: 1
oa_version: Published Version
page: 992-1003
project:
- _id: 25F42A32-B435-11E9-9278-68D0E5697425
  call_identifier: FWF
  grant_number: Z211
  name: Formal methods for the design and analysis of complex systems
publication: Nature Machine Intelligence
publication_identifier:
  issn:
  - 2522-5839
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
related_material:
  link:
  - relation: erratum
    url: https://doi.org/10.1038/s42256-022-00597-y
scopus_import: '1'
status: public
title: Closed-form continuous-time neural networks
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: journal_article
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 4
year: '2022'
...
