---
_id: '14451'
abstract:
- lang: eng
  text: 'We investigate the potential of Multi-Objective, Deep Reinforcement Learning
    for stock and cryptocurrency single-asset trading: in particular, we consider
    a Multi-Objective algorithm which generalizes the reward functions and discount
    factor (i.e., these components are not specified a priori, but incorporated in
    the learning process). Firstly, using several important assets (BTCUSD, ETHUSDT,
    XRPUSDT, AAPL, SPY, NIFTY50), we verify the reward generalization property of
    the proposed Multi-Objective algorithm, and provide preliminary statistical evidence
    showing increased predictive stability over the corresponding Single-Objective
    strategy. Secondly, we show that the Multi-Objective algorithm has a clear edge
    over the corresponding Single-Objective strategy when the reward mechanism is
    sparse (i.e., when non-null feedback is infrequent over time). Finally, we discuss
    the generalization properties with respect to the discount factor. The entirety
    of our code is provided in open-source format.'
acknowledgement: Open access funding provided by Università degli Studi di Trieste
  within the CRUI-CARE Agreement. Funding was provided by Austrian Science Fund (Grant
  No. F65), Horizon 2020 (Grant No. 754411) and Österreichische Forschungsförderungsgesellschaft.
article_processing_charge: Yes (via OA deal)
article_type: original
arxiv: 1
author:
- first_name: Federico
  full_name: Cornalba, Federico
  id: 2CEB641C-A400-11E9-A717-D712E6697425
  last_name: Cornalba
  orcid: 0000-0002-6269-5149
- first_name: Constantin
  full_name: Disselkamp, Constantin
  last_name: Disselkamp
- first_name: Davide
  full_name: Scassola, Davide
  last_name: Scassola
- first_name: Christopher
  full_name: Helf, Christopher
  last_name: Helf
citation:
  ama: 'Cornalba F, Disselkamp C, Scassola D, Helf C. Multi-objective reward generalization:
    Improving performance of Deep Reinforcement Learning for applications in single-asset
    trading. <i>Neural Computing and Applications</i>. 2024;36(2):617-637. doi:<a
    href="https://doi.org/10.1007/s00521-023-09033-7">10.1007/s00521-023-09033-7</a>'
  apa: 'Cornalba, F., Disselkamp, C., Scassola, D., &#38; Helf, C. (2024). Multi-objective
    reward generalization: Improving performance of Deep Reinforcement Learning for
    applications in single-asset trading. <i>Neural Computing and Applications</i>,
    36(2), 617–637. Springer Nature. <a href="https://doi.org/10.1007/s00521-023-09033-7">https://doi.org/10.1007/s00521-023-09033-7</a>'
  chicago: 'Cornalba, Federico, Constantin Disselkamp, Davide Scassola, and Christopher
    Helf. “Multi-Objective Reward Generalization: Improving Performance of Deep Reinforcement
    Learning for Applications in Single-Asset Trading.” <i>Neural Computing and Applications</i>.
    Springer Nature, 2024. <a href="https://doi.org/10.1007/s00521-023-09033-7">https://doi.org/10.1007/s00521-023-09033-7</a>.'
  ieee: 'F. Cornalba, C. Disselkamp, D. Scassola, and C. Helf, “Multi-objective reward
    generalization: Improving performance of Deep Reinforcement Learning for applications
    in single-asset trading,” <i>Neural Computing and Applications</i>, vol. 36, no.
    2. Springer Nature, pp. 617–637, 2024.'
  ista: 'Cornalba F, Disselkamp C, Scassola D, Helf C. 2024. Multi-objective reward
    generalization: Improving performance of Deep Reinforcement Learning for applications
    in single-asset trading. Neural Computing and Applications. 36(2), 617–637.'
  mla: 'Cornalba, Federico, et al. “Multi-Objective Reward Generalization: Improving
    Performance of Deep Reinforcement Learning for Applications in Single-Asset Trading.”
    <i>Neural Computing and Applications</i>, vol. 36, no. 2, Springer Nature, 2024,
    pp. 617–37, doi:<a href="https://doi.org/10.1007/s00521-023-09033-7">10.1007/s00521-023-09033-7</a>.'
  short: F. Cornalba, C. Disselkamp, D. Scassola, C. Helf, Neural Computing and Applications
    36 (2024) 617–637.
corr_author: '1'
date_created: 2023-10-22T22:01:16Z
date_published: 2024-01-01T00:00:00Z
date_updated: 2025-04-23T07:39:14Z
day: '01'
ddc:
- '000'
department:
- _id: JuFi
doi: 10.1007/s00521-023-09033-7
ec_funded: 1
external_id:
  arxiv:
  - '2203.04579'
  pmid:
  - '38187995'
file:
- access_level: open_access
  checksum: 04573d8e74c6119b97c2ca0a984e19a1
  content_type: application/pdf
  creator: dernst
  date_created: 2024-07-16T08:08:54Z
  date_updated: 2024-07-16T08:08:54Z
  file_id: '17251'
  file_name: 2024_NeuralCompApplications_Cornalba.pdf
  file_size: 4412285
  relation: main_file
  success: 1
file_date_updated: 2024-07-16T08:08:54Z
has_accepted_license: '1'
intvolume: '36'
issue: '2'
language:
- iso: eng
month: '01'
oa: 1
oa_version: Published Version
page: 617-637
pmid: 1
project:
- _id: fc31cba2-9c52-11eb-aca3-ff467d239cd2
  grant_number: F6504
  name: Taming Complexity in Partial Differential Systems
- _id: 260C2330-B435-11E9-9278-68D0E5697425
  call_identifier: H2020
  grant_number: '754411'
  name: ISTplus - Postdoctoral Fellowships
publication: Neural Computing and Applications
publication_identifier:
  eissn:
  - 1433-3058
  issn:
  - 0941-0643
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'Multi-objective reward generalization: Improving performance of Deep Reinforcement
  Learning for applications in single-asset trading'
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 36
year: '2024'
...
