Multi-objective reward generalization: improving performance of Deep Reinforcement Learning for applications in single-asset trading
Journal Article | Published | English | Scopus indexed
Author
Cornalba, Federico (ISTA);
Disselkamp, Constantin;
Scassola, Davide;
Helf, Christopher
Corresponding author has ISTA affiliation
Abstract
We investigate the potential of Multi-Objective, Deep Reinforcement Learning for stock and cryptocurrency single-asset trading: in particular, we consider a Multi-Objective algorithm which generalizes the reward functions and discount factor (i.e., these components are not specified a priori, but incorporated in the learning process). Firstly, using several important assets (BTCUSD, ETHUSDT, XRPUSDT, AAPL, SPY, NIFTY50), we verify the reward generalization property of the proposed Multi-Objective algorithm, and provide preliminary statistical evidence showing increased predictive stability over the corresponding Single-Objective strategy. Secondly, we show that the Multi-Objective algorithm has a clear edge over the corresponding Single-Objective strategy when the reward mechanism is sparse (i.e., when non-null feedback is infrequent over time). Finally, we discuss the generalization properties with respect to the discount factor. The entirety of our code is provided in open-source format.
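The central idea of the abstract can be illustrated with a small example. Below is a minimal, hypothetical sketch of a multi-objective Q-learning update in which the reward is vector-valued (one Q-value per objective) and the scalarization weights are sampled during training rather than fixed a priori. This is not the authors' implementation (their full open-source code is linked via the arXiv record below); the tabular setting, random transitions, and dimension names are illustrative assumptions only.

# Minimal, hypothetical sketch of a multi-objective Q-learning update with
# vector-valued rewards and sampled scalarization weights. NOT the authors'
# implementation; state encoding, asset data, and architecture are simplified away.
import numpy as np

n_states, n_actions, n_objectives = 50, 3, 2   # e.g. profit- and risk-style signals
gamma = 0.99                                   # discount factor (could also be sampled)
alpha = 0.1                                    # learning rate

# Vector-valued Q-table: one Q-value per (state, action, objective).
Q = np.zeros((n_states, n_actions, n_objectives))
rng = np.random.default_rng(0)

def td_update(s, a, r_vec, s_next, w):
    """One temporal-difference step on the vector Q-values.

    The greedy action in s_next is chosen w.r.t. the scalarized value w @ Q,
    but the update is applied component-wise, so a single table can serve
    any preference vector w encountered at training or test time.
    """
    a_next = np.argmax(Q[s_next] @ w)           # greedy action under weights w
    target = r_vec + gamma * Q[s_next, a_next]  # vector-valued TD target
    Q[s, a] += alpha * (target - Q[s, a])

# Toy training loop: random transitions, placeholder reward vectors, and
# freshly sampled preference weights at every step.
for _ in range(10_000):
    s = rng.integers(n_states)
    a = rng.integers(n_actions)
    s_next = rng.integers(n_states)
    r_vec = rng.normal(size=n_objectives)
    w = rng.dirichlet(np.ones(n_objectives))
    td_update(s, a, r_vec, s_next, w)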
Publishing Year
2024
Date Published
2024-01-01
Journal Title
Neural Computing and Applications
Publisher
Springer Nature
Acknowledgement
Open access funding provided by Università degli Studi di Trieste within the CRUI-CARE Agreement. Funding was provided by Austrian Science Fund (Grant No. F65), Horizon 2020 (Grant No. 754411) and Österreichische Forschungsförderungsgesellschaft.
Volume
36
Issue
2
Page
617-637
Cite this
Cornalba F, Disselkamp C, Scassola D, Helf C. Multi-objective reward generalization: improving performance of Deep Reinforcement Learning for applications in single-asset trading. Neural Computing and Applications. 2024;36(2):617-637. doi:10.1007/s00521-023-09033-7
Cornalba, F., Disselkamp, C., Scassola, D., & Helf, C. (2024). Multi-objective reward generalization: improving performance of Deep Reinforcement Learning for applications in single-asset trading. Neural Computing and Applications. Springer Nature. https://doi.org/10.1007/s00521-023-09033-7
Cornalba, Federico, Constantin Disselkamp, Davide Scassola, and Christopher Helf. “Multi-Objective Reward Generalization: Improving Performance of Deep Reinforcement Learning for Applications in Single-Asset Trading.” Neural Computing and Applications. Springer Nature, 2024. https://doi.org/10.1007/s00521-023-09033-7.
F. Cornalba, C. Disselkamp, D. Scassola, and C. Helf, “Multi-objective reward generalization: improving performance of Deep Reinforcement Learning for applications in single-asset trading,” Neural Computing and Applications, vol. 36, no. 2. Springer Nature, pp. 617–637, 2024.
Cornalba F, Disselkamp C, Scassola D, Helf C. 2024. Multi-objective reward generalization: improving performance of Deep Reinforcement Learning for applications in single-asset trading. Neural Computing and Applications. 36(2), 617–637.
Cornalba, Federico, et al. “Multi-Objective Reward Generalization: Improving Performance of Deep Reinforcement Learning for Applications in Single-Asset Trading.” Neural Computing and Applications, vol. 36, no. 2, Springer Nature, 2024, pp. 617–37, doi:10.1007/s00521-023-09033-7.
All files available under the following license(s):
Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
Main File(s)
File Name
Access Level
Open Access
Date Uploaded
2024-07-16
MD5 Checksum
04573d8e74c6119b97c2ca0a984e19a1
Sources
arXiv 2203.04579