{"intvolume":" 36","license":"https://creativecommons.org/licenses/by/4.0/","quality_controlled":"1","language":[{"iso":"eng"}],"corr_author":"1","oa":1,"tmp":{"legal_code_url":"https://creativecommons.org/licenses/by/4.0/legalcode","short":"CC BY (4.0)","name":"Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)","image":"/images/cc_by.png"},"page":"617-637","file_date_updated":"2024-07-16T08:08:54Z","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","file":[{"file_name":"2024_NeuralCompApplications_Cornalba.pdf","relation":"main_file","access_level":"open_access","success":1,"date_created":"2024-07-16T08:08:54Z","file_id":"17251","checksum":"04573d8e74c6119b97c2ca0a984e19a1","creator":"dernst","content_type":"application/pdf","file_size":4412285,"date_updated":"2024-07-16T08:08:54Z"}],"date_published":"2024-01-01T00:00:00Z","publication":"Neural Computing and Applications","day":"01","publication_status":"published","doi":"10.1007/s00521-023-09033-7","author":[{"first_name":"Federico","orcid":"0000-0002-6269-5149","id":"2CEB641C-A400-11E9-A717-D712E6697425","last_name":"Cornalba","full_name":"Cornalba, Federico"},{"full_name":"Disselkamp, Constantin","last_name":"Disselkamp","first_name":"Constantin"},{"last_name":"Scassola","full_name":"Scassola, Davide","first_name":"Davide"},{"first_name":"Christopher","last_name":"Helf","full_name":"Helf, Christopher"}],"volume":36,"_id":"14451","external_id":{"arxiv":["2203.04579"]},"project":[{"grant_number":"F6504","name":"Taming Complexity in Partial Differential Systems","_id":"fc31cba2-9c52-11eb-aca3-ff467d239cd2"},{"name":"ISTplus - Postdoctoral Fellowships","_id":"260C2330-B435-11E9-9278-68D0E5697425","call_identifier":"H2020","grant_number":"754411"}],"publisher":"Springer Nature","issue":"2","year":"2024","ddc":["000"],"abstract":[{"text":"We investigate the potential of Multi-Objective, Deep Reinforcement Learning for stock and cryptocurrency single-asset trading: in particular, we consider a Multi-Objective algorithm which generalizes the reward functions and discount factor (i.e., these components are not specified a priori, but incorporated in the learning process). Firstly, using several important assets (BTCUSD, ETHUSDT, XRPUSDT, AAPL, SPY, NIFTY50), we verify the reward generalization property of the proposed Multi-Objective algorithm, and provide preliminary statistical evidence showing increased predictive stability over the corresponding Single-Objective strategy. Secondly, we show that the Multi-Objective algorithm has a clear edge over the corresponding Single-Objective strategy when the reward mechanism is sparse (i.e., when non-null feedback is infrequent over time). Finally, we discuss the generalization properties with respect to the discount factor. The entirety of our code is provided in open-source format.","lang":"eng"}],"publication_identifier":{"issn":["0941-0643"],"eissn":["1433-3058"]},"oa_version":"Published Version","department":[{"_id":"JuFi"}],"article_type":"original","ec_funded":1,"status":"public","has_accepted_license":"1","date_created":"2023-10-22T22:01:16Z","citation":{"mla":"Cornalba, Federico, et al. “Multi-Objective Reward Generalization: Improving Performance of Deep Reinforcement Learning for Applications in Single-Asset Trading.” Neural Computing and Applications, vol. 36, no. 2, Springer Nature, 2024, pp. 617–37, doi:10.1007/s00521-023-09033-7.","ieee":"F. Cornalba, C. Disselkamp, D. Scassola, and C. Helf, “Multi-objective reward generalization: improving performance of Deep Reinforcement Learning for applications in single-asset trading,” Neural Computing and Applications, vol. 36, no. 2. Springer Nature, pp. 617–637, 2024.","ama":"Cornalba F, Disselkamp C, Scassola D, Helf C. Multi-objective reward generalization: improving performance of Deep Reinforcement Learning for applications in single-asset trading. Neural Computing and Applications. 2024;36(2):617-637. doi:10.1007/s00521-023-09033-7","ista":"Cornalba F, Disselkamp C, Scassola D, Helf C. 2024. Multi-objective reward generalization: improving performance of Deep Reinforcement Learning for applications in single-asset trading. Neural Computing and Applications. 36(2), 617–637.","short":"F. Cornalba, C. Disselkamp, D. Scassola, C. Helf, Neural Computing and Applications 36 (2024) 617–637.","apa":"Cornalba, F., Disselkamp, C., Scassola, D., & Helf, C. (2024). Multi-objective reward generalization: improving performance of Deep Reinforcement Learning for applications in single-asset trading. Neural Computing and Applications. Springer Nature. https://doi.org/10.1007/s00521-023-09033-7","chicago":"Cornalba, Federico, Constantin Disselkamp, Davide Scassola, and Christopher Helf. “Multi-Objective Reward Generalization: Improving Performance of Deep Reinforcement Learning for Applications in Single-Asset Trading.” Neural Computing and Applications. Springer Nature, 2024. https://doi.org/10.1007/s00521-023-09033-7."},"type":"journal_article","title":"Multi-objective reward generalization: improving performance of Deep Reinforcement Learning for applications in single-asset trading","month":"01","date_updated":"2024-07-16T08:10:08Z","acknowledgement":"Open access funding provided by Università degli Studi di Trieste within the CRUI-CARE Agreement. Funding was provided by Austrian Science Fund (Grant No. F65), Horizon 2020 (Grant No. 754411) and Österreichische Forschungsförderungsgesellschaft.","article_processing_charge":"Yes (via OA deal)","scopus_import":"1"}