{"month":"04","article_number":"e0248940","publication_identifier":{"eissn":["19326203"]},"publication_status":"published","article_processing_charge":"No","has_accepted_license":"1","date_published":"2021-04-15T00:00:00Z","file":[{"date_created":"2021-05-04T13:22:19Z","content_type":"application/pdf","file_id":"9371","file_size":2768282,"relation":"main_file","date_updated":"2021-05-04T13:22:19Z","file_name":"2021_pone_Chalk.pdf","access_level":"open_access","creator":"kschuh","checksum":"c52da133850307d2031f552d998f00e8","success":1}],"author":[{"orcid":"0000-0001-7782-4436","full_name":"Chalk, Matthew J","id":"2BAAC544-F248-11E8-B48F-1D18A9856A87","first_name":"Matthew J","last_name":"Chalk"},{"first_name":"Gašper","id":"3D494DCA-F248-11E8-B48F-1D18A9856A87","full_name":"Tkačik, Gašper","orcid":"0000-0002-6699-1455","last_name":"Tkačik"},{"last_name":"Marre","full_name":"Marre, Olivier","first_name":"Olivier"}],"publisher":"Public Library of Science","oa_version":"Published Version","language":[{"iso":"eng"}],"external_id":{"isi":["000641474900072"],"pmid":["33857170"]},"day":"15","status":"public","publication":"PLoS ONE","_id":"9362","scopus_import":"1","date_created":"2021-05-02T22:01:28Z","ddc":["570"],"volume":16,"user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","oa":1,"issue":"4","quality_controlled":"1","abstract":[{"lang":"eng","text":"A central goal in systems neuroscience is to understand the functions performed by neural circuits. Previous top-down models addressed this question by comparing the behaviour of an ideal model circuit, optimised to perform a given function, with neural recordings. However, this requires guessing in advance what function is being performed, which may not be possible for many neural systems. To address this, we propose an inverse reinforcement learning (RL) framework for inferring the function performed by a neural network from data. 
We assume that the responses of each neuron in a network are optimised so as to drive the network towards ‘rewarded’ states that are desirable for performing a given function. We then show how one can use inverse RL to infer the reward function optimised by the network from observing its responses. This inferred reward function can be used to predict how the neural network should adapt its dynamics to perform the same function when the external environment or network structure changes. This could lead to theoretical predictions about how neural network dynamics adapt to deal with cell death and/or varying sensory stimulus statistics."}],"department":[{"_id":"GaTk"}],"isi":1,"file_date_updated":"2021-05-04T13:22:19Z","year":"2021","acknowledgement":"The authors would like to thank Ulisse Ferrari for useful discussions and feedback.","citation":{"apa":"Chalk, M. J., Tkačik, G., & Marre, O. (2021). Inferring the function performed by a recurrent neural network. PLoS ONE. Public Library of Science. https://doi.org/10.1371/journal.pone.0248940","ama":"Chalk MJ, Tkačik G, Marre O. Inferring the function performed by a recurrent neural network. PLoS ONE. 2021;16(4). doi:10.1371/journal.pone.0248940","ieee":"M. J. Chalk, G. Tkačik, and O. Marre, “Inferring the function performed by a recurrent neural network,” PLoS ONE, vol. 16, no. 4. Public Library of Science, 2021.","short":"M.J. Chalk, G. Tkačik, O. Marre, PLoS ONE 16 (2021).","ista":"Chalk MJ, Tkačik G, Marre O. 2021. Inferring the function performed by a recurrent neural network. PLoS ONE. 16(4), e0248940.","chicago":"Chalk, Matthew J, Gašper Tkačik, and Olivier Marre. “Inferring the Function Performed by a Recurrent Neural Network.” PLoS ONE. Public Library of Science, 2021. https://doi.org/10.1371/journal.pone.0248940.","mla":"Chalk, Matthew J., et al. “Inferring the Function Performed by a Recurrent Neural Network.” PLoS ONE, vol. 16, no. 4, e0248940, Public Library of Science, 2021, doi:10.1371/journal.pone.0248940."},"doi":"10.1371/journal.pone.0248940","title":"Inferring the function performed by a recurrent neural network","intvolume":" 16","type":"journal_article","pmid":1,"article_type":"original","tmp":{"image":"/images/cc_by.png","short":"CC BY (4.0)","name":"Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)","legal_code_url":"https://creativecommons.org/licenses/by/4.0/legalcode"},"date_updated":"2023-10-18T08:17:42Z"}