{"month":"10","publication":"Nature Machine Intelligence","citation":{"mla":"Lechner, Mathias, et al. “Neural Circuit Policies Enabling Auditable Autonomy.” Nature Machine Intelligence, vol. 2, Springer Nature, 2020, pp. 642–52, doi:10.1038/s42256-020-00237-3.","ista":"Lechner M, Hasani R, Amini A, Henzinger TA, Rus D, Grosu R. 2020. Neural circuit policies enabling auditable autonomy. Nature Machine Intelligence. 2, 642–652.","short":"M. Lechner, R. Hasani, A. Amini, T.A. Henzinger, D. Rus, R. Grosu, Nature Machine Intelligence 2 (2020) 642–652.","ieee":"M. Lechner, R. Hasani, A. Amini, T. A. Henzinger, D. Rus, and R. Grosu, “Neural circuit policies enabling auditable autonomy,” Nature Machine Intelligence, vol. 2. Springer Nature, pp. 642–652, 2020.","chicago":"Lechner, Mathias, Ramin Hasani, Alexander Amini, Thomas A Henzinger, Daniela Rus, and Radu Grosu. “Neural Circuit Policies Enabling Auditable Autonomy.” Nature Machine Intelligence. Springer Nature, 2020. https://doi.org/10.1038/s42256-020-00237-3.","ama":"Lechner M, Hasani R, Amini A, Henzinger TA, Rus D, Grosu R. Neural circuit policies enabling auditable autonomy. Nature Machine Intelligence. 2020;2:642-652. doi:10.1038/s42256-020-00237-3","apa":"Lechner, M., Hasani, R., Amini, A., Henzinger, T. A., Rus, D., & Grosu, R. (2020). Neural circuit policies enabling auditable autonomy. Nature Machine Intelligence. Springer Nature. https://doi.org/10.1038/s42256-020-00237-3"},"volume":2,"scopus_import":"1","publication_status":"published","isi":1,"status":"public","related_material":{"link":[{"description":"News on IST Homepage","url":"https://ist.ac.at/en/news/new-deep-learning-models/","relation":"press_release"}]},"user_id":"4359f0d1-fa6c-11eb-b949-802e58b17ae8","date_updated":"2023-08-22T10:36:06Z","external_id":{"isi":["000583337200011"]},"author":[{"full_name":"Lechner, Mathias","first_name":"Mathias","id":"3DC22916-F248-11E8-B48F-1D18A9856A87","last_name":"Lechner"},{"full_name":"Hasani, Ramin","first_name":"Ramin","last_name":"Hasani"},{"full_name":"Amini, Alexander","first_name":"Alexander","last_name":"Amini"},{"first_name":"Thomas A","id":"40876CD8-F248-11E8-B48F-1D18A9856A87","orcid":"0000-0002-2985-7724","last_name":"Henzinger","full_name":"Henzinger, Thomas A"},{"full_name":"Rus, Daniela","last_name":"Rus","first_name":"Daniela"},{"last_name":"Grosu","first_name":"Radu","full_name":"Grosu, Radu"}],"quality_controlled":"1","doi":"10.1038/s42256-020-00237-3","year":"2020","_id":"8679","article_type":"original","publisher":"Springer Nature","type":"journal_article","date_published":"2020-10-01T00:00:00Z","oa_version":"None","abstract":[{"text":"A central goal of artificial intelligence in high-stakes decision-making applications is to design a single algorithm that simultaneously expresses generalizability by learning coherent representations of their world and interpretable explanations of its dynamics. Here, we combine brain-inspired neural computation principles and scalable deep learning architectures to design compact neural controllers for task-specific compartments of a full-stack autonomous vehicle control system. We discover that a single algorithm with 19 control neurons, connecting 32 encapsulated input features to outputs by 253 synapses, learns to map high-dimensional inputs into steering commands. This system shows superior generalizability, interpretability and robustness compared with orders-of-magnitude larger black-box learning systems. 
The obtained neural agents enable high-fidelity autonomy for task-specific parts of a complex autonomous system.","lang":"eng"}],"department":[{"_id":"ToHe"}],"article_processing_charge":"No","title":"Neural circuit policies enabling auditable autonomy","date_created":"2020-10-19T13:46:06Z","language":[{"iso":"eng"}],"publication_identifier":{"eissn":["2522-5839"]},"intvolume":" 2","project":[{"call_identifier":"FWF","grant_number":"Z211","_id":"25F42A32-B435-11E9-9278-68D0E5697425","name":"The Wittgenstein Prize"}],"day":"01","page":"642-652"}