{"file":[{"file_id":"11378","content_type":"application/zip","date_created":"2022-05-13T12:33:26Z","checksum":"8eefa9c7c10ca7e1a2ccdd731962a645","file_size":13210143,"creator":"mlechner","relation":"source_file","date_updated":"2022-05-13T12:49:00Z","file_name":"src.zip","access_level":"closed"},{"file_id":"11382","date_created":"2022-05-16T08:02:28Z","content_type":"application/pdf","checksum":"1b9e1e5a9a83ed9d89dad2f5133dc026","file_size":2732536,"creator":"mlechner","relation":"main_file","date_updated":"2022-05-17T15:19:39Z","file_name":"thesis_main-a2.pdf","access_level":"open_access"}],"oa":1,"_id":"11362","file_date_updated":"2022-05-17T15:19:39Z","keyword":["neural networks","verification","machine learning"],"author":[{"id":"3DC22916-F248-11E8-B48F-1D18A9856A87","last_name":"Lechner","full_name":"Lechner, Mathias","first_name":"Mathias"}],"day":"12","article_processing_charge":"No","license":"https://creativecommons.org/licenses/by-nd/4.0/","doi":"10.15479/at:ista:11362","has_accepted_license":"1","degree_awarded":"PhD","type":"dissertation","department":[{"_id":"GradSch"},{"_id":"ToHe"}],"ddc":["004"],"status":"public","supervisor":[{"first_name":"Thomas A","full_name":"Henzinger, Thomas A","last_name":"Henzinger","orcid":"0000-0002-2985-7724","id":"40876CD8-F248-11E8-B48F-1D18A9856A87"}],"citation":{"ama":"Lechner M. Learning verifiable representations. 2022. doi:10.15479/at:ista:11362","ista":"Lechner M. 2022. Learning verifiable representations. Institute of Science and Technology Austria.","mla":"Lechner, Mathias. Learning Verifiable Representations. Institute of Science and Technology Austria, 2022, doi:10.15479/at:ista:11362.","ieee":"M. Lechner, “Learning verifiable representations,” Institute of Science and Technology Austria, 2022.","chicago":"Lechner, Mathias. “Learning Verifiable Representations.” Institute of Science and Technology Austria, 2022. https://doi.org/10.15479/at:ista:11362.","short":"M. Lechner, Learning Verifiable Representations, Institute of Science and Technology Austria, 2022.","apa":"Lechner, M. (2022). Learning verifiable representations. Institute of Science and Technology Austria. https://doi.org/10.15479/at:ista:11362"},"page":"124","tmp":{"name":"Creative Commons Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0)","image":"/image/cc_by_nd.png","legal_code_url":"https://creativecommons.org/licenses/by-nd/4.0/legalcode","short":"CC BY-ND (4.0)"},"project":[{"name":"The Wittgenstein Prize","_id":"25F42A32-B435-11E9-9278-68D0E5697425","call_identifier":"FWF","grant_number":"Z211"},{"call_identifier":"H2020","_id":"62781420-2b32-11ec-9570-8d9b63373d4d","name":"Vigilant Algorithmic Monitoring of Software","grant_number":"101020093"}],"alternative_title":["ISTA Thesis"],"language":[{"iso":"eng"}],"date_published":"2022-05-12T00:00:00Z","user_id":"8b945eb4-e2f2-11eb-945a-df72226e66a9","date_created":"2022-05-12T07:14:01Z","abstract":[{"lang":"eng","text":"Deep learning has enabled breakthroughs in challenging computing problems and has emerged as the standard problem-solving tool for computer vision and natural language processing tasks.\r\nOne exception to this trend is safety-critical tasks where robustness and resilience requirements contradict the black-box nature of neural networks. \r\nTo deploy deep learning methods for these tasks, it is vital to provide guarantees on neural network agents' safety and robustness criteria. 
\r\nThis can be achieved by developing formal verification methods to verify the safety and robustness properties of neural networks.\r\n\r\nOur goal is to design, develop, and assess safety verification methods for neural networks to improve their reliability and trustworthiness in real-world applications.\r\nThis thesis establishes techniques for the verification of compressed and adversarially trained models as well as the design of novel neural networks for verifiably safe decision-making.\r\n\r\nFirst, we formalize the problem of verifying quantized neural networks. Quantization is a technique that trades numerical precision for computational efficiency when running a neural network and is widely adopted in industry.\r\nWe show that neglecting the reduced precision when verifying a neural network can lead to wrong conclusions about the robustness and safety of the network, highlighting that novel techniques for quantized network verification are necessary. We introduce several bit-exact verification methods explicitly designed for quantized neural networks and experimentally confirm on realistic networks that the network's robustness and other formal properties are affected by quantization.\r\n\r\nFurthermore, we perform a case study providing evidence that adversarial training, a standard technique for making neural networks more robust, has detrimental effects on the network's performance. This robustness-accuracy tradeoff has previously been studied in terms of the accuracy obtained on classification datasets where each data point is independent of all other data points. In contrast, we investigate the tradeoff empirically in robot learning settings where both high accuracy and high robustness are desirable.\r\nOur results suggest that the negative side effects of adversarial training outweigh its robustness benefits in practice.\r\n\r\nFinally, we consider the problem of verifying safety when running a Bayesian neural network policy in a feedback loop with systems over the infinite time horizon. Bayesian neural networks are probabilistic models for learning uncertainties in the data and are therefore often used in robotic and healthcare applications where data is inherently stochastic.\r\nWe introduce a method for recalibrating Bayesian neural networks so that they yield probability distributions over safe decisions only.\r\nOur method learns a safety certificate that determines which decisions are safe in every possible state of the system, thereby guaranteeing safety over the infinite time horizon.\r\nWe demonstrate the effectiveness of our approach on a series of reinforcement learning benchmarks."}],"date_updated":"2023-08-17T06:58:38Z","ec_funded":1,"publication_status":"published","title":"Learning verifiable representations","publisher":"Institute of Science and Technology Austria","month":"05","publication_identifier":{"isbn":["978-3-99078-017-6"]},"related_material":{"record":[{"id":"10665","status":"public","relation":"part_of_dissertation"},{"relation":"part_of_dissertation","status":"public","id":"10667"},{"relation":"part_of_dissertation","status":"public","id":"11366"},{"id":"7808","status":"public","relation":"part_of_dissertation"},{"status":"public","relation":"part_of_dissertation","id":"10666"}]},"oa_version":"Published Version","year":"2022"}