Learning verifiable representations
Lechner M. 2022. Learning verifiable representations. Institute of Science and Technology Austria.
Thesis | PhD | Published | English
Corresponding author has ISTA affiliation
Series Title: ISTA Thesis
Abstract
Deep learning has enabled breakthroughs in challenging computing problems and has emerged as the standard problem-solving tool for computer vision and natural language processing tasks.
One exception to this trend are safety-critical tasks, where robustness and resilience requirements conflict with the black-box nature of neural networks.
To deploy deep learning methods for these tasks, it is vital to provide guarantees that neural network agents satisfy safety and robustness criteria.
This can be achieved by developing formal verification methods to verify the safety and robustness properties of neural networks.
Our goal is to design, develop and assess safety verification methods for neural networks to improve their reliability and trustworthiness in real-world applications.
This thesis establishes techniques for the verification of compressed and adversarially trained models as well as the design of novel neural networks for verifiably safe decision-making.
First, we establish the problem of verifying quantized neural networks. Quantization is a technique that trades numerical precision for computational efficiency when running a neural network and is widely adopted in industry.
We show that neglecting the reduced precision when verifying a neural network can lead to wrong conclusions about the robustness and safety of the network, highlighting that novel techniques for quantized network verification are necessary. We introduce several bit-exact verification methods explicitly designed for quantized neural networks and experimentally confirm on realistic networks that the network's robustness and other formal properties are affected by quantization.
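As a toy illustration of this point (not the thesis's verification method, and with made-up weights), the sketch below shows how rounding a linear classifier's parameters to fixed-point values can flip its decision on an input near the decision boundary, so a conclusion verified on the real-valued model need not hold for the quantized one:

```python
def classify(weights, bias, x):
    """Linear classifier: returns 1 if w.x + b >= 0, else 0."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s >= 0 else 0

def quantize(value, step=0.25):
    """Round to the nearest multiple of `step` (coarse fixed-point quantization)."""
    return round(value / step) * step

# Hypothetical parameters and an input close to the decision boundary.
weights = [0.6, -0.4]
bias = 0.11
x = [0.5, 1.0]

q_weights = [quantize(w) for w in weights]
q_bias = quantize(bias)

full = classify(weights, bias, x)       # real-valued model: accepts x
quant = classify(q_weights, q_bias, x)  # quantized model: rejects x
print(full, quant)
```

Here `full` is 1 while `quant` is 0: a robustness property proved against the real-valued parameters silently fails once the network runs at reduced precision.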
Furthermore, we perform a case study providing evidence that adversarial training, a standard technique for making neural networks more robust, has detrimental effects on the network's performance. This robustness-accuracy tradeoff has previously been studied with respect to the accuracy obtained on classification datasets where each data point is independent of all other data points. In contrast, we investigate the tradeoff empirically in robot learning settings, where both high accuracy and high robustness are desirable.
Our results suggest that the negative side-effects of adversarial training outweigh its robustness benefits in practice.
Finally, we consider the problem of verifying safety when running a Bayesian neural network policy in a feedback loop with a system over an infinite time horizon. Bayesian neural networks are probabilistic models for learning uncertainties in the data and are therefore often used in robotic and healthcare applications where data is inherently stochastic.
We introduce a method for recalibrating Bayesian neural networks so that they yield probability distributions over safe decisions only.
Our method learns a safety certificate that determines which decisions are safe in every possible state of the system, thereby guaranteeing safety over the infinite time horizon.
We demonstrate the effectiveness of our approach on a series of reinforcement learning benchmarks.
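The recalibration idea can be sketched in a few lines. This is an assumed, simplified reading of "yield probability distributions over safe decisions only", not the thesis's algorithm: given the policy's action distribution in some state and a (here hypothetical) set of actions the certificate marks safe, mass on unsafe actions is removed and the remainder renormalized:

```python
def recalibrate(action_probs, safe):
    """Restrict a stochastic policy to certified-safe actions.

    action_probs: dict mapping action -> probability (sums to 1)
    safe: set of actions certified safe in the current state
    """
    masked = {a: p for a, p in action_probs.items() if a in safe}
    total = sum(masked.values())
    if total == 0:
        raise ValueError("no certified-safe action available in this state")
    # Renormalize so the safe actions again form a probability distribution.
    return {a: p / total for a, p in masked.items()}

# Hypothetical example: the raw policy puts 30% mass on an unsafe action.
raw = {"left": 0.5, "right": 0.3, "brake": 0.2}
certified = {"left", "brake"}  # assumed certificate output for this state
safe_policy = recalibrate(raw, certified)
print(safe_policy)  # unsafe "right" removed; remaining mass renormalized
```

The recalibrated policy never samples an uncertified action, which is the property the infinite-horizon safety argument rests on.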
Publishing Year: 2022
Date Published: 2022-05-12
Publisher: Institute of Science and Technology Austria
Page: 124
Cite this
Lechner M. Learning verifiable representations. 2022. doi:10.15479/at:ista:11362
Lechner, M. (2022). Learning verifiable representations. Institute of Science and Technology Austria. https://doi.org/10.15479/at:ista:11362
Lechner, Mathias. “Learning Verifiable Representations.” Institute of Science and Technology Austria, 2022. https://doi.org/10.15479/at:ista:11362.
M. Lechner, “Learning verifiable representations,” Institute of Science and Technology Austria, 2022, doi: 10.15479/at:ista:11362.
Lechner M. 2022. Learning verifiable representations. Institute of Science and Technology Austria.
Lechner, Mathias. Learning Verifiable Representations. Institute of Science and Technology Austria, 2022, doi:10.15479/at:ista:11362.
All files available under the following license(s):
Creative Commons Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0)
Main File(s)
File Name: thesis_main-a2.pdf (2.73 MB)
Access Level: Open Access
Date Uploaded: 2022-05-16
MD5 Checksum: 1b9e1e5a9a83ed9d89dad2f5133dc026
Source File
File Name: src.zip (13.21 MB)
Access Level: Closed Access
Date Uploaded: 2022-05-13
MD5 Checksum: 8eefa9c7c10ca7e1a2ccdd731962a645
Material in ISTA:
Part of this Dissertation