Differentiable Abstract Interpretation for Provably Robust Neural Networks

Presented on January 24, 2019
Presenter: Mehmet

Preview

Mehmet will talk about Differentiable Abstract Interpretation for Provably Robust Neural Networks by Matthew Mirman, Timon Gehr, and Martin Vechev. The paper surveys existing perturbation attacks against neural networks and models them formally. It then defines abstract domains that represent sets of perturbed inputs, gives abstract transformers for the building blocks of neural networks, and uses the resulting abstracted networks to train networks that are provably robust against certain classes of attacks.
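To give a flavor of the idea, here is a minimal sketch of the simplest abstract domain the paper considers, the interval (Box) domain: an L∞ ball around an input is propagated through one affine layer and a ReLU, yielding sound bounds on every reachable output. The weights and the two helper functions below are illustrative assumptions for this note, not code from the paper.

```python
import numpy as np

def affine_box(lo, hi, W, b):
    """Interval abstract transformer for x -> W @ x + b."""
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # |W| maps interval radii soundly
    return new_center - new_radius, new_center + new_radius

def relu_box(lo, hi):
    """Interval abstract transformer for ReLU (elementwise, monotone)."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Example: a 2-D input perturbed by eps in the L-infinity norm.
x = np.array([1.0, -0.5])
eps = 0.1
lo, hi = x - eps, x + eps

W = np.array([[1.0, -2.0], [0.5, 1.0]])  # illustrative layer weights
b = np.array([0.0, 0.1])

lo, hi = affine_box(lo, hi, W, b)
lo, hi = relu_box(lo, hi)
# lo and hi now bound every output reachable from the perturbed input set.
```

Because these transformers are built from differentiable operations, a loss on the output bounds can be backpropagated through them, which is what lets the paper train for robustness rather than only verify it.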