Provable robustness against all adversarial $l_p$-perturbations for $p\geq 1$

Published: 20 Dec 2019, Last Modified: 05 May 2023 · ICLR 2020 Conference Blind Submission
TL;DR: We introduce a method to train models with provable robustness w.r.t. all $l_p$-norms for $p\geq 1$ simultaneously.
Abstract: In recent years, several adversarial attacks and defenses have been proposed. Often, seemingly robust models turn out to be non-robust when more sophisticated attacks are used. One way out of this dilemma is provable robustness guarantees. While provably robust models for specific $l_p$-perturbation models have been developed, we show that they do not come with any guarantee against other $l_q$-perturbations. We propose a new regularization scheme, MMR-Universal, for ReLU networks which enforces robustness w.r.t. $l_1$- \textit{and} $l_\infty$-perturbations, and we show how this leads to the first provably robust models w.r.t. any $l_p$-norm for $p\geq 1$.
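To make the interplay between the two guarantees concrete, the sketch below shows the elementary containment argument: a model certified on an $l_1$-ball of radius $\epsilon_1$ and an $l_\infty$-ball of radius $\epsilon_\infty$ is automatically certified on any $l_p$-ball contained in either one, via the standard inequalities $\|x\|_1 \leq d^{1-1/p}\|x\|_p$ and $\|x\|_\infty \leq \|x\|_p$. Note this is only the naive union bound, not MMR-Universal itself; the paper's guarantee is tighter because it certifies the convex hull of the two balls. The function name and example radii below are illustrative, not from the paper.

```python
# A minimal sketch (assumed names, not the paper's code): deriving an l_p
# certified radius for any p >= 1 from separate l_1 and l_infty certificates,
# using ||x||_1 <= d**(1 - 1/p) * ||x||_p  and  ||x||_inf <= ||x||_p.

def lp_certified_radius(eps_1: float, eps_inf: float, d: int, p: float) -> float:
    """Radius of an l_p ball guaranteed robust, given a certified l_1 radius
    eps_1 and a certified l_inf radius eps_inf in input dimension d."""
    if p == float("inf"):
        return eps_inf
    # B_p(r) is contained in B_1(eps_1) whenever r * d**(1 - 1/p) <= eps_1,
    # and in B_inf(eps_inf) whenever r <= eps_inf; either containment suffices,
    # so the certified l_p radius is the larger of the two.
    from_l1 = eps_1 * d ** (1.0 / p - 1.0)
    return max(from_l1, eps_inf)

# Illustrative example: MNIST-scale inputs (d = 784) with assumed certified
# radii eps_1 = 1.0 and eps_inf = 0.1; prints the resulting l_2 certificate.
print(lp_certified_radius(1.0, 0.1, d=784, p=2))
```

In this example the $l_\infty$ certificate dominates ($0.1$ versus $1.0/\sqrt{784} \approx 0.036$ from the $l_1$ side); the paper's convex-hull analysis strictly improves on this maximum for intermediate $p$.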
Keywords: adversarial robustness, provable guarantees
Code: https://github.com/fra31/mmr-universal