Neural Architecture Design and Robustness:
A Dataset

Steffen Jung1,2,*, Jovita Lukasik1,*, Margret Keuper1,2

1 Max Planck Institute for Informatics, 2 University of Siegen

Paper | Video | Poster | Code | Data | OpenReview

Abstract: Deep learning models have proven to be successful in a wide range of machine learning tasks. Yet, they are often highly sensitive to perturbations on the input data which can lead to incorrect decisions with high confidence, hampering their deployment for practical use-cases. Thus, finding architectures that are (more) robust against perturbations has received much attention in recent years. Just like the search for well-performing architectures in terms of clean accuracy, this usually involves a tedious trial-and-error process with one additional challenge: the evaluation of a network's robustness is significantly more expensive than its evaluation for clean accuracy. Thus, the aim of this paper is to facilitate better streamlined research on architectural design choices with respect to their impact on robustness as well as, for example, the evaluation of surrogate measures for robustness. We therefore borrow one of the most commonly considered search spaces for neural architecture search for image classification, NAS-Bench-201, which contains a manageable size of 6466 non-isomorphic network designs. We evaluate all these networks on a range of common adversarial attacks and corruption types and introduce a database on neural architecture design and robustness evaluations. We further present three exemplary use cases of this dataset, in which we (i) benchmark robustness measurements based on Jacobian and Hessian matrices for their robustness predictability, (ii) perform neural architecture search on robust accuracies, and (iii) provide an initial analysis of how architectural design choices affect robustness. We find that carefully crafting the topology of a network can have substantial impact on its robustness, where networks with the same parameter count range in mean adversarial robust accuracy from 20%-41%.

NAS-Bench-201 [1] Search Space


Figure: The NAS-Bench-201 search space is composed of a fixed macro architecture (top) and different configurations of operations in cells (highlighted in gray). A cell consists of 4 nodes (feature maps) and 6 edges (possible operations) between them. The set of possible operations includes 1x1/3x3 convolutions, 3x3 average pooling, skip connections, and zeroize (dropping the edge). Hence, the search space contains 5^6 = 15,625 possible architectures, of which 6,466 are non-isomorphic.
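The raw size of the search space follows directly from assigning one of the 5 operations to each of the 6 cell edges. A minimal sketch that enumerates these assignments (the operation names follow the usual NAS-Bench-201 naming; reducing the 15,625 raw cells to the 6,466 non-isomorphic ones requires deduplicating functionally equivalent graphs, which is not attempted here):

```python
from itertools import product

# The 5 candidate operations on each of the 6 cell edges in NAS-Bench-201.
OPS = ["none", "skip_connect", "nor_conv_1x1", "nor_conv_3x3", "avg_pool_3x3"]

def all_cells():
    """Yield every assignment of one operation to each of the 6 edges."""
    return product(OPS, repeat=6)

configs = list(all_cells())
print(len(configs))  # 5**6 = 15625
```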

Interactive Evaluation

You can browse the accuracy evaluation results by changing the operations of the cell below. Colors indicate whether your change improves (green) or deteriorates (red) accuracy.






Datasets: CIFAR-10 | CIFAR-100 | ImageNet16-120

Evaluations:
  • Clean accuracy
  • FGSM (ε = 1.0)
  • PGD (ε = 1.0)
  • APGD-CE (ε = 1.0)
  • Square (ε = 1.0)
  • Mean corruption accuracy
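FGSM, the first attack listed above, perturbs an input by a single signed-gradient step of size ε: x_adv = x + ε · sign(∇_x L). A minimal pure-Python sketch of that update on a flat list of pixel values (illustrative only; the dataset's evaluations compute the gradient through the full network, which is not reproduced here):

```python
def sign(v):
    """Sign of a scalar gradient component: -1, 0, or +1."""
    return (v > 0) - (v < 0)

def fgsm_step(x, grad, eps):
    """One FGSM step: move each coordinate by eps in the direction that
    increases the loss, then clamp back into the valid pixel range [0, 1]."""
    return [min(1.0, max(0.0, xi + eps * sign(gi))) for xi, gi in zip(x, grad)]

# Toy usage: three pixels, a hand-picked gradient, step size 0.1.
adv = fgsm_step([0.2, 0.5, 0.8], [0.3, -0.1, 0.0], 0.1)
print(adv)
```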

Citation

@inproceedings{Jung2023,
  author    = {Steffen Jung and Jovita Lukasik and Margret Keuper},
  title     = {Neural Architecture Design and Robustness: A Dataset},
  booktitle = {ICLR},
  year      = {2023}
}

References

  • [1] Dong, X., & Yang, Y. NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search. ICLR 2020.