Realistic Large-Scale Benchmark for Adversarial Patch Attacks


The goal of our study is to make machine learning models robust against patch attacks. In particular, we will develop the first standardized benchmark for security against patch attacks under realistic threat models. Our benchmark will cover two important aspects often ignored in past work: (1) realistic patches, which must remain effective under multiple camera angles, lighting conditions, and similar variations, and (2) realistic constraints on the location of the patch. Building on the benchmark, we will develop stronger patch attacks and use them together with adversarial training to improve defenses.
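To make the first requirement concrete, patch attacks that must survive varying camera angles and lighting are commonly optimized with Expectation over Transformation (EOT): the patch is updated against the expected loss over random transformations. The sketch below illustrates this in PyTorch under stated assumptions; the `apply_patch` helper (which pastes the patch under a random pose/lighting change), the loop sizes, and the optimizer settings are illustrative placeholders, not our final design.

```python
import torch

def eot_patch_attack(model, images, labels, patch, apply_patch,
                     steps=200, lr=0.01, n_transforms=8):
    """Sketch of an Expectation-over-Transformation (EOT) patch attack.

    `apply_patch(images, patch)` is a hypothetical helper that pastes the
    patch onto the images under a random rotation, scale, and brightness
    change, approximating varying camera angles and lighting. Each call
    samples a fresh transformation.
    """
    patch = patch.clone().requires_grad_(True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for _ in range(n_transforms):
            adv = apply_patch(images, patch)  # random transform per call
            # Untargeted attack: maximize the loss on the true labels,
            # i.e. minimize its negative.
            loss = loss - torch.nn.functional.cross_entropy(model(adv), labels)
        opt.zero_grad()
        (loss / n_transforms).backward()
        opt.step()
        patch.data.clamp_(0, 1)  # keep pixel values printable / valid
    return patch.detach()
```

Averaging the loss over sampled transformations is what distinguishes this from a single-view digital attack: a patch that only fools the model at one pose tends not to survive the expectation.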


Patch attacks are a realistic threat, particularly for cyber-physical systems such as autonomous driving. Past work on defending against patch attacks has often made unrealistic assumptions: that the patch is square, axis-aligned, can be placed anywhere on the image, and is fully under the attacker's control (e.g., there is no noise or variation in lighting or pose). Creating a physically printed sign or sticker and testing it in the real world is ideal in terms of realism, but it is expensive and difficult to reproduce or standardize. We will build a more realistic benchmark, which we hope will put research in this area on more solid footing. We also conjecture that accounting for these realistic constraints may change attack strategies and enable stronger defenses.


We will combine multiple driving datasets, such as BDD100K and Mapillary, to build our benchmark; a few candidates are listed in Table 1. We will use the benchmark to investigate the effectiveness of patch attacks against object detection, semantic segmentation, and classification tasks, and then evaluate existing defenses on the same benchmark. The first version of our benchmark will include two threat models: (1) adversarial stickers on traffic signs and (2) adversarial billboards.
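The sticker threat model constrains where the patch may appear: it must lie entirely within the traffic sign's bounding box rather than anywhere in the frame. A minimal sketch of such a location constraint is shown below; the helper name, the box convention `(x0, y0, x1, y1)`, and the clipping policy are illustrative assumptions.

```python
import numpy as np

def paste_patch_in_box(image, patch, box, offset=(0, 0)):
    """Paste `patch` (H_p x W_p x 3, values in [0, 1]) into `image` so that
    it stays entirely inside `box` = (x0, y0, x1, y1), e.g. a traffic-sign
    bounding box. `offset` = (dx, dy) shifts the patch within the box and is
    clipped so the patch cannot escape the allowed region.
    """
    x0, y0, x1, y1 = box
    ph, pw = patch.shape[:2]
    if ph > y1 - y0 or pw > x1 - x0:
        raise ValueError("patch larger than the allowed region")
    # Clip the offset so the whole patch remains inside the box.
    ox = int(np.clip(offset[0], 0, (x1 - x0) - pw))
    oy = int(np.clip(offset[1], 0, (y1 - y0) - ph))
    out = image.copy()
    out[y0 + oy:y0 + oy + ph, x0 + ox:x0 + ox + pw] = patch
    return out
```

In a full attack, the offset itself could be optimized or randomized over, but the patch never leaves the sign region, unlike the unconstrained placement assumed in much prior work.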


This project is supported by Microsoft through a funding award and Azure cloud credits.