Real-world adversarial objects show large differences in attack performance under small scene perturbations.
The article explores how to evaluate real-world adversarial examples by introducing a new scoring system. The researchers found that small changes in a scene can greatly affect how well models detect adversarial objects. To evaluate these examples more rigorously, they developed a testbed and an accompanying score, and they highlight the need for more comprehensive reporting in this field.
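As a rough illustration of what such a score might look like, the sketch below computes an attack's evasion rate averaged over a set of perturbed scenes. This is a minimal sketch under assumptions, not the authors' actual metric; the names `perturbation_score`, `Detector`, and the toy brightness-based detector are hypothetical.

```python
from typing import Callable, Sequence

# Hypothetical types: a "scene" is whatever the detector consumes (e.g., a
# rendered frame), and a detector returns True if it still finds the target
# object despite the adversarial perturbation.
Scene = object
Detector = Callable[[Scene], bool]


def perturbation_score(detector: Detector,
                       perturbed_scenes: Sequence[Scene]) -> float:
    """Fraction of perturbed scenes in which the adversarial object evades
    detection (higher = more robust attack). Illustrative only."""
    if not perturbed_scenes:
        raise ValueError("need at least one perturbed scene")
    evasions = sum(1 for scene in perturbed_scenes if not detector(scene))
    return evasions / len(perturbed_scenes)


if __name__ == "__main__":
    # Toy example: a dummy detector that "detects" the object only when
    # scene brightness exceeds a threshold, evaluated across small
    # brightness perturbations (hypothetical values).
    def dummy_detector(scene_brightness: float) -> bool:
        return scene_brightness > 0.5

    scenes = [0.4, 0.45, 0.55, 0.6, 0.48]
    print(f"evasion rate: {perturbation_score(dummy_detector, scenes):.2f}")
```

Averaging over many scene variations, rather than reporting a single best-case result, is what makes such a score reflect the sensitivity to small perturbations described above.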