This repository contains all the code used for running the experiments conducted in our work On the Robustness of Adversarially Trained Models against Uncertainty Attack, submitted to Pattern Recognition, August 2024.
The Colab notebook shows the over- and under-confidence attacks in action with a quick code snippet, visualizing the uncertainty span of any sample on any RobustBench model.
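To give a rough idea of what the notebook demonstrates, the following minimal sketch (not the notebook itself) loads a model from the RobustBench model zoo via robustbench.utils.load_model and computes the predictive entropy of a single CIFAR-10 sample, i.e. the uncertainty that the over- and under-confidence attacks push in opposite directions. The model name and dataset are placeholders; any RobustBench entry works the same way.

```python
import torch
import torch.nn.functional as F
from robustbench.utils import load_model
from torchvision import datasets, transforms

# Load a model from the RobustBench model zoo ("Standard" is the plain,
# non-robust CIFAR-10 baseline; any other zoo entry works the same way).
model = load_model(model_name="Standard", dataset="cifar10", threat_model="Linf")
model.eval()

# One CIFAR-10 test image, kept in [0, 1] as RobustBench models expect.
testset = datasets.CIFAR10(root="./data", train=False, download=True,
                           transform=transforms.ToTensor())
x, y = testset[0]
x = x.unsqueeze(0)  # add batch dimension

# Predictive entropy of the softmax output: the uncertainty measure that an
# over-confidence attack tries to shrink and an under-confidence attack inflates.
with torch.no_grad():
    probs = F.softmax(model(x), dim=1)
entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

print(f"label: {y}  prediction: {probs.argmax(dim=1).item()}  entropy: {entropy.item():.4f}")
```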
The script main_attack.py can be used to run a single experiment on a RobustBench model. It takes the following arguments: TBC
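Until the argument list above is documented, the sketch below only illustrates the kind of interface such a script typically exposes; every flag name here is hypothetical and should be checked against the script's actual --help output.

```python
# Hypothetical argument parser: none of these flag names are confirmed by the
# repository; they only illustrate the typical inputs of a single experiment.
import argparse

parser = argparse.ArgumentParser(
    description="Run one uncertainty-attack experiment on a RobustBench model")
parser.add_argument("--model", type=str, help="RobustBench model identifier (assumed)")
parser.add_argument("--dataset", type=str, default="cifar10", help="dataset name (assumed)")
parser.add_argument("--attack", choices=["overconfidence", "underconfidence"],
                    help="which uncertainty attack to run (assumed)")
parser.add_argument("--eps", type=float, help="L-infinity perturbation budget (assumed)")
args = parser.parse_args()
print(args)
```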