
Uncertainty Adversarial Robustness

Open In Colab Arxiv

This repository contains all the code used to run the experiments conducted in our work On the Robustness of Adversarially Trained Models against Uncertainty Attack, submitted to Pattern Recognition, August 2024.

graphical abstract

Quick Tests 🧪

From the Colab notebook you can see the over- and under-confidence attacks in action with a short code snippet, visualizing the uncertainty span of any sample on any RobustBench model. A minimal sketch of the idea is shown below.
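For illustration only, here is a minimal sketch of an uncertainty attack on a RobustBench model: a PGD-style loop that pushes the predictive entropy up (under-confidence) or down (over-confidence). This is not the exact attack implemented in this repository; the model name, epsilon, step size, and number of steps are assumptions chosen for the example.

import torch
import torch.nn.functional as F
from robustbench.data import load_cifar10
from robustbench.utils import load_model

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load an adversarially trained classifier from RobustBench and a few CIFAR-10 samples.
# "Carmon2019Unlabeled" is just an example entry from the L-inf CIFAR-10 model zoo.
model = load_model(model_name="Carmon2019Unlabeled", dataset="cifar10", threat_model="Linf")
model = model.to(device).eval()
x, y = load_cifar10(n_examples=8)
x, y = x.to(device), y.to(device)

def predictive_entropy(logits):
    """Shannon entropy of the softmax distribution, per sample."""
    log_p = F.log_softmax(logits, dim=1)
    return -(log_p.exp() * log_p).sum(dim=1)

def entropy_pgd(model, x, eps=8 / 255, alpha=2 / 255, steps=10, maximise=True):
    """PGD in the L-inf ball that increases (under-confidence) or decreases
    (over-confidence) the model's predictive entropy, depending on `maximise`."""
    x_adv = x.clone().detach()
    sign = 1.0 if maximise else -1.0
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = predictive_entropy(model(x_adv)).sum()
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + sign * alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep a valid image
    return x_adv.detach()

x_under = entropy_pgd(model, x, maximise=True)   # under-confidence attack
x_over = entropy_pgd(model, x, maximise=False)   # over-confidence attack
print("clean entropy:     ", predictive_entropy(model(x)).mean().item())
print("under-conf entropy:", predictive_entropy(model(x_under)).mean().item())
print("over-conf entropy: ", predictive_entropy(model(x_over)).mean().item())

Comparing the mean entropies of the clean and perturbed batches gives a quick sense of the uncertainty span that the Colab visualizes per sample.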

Reproducing the Experiments 🔬

The file main_attack.py can be used to run a single experiment on a RobustBench model. It takes a list of arguments: TBC
