
Not all adversarial examples require a complex defense: identifying over-optimized adversarial examples with IQR-based logit thresholding

Abstract
Detecting adversarial examples currently stands as one of the biggest challenges in the field of deep learning. Adversarial attacks, which produce adversarial examples, increase the prediction likelihood of a target class for a particular data point. During this process, the adversarial example can be further optimized, even when it has already been wrongly classified with 100% confidence, thus making the adversarial example even more difficult to detect. For this kind of adversarial example, which we refer to as an over-optimized adversarial example, we discovered that the logits of the model provide solid clues on whether the data point at hand is adversarial or genuine. In this context, we first discuss the masking effect of the softmax function on the prediction made and explain why the logits of the model are more useful in detecting over-optimized adversarial examples. To identify this type of adversarial example in practice, we propose a non-parametric and computationally efficient method which relies on the interquartile range, with this method becoming more effective as the image resolution increases. We support our observations throughout the paper with detailed experiments for different datasets (MNIST, CIFAR-10, and ImageNet) and several architectures.
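The core idea in the abstract — that an over-optimized adversarial example leaves an extreme top logit, detectable with an interquartile-range outlier rule — can be illustrated with a minimal sketch. This is a generic Tukey-fence test on a logit vector, not the paper's exact procedure; the multiplier `k` and the example logit values are illustrative assumptions.

```python
import numpy as np

def iqr_logit_flag(logits, k=1.5):
    """Flag a prediction as potentially over-optimized when its top
    logit lies far above the upper IQR fence of the logit vector.

    A sketch of IQR-based thresholding (Tukey's fences); k is a
    free parameter, not a value taken from the paper.
    """
    logits = np.asarray(logits, dtype=float)
    q1, q3 = np.percentile(logits, [25, 75])  # first/third quartiles
    upper_fence = q3 + k * (q3 - q1)          # q3 + k * IQR
    return bool(logits.max() > upper_fence)

# Hypothetical benign logits: the top logit sits in a normal range.
benign = [1.2, 0.8, -0.3, 0.5, -1.0, 0.1, 0.9, -0.6, 0.4, -0.2]
# Hypothetical over-optimized logits: one value pushed far above the rest,
# even though softmax would report ~100% confidence in both regimes
# once the gap is large enough.
suspicious = [45.0, 0.8, -0.3, 0.5, -1.0, 0.1, 0.9, -0.6, 0.4, -0.2]
```

Because the rule only needs two percentiles of a single logit vector, it is non-parametric (no distribution is fitted) and adds negligible cost on top of a forward pass, matching the computational-efficiency claim in the abstract.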

Downloads

  • DS270 i.pdf — full text (Accepted manuscript) | open access | PDF | 700.57 KB
  • (...).pdf — full text (Published version) | UGent only | PDF | 679.10 KB

Citation


MLA
Özbulak, Utku, et al. “Not All Adversarial Examples Require a Complex Defense : Identifying over-Optimized Adversarial Examples with IQR-Based Logit Thresholding.” 2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), IEEE, 2019, doi:10.1109/ijcnn.2019.8851930.
APA
Özbulak, U., Van Messem, A., & De Neve, W. (2019). Not all adversarial examples require a complex defense : identifying over-optimized adversarial examples with IQR-based logit thresholding. In 2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN). Budapest, Hungary: IEEE. https://doi.org/10.1109/ijcnn.2019.8851930
Chicago author-date
Özbulak, Utku, Arnout Van Messem, and Wesley De Neve. 2019. “Not All Adversarial Examples Require a Complex Defense : Identifying over-Optimized Adversarial Examples with IQR-Based Logit Thresholding.” In 2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN). IEEE. https://doi.org/10.1109/ijcnn.2019.8851930.
Chicago author-date (all authors)
Özbulak, Utku, Arnout Van Messem, and Wesley De Neve. 2019. “Not All Adversarial Examples Require a Complex Defense : Identifying over-Optimized Adversarial Examples with IQR-Based Logit Thresholding.” In 2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN). IEEE. doi:10.1109/ijcnn.2019.8851930.
Vancouver
1. Özbulak U, Van Messem A, De Neve W. Not all adversarial examples require a complex defense : identifying over-optimized adversarial examples with IQR-based logit thresholding. In: 2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN). IEEE; 2019.
IEEE
[1] U. Özbulak, A. Van Messem, and W. De Neve, “Not all adversarial examples require a complex defense : identifying over-optimized adversarial examples with IQR-based logit thresholding,” in 2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), Budapest, Hungary, 2019.
@inproceedings{8632063,
  abstract     = {Detecting adversarial examples currently stands as one of the biggest challenges in the field of deep learning. Adversarial attacks, which produce adversarial examples, increase the prediction likelihood of a target class for a particular data point. During this process, the adversarial example can be further optimized, even when it has already been wrongly classified with 100% confidence, thus making the adversarial example even more difficult to detect. For this kind of adversarial example, which we refer to as an over-optimized adversarial example, we discovered that the logits of the model provide solid clues on whether the data point at hand is adversarial or genuine. In this context, we first discuss the masking effect of the softmax function on the prediction made and explain why the logits of the model are more useful in detecting over-optimized adversarial examples. To identify this type of adversarial example in practice, we propose a non-parametric and computationally efficient method which relies on the interquartile range, with this method becoming more effective as the image resolution increases. We support our observations throughout the paper with detailed experiments for different datasets (MNIST, CIFAR-10, and ImageNet) and several architectures.},
  articleno    = {N-19374},
  author       = {Özbulak, Utku and Van Messem, Arnout and De Neve, Wesley},
  booktitle    = {2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN)},
  isbn         = {9781728119854},
  issn         = {2161-4393},
  language     = {eng},
  location     = {Budapest, Hungary},
  pages        = {8},
  publisher    = {IEEE},
  title        = {Not all adversarial examples require a complex defense: identifying over-optimized adversarial examples with IQR-based logit thresholding},
  url          = {http://dx.doi.org/10.1109/ijcnn.2019.8851930},
  year         = {2019},
}
