Impact of adversarial examples on deep learning models for biomedical image segmentation

Author
Organization
Abstract
Deep learning models, which are increasingly being used in the field of medical image analysis, come with a major security risk, namely, their vulnerability to adversarial examples. Adversarial examples are carefully crafted samples that force machine learning models to make mistakes at test time. These malicious samples have been shown to be highly effective in misguiding classification tasks. However, research on the influence of adversarial examples on segmentation is significantly lacking. Given that a large portion of medical imaging problems are effectively segmentation problems, we analyze the impact of adversarial examples on deep learning-based image segmentation models. Specifically, we expose the vulnerability of these models to adversarial examples by proposing the Adaptive Segmentation Mask Attack (ASMA). This novel algorithm makes it possible to craft targeted adversarial examples that come with (1) a high intersection-over-union rate between the target adversarial mask and the prediction and (2) a perturbation that is, for the most part, invisible to the naked eye. We provide experimental and visual evidence through results obtained for the ISIC skin lesion segmentation challenge and the problem of glaucoma optic disc segmentation. An implementation of this algorithm and additional examples can be found at https://github.com/utkuozbulak/adaptive-segmentation-mask-attack.
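
The following PyTorch-style sketch (Python) illustrates the general idea behind such a targeted segmentation attack: iteratively perturb an input so that the model's predicted mask drifts toward an attacker-chosen target mask, and measure success via intersection over union. This is a minimal illustration under assumed names (model, image, target_mask, the step size), using a plain iterative signed-gradient update; it is not the authors' exact ASMA algorithm — see the linked repository for the actual implementation.

import torch
import torch.nn.functional as F

def targeted_segmentation_attack(model, image, target_mask,
                                 steps=100, step_size=1e-3):
    # Hypothetical sketch: iteratively nudge `image` so that the
    # segmentation model's prediction moves toward `target_mask`
    # (a LongTensor of class indices with shape (1, H, W)).
    # This is a generic iterative signed-gradient attack, NOT the
    # exact ASMA update rule from the paper.
    adv = image.clone().detach().requires_grad_(True)
    for _ in range(steps):
        logits = model(adv)                     # (1, C, H, W) class scores
        loss = F.cross_entropy(logits, target_mask)
        if adv.grad is not None:
            adv.grad.zero_()
        loss.backward()
        with torch.no_grad():
            adv -= step_size * adv.grad.sign()  # step toward the target mask
            adv.clamp_(0.0, 1.0)                # keep pixel values valid
    return adv.detach()

def iou(pred_mask, target_mask):
    # Intersection over union between two boolean masks; the paper
    # evaluates attacks by the IoU between the target adversarial
    # mask and the prediction on the adversarial example.
    intersection = (pred_mask & target_mask).sum().item()
    union = (pred_mask | target_mask).sum().item()
    return intersection / union if union > 0 else 1.0

Under this framing, an attack counts as successful when the IoU between the target mask and the prediction on the adversarial input is high while the perturbation (adv minus image) remains small enough to be imperceptible.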

Downloads

  • DS271 i.pdf: full text | open access | PDF | 1.96 MB
  • (...).pdf: full text | UGent only | PDF | 1.87 MB

Citation

Please use this URL to cite or link to this publication:

MLA
Özbulak, Utku, et al. “Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation.” MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2019, PT II, edited by Dinggang Shen et al., vol. 11765, Springer, 2019, pp. 300–08, doi:10.1007/978-3-030-32245-8_34.
APA
Özbulak, U., Van Messem, A., & De Neve, W. (2019). Impact of adversarial examples on deep learning models for biomedical image segmentation. In D. Shen, T. Liu, T. M. Peters, L. H. Staib, C. Essert, S. Zhou, … A. Khan (Eds.), MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2019, PT II (Vol. 11765, pp. 300–308). Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-030-32245-8_34
Chicago author-date
Özbulak, Utku, Arnout Van Messem, and Wesley De Neve. 2019. “Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation.” In MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2019, PT II, edited by Dinggang Shen, Tianming Liu, Terry M Peters, Lawrence H Staib, Caroline Essert, Sean Zhou, Pew-Thian Yap, and Ali Khan, 11765:300–308. Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-030-32245-8_34.
Chicago author-date (all authors)
Özbulak, Utku, Arnout Van Messem, and Wesley De Neve. 2019. “Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation.” In MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2019, PT II, edited by Dinggang Shen, Tianming Liu, Terry M Peters, Lawrence H Staib, Caroline Essert, Sean Zhou, Pew-Thian Yap, and Ali Khan, 11765:300–308. Cham, Switzerland: Springer. doi:10.1007/978-3-030-32245-8_34.
Vancouver
1. Özbulak U, Van Messem A, De Neve W. Impact of adversarial examples on deep learning models for biomedical image segmentation. In: Shen D, Liu T, Peters TM, Staib LH, Essert C, Zhou S, et al., editors. MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2019, PT II. Cham, Switzerland: Springer; 2019. p. 300–8.
IEEE
[1] U. Özbulak, A. Van Messem, and W. De Neve, “Impact of adversarial examples on deep learning models for biomedical image segmentation,” in MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2019, PT II, Shenzhen, PR China, 2019, vol. 11765, pp. 300–308.
BibTeX
@inproceedings{8632073,
  abstract     = {Deep learning models, which are increasingly being used in the field of medical image analysis, come with a major security risk, namely, their vulnerability to adversarial examples. Adversarial examples are carefully crafted samples that force machine learning models to make mistakes at test time. These malicious samples have been shown to be highly effective in misguiding classification tasks. However, research on the influence of adversarial examples on segmentation is significantly lacking. Given that a large portion of medical imaging problems are effectively segmentation problems, we analyze the impact of adversarial examples on deep learning-based image segmentation models. Specifically, we expose the vulnerability of these models to adversarial examples by proposing the Adaptive Segmentation Mask Attack (ASMA). This novel algorithm makes it possible to craft targeted adversarial examples that come with (1) a high intersection-over-union rate between the target adversarial mask and the prediction and (2) a perturbation that is, for the most part, invisible to the naked eye. We provide experimental and visual evidence through results obtained for the ISIC skin lesion segmentation challenge and the problem of glaucoma optic disc segmentation. An implementation of this algorithm and additional examples can be found at https://github.com/utkuozbulak/adaptive-segmentation-mask-attack.},
  author       = {Özbulak, Utku and Van Messem, Arnout and De Neve, Wesley},
  booktitle    = {MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2019, PT II},
  editor       = {Shen, Dinggang and Liu, Tianming and Peters, Terry M and Staib, Lawrence H and Essert, Caroline and Zhou, Sean and Yap, Pew-Thian and Khan, Ali},
  isbn         = {9783030322441},
  issn         = {0302-9743},
  language     = {eng},
  location     = {Shenzhen, PR China},
  pages        = {300--308},
  publisher    = {Springer},
  title        = {Impact of adversarial examples on deep learning models for biomedical image segmentation},
  url          = {http://dx.doi.org/10.1007/978-3-030-32245-8_34},
  volume       = {11765},
  year         = {2019},
}
