
Leveraging the Bhattacharyya coefficient for uncertainty quantification in deep neural networks

Pieter Van Molle (UGent), Tim Verbelen (UGent), Bert Vankeirsbilck (UGent), Jonas De Vylder (UGent), Bart Diricx, Tom Kimpe, Pieter Simoens (UGent) and Bart Dhoedt (UGent)
(2021) NEURAL COMPUTING & APPLICATIONS. 33(16). p.10259-10275
Abstract
Modern deep learning models achieve state-of-the-art results for many tasks in computer vision, such as image classification and segmentation. However, their adoption in high-risk applications, e.g. automated medical diagnosis systems, happens at a slow pace. One of the main reasons for this is that regular neural networks do not capture uncertainty. To assess uncertainty in classification, several techniques have been proposed that cast neural networks in a Bayesian setting. Amongst these techniques, Monte Carlo dropout is by far the most popular. This particular technique estimates the moments of the output distribution through sampling with different dropout masks. The output uncertainty of a neural network is then approximated as the sample variance. In this paper, we highlight the limitations of such a variance-based uncertainty metric and propose a novel approach based on the overlap between the output distributions of different classes. We show that our technique leads to a better approximation of the inter-class output confusion. We illustrate the advantages of our method on benchmark datasets. In addition, we apply our metric to skin lesion classification, a real-world use case, and show that this yields promising results.
Keywords
Deep learning, Bayesian networks, Uncertainty, Dropout Monte Carlo
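
The abstract contrasts two uncertainty metrics: the sample variance of Monte Carlo dropout outputs, and the overlap between per-class output distributions, measured with the Bhattacharyya coefficient. The sketch below is a minimal illustration of that idea, not the authors' exact formulation: it assumes each class's sampled outputs can be approximated by a univariate Gaussian, and the `model(x, training=True)` interface, the helper names, and num_passes=50 are hypothetical.

import numpy as np

def bhattacharyya_coefficient(mu1, var1, mu2, var2):
    # Overlap between two univariate Gaussians: 1 = identical, 0 = disjoint.
    # BC = exp(-D_B), with the closed-form Bhattacharyya distance
    # D_B = (mu1 - mu2)^2 / (4 * (var1 + var2))
    #       + 0.5 * ln((var1 + var2) / (2 * sqrt(var1 * var2)))
    d_b = 0.25 * (mu1 - mu2) ** 2 / (var1 + var2) \
        + 0.5 * np.log((var1 + var2) / (2.0 * np.sqrt(var1 * var2)))
    return float(np.exp(-d_b))

def overlap_uncertainty(model, x, num_passes=50):
    # Monte Carlo dropout: keep dropout active at inference and collect one
    # output vector per stochastic forward pass. `model` is assumed to map an
    # input to a 1-D array of per-class outputs (Keras-style interface).
    samples = np.stack([np.asarray(model(x, training=True))
                        for _ in range(num_passes)])
    mu = samples.mean(axis=0)           # per-class sample mean
    var = samples.var(axis=0) + 1e-12   # per-class sample variance (kept positive)
    pred = int(mu.argmax())
    # Baseline metric: variance of the winning class, which ignores how close
    # the other classes' output distributions are.
    variance_uncertainty = float(var[pred])
    # Overlap metric: confusion between the winning class and its strongest
    # competitor, measured as the maximum Bhattacharyya coefficient.
    overlap = max(
        bhattacharyya_coefficient(mu[pred], var[pred], mu[c], var[c])
        for c in range(mu.shape[0]) if c != pred
    )
    return pred, variance_uncertainty, overlap

Intuitively, for a confident prediction the winning class's output distribution barely overlaps any other class and the coefficient approaches 0; for an ambiguous input it approaches 1 even when the winning class's own variance is small, which is the kind of limitation of the variance-based metric the abstract refers to.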

Downloads

  • 7874.pdf: full text (Published version) | open access | PDF | 9.56 MB

Citation

Please use this URL to cite or link to this publication:

MLA
Van Molle, Pieter, et al. “Leveraging the Bhattacharyya Coefficient for Uncertainty Quantification in Deep Neural Networks.” NEURAL COMPUTING & APPLICATIONS, vol. 33, no. 16, 2021, pp. 10259–75, doi:10.1007/s00521-021-05789-y.
APA
Van Molle, P., Verbelen, T., Vankeirsbilck, B., De Vylder, J., Diricx, B., Kimpe, T., … Dhoedt, B. (2021). Leveraging the Bhattacharyya coefficient for uncertainty quantification in deep neural networks. NEURAL COMPUTING & APPLICATIONS, 33(16), 10259–10275. https://doi.org/10.1007/s00521-021-05789-y
Chicago author-date
Van Molle, Pieter, Tim Verbelen, Bert Vankeirsbilck, Jonas De Vylder, Bart Diricx, Tom Kimpe, Pieter Simoens, and Bart Dhoedt. 2021. “Leveraging the Bhattacharyya Coefficient for Uncertainty Quantification in Deep Neural Networks.” NEURAL COMPUTING & APPLICATIONS 33 (16): 10259–75. https://doi.org/10.1007/s00521-021-05789-y.
Chicago author-date (all authors)
Van Molle, Pieter, Tim Verbelen, Bert Vankeirsbilck, Jonas De Vylder, Bart Diricx, Tom Kimpe, Pieter Simoens, and Bart Dhoedt. 2021. “Leveraging the Bhattacharyya Coefficient for Uncertainty Quantification in Deep Neural Networks.” NEURAL COMPUTING & APPLICATIONS 33 (16): 10259–10275. doi:10.1007/s00521-021-05789-y.
Vancouver
1. Van Molle P, Verbelen T, Vankeirsbilck B, De Vylder J, Diricx B, Kimpe T, et al. Leveraging the Bhattacharyya coefficient for uncertainty quantification in deep neural networks. NEURAL COMPUTING & APPLICATIONS. 2021;33(16):10259–75.
IEEE
[1] P. Van Molle et al., “Leveraging the Bhattacharyya coefficient for uncertainty quantification in deep neural networks,” NEURAL COMPUTING & APPLICATIONS, vol. 33, no. 16, pp. 10259–10275, 2021.
BibTeX
@article{8701260,
  abstract     = {{Modern deep learning models achieve state-of-the-art results for many tasks in computer vision, such as image classification and segmentation. However, their adoption in high-risk applications, e.g. automated medical diagnosis systems, happens at a slow pace. One of the main reasons for this is that regular neural networks do not capture uncertainty. To assess uncertainty in classification, several techniques have been proposed that cast neural networks in a Bayesian setting. Amongst these techniques, Monte Carlo dropout is by far the most popular. This particular technique estimates the moments of the output distribution through sampling with different dropout masks. The output uncertainty of a neural network is then approximated as the sample variance. In this paper, we highlight the limitations of such a variance-based uncertainty metric and propose a novel approach based on the overlap between the output distributions of different classes. We show that our technique leads to a better approximation of the inter-class output confusion. We illustrate the advantages of our method on benchmark datasets. In addition, we apply our metric to skin lesion classification, a real-world use case, and show that this yields promising results.}},
  author       = {{Van Molle, Pieter and Verbelen, Tim and Vankeirsbilck, Bert and De Vylder, Jonas and Diricx, Bart and Kimpe, Tom and Simoens, Pieter and Dhoedt, Bart}},
  issn         = {{0941-0643}},
  journal      = {{NEURAL COMPUTING & APPLICATIONS}},
  keywords     = {{Deep learning,Bayesian networks,Uncertainty,Dropout Monte Carlo}},
  language     = {{eng}},
  number       = {{16}},
  pages        = {{10259--10275}},
  title        = {{Leveraging the Bhattacharyya coefficient for uncertainty quantification in deep neural networks}},
  url          = {{https://doi.org/10.1007/s00521-021-05789-y}},
  volume       = {{33}},
  year         = {{2021}},
}
