
Explainability through uncertainty: trustworthy decision-making with neural networks

Arthur Thuy (UGent) and Dries Benoit (UGent)
Abstract
Uncertainty is a key feature of any machine learning model and is particularly important in neural networks, which tend to be overconfident. This overconfidence is worrying under distribution shifts, where the model performance silently degrades as the data distribution diverges from the training data distribution. Uncertainty estimation offers a solution to overconfident models, communicating when the output should (not) be trusted. Although methods for uncertainty estimation have been developed, they have not been explicitly linked to the field of explainable artificial intelligence (XAI). Furthermore, literature in operations research ignores the actionability component of uncertainty estimation and does not consider distribution shifts. This work proposes a general uncertainty framework, with contributions being threefold: (i) uncertainty estimation in ML models is positioned as an XAI technique, giving local and model-specific explanations; (ii) classification with rejection is used to reduce misclassifications by bringing a human expert in the loop for uncertain observations; (iii) the framework is applied to a case study on neural networks in educational data mining subject to distribution shifts. Uncertainty as XAI improves the model's trustworthiness in downstream decision-making tasks, giving rise to more actionable and robust machine learning systems in operations research.
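Contribution (ii) of the abstract — classification with rejection, deferring uncertain observations to a human expert — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes stochastic forward passes (e.g. from Monte Carlo Dropout) are already available as an array of per-pass class probabilities, and the function names and the entropy threshold are illustrative.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of the mean predicted distribution (total uncertainty).

    probs: array of shape (n_passes, n_obs, n_classes), one softmax
    output per stochastic forward pass (e.g. MC Dropout samples).
    """
    mean_probs = probs.mean(axis=0)
    return -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=-1)

def classify_with_rejection(probs, threshold):
    """Predict the argmax class, but reject (label -1) observations
    whose predictive entropy exceeds the threshold, so a human expert
    can handle them instead."""
    mean_probs = probs.mean(axis=0)
    preds = mean_probs.argmax(axis=-1)
    preds[predictive_entropy(probs) > threshold] = -1  # defer to expert
    return preds

# Toy example: 3 stochastic passes, 2 observations, 2 classes.
# The first observation gets consistent predictions (low entropy);
# the second is ambiguous and is rejected.
probs = np.array([
    [[0.9, 0.1], [0.6, 0.4]],
    [[0.8, 0.2], [0.4, 0.6]],
    [[0.9, 0.1], [0.5, 0.5]],
])
print(classify_with_rejection(probs, threshold=0.5))
```

The threshold trades off coverage against error rate: lowering it rejects more observations, reducing misclassifications at the cost of more expert workload.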
Keywords
Management Science and Operations Research, Educational Data Mining, Uncertainty Estimation, Neural Networks, Decision support systems, Explainable artificial intelligence, Monte Carlo Dropout, Deep Ensembles, Distribution shift, PREDICTION, ANALYTICS

Downloads

  • Explainability through uncertainty Trustworthy decision-making with neural networks.pdf — full text (Accepted manuscript) | open access | PDF | 940.49 KB
  • (...).pdf — full text (Published version) | UGent only | PDF | 1.15 MB

Citation


MLA
Thuy, Arthur, and Dries Benoit. “Explainability through Uncertainty: Trustworthy Decision-Making with Neural Networks.” EUROPEAN JOURNAL OF OPERATIONAL RESEARCH, vol. 317, no. 2, 2024, pp. 330–40, doi:10.1016/j.ejor.2023.09.009.
APA
Thuy, A., & Benoit, D. (2024). Explainability through uncertainty: Trustworthy decision-making with neural networks. EUROPEAN JOURNAL OF OPERATIONAL RESEARCH, 317(2), 330–340. https://doi.org/10.1016/j.ejor.2023.09.009
Chicago author-date
Thuy, Arthur, and Dries Benoit. 2024. “Explainability through Uncertainty: Trustworthy Decision-Making with Neural Networks.” EUROPEAN JOURNAL OF OPERATIONAL RESEARCH 317 (2): 330–40. https://doi.org/10.1016/j.ejor.2023.09.009.
Chicago author-date (all authors)
Thuy, Arthur, and Dries Benoit. 2024. “Explainability through Uncertainty: Trustworthy Decision-Making with Neural Networks.” EUROPEAN JOURNAL OF OPERATIONAL RESEARCH 317 (2): 330–340. doi:10.1016/j.ejor.2023.09.009.
Vancouver
Thuy A, Benoit D. Explainability through uncertainty: trustworthy decision-making with neural networks. EUROPEAN JOURNAL OF OPERATIONAL RESEARCH. 2024;317(2):330–40.
IEEE
A. Thuy and D. Benoit, “Explainability through uncertainty: trustworthy decision-making with neural networks,” EUROPEAN JOURNAL OF OPERATIONAL RESEARCH, vol. 317, no. 2, pp. 330–340, 2024.
@article{01HBR7FW0PDQ4XZ2HMVV8TRVGJ,
  abstract     = {{Uncertainty is a key feature of any machine learning model and is particularly important in neural networks, which tend to be overconfident. This overconfidence is worrying under distribution shifts, where the model performance silently degrades as the data distribution diverges from the training data distribution. Uncertainty estimation offers a solution to overconfident models, communicating when the output should (not) be trusted. Although methods for uncertainty estimation have been developed, they have not been explicitly linked to the field of explainable artificial intelligence (XAI). Furthermore, literature in operations research ignores the actionability component of uncertainty estimation and does not consider distribution shifts. This work proposes a general uncertainty framework, with contributions being threefold: (i) uncertainty estimation in ML models is positioned as an XAI technique, giving local and model-specific explanations; (ii) classification with rejection is used to reduce misclassifications by bringing a human expert in the loop for uncertain observations; (iii) the framework is applied to a case study on neural networks in educational data mining subject to distribution shifts. Uncertainty as XAI improves the model's trustworthiness in downstream decision-making tasks, giving rise to more actionable and robust machine learning systems in operations research.}},
  author       = {{Thuy, Arthur and Benoit, Dries}},
  issn         = {{0377-2217}},
  journal      = {{EUROPEAN JOURNAL OF OPERATIONAL RESEARCH}},
  keywords     = {{Management Science and Operations Research,Educational Data Mining,Uncertainty Estimation,Neural Networks,Decision support systems,Explainable artificial intelligence,Monte Carlo Dropout,Deep Ensembles,Distribution shift,PREDICTION,ANALYTICS}},
  language     = {{eng}},
  number       = {{2}},
  pages        = {{330--340}},
  title        = {{Explainability through uncertainty: trustworthy decision-making with neural networks}},
  url          = {{https://doi.org/10.1016/j.ejor.2023.09.009}},
  volume       = {{317}},
  year         = {{2024}},
}
