- Interpretable machine learning models for COPD ease of breathing estimation
- Evaluating visual explanations of attention maps for transformer-based medical imaging (Journal Article, A1, open access)
- Interpreting stress detection models using SHAP and attention for MuSe-stress 2022 (Journal Article, A1, open access)
- Evaluating feature attribution methods in the image domain (Conference Paper, C1, open access)
- Towards interpretable multitask learning for splice site and translation initiation site prediction (Journal Article, A1, open access)
- There are no minimal essentially undecidable theories
- Special issue on feature engineering editorial
- Multi-output machine learning models for kinetic data evaluation: a Fischer–Tropsch synthesis case study (Journal Article, A2, open access)
- Opportunities and challenges in interpretable deep learning for drug sensitivity prediction of cancer cells
- On a question of Krajewski's