Interpreting Deep Learning Models for Ordinal Problems
Authors
Abstract
Machine learning algorithms have evolved by exchanging simplicity and interpretability for accuracy, which prevents their adoption in critical tasks such as healthcare. Progress can be made by improving the interpretability of complex models while preserving performance. This work introduces an extension of interpretable mimic learning, which teaches interpretable models to mimic the predictions of complex deep neural networks, not only in binary problems but also in ordinal settings. The results show that the mimic models achieve performance comparable to deep neural network models, with the advantage of being interpretable.
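The mimic-learning idea described above can be sketched as a two-stage pipeline: train a complex "teacher" model, then fit an interpretable "student" on the teacher's soft predictions. The sketch below is a minimal illustration under assumptions not taken from the paper — it uses synthetic binary data, a small scikit-learn MLP as a stand-in for the deep network, and a shallow regression tree as the mimic; the actual models, data, and ordinal extension in the paper may differ.

```python
# Hedged sketch of interpretable mimic learning (binary case).
# Assumptions (not from the paper): synthetic data, an MLP as the "deep"
# teacher, and a depth-limited regression tree as the interpretable mimic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: train the complex "teacher" model.
teacher = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                        random_state=0)
teacher.fit(X_tr, y_tr)

# Stage 2: fit an interpretable mimic on the teacher's soft predictions
# (class probabilities), not on the original hard labels.
soft_labels = teacher.predict_proba(X_tr)[:, 1]
mimic = DecisionTreeRegressor(max_depth=4, random_state=0)
mimic.fit(X_tr, soft_labels)

# Thresholding the mimic's output approximates the teacher's decisions,
# while the shallow tree itself remains human-readable.
mimic_pred = (mimic.predict(X_te) >= 0.5).astype(int)
teacher_pred = teacher.predict(X_te)
fidelity = accuracy_score(teacher_pred, mimic_pred)
```

For the ordinal setting, one common adaptation (again, an assumption rather than the paper's method) is to decompose the K-class ordinal target into K-1 cumulative binary tasks and train a mimic per threshold.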
Keywords
deep neural networks, interpretability, model tran
Related Project
Early-stage cancer treatment, driven by context of molecular imaging (ESTIMA)
Conference
European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning 2018