Computing persistent homology under random projection
Karthikeyan Natesan Ramamurthy, Kush R. Varshney, et al.
SSP 2014
With the advent of highly predictive but opaque deep learning models, it has become more important than ever to understand and explain their predictions. Many popular approaches define interpretability as the inverse of complexity and achieve interpretability at the cost of accuracy, which introduces a risk of producing interpretable but misleading explanations. As humans, we are prone to engage in this kind of behavior [11]. In this paper, we take the view that the complexity of an explanation should correlate with the complexity of the decision being explained. We propose to build a TreeView representation of the complex model using disentangled representations, which reveals how unlikely class labels are iteratively rejected until the correct association is predicted.
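To make the label-rejection idea concrete, here is a minimal, hypothetical sketch: starting from all candidate classes, the least likely labels are repeatedly discarded until a single label remains. This is an illustrative interpretation only, not the paper's TreeView method; the `iterative_rejection` helper, the `keep_fraction` parameter, and the random forest standing in for the complex model are all assumptions introduced here.

```python
# Hypothetical sketch of iterative rejection of unlikely class labels.
# Not the TreeView implementation from the paper; classifier choice and
# keep_fraction are illustrative assumptions.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier

def iterative_rejection(probs, labels, keep_fraction=0.5):
    """Trace the surviving labels as the least likely ones are rejected."""
    candidates = list(labels)
    trace = [tuple(candidates)]
    while len(candidates) > 1:
        # Rank remaining candidates by predicted probability and keep
        # only the most likely fraction (always at least one label).
        ranked = sorted(candidates, key=lambda c: probs[c], reverse=True)
        n_keep = max(1, int(len(ranked) * keep_fraction))
        candidates = ranked[:n_keep]
        trace.append(tuple(candidates))
    return trace

X, y = load_digits(return_X_y=True)
clf = RandomForestClassifier(random_state=0).fit(X[:-1], y[:-1])
probs = clf.predict_proba(X[-1:])[0]  # class probabilities for one held-out sample
for step, surviving in enumerate(iterative_rejection(probs, clf.classes_)):
    print(f"step {step}: surviving labels = {surviving}")
```

Printing the trace step by step mirrors the narrative above: each step prunes the candidate set until only the predicted label survives.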
Prasanna Sattigeri, Jayaraman J. Thiagarajan, et al.
ACSSC 2014
Shivashankar Subramanian, Ioana Baldini, et al.
IAAI 2020
Jayaraman J. Thiagarajan, Satyananda Kashyap, et al.
ICMLA 2019