Cost-Aware Counterfactuals for Black Box Explanations
Natalia Martinez Gil, Kanthi Sarpatwar, et al.
NeurIPS 2023
Explainable AI (XAI) is more than just "opening" the black box: who opens it matters as much as, if not more than, how it is opened. Human-centered XAI (HCXAI) advocates that algorithmic transparency alone is not sufficient for making AI explainable. In our fifth CHI workshop on Human-Centered XAI (HCXAI), we shift our focus to new, emerging frontiers of explainability: (1) participatory approaches toward explainability in civic AI applications; (2) addressing hallucinations in LLMs using explainability benchmarks; (3) connecting HCXAI research with Responsible AI practices, algorithmic auditing, and public policy; and (4) improving representation of XAI issues from the Global South. Through our workshop series, we have built a strong community of HCXAI researchers whose work has made important conceptual, methodological, and technical contributions to the field. In this installment, we will push the frontiers of HCXAI with an emphasis on operationalizing these perspectives sociotechnically.
Divya Ravi, Renuka Sindhgatta
CHI 2025
Daniel Karl I. Weidele, Hendrik Strobelt, et al.
SysML 2019
Michael Hersche, Francesco Di Stefano, et al.
NeurIPS 2023