AI Explainability 360 Toolkit
Vijay Arya, Rachel Bellamy, et al.
CODS-COMAD 2021
Today, machine-learning software is used to help make decisions that affect people's lives. Some people believe that the application of such software results in fairer decisions because, unlike humans, machine-learning software generates models that are not biased. Think again. Machine-learning software is also biased, sometimes in ways similar to humans and often in different ways. While fair model-assisted decision making involves more than the application of unbiased models (consideration of application context, specifics of the decisions being made, resolution of conflicting stakeholder viewpoints, and so forth), mitigating bias in machine-learning software is important and possible, but difficult and too often ignored.
Matthew Arnold, David Piorkowski, et al.
IBM J. Res. Dev.
Jonathan Dodge, Q. Vera Liao, et al.
IUI 2019
Ashima Suvarna, Kuntal Dey, et al.
GHCI 2019