Preferences and ethical principles in decision making
Andrea Loreggia, Nicholas Mattei, et al.
AAAI-SS 2018
New decision-support systems are being built with AI services that draw insights from large corpora of data and incorporate them into human-in-the-loop decision environments. They promise to transform businesses such as health care with better, more affordable, and timelier decisions. However, it is unreasonable to expect people to trust AI systems out of the box when such systems have been shown to exhibit discrimination across a variety of data modalities: unstructured text, structured data, and images. AI systems thus carry risks, such as failing to recognize people or objects, introducing errors into their output, and causing unintended harm. In response, we propose ratings as a way to communicate bias risk, along with methods to rate AI services for bias in a black-box fashion, without access to the services' training data. Our method works not only on single services but also on compositions of services, which is how complex AI applications are built. A composite application, such as a chatbot, can therefore be rated for the severity of its bias by rating its constituent services and composing the ratings, rather than rating the whole system at once.
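The compositional idea in the abstract can be sketched as follows. The 1-3 severity scale, the service names, and the max-based composition rule are illustrative assumptions for this sketch, not the paper's actual rating method.

```python
# Hypothetical sketch of composing bias ratings of constituent AI services.
# The 1-3 severity scale and the max-based composition rule are assumptions
# for illustration; the paper's actual rating scheme may differ.

from dataclasses import dataclass

@dataclass
class BiasRating:
    service: str
    severity: int  # assumed scale: 1 = low, 2 = moderate, 3 = high bias risk

def compose_ratings(ratings):
    """Rate a composite application (e.g., a chatbot) from its parts.

    Here a pipeline is assumed to be only as trustworthy as its most
    biased component, so the composite severity is the maximum over
    the constituents.
    """
    composite = max(r.severity for r in ratings)
    return BiasRating(service="composite", severity=composite)

# Example: a chatbot built from three black-box services.
parts = [
    BiasRating("speech_to_text", 1),
    BiasRating("intent_classifier", 3),
    BiasRating("text_generation", 2),
]
overall = compose_ratings(parts)
print(overall.service, overall.severity)  # composite 3
```

The advantage suggested by the abstract is that each constituent service can be rated once, in a black-box fashion, and the composite rating then follows from the composition rule without re-evaluating the whole application.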