Managing the life cycle of plans
Biplav Srivastava, Jussi Vanhatalo, et al.
AAAI/IAAI 2005
New decision-support systems are being built using AI services that draw insights from large corpora of data and incorporate those insights into human-in-the-loop decision environments. They promise to transform businesses, such as health care, with better, more affordable, and timelier decisions. However, it is unreasonable to expect people to trust AI systems out of the box when such systems have been shown to exhibit discrimination across a variety of data modalities: unstructured text, structured data, and images. AI systems thus come with certain risks, such as failing to recognize people or objects, introducing errors in their output, and causing unintended harm. In response, we propose ratings as a way to communicate bias risk, along with methods to rate AI services for bias in a black-box fashion, without access to the services' training data. Our method is designed to work not only on single services but also on compositions of services, which is how complex AI applications are built. The proposed method can therefore rate a composite application, such as a chatbot, for the severity of its bias by rating its constituent services and composing those ratings, rather than rating the whole system at once.
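The abstract's idea of composing per-service ratings into a rating for the whole application can be sketched minimally. The function name, the numeric severity scale, and the worst-case (maximum) composition rule below are all illustrative assumptions, not the paper's actual method:

```python
# Hypothetical sketch (not the paper's algorithm): composing per-service
# bias ratings into an overall rating for a composite application such as
# a chatbot. Assumes ordinal severity ratings where a higher number means
# more biased, and a conservative rule that takes the worst constituent.

def compose_bias_rating(service_ratings: dict[str, int]) -> int:
    """Return the composite rating as the maximum (worst) constituent rating."""
    if not service_ratings:
        raise ValueError("need at least one constituent service rating")
    return max(service_ratings.values())

# Example: a chatbot built from three black-box services.
ratings = {"speech-to-text": 1, "intent-classifier": 3, "text-generator": 2}
print(compose_bias_rating(ratings))  # 3
```

A conservative maximum is only one possible composition; a real scheme might weight services by how much their output influences the final decision.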