Assala Benmalek, Celia Cintas, et al.
MICCAI 2024
Machine learning (ML) models often perform differently under distribution shifts, in terms of utility, fairness, and other dimensions. We propose the Adversarial Auditor for measuring the utility and fairness performance of ML models under compound shifts of outcome and protected attributes. We use Multi-Objective Bayesian Optimization (MOBO) to account for multiple metrics and to identify shifts where model performance is extreme, both good and bad. In two case studies, MOBO outperformed random and grid-based approaches at identifying such scenarios by adversarially optimizing the objectives, highlighting the value of such an auditor for developing fair, accurate, and shift-robust models.
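As a rough illustration of the idea behind the auditor, the sketch below runs a toy Bayesian-optimization loop with random-weight scalarization of two objectives (a ParEGO-style simplification of MOBO, not the paper's actual method). The shift parameter, the two toy objective functions, and all names here are hypothetical stand-ins, assuming a one-dimensional shift space for simplicity.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical setup: a scalar x in [0, 1] parameterizes a compound
# distribution shift; the two toy objectives stand in for the model's
# utility drop and fairness gap under that shift (not real metrics).
def utility_drop(x):
    return np.sin(3 * x) * x

def fairness_gap(x):
    return (x - 0.6) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 5).reshape(-1, 1)  # initial random shift configurations

def scalarize(x, w):
    # Random-weight scalarization: collapses the two objectives into one
    # score so a single GP surrogate can be fit per iteration.
    return w @ np.array([utility_drop(x), fairness_gap(x)])

for step in range(15):
    w = rng.dirichlet([1.0, 1.0])                 # fresh weights each round
    y = np.array([scalarize(float(x), w) for x in X.ravel()])
    gp = GaussianProcessRegressor().fit(X, y)      # surrogate over shift space
    cand = np.linspace(0, 1, 200).reshape(-1, 1)
    mu, sd = gp.predict(cand, return_std=True)
    # Upper-confidence-bound acquisition: maximizing seeks worst-case shifts.
    X = np.vstack([X, cand[np.argmax(mu + 1.0 * sd)]])
```

After the loop, `X` holds the audited shift configurations, concentrated where the scalarized objectives are largest; flipping the acquisition sign would instead seek best-case shifts.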
Lucas Monteiro Paes, Dennis Wei, et al.
NeurIPS 2024
Mateo Espinosa Zarlenga, Gabriele Dominici, et al.
ICML 2025
Claudio Santos Pinhanez, Raul Fernandez, et al.
IUI 2024