Hazar Yueksel, Ramon Bertran, et al. MLSys 2020
We consider training machine learning models that are fair in the sense that their performance is invariant under certain sensitive perturbations to the inputs. For example, the performance of a resume screening system should be invariant under changes to the gender and/or ethnicity of the applicant. We formalize this notion of algorithmic fairness as a variant of individual fairness and develop a distributionally robust optimization approach to enforce it during training. We also demonstrate the effectiveness of the approach on two ML tasks that are susceptible to gender and racial biases.
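The abstract only sketches the method, so below is a minimal PyTorch sketch of the kind of distributionally robust training loop it describes: an inner ascent that searches for the worst-case perturbation confined to a sensitive subspace (for example, a gender direction in an embedding space), followed by an ordinary descent step on that worst-case loss. The function name `sensitive_dro_step`, the step sizes, and the unconstrained inner ascent are illustrative assumptions, not the paper's actual algorithm.

```python
import torch
import torch.nn.functional as F

def sensitive_dro_step(model, x, y, sensitive_basis, optimizer,
                       inner_steps=10, inner_lr=0.1):
    """One distributionally robust training step (illustrative sketch):
    maximize the loss over perturbations confined to a sensitive subspace
    (inner loop), then minimize the resulting worst-case loss (outer step).
    """
    batch, k = x.size(0), sensitive_basis.size(0)
    # Coefficients of each example's perturbation in the sensitive
    # subspace; sensitive_basis is an assumed (k, d) set of directions.
    coeffs = torch.zeros(batch, k, device=x.device, requires_grad=True)

    # Inner maximization: gradient ascent on the loss with respect to the
    # sensitive perturbation, approximating the worst case in the subspace.
    for _ in range(inner_steps):
        x_adv = x + coeffs @ sensitive_basis
        loss = F.cross_entropy(model(x_adv), y)
        (grad,) = torch.autograd.grad(loss, coeffs)
        with torch.no_grad():
            coeffs += inner_lr * grad

    # Outer minimization: a standard gradient step on the worst-case loss.
    optimizer.zero_grad()
    worst_case = F.cross_entropy(model(x + coeffs.detach() @ sensitive_basis), y)
    worst_case.backward()
    optimizer.step()
    return worst_case.item()
```

A faithful implementation would also bound the inner perturbation, for instance with a norm penalty or a projection step, so the adversary stays within a budget; this sketch omits that for brevity.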
Megh Thakkar, Quentin Fournier, et al. ACL 2024
Hongyi Wang, Mikhail Yurochkin, et al. ICLR 2020
Kevin Gu, Eva Tuecke, et al. ICML 2024