Workshop paper
Control Flow Operators in PyTorch
Yidi Wu, Thomas Bohnstingl, et al.
ICML 2025
Off-the-shelf pre-trained models are increasingly common in machine learning. When deployed in the real world, it is essential that such models are not only accurate but also satisfy properties such as fairness. This paper takes a closer look at recently proposed approaches that edit a pre-trained model for group fairness by re-weighting the training data. We offer perspectives that unify disparate weighting schemes from past studies and pave the way for new weighting strategies to address group fairness concerns.
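To make the idea of data re-weighting concrete, here is a minimal sketch of one common scheme (inverse group frequency), written in PyTorch. It is an illustrative assumption, not the specific weighting strategies studied in the paper, and the helper `inverse_frequency_weights` is a hypothetical name introduced for this example.

```python
import torch
import torch.nn.functional as F

# Illustrative only: one simple re-weighting scheme gives each training
# example a weight inversely proportional to the size of its sensitive
# group, so under-represented groups contribute comparably to the loss.

def inverse_frequency_weights(groups: torch.Tensor) -> torch.Tensor:
    """Per-example weights ~ 1 / (group size), normalized to mean 1."""
    _, inverse, counts = torch.unique(
        groups, return_inverse=True, return_counts=True
    )
    weights = 1.0 / counts[inverse].float()
    return weights * (len(groups) / weights.sum())

# Toy batch: logits for a 2-class task, labels, and a sensitive-group id.
logits = torch.randn(8, 2)
labels = torch.randint(0, 2, (8,))
groups = torch.tensor([0, 0, 0, 0, 0, 0, 1, 1])  # group 1 is under-represented

w = inverse_frequency_weights(groups)
per_example_loss = F.cross_entropy(logits, labels, reduction="none")
loss = (w * per_example_loss).mean()  # re-weighted objective for fine-tuning
```

Other weighting schemes (e.g., weights chosen to equalize group error rates) fit the same pattern: only the computation of `w` changes, while the weighted loss is used to fine-tune the pre-trained model.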