Training individually fair ML models with sensitive subspace robustness

Published: 20 Dec 2019, Last Modified: 05 May 2023
ICLR 2020 Conference Blind Submission
Keywords: fairness, adversarial robustness
TL;DR: An algorithm for training individually fair classifiers using adversarial robustness
Abstract: We consider training machine learning models that are fair in the sense that their performance is invariant under certain sensitive perturbations to the inputs. For example, the performance of a resume screening system should be invariant under changes to the gender and/or ethnicity of the applicant. We formalize this notion of algorithmic fairness as a variant of individual fairness and develop a distributionally robust optimization approach to enforce it during training. We also demonstrate the effectiveness of the approach on two ML tasks that are susceptible to gender and racial biases.
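To make the high-level idea in the abstract concrete, below is a minimal sketch of adversarial training restricted to a sensitive subspace. It is not the authors' implementation (see the Code link for that); the subspace basis `SENS_DIRS`, the toy network and data, the budget `EPS`, and the simple clamped gradient-ascent inner loop standing in for the paper's distributionally robust formulation are all illustrative assumptions.

```python
# Sketch: train a classifier to be robust to perturbations that move inputs
# only along "sensitive" directions (e.g., directions encoding gender or
# ethnicity). All names and hyperparameters here are illustrative.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
D_IN, D_SENS, EPS, STEPS, LR_ADV = 20, 2, 0.5, 10, 0.1

# Orthonormal basis of the sensitive subspace; random here purely for
# illustration (in practice it would be estimated from data).
SENS_DIRS, _ = torch.linalg.qr(torch.randn(D_IN, D_SENS))

model = torch.nn.Sequential(torch.nn.Linear(D_IN, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def sensitive_attack(x, y):
    """Find a loss-maximizing perturbation of x inside the sensitive subspace."""
    delta = torch.zeros(x.size(0), D_SENS, requires_grad=True)
    for _ in range(STEPS):
        loss = F.cross_entropy(model(x + delta @ SENS_DIRS.T), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += LR_ADV * grad   # ascend the loss
            delta.clamp_(-EPS, EPS)  # stay within a small budget
    return (x + delta @ SENS_DIRS.T).detach()

# Toy data; a real run would use features from, e.g., a resume screener.
x = torch.randn(256, D_IN)
y = torch.randint(0, 2, (256,))

for epoch in range(5):
    x_adv = sensitive_attack(x, y)           # inner maximization
    loss = F.cross_entropy(model(x_adv), y)  # outer minimization
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: robust loss {loss.item():.3f}")
```

The min-max structure mirrors the abstract's invariance goal: the inner loop searches for the worst-case sensitive perturbation, and the outer step trains the model to perform well under it, so performance becomes insensitive to movement along those directions.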
Code: https://github.com/IBM/sensitive-subspace-robustness