Certifying and Removing Disparate Impact

Sorelle Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian

What does it mean for an algorithm to be biased?

In U.S. law, the notion of bias is typically encoded through the idea of disparate impact: namely, that a process (hiring, selection, etc.) that seems completely neutral on the surface might still have widely different impacts on different groups. This legal determination presumes an explicit understanding of the selection process.
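Concretely, U.S. enforcement practice often quantifies disparate impact with the EEOC's "four-fifths" guideline: if one group's selection rate is less than 80% of another group's, the process is suspect. A minimal sketch of that ratio computation (the group names and counts below are illustrative, not from any real dataset):

```python
# Hypothetical hiring outcomes per group (illustrative numbers only).
selected = {"group_a": 50, "group_b": 20}
applied = {"group_a": 100, "group_b": 80}

def disparate_impact_ratio(selected, applied, protected, majority):
    """Ratio of the protected group's selection rate to the majority's.

    Under the EEOC's four-fifths guideline, a ratio below 0.8 is
    taken as evidence of disparate impact.
    """
    rate_protected = selected[protected] / applied[protected]
    rate_majority = selected[majority] / applied[majority]
    return rate_protected / rate_majority

ratio = disparate_impact_ratio(selected, applied, "group_b", "group_a")
print(round(ratio, 2))  # 0.25 / 0.50 = 0.5, well below the 0.8 threshold
```

Note that this test only inspects outcomes; it says nothing about how the process arrived at them, which is exactly the difficulty when the process is an opaque algorithm.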

If the process is an algorithm, though (as is increasingly common), determining disparate impact (and hence bias) becomes trickier. Firstly, it might not be possible to disclose the process. Secondly, even if the process is open, it might be too complex to ascertain how the algorithm makes its decisions. In effect, since we don't have access to the algorithm, we must make inferences based on the data it uses.

We make three contributions to this problem. Firstly, we link the legal notion of disparate impact to a measure of classification accuracy that, while known, has not received as much attention as more traditional notions of accuracy. Secondly, we propose a test for the possibility of bias based on analyzing the leakage of protected information from the data. Finally, we describe methods by which data might be made “unbiased” in order to test an algorithm. Interestingly, our approach bears some resemblance to actual practices that have recently received legal scrutiny.
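The intuition behind the leakage test can be sketched as follows: if some classifier can predict the protected attribute from the remaining features, then those features carry enough information to enable disparate impact, whether or not the actual algorithm exploits it. The toy dataset and the trivial threshold predictor below are illustrative assumptions, not the paper's full procedure; the balanced error rate (the average of the per-class error rates) is the accuracy measure in question:

```python
# Each record: (proxy_feature, protected_attribute).
# Illustrative data in which a single proxy feature largely
# reveals the protected attribute.
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.4, 0),
        (0.7, 1), (0.8, 1), (0.9, 1), (0.35, 1)]

def predict(x, threshold=0.5):
    # Trivial stand-in classifier: guess the protected class
    # from the proxy feature alone.
    return 1 if x >= threshold else 0

def balanced_error_rate(data):
    # BER = average of the per-class error rates. Random guessing
    # gives 0.5; a BER well below 0.5 means the protected attribute
    # leaks through the other features.
    errors = {0: [], 1: []}
    for x, y in data:
        errors[y].append(predict(x) != y)
    per_class = [sum(errs) / len(errs) for errs in errors.values()]
    return sum(per_class) / len(per_class)

ber = balanced_error_rate(data)
print(ber)  # 0.125, far below 0.5: the data leaks protected information
```

Using the balanced error rate rather than raw accuracy matters when the groups are of very different sizes: a predictor that always outputs the majority class can have high accuracy but reveals nothing, and its BER stays at 0.5.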