Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, Rich Zemel
"Fairness Through Awareness" is an approach to fairness in classification, where the goal is to prevent discrimination against protected population subgroups in classification systems while simultaneously preserving utility for the party carrying out the classification (e.g., an advertiser, bank, or admissions committee). We argue that a classification is fair only when individuals who are similar with respect to the classification task at hand are treated similarly, and this in turn requires an understanding of the sub-cultures of the population. In consequence, hiding information from a classifying algorithm can result in less fairness (and less utility): "privacy" does not yield fairness.
We obtain a computational solution that, given a similarity metric defining, for each pair of individuals, their similarity with respect to the given classification task, achieves our fairness goals. The metric should represent ground truth, but how can it be obtained? Can learning help?
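The fairness condition behind this solution can be stated as a Lipschitz constraint: a randomized classifier M, mapping each individual to a distribution over outcomes, is fair if the distance between the output distributions of any two individuals is bounded by their task-specific similarity. The following sketch illustrates that check; the function and variable names (`is_lipschitz_fair`, `classifier`, `metric`) are illustrative assumptions, not the paper's notation, and total variation stands in for the distance on distributions.

```python
def total_variation(p, q):
    """Statistical (total variation) distance between two finite distributions."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def is_lipschitz_fair(individuals, classifier, metric):
    """Check the Lipschitz fairness condition for every pair of individuals.

    classifier(x) returns a distribution over outcomes (a list of
    probabilities); metric(x, y) is the task-specific similarity metric.
    Fairness requires D(M(x), M(y)) <= d(x, y) for all pairs x, y.
    """
    for i, x in enumerate(individuals):
        for y in individuals[i + 1:]:
            if total_variation(classifier(x), classifier(y)) > metric(x, y):
                return False
    return True
```

For example, a classifier whose output distribution varies smoothly with the individual can satisfy the condition, while one that jumps sharply between similar individuals violates it; the open question raised above is where the metric itself comes from.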
We also discuss the crescendo of calls for "comprehensible" or "interpretable" classifiers that "explain" individual classifications, and suggest a new desideratum, which we call "negotiability," as a direction for future research.