Calibrated Fairness in Bandits

Goran Radanovic

Yang Liu, Goran Radanovic, Christos Dimitrakakis, David Parkes and Debmalya Mandal

We study fairness within the stochastic multi-armed bandit (MAB) decision-making framework. We adapt the fairness framework of “treating similar individuals similarly” to this setting. Here, an ‘individual’ corresponds to an arm and two arms are ‘similar’ if they have a similar quality distribution. First, we adopt a smoothness constraint: if two arms have a similar quality distribution, then the probability of selecting each arm should be similar. In addition, we define the fairness regret, which corresponds to the degree to which an algorithm is not calibrated, where perfect calibration requires that the probability of selecting an arm is equal to the probability with which the arm has the best quality realization. We show that a variation on Thompson sampling satisfies smooth fairness for total variation distance, and give an O((kT)^(2/3)) bound on fairness regret. This complements prior work, which protects an on-average better arm from being less favored. We also explain how to extend our algorithm to the dueling bandit setting.
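As a point of reference for the calibration property described above, the sketch below shows plain Beta-Bernoulli Thompson sampling in Python: selecting the arm with the largest posterior sample means each arm is played with probability equal to the posterior probability that it is the best arm. The function name and the Bernoulli reward model are illustrative assumptions; the paper's actual variant modifies this baseline (e.g., with additional exploration) to satisfy smooth fairness and obtain the fairness-regret bound.

```python
import numpy as np

def thompson_sampling(true_means, horizon, seed=0):
    """Beta-Bernoulli Thompson sampling (illustrative baseline).

    Picking the arm with the largest posterior draw selects each arm
    with probability equal to the posterior probability that it is
    the best arm -- the calibration notion behind fairness regret.
    """
    rng = np.random.default_rng(seed)
    k = len(true_means)
    alpha = np.ones(k)  # Beta(1, 1) prior successes per arm
    beta = np.ones(k)   # Beta(1, 1) prior failures per arm
    for t in range(horizon):
        samples = rng.beta(alpha, beta)        # one posterior draw per arm
        arm = int(np.argmax(samples))          # play the arm with the largest draw
        reward = rng.binomial(1, true_means[arm])
        alpha[arm] += reward                   # posterior update on success
        beta[arm] += 1 - reward                # posterior update on failure
    return alpha, beta

# Example usage (hypothetical arm qualities):
# thompson_sampling(true_means=[0.3, 0.5, 0.7], horizon=1000)
```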

Links: Video