This workshop aims to bring together a growing community of researchers, practitioners, and policymakers concerned with fairness, accountability, and transparency in machine learning.
The past several years have seen growing recognition that machine learning raises new ethical, legal, and technical challenges. In particular, policymakers, regulators, and advocates have expressed fears about the potentially discriminatory impact of machine learning models, with many calling for research into how we can use automated decision-making tools without inadvertently encoding and perpetuating societal biases. Concurrently, there has been increasing concern that the complexity of machine learning models limits their use in critical applications involving humans, ranging from loan approval to recidivism prediction. Most recently, there is emerging concern that the standard emphasis in machine learning on prediction rather than causation inhibits the ability of data-driven tools to produce meaningful, actionable recommendations.
The goal of this workshop is to provide researchers with a venue to explore how to characterize and address these issues in ways that are computationally rigorous and scientifically defensible. We seek contributions that attempt to measure and mitigate bias in machine learning, to audit and evaluate machine learning models, and to render such models more interpretable and their decisions more explainable.
This year, the workshop is co-located with ICML and will consist of invited talks, invited panels, contributed talks, and a poster session. We welcome paper submissions from researchers and practitioners that address any issue of fairness, accountability, and transparency related to machine learning. In particular, we will place special emphasis on the use of causal inference to address questions of fairness and to create recommendation systems directed at altering causal factors. We will also focus on issues surrounding the collection and measurement of biased data, and on mitigating that bias.