Call for Papers

5th Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2018)

Co-located with the 35th International Conference on Machine Learning (ICML 2018)

15 July 2018, Stockholm, Sweden

Submission Deadline

1 May 2018, 23:59 Anywhere on Earth (AoE)

Overview

This workshop aims to bring together a growing community of researchers, practitioners, and policymakers concerned with fairness, accountability, and transparency in machine learning. The past several years have seen growing recognition that machine learning raises new ethical, legal, and technical challenges. In particular, policymakers, regulators, and advocates have expressed fears about the potential discriminatory impact of machine learning models, with many calling for research into how we can use automated decision-making tools without inadvertently encoding and perpetuating societal biases. Concurrently, there has been increasing concern that the complexity of machine learning models limits their use in critical applications involving humans, ranging from loan approval to recidivism prediction. Most recently, concern has emerged that machine learning's standard emphasis on prediction rather than causation inhibits the ability of data-driven tools to produce meaningful, actionable recommendations.

The goal of this workshop is to provide researchers with a venue to explore how to characterize and address these issues in ways that are computationally rigorous and scientifically defensible. We seek contributions that attempt to measure and mitigate bias in machine learning, to audit and evaluate machine learning models, and to render such models more interpretable and their decisions more explainable.

This year, the workshop is co-located with ICML and will consist of invited talks, invited panels, contributed talks, and a poster session. We welcome paper submissions from researchers and practitioners addressing any issue of fairness, accountability, or transparency related to machine learning. In particular, we will place special emphasis on using causal inference to address questions of fairness and to create recommendation systems directed at altering causal factors. We will also focus on issues surrounding the collection, measurement, and mitigation of data bias.

Topics of Interest

Fairness:

  • How should we define, measure, and deal with biases in training data sets? Can we design data collection practices that limit the effect of bias? How can we use additional sources of information to assess and correct for bias?
  • What are meaningful formal fairness criteria? How do different criteria relate, and how do they trade off against one another? What are their limitations?
  • Should we turn to the law for definitions of fairness? Are proposed formal fairness criteria reconcilable with the law?
  • How can we use the tools of causal inference to reason about fairness in machine learning? Can causal inference lead to actionable recommendations and interventions? How can we design and evaluate the effect of interventions?
  • Can we develop definitions of discrimination and disparate impact that move beyond distributional constraints such as demographic parity or the 80% rule (both sketched in code after this list)?
  • Who should decide what is fair when fairness becomes a machine learning objective?
  • Are there any dangers in turning questions of fairness into computational problems?
  • What are the societal implications of algorithmic experimentation and exploration? How can we manage the cost that such experimentation might pose to individuals?
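
For concreteness, here is a minimal sketch of the two distributional constraints named above. All names and data are illustrative only and are not part of any submission requirement; the 0.8 threshold follows the conventional four-fifths rule.

```python
import numpy as np

def selection_rates(y_pred, group):
    """Positive-decision rate for each of two hypothetical groups (0 and 1)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean(), y_pred[group == 1].mean()

def demographic_parity_gap(y_pred, group):
    """Demographic parity asks that P(Yhat=1 | A=0) = P(Yhat=1 | A=1);
    this returns the absolute gap between the two selection rates."""
    rate0, rate1 = selection_rates(y_pred, group)
    return abs(rate0 - rate1)

def passes_80_percent_rule(y_pred, group):
    """The 80% (four-fifths) rule: the lower selection rate should be
    at least 80% of the higher one."""
    rate0, rate1 = selection_rates(y_pred, group)
    return min(rate0, rate1) / max(rate0, rate1) >= 0.8

# Hypothetical binary decisions for ten individuals in two groups.
y_pred = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(demographic_parity_gap(y_pred, group))   # 0.2   (rates are 0.6 vs 0.4)
print(passes_80_percent_rule(y_pred, group))   # False (0.4/0.6 ≈ 0.67 < 0.8)
```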

Accountability:

  • What would human review entail if models were available for direct inspection?
  • Are there practical methods to test existing algorithms for compliance with a policy?
  • Can we prove that an algorithm behaves in some way without having to reveal the algorithm? Can we achieve accountability without transparency?
  • How can we conduct reliable empirical black-box testing and/or reverse engineer algorithms to test for ethically salient differential treatment?
  • Can we demonstrate the causal origins of the outcome predicted by a model?
  • What constitutes sufficient evidence, for someone other than a model's creator, that the model functions as intended? Can we describe the goals of modeling effectively?
  • What are the societal implications of autonomous experimentation? How can we manage the risks that such experimentation might pose to users?

Transparency:

  • How can we develop interpretable machine learning methods that provide ways to manage the complexity of a model and/or generate meaningful explanations?
  • Can we field interpretable methods in a way that does not reveal private information used in the construction of the model?
  • Can we use adversarial conditions to learn about the inner workings of inscrutable algorithms? Can we learn from the ways they fail on edge cases?
  • How can we use game theory and machine learning to build fully transparent yet robust models, using signals that would be costly for people to manipulate?

Paper Submission

Papers are limited to 4 pages, including figures and tables, and should use a standard 2-column, 11pt format. An additional 5th page containing only cited references is permitted. We recommend using the ICML template.

Accepted papers will be posted on the workshop website and should also be posted by the authors to arXiv. Note that the workshop's proceedings are non-archival, meaning that contributors remain free to publish their work in archival journals or conferences. Accepted papers will be presented as either a talk or a poster (to be determined by the workshop organizers). We will only consider papers that have not yet been published elsewhere; dual submissions are allowed.

All papers must be anonymized for double-blind reviewing and submitted via EasyChair.

Paper Submissions Deadline

1 May 2018, 23:59 Anywhere on Earth (AoE)

Notification to Authors

22 May 2018

Camera-Ready Deadline

1 July 2018