FAT ML 2015
Fairness, Accountability, and Transparency
in Machine Learning
ICML 2015 Workshop
July 11, 2015
July 28, 2015: Check out the presentation slides from the ICML 2015 workshop, reflections by Jeremy Kun and Suresh Venkatasubramanian, and new additions to the Resources page (courtesy of the workshop participants).
July 2, 2015: The schedule has been finalized! Please join the workshop in person, follow on Twitter under the hashtag #FATML, or subscribe to the mailing list for future announcements.
June 7, 2015: A tentative schedule (with titles and abstracts) and the full set of accepted papers for the upcoming FAT ML workshop at ICML 2015 are now available.
December 12, 2014: The first FAT ML workshop took place at NIPS 2014. See the archived website for further details, including slides from and audio recordings of the presentations.
September 26, 2014: "How Big Data Is Unfair: Understanding Sources of Unfairness in Data Driven Decision Making," a brief article by Moritz Hardt, provides further details regarding the motivation for the workshop and surveys some key issues in machine learning.
August 8, 2014: In "Big Data's Disparate Impact," Solon Barocas and Andrew Selbst provide a taxonomy of the many ways that machine learning can give rise to unintentional discrimination and explore the challenges these cases are likely to pose for discrimination law.
This interdisciplinary workshop will consider issues of fairness,
accountability, and transparency in machine learning. It will address growing
anxieties about the role that machine learning plays in consequential
decision-making in such areas as commerce, employment, healthcare, and education.
Reflecting these concerns, President Obama at the start of 2014 called for a
90-day review of Big Data. The resulting report, "Big
Data: Seizing Opportunities, Preserving Values", concluded that "big data technologies can
cause societal harms beyond damages to privacy". It voiced particular concern
about the possibility that decisions informed by big data could have
discriminatory effects, even in the absence of discriminatory intent, and
could further subject already disadvantaged groups to less favorable
treatment. It also expressed alarm about the threat that an "opaque
decision-making environment" and an "impenetrable set of
algorithms" pose to
autonomy. In its recommendations to the President, the report called for
additional "technical expertise to stop discrimination", and for further
research into the dangers of "encoding discrimination in automated decisions".
Our workshop takes up this call. It will focus on these issues both as challenging constraints on the practical application of machine learning and as problems that lend themselves to novel computational solutions.
Questions to the machine learning community include:
How can we achieve high classification accuracy while eliminating discriminatory biases?
What are meaningful formal fairness properties?
How can we design expressive yet easily interpretable classifiers?
Can we ensure that a classifier remains accurate even if the statistical signal it relies on is exposed to public scrutiny?
Are there practical methods to test existing classifiers for compliance with a policy?
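To make the notion of a "formal fairness property" concrete, here is a minimal sketch of one widely discussed example: demographic parity, which asks that a classifier's positive-prediction rate be (nearly) equal across groups. The data, function names, and group labels are hypothetical illustrations, not part of any workshop submission; this is one candidate property among many, not a definitive fairness criterion.

```python
# Sketch of a demographic-parity check (hypothetical data and names).

def positive_rate(predictions, groups, group):
    """Fraction of members of `group` that the classifier labels positive (1)."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_gap(predictions, groups):
    """Largest pairwise difference in positive rates across all groups."""
    rates = [positive_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy example: predictions (1 = favorable outcome) and group membership.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']
gap = demographic_parity_gap(preds, groups)  # 0.75 for 'a' vs 0.25 for 'b' -> 0.5
```

A policy-compliance test of the kind asked about above could then take the form of a threshold on such a gap, though choosing the property and the threshold is precisely the normative question the workshop raises.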
Participants will work together to understand the key normative and legal issues at stake, map the relevant computer science scholarship, evaluate the state of the solutions thus far proposed, and explore opportunities for new research and thinking within machine learning itself.