Fairness and transparency in machine learning: Tools and techniques

Description

This talk will try to answer a simple question: when building machine learning systems, how can we make sure that they treat people fairly and can be held accountable? While seemingly simple, this question is not easy to answer, especially when using complex methods like deep learning. I will discuss tools and techniques that we can use to make sure our algorithms behave as they should.

Abstract

When working with personal data, we need to make sure that our algorithms treat people fairly, are transparent, and can be held accountable for their decisions. When using complex techniques like deep learning on very large datasets, it is not easy to prove that our algorithms behave the way we intend them to and, for example, do not discriminate against certain groups of people.

In my talk, I will discuss why ensuring transparency and fairness in machine learning is not easy, and how we can use Python tools to investigate our machine learning systems and make sure they behave the way they should.

  • Introduction: Why you should care about this (EU Data Protection Directive)
  • What kinds of problems can occur in machine learning systems (bias in the input data, leakage of sensitive information into the training data, hidden usage of protected attributes by the algorithm; see the first sketch after this list)?
  • How can we measure and correct for bias in our systems (certifying and removing disparate impact; see the second sketch after this list)?
  • How can we understand the decisions that our algorithms make (perturbation analysis, simplified modeling, black-box testing; see the third sketch after this list)?
  • How can we design our machine learning systems to make sure they're compliant and accountable (anonymization of data, monitoring of outcomes, auditing of algorithms)?
  • Outlook: The future of transparency and accountability in machine learning
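One quick check for the proxy problem mentioned above (hidden usage of protected attributes) is to test whether the remaining features can predict the protected attribute: if they can, dropping the attribute column is not enough. The sketch below uses scikit-learn on synthetic data; the variable names and the data are illustrative assumptions, not material from the talk.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    protected = rng.integers(0, 2, size=500)          # hypothetical binary protected attribute
    proxy = protected + rng.normal(0, 0.3, size=500)  # a feature that leaks it
    noise = rng.normal(size=(500, 3))                 # three unrelated features
    X = np.column_stack([proxy, noise])

    # If the other features predict the protected attribute well above
    # chance, a model trained on them can use the attribute implicitly.
    scores = cross_val_score(LogisticRegression(), X, protected, cv=5)
    print(f"protected attribute predictable with accuracy {scores.mean():.2f}")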
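Disparate impact is commonly measured with the "80% rule": compare the rates of positive outcomes between the protected group and everyone else, and flag ratios below 0.8. A minimal sketch, assuming a pandas DataFrame with hypothetical column names "group" and "outcome":

    import pandas as pd

    def disparate_impact(df, group_col, outcome_col, protected_value, threshold=0.8):
        """Ratio of positive-outcome rates: protected group vs. everyone else."""
        protected_rate = df.loc[df[group_col] == protected_value, outcome_col].mean()
        rest_rate = df.loc[df[group_col] != protected_value, outcome_col].mean()
        ratio = protected_rate / rest_rate
        if ratio < threshold:
            print(f"possible disparate impact: ratio {ratio:.2f} < {threshold}")
        return ratio

    # Hypothetical outcomes for two groups: 40% vs. 70% positive rate
    data = pd.DataFrame({
        "group":   ["a"] * 100 + ["b"] * 100,
        "outcome": [1] * 40 + [0] * 60 + [1] * 70 + [0] * 30,
    })
    disparate_impact(data, "group", "outcome", protected_value="a")  # ratio ~0.57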
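Perturbation analysis treats the model as a black box: perturb one input feature at a time and measure how much the predictions move. A minimal sketch against a scikit-learn classifier trained on synthetic data; the model choice and the shuffle-based perturbation are illustrative assumptions, one of several possible perturbation schemes.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)
    baseline = model.predict_proba(X)[:, 1]

    rng = np.random.default_rng(0)
    for j in range(X.shape[1]):
        X_perturbed = X.copy()
        rng.shuffle(X_perturbed[:, j])  # destroy the information in feature j
        shifted = model.predict_proba(X_perturbed)[:, 1]
        # A large average shift suggests the model relies heavily on feature j.
        print(f"feature {j}: mean |change| = {np.abs(shifted - baseline).mean():.3f}")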
