Fairness and Data Science: Failures, Factors, and Futures


January 21, 2021

In recent years, numerous highly publicized failures in data science have made evident that biases or issues of fairness in training data can sneak into, and be magnified by, our models, leading to harmful, incorrect predictions once the models are deployed in the real world. But what actually constitutes an unfair or biased model, and how can we diagnose and address these issues within our own work? In this talk, I will present a framework for better understanding how issues of fairness overlap with data science, as well as how we can improve our modeling pipelines to make them more interpretable, reproducible, and fair to the groups that they are intended to serve. We will explore this new framework together through an analysis of ProPublica's COMPAS recidivism dataset using the tidymodels, drake, and iml packages.

Additional Videos

Q&A with Grant Fleming, Alan Feder, Simon Couch, Chelsea Parlett, and Richard Vogg


About the speaker

Grant Fleming is a Data Scientist at Elder Research, co-author of the Wiley book Responsible Data Science (2021), and contributor to the O’Reilly book 97 Things About Ethics Everyone in Data Science Should Know. His professional focus is on machine learning for social science applications, model explainability, and building tools for reproducible data science. Previously, Grant was a research contractor for USAID.