Reading 02. Gender Shades


In this assignment, you are tasked with reading and reflecting on a computer science paper featured in the Association for Computing Machinery (ACM) Conference on Fairness, Accountability, and Transparency. The paper is titled “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification”, and it explores issues surrounding automated facial analysis technology.

The original seed for this work was planted when the primary author was completing her Master’s thesis and discovered that the facial recognition tool she was working with would not detect her darker-skinned face until she put on a white mask. This experience inspired her to dig deeper into the efficacy of these tools on different types of faces, and you will read all about her findings in the paper. You can watch a featured TED talk she gave about her personal experience with algorithmic bias here.

Why is this research important? Why should we care?

Artificial intelligence is infiltrating every aspect of society, often in ways that are hidden or opaque to those affected. Algorithms now help decide who qualifies for a loan, who gets hired or fired, which ads you see, and even how long someone spends in prison.

In order for these predictive models to work well, we need to train them on data, and lots of it. This input, or “training data,” is how engineers and data scientists develop robust models that we trust to help make decisions that impact real people’s lives. However, social inequities embedded in our society may inevitably find their way into our training data, leading to models that reinforce, rather than mitigate, the biases already present.
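To make this concrete, here is a minimal sketch (not from the paper, and using entirely made-up records) of the kind of disaggregated audit the paper performs: rather than reporting a single overall accuracy, a classifier's accuracy is broken down by intersectional subgroup, which can reveal disparities that the aggregate number hides.

```python
from collections import defaultdict

# Hypothetical audit records: (subgroup, true_label, predicted_label).
# The subgroup names loosely mirror the paper's intersectional categories
# (skin type x gender), but these records are invented for illustration.
records = [
    ("darker_female", "female", "male"),
    ("darker_female", "female", "female"),
    ("darker_male", "male", "male"),
    ("lighter_female", "female", "female"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
]

correct = defaultdict(int)
total = defaultdict(int)
for subgroup, truth, prediction in records:
    total[subgroup] += 1
    correct[subgroup] += (truth == prediction)

# Aggregate accuracy can look acceptable even when one subgroup fares
# far worse, which is why disaggregating by subgroup matters.
for subgroup in sorted(total):
    print(f"{subgroup}: {correct[subgroup] / total[subgroup]:.0%} "
          f"({correct[subgroup]}/{total[subgroup]})")
```

You will see the real version of this analysis, applied to commercial gender classifiers, in the paper itself.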

Who are the paper’s primary authors?

Joy Buolamwini, PhD

“Joy Buolamwini is a poet of code who uses art and research to illuminate the social implications of artificial intelligence. She founded the Algorithmic Justice League to create a world with more equitable and accountable technology. Her TED Featured Talk on algorithmic bias has over 1 million views. Her MIT thesis methodology uncovered large racial and gender bias in AI services from companies like Microsoft, IBM, and Amazon. Her research has been covered in over 40 countries, and as a renowned international speaker she has championed the need for algorithmic justice at the World Economic Forum and the United Nations. She serves on the Global Tech Panel convened by the vice president of European Commission to advise world leaders and technology executives on ways to reduce the harms of A.I.” Link to Joy Buolamwini’s homepage.

Timnit Gebru, PhD

“I am currently a research scientist at Google in the ethical AI team. Prior to that I did a postdoc at Microsoft Research, New York City in the FATE (Fairness Transparency Accountability and Ethics in AI) group, where I studied algorithmic bias and the ethical implications underlying projects aiming to gain insights from data.” Link to Timnit Gebru’s Homepage.

Note: You may recognize Timnit’s name from recent headlines, as she has spoken about her experiences at Google and her controversial firing over a paper she wrote that highlighted the risks associated with Google’s large language models. If you’d like to read more, here is one article you can start with: https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/

Where should you ask questions on readings?

Is there a term or concept in the paper that you’re confused by? Please direct your questions to this form and we will respond to you, either directly or via a running Frequently Asked Questions page that addresses questions that come up multiple times.

Any questions about the readings asked in office hours, outside of logistical questions, will be redirected to the form above.

Read the Paper, Reflect, and Respond

You can find a PDF copy of the paper to read here: http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.

You will find the assignment “Reading 02 - Gender Shades” on Gradescope. There are 8 possible prompts, and you are asked to respond to any 2 of them. Your responses should be short essays of at least 200 words each. For the prompts, rubrics, and other instructions, please refer to the assignment on Gradescope.

We really appreciated how much care and effort went into your responses for RD00. We also acknowledge that some of the subject matter hits very close to home and your reflections may be quite personal, so Kris and Kaki will be the only ones reading your responses.

For those interested, IBM and Microsoft both issued responses to the paper, which you are encouraged to read. This is totally optional, but it is a great example of how good research can effect real change in industry.

IBM’s Response

Microsoft’s Response

Want to learn more?

This paper only scratches the surface of the issues of algorithmic bias and ethical computing. The primary author is doing a lot of great work, and if this paper has excited you, definitely check out her website linked above. To find out more about the Gender Shades project and the ongoing work being done, check out their site here.

Dr. Buolamwini is also featured in the Netflix documentary “Coded Bias,” which gives a closer look into her journey, as well as those of many other activists fighting for algorithmic fairness worldwide. If you are looking for a longer read, you can check out the book Weapons of Math Destruction by Cathy O’Neil. It is available for free through the UNC library portal.