Coded Bias
In her first semester at the MIT Media Lab, Joy Buolamwini faced a peculiar problem: commercial face‑recognition software detected her light‑skinned classmates but couldn’t “see” her. Only when she donned a white plastic mask in frustration did the system recognize her face.
Coded Bias is a timely, thought-provoking documentary that follows Buolamwini’s journey to uncover racial and gender bias in face-recognition software and other AI systems. Such technology is increasingly used to make important decisions, yet many of these algorithms are black boxes.
The documentary, which premiered at the Sundance Film Festival earlier this year, features a band of articulate scientists, scholars, and authors, primarily women of colour, who do most of the talking. The casting is fitting: studies, including Buolamwini’s own, reveal that face-recognition systems are far less accurate at identifying female and darker-skinned faces than white, male ones.
Growing recognition of this racial bias has recently fuelled a backlash against the widespread use of face recognition. IBM, Amazon, and Microsoft have all halted or restricted sales of their technology, and US cities, notably Boston and San Francisco, have banned government use of it.
Experiences of the technology differ around the world. The documentary shows a bemused pedestrian in London partially covering his face as he passes a police surveillance van, while on the streets of Hangzhou, China, we meet a skateboarder who appreciates the convenience of face recognition, which grants her entry to train stations and her residential complex.
The film also explores how decision-making algorithms can absorb bias. In 2014, for example, Amazon developed an experimental tool for screening job applications for technology roles. Though not designed to discriminate, the tool discounted résumés that mentioned women’s colleges or groups, having picked up on the gender imbalance in résumés submitted to the company. It was never used to evaluate actual job candidates.
The documentary also investigates how AI systems build up a picture of people as they browse the internet. They can suss out things we don’t disclose, says Zeynep Tufekci at the University of North Carolina at Chapel Hill, and online advertisers can then target individuals accordingly. If an AI system suspects you are a compulsive gambler, for instance, you could be shown discount fares to Las Vegas, she says.
At the end of the film, Buolamwini testifies before the US Congress to press the case for regulation, calling for equity, transparency, and accountability in the AI systems that govern our lives. She also founded the Algorithmic Justice League, a group that works to highlight these issues.
Director Shalini Kantayya has said she was inspired to make Coded Bias by Buolamwini and other “brilliant and badass” mathematicians and scientists. The result is an eye-opening account of the dangers of invasive surveillance and bias in AI. In the European Union, the General Data Protection Regulation goes some way towards giving people better control over their personal data, but the US has no equivalent.
The film argues that society should hold the makers of AI software accountable, and it advocates a regulatory body to protect the public from the technology’s harms and biases.