During Joy Buolamwini’s first semester at MIT, she had an idea for a little art project: an “Aspire Mirror” that would let her visualize herself in different ways. The problem was that the computer vision software powering her creation didn’t recognize her face at all until she put on a white mask. Thus begins “Coded Bias,” the new Netflix documentary directed by Shalini Kantayya, which unpacks the risks and inherent biases in artificial intelligence systems and proposes ways to make these technologies safer and ultimately more valuable for business and humanity.
Of course, there are thousands if not millions of constructive and valuable uses for AI—in medicine, architecture, finance, the environment, and elsewhere. AI can be used to better understand disease, improve accessibility for people with disabilities, design more sustainable buildings, detect money laundering, and help us recover from natural disasters, among many other things. But AI is fundamentally a technological multitool, and like any other powerful tool, it needs guardrails to ensure that we use it in ways that maximize the good and minimize the harm to people and institutions.
“Coded Bias” makes it abundantly clear where the primary harms lie. Vulnerable and marginalized groups, whether defined by race, socioeconomic status, gender, or other factors, tend to be underrepresented in the data sets and models used to train algorithms to make predictions and, often, decisions. In other words, says Buolamwini, “data is destiny.” At the same time, newer technologies also tend to be used first on the very populations—also marginalized, also vulnerable—who have the least power to shape how those technologies develop.
This discrepancy, and the structural wrongs it amplifies and perpetuates, is at the heart of “Coded Bias” and central to the work that Joy Buolamwini, Cathy O’Neil, Deb Raji, Safiya Noble, Zeynep Tufekci, Timnit Gebru, Meredith Broussard and so many other noted scholars have done during the past decade. In fact, what we think of as the “AI Ethics” or “Responsible AI” movement is a direct outgrowth of the work of these and other pioneering scholars.
Using AI responsibly requires us to focus not only on identifying harms but also on investigating ways to remediate them and on promoting the fair, equitable and humane use of these technologies at scale, across institutions and populations. This is as pragmatic as it is idealistic: while we fully expect regulatory guardrails to safeguard the food we eat, the medicines we take and the transportation systems we use, we have no such guardrails for these powerful technologies.
Is this entirely technology’s fault? Of course not. As Cathy O’Neil, author of Weapons of Math Destruction, puts it, “AI is based on data, and data is a reflection of our history.” While technologists and scholars, policymakers and filmmakers don’t have the power to change history, they do have the potential to help us better understand the impacts of these technologies, and thereby help us change our future. The ultimate question “Coded Bias” poses is this: now that you know, what will you do?
“Coded Bias” is available to stream right now on Netflix. And, if you’d like to learn more, here is a selected reading list:
- Joy Buolamwini and Timnit Gebru, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
- Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy
- Safiya Noble, Algorithms of Oppression: How Search Engines Reinforce Racism
- Zeynep Tufekci, Twitter and Tear Gas: The Power and Fragility of Networked Protest
- Meredith Broussard, Artificial Unintelligence: How Computers Misunderstand the World
- Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor