During the past year, there has been growing conversation about using artificial intelligence responsibly, and growing awareness of its human implications. We’ve seen these issues play out with autonomous vehicles, in the criminal-justice system, in financial services, healthcare, education and many other domains of public and private life.
But now, as the United States reckons with its history of systemic and structural racism, it is even more urgent to examine how we use technology and how its use can propagate inequity. This isn't new; much of what we know today about the human impact of technology is built on the foundational research of scholars, data scientists and engineers, many of them women, Black, or people of color, who have done pioneering work to understand how societal norms and biases influence technology development and use.
For example, the Gender Shades Project, published in 2018 by Joy Buolamwini and Timnit Gebru, demonstrated the ways in which machine learning algorithms (in this case, facial recognition) can discriminate based on race and gender, raising awareness of the potential, now a reality, for false arrests based on misidentification of people with darker skin tones. In this context, recent moves by Amazon, Microsoft and IBM to pause or suspend the use of facial recognition by police departments, along with the Facial Recognition and Biometric Technology Moratorium Act of 2020, introduced in Congress last week, demonstrate the impact of such research and the urgency of suitable governance structures.
At the same time, COVID-19 is upending the ways we live and work, accelerating our use of digital and intelligent technologies and exposing vulnerabilities in organizational strategies, systems and processes. While these issues are not causally related, they spotlight the complex dynamics that come into play as we increasingly rely on technologies that learn from data and describe and classify human attributes and behaviors.
As intelligent technologies become even more intertwined with the way we live and move through the world, we need to do the work to understand their power, face their risks head-on, and actively govern their human impact.
This infographic, developed in partnership with Microsoft (disclosure: client), lays out a few fundamentals for using AI responsibly. It is intended simply as a thought-starter: a way to present some of the most common issues so that you can deepen your understanding, spot potential problems, and know where to go for help. A few resources for further learning:
- Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code
- Safiya Noble, Algorithms of Oppression: How Search Engines Reinforce Racism
- Data 4 Black Lives
- Algorithmic Justice League
- Microsoft, AI Business School
- Altimeter, Innovation + Trust: The Foundation for Responsible Artificial Intelligence
- Salesforce, Responsible Creation of Artificial Intelligence Trailhead