In 2016, enterprise AI was still new, the word “ethics” made people squirm, and few people in business and technology thought much about the ethical implications of AI. It was a different matter in academia, however. Scholars of Science and Technology Studies (STS), historians of technology, anthropologists, sociologists and philosophers had been studying the impact of technology on society for decades, if not centuries.

That year, Dr. Cathy O’Neil published Weapons of Math Destruction, which detailed how misuse of big data amplifies inequality. People at conferences and meetups discussed research papers with titles such as Man Is to Computer Programmer as Woman Is to Homemaker? Debiasing Word Embeddings (2016) and Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints (2017) that sounded the alarm on the risks of biased algorithms.

2018 was something of a turning point. Dr. Safiya Noble published Algorithms of Oppression, which detailed the ways search engines reinforce discrimination, and Dr. Timnit Gebru (then at Microsoft Research) and Dr. Joy Buolamwini (MIT Media Lab) launched the Gender Shades project, which demonstrated that three leading commercial facial recognition products were markedly less accurate at classifying the gender of people with darker skin tones, and least accurate for darker-skinned women.

None of this was surprising to the scholars, many of them Black, women, LGBTQ+, and people of color, who had been studying and personally experiencing the relationship between technology and society for years. But it was a wake-up call for businesses that had been commercially marketing or simply using these technologies: Was facial recognition technology racist?

And if this kind of bias exists in facial recognition, what about other types of AI tools? What about digital assistants? What about so-called predictive models? AI aside, what about soap dispensers and cameras? How should these technologies be used, and how shouldn’t they? How do we quantify and address the risks?

Since then, there has been important progress: broader recognition of the ways in which technology reinforces racism and other forms of discrimination in healthcare, rideshare services and the criminal justice system, for example. In business, there has been meaningful progress in research, data science, product development and corporate governance, as early leaders at Salesforce, Google and Microsoft grapple with their duty of care around data, algorithms and technology use overall.

This is extraordinarily complex work, given the number of AI tools and possible use cases. But there had been enough momentum on issues of harms modeling, bias remediation, interpretability, ethical development, impact assessments, and corporate policy that it seemed reasonable to hope for a future that incorporated more considered and inclusive norms for AI and tech overall. (Of course, there was a non-trivial amount of posturing going on as well; that comes with the territory.)

Then in December 2020 came the news that Google had “accepted the resignation of” Dr. Timnit Gebru, co-lead of the Ethical AI team and co-author of the Gender Shades project. Her dismissal was related to—but reportedly not entirely because of—a research paper that she, Ethical AI co-lead Dr. Margaret Mitchell, and others had authored exposing risks associated with large language models. In February 2021, Google fired Dr. Mitchell and abruptly reorganized the team. It’s an involved and ugly story, and one that is far from over.

The point of this post is not to attempt to explain or characterize what happened specifically at Google; that has been done extensively and with great care by VentureBeat reporter Khari Johnson, Verdict, and Fast Company deputy editor Katharine Schwab, among others. But it’s critical to recognize that Drs. Gebru and Mitchell are among the most highly respected scholars in their field, and that the arguments they made in their paper (and, to be clear, their work overall) are ones that industry needs to hear, consider and confront, as they have human rights, environmental, and yes, business implications.

So, where do we go from here? What can we conclude when one of the largest companies in the world effectively dismantles one of the most respected and diverse teams focused on some of the most critical topics in their industry, particularly during a period when the United States is reckoning with its shameful history of racial injustice? Does it signal that any effort to combine business, technology and ethics is doomed to fail? And if not, how do we make sense of it?

Here are a few thoughts on what other companies—and the industry overall—should consider.

  • Regulation is inevitable—and needed. As AI becomes more ingrained in our daily lives and shapes our experiences and our health, educational and financial outcomes, we need to reckon with the ways it amplifies bias, usually at the expense of the most vulnerable and marginalized. Addressing these impacts transparently is part of organizations’ duty of care toward their customers. We’ve seen this dynamic over and over—in telecommunications, healthcare, workplace safety, automotive and other industries as they became industrialized and/or part of the mainstream. Why should tech be any different?
  • Ethical and/or responsible technology use is, and should be treated as, part of the normal product development process—AI or not. The more we separate “ethics” or “responsibility” from product development, the more we undermine trust in the core product itself. Do we build “cars” and “responsible cars”? This isn’t a hypothetical; once you add sensors to everyday objects and someone starts analyzing and building predictive models based on that data, you’ve crossed into human impact territory.
  • There is upside opportunity for organizations that earn the trust of their communities. As digital interactions become more common and as generational and cultural norms shift, businesses that value human impact will earn a trust dividend, while those that value short-term profit over people will suffer. If you doubt that norms are changing, recall that last year Amazon put a moratorium on the sale of facial recognition to police in response to calls from civil rights and research organizations. And this is not just a question of how businesses treat the technology: AI ethics teams are the canaries in the coal mine, and how organizations treat them will come under increasing scrutiny.

Finally, we owe Drs. Timnit Gebru, Margaret Mitchell and their co-authors, Dr. Emily M. Bender and Angelina McMillan-Major, a huge debt of gratitude for asking tough questions at great personal cost. If it leads more organizations to reflect on and begin to address the human impact of their products and services, the Google AI Ethics team will have left us a remarkable legacy.