There has been so much attention lately on AI ethics, and from so many directions. Nearly every week there’s a new story on race, gender, and other types of bias in data and algorithms: image recognition technology that disproportionately misidentifies people of color as criminals, or recruiting algorithms that determine that the two most predictive characteristics of successful candidates are (1) that they are named Jared, and (2) that they play lacrosse.
It’s also clear that algorithms are just as effective at spreading and amplifying misinformation as they are at distributing more desirable content. New capabilities such as Google Duplex raise questions about whether, and when, machines such as robots and voice assistants should “act” human. It’s a lot to ponder, and it doesn’t help that technology moves so much faster than our collective ability to process it. But the past six months have seen promising developments:
- AI Now released a framework for an Algorithmic Impact Assessment that public agencies can use to think through the issues related to AI in their communities. This could, and should, be customized for industry as well.
- Universities such as Stanford, Berkeley, and MIT in the US, as well as Oxford in the UK, are doubling down on technology ethics research and weaving ethics content into their coursework.
- IBM released a set of trust and transparency capabilities for its cloud offerings, and Salesforce announced and launched an Office of Ethical and Humane Use to address the ethical issues related to AI and cloud technologies.
This is a topic I’m passionate about for many reasons. I started my career as a fish out of water (a rhetoric grad in tech!), and now that machines have the potential to replicate some types of human capabilities, it’s more important than ever to get them right. That means bringing together stakeholders from technology, policy, and, yes, the humanities (philosophers, sociologists, linguists, ethicists, anthropologists) to think about and plan for an AI-enabled world: one that puts technology in the service of people, rather than the other way around.
Insights on AI Ethics and Customer Experience
Here are four “provocations” that shape my thinking about AI and ethics, or, if you prefer, AI and customer experience:
- AI increasingly shapes our (and our customers’) reality. This will only intensify over time as algorithms become more prevalent and control more business and civic functions.
- Legislation (such as GDPR) will never be able to fully anticipate or keep pace with technological change.
- The more human AI becomes, the more humane it needs to be.
- Ethics and innovation are not a zero-sum game; in fact, ethics can be an accelerator for innovation, because AI can reveal bias and give us the opportunity to address it.
Contemplating the Long-Term Effects of AI
At Dreamforce, Salesforce Chief Scientist Richard Socher speculated about what the world might be like today if the inventors of the internal combustion engine had imagined the long-term effects of their invention. Would we be faced with the same threats to our climate? Would we have planned our future differently? He argues that we’re at a similarly early stage with AI, and that it’s in our collective power to determine what we want our future to look like.
Our AI Ethics Discussion at O’Reilly’s AI Conference London
I spoke about many of these issues at O’Reilly’s AI conference in London last month, and had a chance to sit down with Paco Nathan to discuss just where we are with AI and ethics, and what it means for business and society. I hope you enjoy the conversation as much as I did.