The Foundation of Responsible Artificial Intelligence
This report by Susan Etlinger—based on interviews with top academics, industry leaders, and companies—lays out the issues leaders need to know and the steps they must take to lay the foundation for responsible AI in the enterprise.
During the past year, we’ve seen a dramatic increase in public conversation about the ethics of Artificial Intelligence (AI). Concerns about the impact of biased algorithms, the lack of transparency, and how AI technologies are and should be used have consistently garnered headlines in major news outlets around the world, part of a “techlash” against the large technology platform companies.
In response, some of these companies have instituted principles, advisory boards, checklists, and other tactics to demonstrate that they are thoughtful and responsible stewards not only of data and algorithms, but of our broader experiences in the digital and physical world. But while news coverage of the techlash has raised awareness of the ethical and experiential issues of AI, it generally hasn’t addressed the implications for the organizations that buy, build, and implement these technologies to power digital experiences.
What You Will Learn in This Report
This report, based on interviews with leaders from business, academia, and non-governmental organizations, defines the ethical issues resulting from the development and deployment of AI, lays out the approaches being tested and used, and proposes a way to begin addressing them. This is a highly complex field, and no single strategy is appropriate for all industries or companies. Still, this report may be used as a guide to better understand the unique implications of AI, begin to socialize them within your organization, and build the organizational capability to foster both trust and innovation among customers, employees, shareholders, partners, and the general public. In this report, you will learn about:
Five Key Issues of Responsible Artificial Intelligence
- Bias and Discrimination
- Authenticity and Disclosure
- Regulation and Governance
A Roadmap for Responsible Artificial Intelligence
- Identify Specific Issues to Address
- Empower a Core Team
- Define a Stakeholder List
- Develop a Strategy and an Action Plan
- Measure Impact, Test, Learn