While the idea of Artificial Intelligence (AI) — of machines that can replicate certain types of human capabilities — has been around for centuries, it’s really only in the past several decades that it’s evolved from science fiction to business reality. Today, the combination of massive amounts of data, inexpensive parallel processing, and continuously improving algorithms has made AI possible — from powering search engines to image recognition to language-understanding algorithms that enable conversational agents like Google Home, Siri, and Alexa.

We’re also seeing increasing adoption momentum among global companies, with a focus on use cases such as optimization, enhanced decision-making, and the development of new products, services and business models. At the same time, we’re seeing a steady stream of news about the risks of AI and the need to use this extremely powerful set of technologies in a humane and responsible way.

What does this mean for business leaders? Simply that if we intend to unlock the potential of AI to optimize and even transform organizations, we need to be clear-eyed about the issues we must manage, the risks we will likely encounter, and the opportunities that lie on the other side.

In this report, you will find:

  • A summary of the key AI ethics issues for business
  • A set of questions on ethical principles, practices and uses of AI to inform strategic planning
  • A list of internal and external stakeholders, with key considerations for each
  • Six key recommendations to jump-start your planning process

Susan Etlinger

Industry Analyst