During the past year, there has been growing conversation about using artificial intelligence responsibly, and growing awareness of its human implications. We’ve seen these issues play out in autonomous vehicles, the criminal-justice system, financial services, healthcare, education, and many other domains of public and private life.

But now, as the United States reckons with its history of systemic and structural racism, it’s even more urgent to examine the way we use technology and how its use can propagate inequity. This isn’t new; much of what we know today about the human impact of technology is based on the foundational research of scholars, data scientists, and engineers, many of them women and many of them Black or people of color, whose pioneering work shows how societal norms and biases influence technology development and use.

For example, the Gender Shades Project, published in 2018 by Joy Buolamwini and Timnit Gebru, demonstrated the ways in which machine learning algorithms (in this case, facial recognition) can discriminate based on race and gender, raising awareness of the potential (which is now reality) for false arrests based on misidentification of people with darker skin tones. In this context, recent moves by Amazon, Microsoft, and IBM to pause or suspend the use of facial recognition by police departments, and the Facial Recognition and Biometric Technology Moratorium Act of 2020, introduced in Congress last week, underscore the impact of such research and the urgent need for suitable governance structures.
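To make that kind of finding concrete, here is a minimal sketch of the disaggregated evaluation that Gender Shades popularized: rather than reporting one aggregate accuracy number, you compare a system’s error rate across demographic subgroups. The records, identities, and group labels below are purely hypothetical stand-ins for a real audit dataset.

```python
# A minimal, hypothetical sketch of disaggregated evaluation: compare error
# rates per demographic subgroup instead of a single aggregate accuracy.
from collections import defaultdict

# Each record: (true_identity, predicted_identity, subgroup).
# These values are invented purely for illustration.
predictions = [
    ("person_01", "person_01", "lighter-skinned men"),
    ("person_02", "person_02", "lighter-skinned women"),
    ("person_03", "person_07", "darker-skinned women"),   # misidentification
    ("person_04", "person_04", "darker-skinned women"),
    ("person_05", "person_09", "darker-skinned men"),     # misidentification
    ("person_06", "person_06", "lighter-skinned men"),
]

totals, errors = defaultdict(int), defaultdict(int)
for true_id, predicted_id, subgroup in predictions:
    totals[subgroup] += 1
    if predicted_id != true_id:
        errors[subgroup] += 1

for subgroup in sorted(totals):
    rate = errors[subgroup] / totals[subgroup]
    print(f"{subgroup}: error rate {rate:.0%} (n={totals[subgroup]})")
```

In this toy example the aggregate accuracy looks respectable, yet every error falls on the darker-skinned subgroups; it is the per-group view, not the overall number, that makes the disparity visible.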

At the same time, COVID-19 is upending the ways we live and work, accelerating our use of digital and intelligent technologies and exposing vulnerabilities in organizational strategies, systems and processes. While these issues are not causally related, they spotlight the complex dynamics that come into play as we increasingly rely on technologies that learn from data and describe and classify human attributes and behaviors.

As intelligent technologies become even more intertwined with the way we live and move through the world, we need to do the work to understand their power, face their risks head-on, and actively govern their human impact.

This infographic, developed in partnership with Microsoft (disclosure: client), lays out a few fundamentals for using AI responsibly. It is intended simply as a thought-starter: a summary of some of the most common issues, so that you can increase your understanding, spot potential problems, and know where to go for help.

This is an infographic introducing the topic of Responsible Artificial Intelligence.

Artificial Intelligence (AI) has enormous potential for business and society. But its potential also makes it necessary to understand its implications and to establish guardrails to ensure we use it productively and responsibly.

WHERE DO WE MOST COMMONLY SEE AI TECHNOLOGIES?

• Computer Vision: Systems that identify, classify, and interpret images and video.
• Conversational AI: Systems that detect, understand, translate, analyze, and generate human language.
• Predictive Modeling: Systems that use data to make predictions or recommendations.
• Robotics: Machines like autonomous cars, drones, and robots that can move, react to sensory input, and perform high-precision tasks.

WHAT ARE SOME OF THE PRIMARY ISSUES IN AI THAT WE NEED TO ADDRESS?

• Disclosure: Being clear about when we use algorithms and autonomous agents such as chatbots, voice agents, and robots.
• Accessibility: Designing products and services for inclusivity and accessibility.
• Explanation: Providing people with explanations for decisions made by algorithms.
• Privacy: Ensuring policies and processes comply with applicable regulations related to privacy and data use.
• Bias: Using processes to reveal and address unwanted and/or potentially harmful bias in data, models, and algorithms.
• Data Security: Implementing processes that reduce the risk of compromising data security.

LET’S TAKE A LOOK: RESPONSIBLE AI IN RECRUITING

• Language-understanding technology may not understand dialects or accents, higher-pitched voices, the elderly, or people with disabilities.
• So-called “emotion-detection” technologies claim to understand people’s emotional state and predict their employability. In reality, these technologies may unfairly disadvantage non-native speakers, nervous candidates, and others based on demographic and cultural biases.
• Predictive models tend to reinforce and amplify bias in data, which can lead to discrimination (one simple check for this is sketched after the infographic).
• If there is a lack of diversity in the design process, the team may not have the range of perspectives needed to anticipate and address issues of bias.

It’s critically important that we don’t take powerful technologies such as AI at face value. To use these technologies effectively and responsibly, we must understand the issues, address them comprehensively, and design appropriate governance processes to evaluate and correct for unintended consequences.

HOW DO I GET STARTED?

These steps can help you build and socialize a foundation for responsible AI. For more detail, see “Innovation + Trust: The Foundation for Responsible Artificial Intelligence”.

1: Identify issues.
2: Empower a core team.
3: Identify stakeholders.
4: Develop strategy & action plan.
5: Measure impact, test, learn. Repeat.

WHERE DO I GO FROM HERE?

• Read “Innovation + Trust: The Foundation for Responsible Artificial Intelligence” by Susan Etlinger, Altimeter Group, to guide your action plan for Responsible AI.
• To learn more about how this piece relates to Microsoft’s AI principles, read Responsible AI, which talks about the primary issues, then find tools and resources from the Microsoft AI Business School.

OTHER RESOURCES

• The Partnership on AI to Benefit People and Society
• Data and Society
• AI Now Institute
• World Economic Forum: Shaping the Future of Artificial Intelligence and Machine Learning
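As one concrete illustration of the bias checks mentioned in the recruiting example above, the sketch below compares a model’s selection rates across candidate groups and flags any group whose rate falls below four-fifths of the highest group’s rate, a rough screen borrowed from US adverse-impact guidance. The group names and decisions are hypothetical; a real audit would use additional metrics, intersectional groups, and statistical tests.

```python
# A minimal, hypothetical sketch of a selection-rate (adverse impact) check
# for a recruiting model. All group names and decisions are invented.
def selection_rate(decisions):
    """Fraction of candidates the model recommends to advance."""
    return sum(decisions) / len(decisions)

# 1 = model recommends advancing the candidate, 0 = it does not.
model_decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],
}

rates = {group: selection_rate(d) for group, d in model_decisions.items()}
reference = max(rates.values())
for group, rate in rates.items():
    ratio = rate / reference
    flag = "  <- review: below 0.8 of the highest group" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f}{flag}")
```

A check like this does not prove or disprove discrimination on its own; it is simply one way to surface a disparity early so that a governance process, like the one outlined in the steps above, can investigate and correct it.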
