Preventing Minority Report from Becoming a Reality

Yes, it’s true. Elements of Gattaca and Minority Report are here. But we can use them as a caution–not a blueprint–for the future. One of the more unsettling aspects of data ubiquity is that it’s easier than ever to extend discrimination from the offline world to the online one.

One disturbing recent example, covered in Fast Company, was an app that used 23andMe’s DNA API to block people based on race and gender. (23andMe promptly shut down the app and blocked the developer from using its API.)

But not every situation is as easy to spot.

A financial services company could use both public and private data to segment retirees with low savings and offer them reverse mortgages or high-interest loans. Or it could target ads for payday loans based on race, gender, or zip code. The data provider (unlike 23andMe in the example above) may not always have direct insight into how its data is being used.

This is a topic of great interest to the Executive Office of the President, which published a report entitled “Big Data and Differential Pricing” in February 2015. In it, the authors warn:

Big data may facilitate discrimination against protected groups, and when prices are not transparent, differential pricing could be conducive to fraud or scams that take advantage of unwary consumers.

It gives a whole new dimension to the phrase “caveat emptor.”

Of course, making assumptions based on demographic information isn’t new; it’s been around for hundreds of years. But what makes today’s issues different is that, for the first time, organizations have relatively easy access to multiple APIs, which, to be blunt, makes both constructive and destructive uses of data much faster and more efficient.

Consider applications such as predictive policing, which uses data and analytics to forecast how certain situations may develop over time; Telmate, which is used to predict criminal activity in correctional facilities; FaceFirst, facial recognition technology that helps retailers identify (and this is an odd list) shoplifters, felons, past employees, and VIP customers; and so on.

All of these apps rely on assumptions about future behavior based on a combination of past behavior and attributes. In these scenarios, “false positives” can have discriminatory and even disastrous consequences.

All of these tools–like most any tool, really–can be used with the best of intentions. But what we so often forget in our amazement at technology is that the assumptions on which they rely come from human beings, and we are nothing if not fallible.

There are far too many recent examples of incorrect assumptions culminating in tragic consequences. So if anything, this is a plea to ensure that as we build predictive systems, we build them with checks and balances, scenario-plan them, and examine whether the outcomes we see are respectful and fair to everyone they touch. This is only going to become harder over time, as the volume and variety of data increase.
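To make the idea of a check a little more concrete: one common audit is to compare false positive rates across demographic groups in a model’s output, since a large gap is exactly the kind of disparate impact described above. What follows is a minimal sketch of that kind of check, using hypothetical record fields (group, predicted, actual) that are illustrative assumptions, not drawn from any of the systems named in this article.

```python
# Minimal sketch of one possible "check and balance": compare false positive
# rates across demographic groups in a predictive model's output.
# The field names ("group", "predicted", "actual") are illustrative only.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of dicts with 'group', 'predicted' (bool), 'actual' (bool)."""
    fp = defaultdict(int)   # predicted positive, actually negative
    neg = defaultdict(int)  # all actual negatives
    for r in records:
        if not r["actual"]:
            neg[r["group"]] += 1
            if r["predicted"]:
                fp[r["group"]] += 1
    # false positive rate per group, skipping groups with no actual negatives
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

# Illustrative (made-up) data for a hypothetical "flag for review" model.
sample = [
    {"group": "A", "predicted": True,  "actual": False},
    {"group": "A", "predicted": False, "actual": False},
    {"group": "B", "predicted": True,  "actual": False},
    {"group": "B", "predicted": True,  "actual": False},
    {"group": "B", "predicted": False, "actual": True},
]

print(false_positive_rates(sample))
# {'A': 0.5, 'B': 1.0} -- a gap this large is a red flag worth investigating
```

A sketch like this is only a starting point, of course; the harder work is deciding which groups and which error types matter, and what to do when the gaps appear.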
