Yesterday, Gizmodo ran an explosive story alleging that Facebook routinely suppresses conservative news, according to a former journalist who worked for the company. The journalist, who remains unnamed, said:
“…workers prevented stories about the right-wing CPAC gathering, Mitt Romney, Rand Paul, and other conservative topics from appearing in the highly-influential section, even though they were organically trending among the site’s users.”
Other former Facebook “news curators” reportedly told Gizmodo that they “were instructed to artificially ‘inject’ selected stories into the trending news module, even if they weren’t popular enough to warrant inclusion.”
Since the story broke, the news has trended widely, prompting RNC Chairman Reince Priebus to demand that Facebook answer these allegations. Snopes, a site dedicated to researching rumors and urban legends, posted an update yesterday reminding readers that the allegations are still unproven.
And this morning, Facebook’s Tom Stocky, who leads the Trending Topics group, posted an update on Facebook stating categorically:
“We do not insert stories artificially into trending topics, and do not instruct our reviewers to do so.”
Further, he states:
“There are rigorous guidelines in place for the review team to ensure consistency and neutrality. These guidelines do not permit the suppression of political perspectives. Nor do they permit the prioritization of one viewpoint over another or one news outlet over another. These guidelines do not prohibit any news outlet from appearing in Trending Topics.”
So we’re back to “he said, they said”.
But, as my educator spouse would say, this story represents a “teachable moment” about what algorithms can and can’t do, and what we can expect as they become a more prevalent “curator” of digital information and experiences. To do this, I’d like to unpack this story a bit, as it illustrates some of these issues.
The first point, particularly relevant given that this is a story about journalism, is that unsourced stories are inherently problematic.
I can understand why the unnamed sources would want to avoid a defamation lawsuit, but at the same time, if there were enough evidence of ideology-based manipulation, why not come forward? It undermines the veracity of the story. (To be clear: this does not mean the story is false. It’s just impossible to verify.)
Algorithmic decision-making is not “automatic.” Algorithms still need to be told what is and isn’t “important.”
In paragraph 3, Stocky says that “popular topics are first surfaced by an algorithm, then audited by review team members to confirm that the topics are in fact trending news in the real world and not, for example, similar-sounding topics or misnomers.”
Analysts call that process “disambiguation,” meaning that it is intended to eliminate confusion or ambiguity caused by similar keywords that mean different things. One simple example from today’s news would be distinguishing stories about earnings targets from stories about Target the company. Same word, different meaning.
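To make the idea concrete, here is a toy sketch of keyword disambiguation. This is purely my own illustration, not Facebook’s actual system; the cue words and sense labels are invented. It shows the basic move: decide which meaning of an ambiguous word a headline intends by looking at the words around it.

```python
# Toy disambiguation sketch (my illustration, not Facebook's system):
# decide which sense of "target" a headline uses by checking which
# sense's context words co-occur with it.

CONTEXT_WORDS = {
    "company": {"retailer", "stores", "ceo", "shoppers", "walmart"},
    "goal": {"earnings", "sales", "quota", "forecast", "analysts"},
}

def disambiguate(headline: str) -> str:
    """Return the most likely sense of 'target' in a headline."""
    words = set(headline.lower().split())
    scores = {sense: len(words & cues) for sense, cues in CONTEXT_WORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(disambiguate("Target CEO speaks to shoppers about new stores"))  # company
print(disambiguate("Analysts raise sales target forecast"))  # goal
```

Real systems use far richer signals (entity databases, user engagement patterns, machine-learned models), but the core problem is the same: the same string can name different things, and something has to decide which.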
Humans are fallible, and we can’t pretend they don’t have biases. Related point: what’s important and what’s not is subjective.
Algorithmic decision-making systems need policies and internal controls as much as or more than humans do. In paragraph 4, Stocky addresses this as follows:
“We have in place strict guidelines for our trending topic reviewers as they audit topics surfaced algorithmically: reviewers are required to accept topics that reflect real world events, and are instructed to disregard junk or duplicate topics, hoaxes, or subjects with insufficient sources. Facebook does not allow or advise our reviewers to systematically discriminate against sources of any ideological origin and we’ve designed our tools to make that technically not feasible. At the same time, our reviewers’ actions are logged and reviewed, and violating our guidelines is a fireable offense.”
Let’s take these points one by one.
- First, anyone who’s wrongly believed a celebrity death hoax knows that it makes sense to validate that something that is trending is actually true, or, if you can’t independently verify it (and this is where things get gray), likely to be true.
- The second sentence in this paragraph is interesting, especially in its use of the word “systematically”. If I had to guess, I’d say that Facebook would never categorically guarantee that a single person won’t inject his or her bias into the job. One person’s judgment of what’s important may look to others like proof of bias.
- But the third and fourth sentences disclose that Facebook has put internal controls in place: technical barriers to discrimination (likely looking at other signals to see if they are consistent) and an audit trail to catch all activities and potentially enable them to investigate anomalies after the fact. Given that, I wonder whether the company is conducting an audit right now; I hope they are.
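The audit trail Stocky describes is a standard internal control. Here is a minimal sketch of the idea, under my own assumptions (the field names, action labels, and reviewer IDs are invented for illustration, not taken from Facebook): every reviewer decision is appended to a log, so individual histories can be pulled and examined after the fact.

```python
# Minimal sketch of a reviewer audit trail (my assumption of the
# general pattern, not Facebook's implementation): log every action,
# query the log later to investigate anomalies.

import datetime

audit_log = []  # in practice: durable, append-only storage

def log_action(reviewer: str, topic: str, action: str) -> None:
    """Record a reviewer decision with a timestamp."""
    audit_log.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "reviewer": reviewer,
        "topic": topic,
        "action": action,  # e.g. "accept", "reject-hoax", "reject-duplicate"
    })

def actions_by_reviewer(reviewer: str) -> list:
    """Pull one reviewer's history, e.g. to check for a pattern of bias."""
    return [entry for entry in audit_log if entry["reviewer"] == reviewer]

log_action("reviewer_42", "#CPAC", "accept")
log_action("reviewer_42", "#SomeHoax", "reject-hoax")
print(len(actions_by_reviewer("reviewer_42")))  # 2
```

The value of such a log is exactly what Stocky implies: it makes after-the-fact investigation of a single reviewer’s choices possible, which a purely verbal policy cannot.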
It’s also important to look at the allegation about #blacklivesmatter. The context of Stocky’s post suggests what may have happened here.
“There have been other anonymous allegations — for instance that we artificially forced #BlackLivesMatter to trend. We looked into that charge and found that it is untrue. We do not insert stories artificially into trending topics, and do not instruct our reviewers to do so. Our guidelines do permit reviewers to take steps to make topics more coherent, such as combining related topics into a single event (such as #starwars and #maythefourthbewithyou), to deliver a more integrated experience.”
This is a critical point to understand when looking at how social movements develop on social networks.
At the beginning, related trends may be fragmented until a single theme (or themes) emerges that unites them (as Stocky explained with the Star Wars example). The only way to truly understand how #blacklivesmatter trended is to look at the many different ways in which users expressed their opinions and feelings about what we now think of as a movement at the time (back to the earlier point about disambiguation). What makes this even more important is that it is generally acknowledged that much of the #blacklivesmatter movement started on Twitter.
Of course, naming something gives it credibility and power. Giving it a hashtag validates it as a trending topic. So the choice of uniting multiple conversations about a deeply felt subject (or, conversely, making the choice NOT to unite them) is in itself a political act, because it gives voice to a movement rather than allowing it to remain fragmented. So there is always some level of human judgment, even in the apparently sterile world of algorithms.
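The merging step Stocky describes (combining #starwars and #maythefourthbewithyou into one topic) can be sketched as a simple clustering problem. This toy version is my own illustration, not Facebook’s algorithm, and the keyword sets and threshold are invented: hashtags whose associated keywords overlap enough get grouped into a single topic.

```python
# Toy sketch of merging fragmented hashtags into one topic (my own
# illustration, not Facebook's algorithm): greedily cluster tags
# whose keyword sets are sufficiently similar.

def keyword_overlap(a: set, b: set) -> float:
    """Jaccard similarity between two keyword sets."""
    return len(a & b) / len(a | b)

def merge_related(tag_keywords: dict, threshold: float = 0.4) -> list:
    """Greedily cluster hashtags whose keyword sets are similar."""
    clusters = []
    for tag, kws in tag_keywords.items():
        for cluster in clusters:
            if keyword_overlap(kws, cluster["keywords"]) >= threshold:
                cluster["tags"].append(tag)
                cluster["keywords"] |= kws
                break
        else:
            clusters.append({"tags": [tag], "keywords": set(kws)})
    return clusters

trends = {
    "#starwars": {"star", "wars", "may", "fourth", "jedi"},
    "#maythefourthbewithyou": {"may", "fourth", "star", "wars", "force"},
    "#earningsreport": {"quarterly", "earnings", "stocks"},
}
clusters = merge_related(trends)
print([c["tags"] for c in clusters])
# → [['#starwars', '#maythefourthbewithyou'], ['#earningsreport']]
```

Notice the human fingerprints even in this tiny sketch: someone chose the threshold, and someone decided which keywords characterize each tag. That is the point of the paragraph above: whether fragmented conversations get united into one named movement is a judgment, not an inevitability.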
Finally, Gizmodo raised the issue of “caution” related to stories about Facebook itself; this is a sticky one, given that Facebook is both the news channel and the subject (and, in addition, a publicly traded company). Facebook has to be careful not to run afoul of the SEC by being seen to “manipulate” news, so it makes sense that the company would have editorial practices in place that are similar to what news organizations do when faced with a story that concerns them.
A few final thoughts. Mark Zuckerberg’s political views are not a secret. Facebook is headquartered in a blue state. The company has a tremendous amount of power (and will arguably have more in the future) to set agendas of all kinds. It’s critically important for Facebook (and, as has been argued for years, other organizations with significant media power) to reflect a diversity of viewpoints, whether based on race, gender, politics, geography or other factors.
Did Facebook contractors consciously suppress conservative viewpoints? We’ll probably never know, but the lessons for Facebook (and for other organizations beginning to infuse algorithms and artificial intelligence into their systems) are as follows:
- In technology as in human life, there is no such thing as complete objectivity.
- We need to include voices we may not agree with, and do this in a transparent way.
- Trust is the currency of the digital age. If Facebook (or any organization) wants to earn and retain the trust of its users, it needs to use this experience as a “teachable moment” to investigate its own practices. This should be a constant process, not an activity based on an unexpected critical news story.
A final word: Please note that this is my analysis (as is the rest of this post) and does not reflect any input from Facebook or elsewhere. Any errors are mine alone.