To delete or not to delete? Facebook’s rulebook at the time was clear, as New York Public Radio’s Radiolab podcast tells it: nothing showing “insides on the outside” could be shown.
If Facebook wanted to continue being seen as a neutral tech company, the small team should have stuck to their rules and removed the graphic image.
Instead, an executive under Facebook CEO Mark Zuckerberg “sent down an order” – make an exception.
And so, by judging what is newsworthy, Facebook became a media company that day.
A project to reduce the spread of false news
It’s the morning after the party that celebrated the internet as the great educator and democracy-enabler. As dawn breaks, we can start surveying the mess left by two uninvited guests: misinformation and disinformation.
In Facebook’s corner lies unfettered hate speech against the Rohingya minority in Myanmar and fuel for Philippine President Rodrigo Duterte’s extrajudicial war on drugs. Scattered around are breaches of personal information and cross-border meddling in elections.
Facebook has long shirked the duty of publisher, but now says it has “a responsibility to fix the issues” on its platform.
Its campaign against false news includes removing fake accounts, giving people more context about what they’re reading and reducing the spread of false news.
Fact-checkers don’t remove content
As part of its third-party fact-checking programme, Facebook allows its partners to see public articles, pictures or videos that Facebook’s machines, or regular users, have flagged as potentially inaccurate. (Here’s how to report something you suspect is false.)
Fact-checkers evaluate the content’s primary claim and give it one of eight ratings, such as “false”, “mixture” or “true”.
If the primary claim is found to be inaccurate, Facebook reduces the content’s distribution on the network. When it does show up in someone’s news feed, related articles by fact-checkers are appended to it. People who have previously shared the content will also be notified of the additional reporting.
“We believe that downranking misleading content strikes the right balance between encouraging free expression and promoting a safe and authentic community,” says Tessa Lyons, a Facebook product manager focusing on the integrity of information on the news feed.
But it’s important to note the limits of the campaign.
Content isn’t removed. If the inaccurate content was posted by a page you follow, or your significant other commented on it, you’ll likely still see it in your news feed. But you probably won’t see it if a former colleague “likes” it.
The rating can be overturned. If it’s a genuine mistake, the person or publisher can correct the content and have the strike – as Facebook calls it – removed.
Private content, satire and opinion are off-limits. In these cases, fact-checkers rate the flagged content as such.
And yes, Facebook pays fact-checking organisations for the work we do. It’s only fair that one of the biggest companies in the world compensates the small teams that help it clean up.
Focus on content that can harm
Facebook’s fact-checking programme isn’t without problems. There are only so many fact-checkers, and millions of times more pieces of false content. The social network has also been slow or absent in protecting the fact-checkers in Brazil and the Philippines who have faced relentless harassment.
Then there are accusations of bias, particularly the view through the “partisan goggles” of US politics. It doesn’t help that Facebook has yet to flesh out guidelines on the types of inaccurate information it wants its fact-checking partners to prioritise.
As Africa Check starts fact-checking for Facebook, we’ll focus on bogus health cures, false crime rumours and things like pyramid schemes – the kind of content that can lead to poor decisions and physical harm.
Is the fact-checking programme a perfect and permanent solution? Definitely not.
But the damage has been done and it’s time to put on the cleaning gloves.