As a moderator myself, I can think of nothing more disturbing than a revised social media moderation policy presented with the caveat that more bad stuff will get through.
Recently, Mark Zuckerberg announced that Meta, the company that heralded and then fumbled the metaverse, will be dialing back moderation on its various platforms. He stated explicitly: “…we’re going to catch less bad stuff…”
You can watch his presentation here.
This is especially menacing because Zuckerberg identifies “bad stuff” as including drugs, terrorism, and child exploitation. He also specifically says Meta is going to get rid of restrictions on topics like immigration and gender, and dial back filters to reduce censorship. Oh, and he says Meta is ending fact-checking.
This is a mess.
Moderation is challenging. That challenge varies with the zeitgeist, the societal character of the times, which is quite complex these days. It also varies by platform. The scope of the moderation challenge on Facebook is greater than at Hypergrid Business, yet the core issues are the same. Good moderation preserves online well-being for contributors and readers, while respecting genuine alternative perspectives.
At Hypergrid Business we have discussion guidelines that direct our moderation. Primarily, we apply moderation to content that is likely to cause personal harm, such as malicious derision and hate speech directed at specific groups or individuals.
In our case, malicious derision, a kind of bad stuff, was driving away contributors, and letting in more of it would not have improved the discussions. We know this because once we instituted discussion guidelines that removed malicious derision, more contributors posted more comments. So when Zuckerberg says Meta intends to get rid of moderation restrictions on topics like gender and immigration, we know from experience that the bad stuff will be malicious derision and hate speech toward vulnerable and controversial groups, and that it will not improve discussions.
The unfortunate ploy in Meta’s new moderation policies is the use of the expression “innocent contributors” in the introductory video presentation. Zuckerberg says that the moderation policies on Meta platforms have blocked “innocent contributors”. Although the word “innocent” typically conveys a blameless purity of disposition, intent, and action, Zuckerberg uses it to describe contributors whether they are the victims or the perpetrators of malicious commentary. This confounding use of “innocent” is a strategic verbal misdirection: Zuckerberg attempts to appear concerned while pandering to any and all sensibilities.
Zuckerberg’s emphasis, however, is not limited to moderation filters. Rather, he is laser-focused on how Meta is going to end third-party fact-checking entirely. He pins the rationale on the assertion that fact-checking is too biased and makes too many mistakes, yet offers no examples of what that alleged shortcoming looks like. Nonetheless, he puts a number on his concerns and says that if Meta incorrectly censors just 1 percent of posts, that’s millions of people.
Zuckerberg further asserts that fact-checkers have destroyed more trust than they’ve created. Really? Again, no real-world examples are presented. But just as a thought experiment, wouldn’t a 99 percent success rate actually be reassuring to readers and contributors? Of course, he’s proposing an arbitrary percentage, framing the 1 percent statement as a misleading hypothetical, so in the end he’s simply being disingenuous about the issue.
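To see how the same numbers can be spun in either direction, here is a rough back-of-the-envelope calculation. The daily post volume is a purely hypothetical round figure chosen for illustration; Zuckerberg cites no actual denominator:

```python
# Back-of-the-envelope look at the "1 percent" rhetoric.
# ASSUMPTION: the daily volume below is a hypothetical round number
# chosen for illustration -- no real figure is given in the presentation.
moderated_posts_per_day = 100_000_000   # hypothetical moderation volume
error_rate = 0.01                       # the claimed 1 percent of wrong calls

wrongly_censored = int(moderated_posts_per_day * error_rate)
correctly_handled = moderated_posts_per_day - wrongly_censored

print(f"wrongly censored:  {wrongly_censored:,} posts/day")    # 1,000,000
print(f"correctly handled: {correctly_handled:,} posts/day")   # 99,000,000
```

Both lines describe the same system: “millions of mistakes” sounds damning, while a 99 percent success rate sounds reassuring. Quoting the error count without the denominator is precisely the misleading framing at work here.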
Facts are essential for gathering and sharing information. Without an assurance that you’re getting facts, you enter the fraught territory of lies, exaggerations, guesses, wishful thinking… there are many ways to distort reality.
It’s fair to say that fact-checking can fall short of expectations. Facts are not always lined up and ready to support an idea or a belief. It takes work to fact-check, and that work carries a cost for the fact-checker. A fact used in a misleading context invites doubts about credibility. New facts may supplant previous facts. All fair enough, but understanding reality isn’t easy. If it were, civilization would be far more advanced by now.
Zuckerberg, however, has an obvious bias of his own in all of this. Meta doesn’t exist to ensure that we have the best information. Meta exists to monetize our participation in its products, such as Facebook. Compare this to Wikipedia, which depends on donations and provides sources for its information.
Zuckerberg argues against the idea of Meta as an arbiter of truth. Yet Meta products are designed to appeal to the entire planet and draw contributors from the entire planet. The content of discussions on Meta platforms impacts the core beliefs and actions of millions of people at a time. To treat fact-checking as a disposable feature is absurd. Individuals cannot readily verify global information on their own. Fact-checking is not only a transparent approach to large-scale verification of news and information; it’s an implicit responsibility for anyone, or any entity, that provides global sharing.
Facts themselves are not biased. What Zuckerberg is really responding to is that fact-checking has appeared to favor some political positions over others. And this is exactly what we would expect in ethical discourse: all viewpoints are not equally valid, in politics or in life. Indeed, some viewpoints are simply wish lists of ideological will. If Zuckerberg wants to address bias, he needs to start with himself.
As noted, Zuckerberg clearly seems uncomfortable with Meta in the spotlight on the issue of fact-checking. Well, here’s a thought: Meta shouldn’t be deciding whether something is true or not; that’s what fact-checking services take care of. Third-party fact-checking places the burden of legitimacy on outside sources. The only thing Meta has to arbitrate is its contracts with fact-checking organizations for their fact-checking work. When Zuckerberg derides and discontinues third-party fact-checking, he isn’t just insulating Meta from potential controversies. He uncouples Meta’s contributors from their grounding and responsibilities. As a consequence, stated in his own words, “…we’re going to catch less bad stuff…”
What Zuckerberg proposes instead of fact-checking is something that completely undermines the intrinsic strength of facts and relies instead on negotiation. Modeled on the Community Notes system on X, Meta’s replacement allows only “approved” contributors to post challenges to posts. But the notes they post are published only if other “approved” contributors vote on whether those notes are helpful… then an algorithm further processes the ideological spectrum of all those voting contributors to decide whether the note finally gets published. Unsurprisingly, it has been widely reported that the majority of users never see notes correcting content, regardless of the validity of the contributors’ findings. Zuckerberg argues for free speech, yet Community Notes amounts to effective censorship, suppressing challenges to misinformation.
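For readers who haven’t encountered this mechanism, here is a minimal sketch of the gatekeeping logic. The helpfulness threshold and the cross-spectrum agreement rule are simplified assumptions for illustration; the real scoring models used by X and Meta are more elaborate, but the effect described above is the same:

```python
# Simplified sketch of a Community Notes-style publication gate.
# ASSUMPTIONS: the 0.7 helpfulness threshold and the rule requiring
# support from raters on both ends of an ideological spectrum are
# illustrative stand-ins for the real (more complex) scoring model.
from dataclasses import dataclass

@dataclass
class Rating:
    helpful: bool
    rater_leaning: str  # "left" or "right" in this toy model

def note_is_published(ratings: list[Rating], threshold: float = 0.7) -> bool:
    """A note goes live only if enough approved raters call it helpful
    AND that support spans the ideological spectrum."""
    if not ratings:
        return False  # notes nobody rates are never published
    helpful_share = sum(r.helpful for r in ratings) / len(ratings)
    # Cross-spectrum test: helpful votes must come from both leanings.
    leanings_in_favor = {r.rater_leaning for r in ratings if r.helpful}
    return helpful_share >= threshold and len(leanings_in_favor) >= 2

# A factually solid note rated helpful by only one cluster of raters
# never appears, no matter how accurate it is.
one_sided = [Rating(True, "left")] * 9 + [Rating(False, "right")]
print(note_is_published(one_sided))  # False
```

Note what the gate tests: not whether the correction is accurate, but whether raters across an ideological spectrum agree to surface it. That is the substitution of negotiation for facts described above.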
Clearly, getting to the facts that support our understanding of the realities of our world increasingly falls on us as individuals. But that takes effort and time. If our sources of information aren’t willing to verify the legitimacy of that information, our understanding of the world will absolutely become more, rather than less, biased. So the next time Zuckerberg disingenuously prattles on about his hands-off role supporting the First Amendment and unbiased sharing, remember that what he’s really campaigning for is to allow the sea of misinformation to expand exponentially, at the expense of the inevitable targets of malicious derision. Zuckerberg’s bias is to encourage more discussions by any means, a goal which, for a platform with global reach, is greatly aided by having less moderation. Moderation that protects you at that scale is being undermined. Zuckerberg said it himself: “…we’re going to catch less bad stuff…”