Social Media Content Moderation

From P2P Foundation

Discussion

Henry Farrell:

"This is an inherently horrible cybernetic task in ways that Mike Masnick’s “Impossibility Theorem” captures nicely.

- "any moderation is likely to end up pissing off those who are moderated. … Despite some people’s desire to have content moderation be more scientific and objective, that’s impossible. By definition, content moderation is always going to rely on judgment calls, and many of the judgment calls will end up in gray areas where lots of people’s opinions may differ greatly. … If you assume that there are 1 million decisions made every day, even with 99.9% “accuracy” (and, remember, there’s no such thing, given the points above), you’re still going to “miss” 1,000 calls. But 1 million is nothing."

The academic literature on the history of moderation (e.g. Tarleton Gillespie) emphasizes how much the companies that have to do it hate it, and how keenly they would love to hand over the messy, difficult decisions to someone else. And cybernetics provides a very clear understanding of why it is so horrible. Social media at scale is inherently unpredictable, which is another way of saying that there is an enormous variety of possible directions that millions of people’s interactions can take, and many of these directions lead to awful places. But stopping this is hard! Some problems involve people saying bad and horrible things that others will be upset by. Others involve scams and fraud. In both cases, the bad actors can display a lot of ingenuity in trying to figure out how to counteract moderation and propel things in bad directions, sometimes manipulating the rules, sometimes hitting on unexpected strategies that dispose towards unwanted states of the world. The result is (a) enormous variety, and (b) malign actors looking to increase this variety, and to push it in all sorts of nasty directions. So how do you variety-engineer content moderation so that it doesn’t devolve into an utter shitshow?

The initial approach of most social media companies was to just pretend the problem away: the founders were inspired by the ideal of a thriving “marketplace of ideas” where censorship was unnecessary, the good stuff would rise to the top, and everyone would police themselves in some happy but carefully unspecified decentralized fashion. No company could stick to this for long. Now, social media companies find themselves obliged to amplify (increasing their ability to moderate by hiring or by investing in machine learning), to attenuate (limiting variety, e.g. by stifling political discussion as Meta’s Threads has done), or to do some combination of the two (Bluesky and the Fediverse combine new tools with smaller scale and lesser variety in particular instances, each of which can have its own culture and rules).

Each of these is an unhappy outcome in its own special way. But if we understand moderation in cybernetic terms, we can better appreciate why it keeps going wrong. For example: the spat the week before last over whether or not Threads had deliberately censored a critical story about Meta is really, as best as anyone can tell, the product of amplification techniques (machine learning applied to spam recognition) trying desperately to keep up with the variety of ingenious tricks that spammers use, and misidentifying real content as fake.

This led Anil Dash to quote Stafford Beer’s most famous dictum, “The Purpose of the System is What It Does.” Anil was possibly just being sarcastic about the specifics. But Beer’s dictum is still a quite precise diagnosis of what happened, and points toward the actual underlying problem. Which is not, in this case, that Meta deliberately chose to silence its critics, but that it is Meta that owns the Means of Amplification, and the Means of Attenuation too.

For example: when Meta decides that Threads will deal with the problem of spiraling political disagreement by dampening down all political discussions on its platform, it is dealing with a cybernetic problem using cybernetic means. It is attenuating the variety of the system so that it is easier to deal with. But should it be Meta that is in charge of making such a profound and political decision? Cybernetics doesn’t provide any very specific answer to that question, but it makes it much easier to see the problem. We don’t need to believe that Meta is deliberately tweaking the algorithms to silence its critics to be worried that Meta is able to dampen down vast swathes of the human conversation in pursuit of its business model. Equally, we need to recognize that if we are going to have to regulate vast swathes of the human conversation, we are going to face some messy and unhappy tradeoffs."

(https://www.programmablemutter.com/p/cybernetics-is-the-science-of-the)