Musk and Moderation

By Jim Rutt.

For over 40 years, I have been involved with moderating online communities, designing community software, and operating online community businesses. …

Let’s start with some examples of the criticism Musk’s takeover has elicited. From the Washington Post on April 17th: …

Elon Musk’s vision for Twitter is a public town square where there are few restrictions on what people can or can’t say on the Internet. … Critics say his ambition for what the platform should be — a largely unpoliced space rid of censorship — is naive …

Criticisms of this sort are examples of strawman argumentation. As Musk indicated in his TED talk on April 14th, he is well aware that moderation is needed on Twitter — he doesn’t appear to be against “basic rules,” nor is he in favor of “a largely unpoliced space.” …

The critical distinction is between moderation of “decorum” (some might alternatively call it “behavior”) versus moderation of “content.”

Decorum moderation:

Concerns about personal attacks, harassment, threats, bullying, and so on fall under “decorum.” Think of it as a set of rules for how users of a platform or service communicate, irrespective of what they are trying to communicate. Examples of decorum rules include bans on profanity and racial slurs. Facebook’s somewhat ludicrous “no nipples” rule is an example of decorum moderation.

Decorum moderation is analogous to “manners” in face-to-face society. …

Strong and sensible decorum moderation makes a wide and free marketplace of ideas more practical. It is the name-calling, flaming, mob harassment, and other kinds of personal attacks that make discussion of controversial issues so difficult and ugly online. Failure to enforce decorum leads to the “heckler’s veto,” whereby vicious personal attacks from dissenters make discussion so painful that reasonable people are driven away. …

The left has long tried to pass off political correctness as merely good manners, but of course political correctness also advances their agenda and silences their critics.

Content moderation:

Content moderation is moderation of the substance of posts and comments. Under content moderation, posts and comments on certain topics are banned or otherwise restricted no matter how decorously they are presented.

Many platforms ban “doxxing” or other violations of user privacy. Most communities ban direct threats of violence, advocacy of the more serious varieties of criminal or terroristic behavior, and libelous defamation. Many communities ban inherently dangerous content such as how to make bombs or poisons. …

Point-of-view moderation:

Where things get controversial, and where I believe Musk’s main concerns lie, is around the subset of content moderation based on “point-of-view.”

An example of point-of-view moderation was Twitter and Facebook’s censorship of QAnon content in 2020 and 2021, irrespective of its decorum. Hundreds of thousands of tweets were taken down, and thousands of Twitter users were banned. On Facebook, hundreds of groups were summarily shut down.

QAnon is an ideology composed of bad ideas that are extremely unlikely to be true. But I could say the same about Christianity, astrology, and Marxism-Leninism, all of which have significant presences on Twitter and Facebook. …

Other examples of point-of-view moderation are perhaps less dramatic but nonetheless disturbing. An example with which I’m familiar is an idealistic political startup movement called Unity 2020, created to challenge the Democratic-Republican political duopoly in the United States with a proposed centrist slate for president and vice president in the 2020 elections. It was a quixotic project, albeit one that I thought might lead to something interesting in the future. I know the people behind it and it was certainly a good-faith contribution to our political discourse. Nevertheless, Twitter deleted the main Unity 2020 account in September 2020, and Facebook banned its founder. …

While point-of-view-based moderation of new ideas might help to suppress mad and bad ideologies like QAnon, it also risks suppressing the kind of fresh thinking that we need if humanity is to survive.

In a world in which the status quo is doing a pretty terrible job of dealing with our severe and worsening societal problems, it cannot be beneficial to our overall portfolio of live ideas to let our public-square platforms pick and choose which ideas get heard, particularly when those choices are informed by an apparent pro-status-quo bias.

Such platforms need to become “marketplaces of ideas” in which every good-faith voice gets a hearing — even if it is only from its own paltry following. Ideas should spread and prosper or fail and vanish based on their ability to convince and motivate others. Their legitimacy should certainly not be determined by the ideological biases of a small number of gatekeepers in Silicon Valley.

This is what I think Musk has in mind when he says he wants to “increase free speech on Twitter.” He appears to believe sincerely that open inquiry and free expression have been the greatest advantages democracies have enjoyed over their authoritarian competitors, and that we are in danger of squandering those advantages.

Anonymity:

Fixing moderation, however, is not, by itself, enough to make Twitter a fair and effective marketplace in which ideas can rise or fall on their merits. The platform should move away from easily obtained anonymity and require either real name ID or pseudonymous ID that limits one account to one real person and guarantees a “proof of humanity” behind all accounts. The last 40 years have demonstrated that anonymous discourse is generally worse discourse, and real-name or one-person-one-ID verification would help to substantially reduce the presence of bots and sock-puppet collusion networks. …

Don’t let advertising revenue dominate moderation:

As Musk has said, a move away from a nearly entirely advertising-supported model would also be a huge help in creating a healthier information ecosystem.

In an ad-based environment, the platform operator’s economic incentive is to keep users online for as long as possible in order to generate the largest possible ad inventory. This drives operators to preferentially offer users the most inflammatory and click-baity material to “increase engagement.” If it stirs up a big fight, so much the better.

In a subscription-based model, on the other hand, the platform operator’s incentive shifts towards providing the most utility to the user in the least amount of time online.