First, Pavel Durov, the co-founder and CEO of Telegram, was arrested in France, in part due to a failure to comply with moderation requests by the French government. Now Brazil has banned X/Twitter from the country entirely, also citing a failure to moderate.
How much moderation should there be on social networks? What are the mechanisms for moderation? Who should be liable for what?
The dialog around these questions about moderation is broken because the most powerful actors are motivated primarily by their own interests.
Politicians and governments want to regain control of the narrative. As Martin Gurri analyzed so well in Revolt of the Public, they resent their loss of the ability to shape public opinion. Like many elites, they feel that they know what’s right and treat the people as a stupid “basket of deplorables.”
Platform owners want to control the user experience to maximize profits. They want to be protected from liability and fail to acknowledge the extraordinary impact of features such as trending topics, recommended accounts, and timeline/feed selection on people’s lives and on societies.
The dialog is also made hard by a lack of imagination that keeps us trapped in incremental changes. Too many people seem to believe that what we have today is more or less the best we will get. That has us bogged down in trench warfare over incremental proposals, while big and bold proposals are quickly dismissed as unrealistic.
Finally, the dialog is complicated by deep confusions around freedom of speech. These arise from ignoring, possibly willfully, the reasons for and the implications of freedom of speech for individuals and societies.
In keeping with my preference for a first-principles approach, I am going to start with the philosophical underpinnings of freedom of speech and then propose and evaluate concrete regulatory ideas based on those.
We can approach freedom of speech as a fundamental human right. I am human, I have a voice, therefore I have a right to speak.
We can also approach freedom of speech as an instrument for progress. Incumbents in power, whether companies, governments, or religions, don’t like change. Censoring speech keeps new ideas down. The result of suppressed speech is stasis, which ultimately leads to decline because there are always problems that need to be solved (such as being stuck in a low energy trap).
But both approaches also imply some limits to free speech.
You cannot use your right to speech to take away the human rights of someone else, for example by calling for their murder.
Society must avoid chaos, such as runaway criminality, massive riots, or, in the extreme, civil war. Chaos also impedes progress because it destroys the physical, social, and intellectual means of progress (from eroding trust to damaging physical infrastructure).
With these underpinnings we are looking for policies on moderation in social networks that honor a fundamental right but recognize its limitations and help keep society on a path of progress between stasis and chaos. My own proposals for how to accomplish this are bold because I don’t believe that incremental changes will be sufficient. The following applies to open social networks such as X/Twitter. A semi-closed social network such as Telegram, where most of the activity takes place in invite-only groups, poses additional challenges (I plan to write about this in a follow-up post).
First, banning human network participants entirely should be hard for a network operator and even for a government. This follows from the fundamental human rights perspective. It is the modern version of ostracism, but unlike banishment from a single city it potentially excludes someone from a global discourse. Banning a human user should therefore require either a court order or the outcome of a “Community Notes” type process. Obviously, to make this possible we need some kind of “proof of humanity” system, which we will need anyway for lots of other things, such as online government services. A “proof of citizenship” could be a good start and, if properly implemented, would support pseudonymous accounts.
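To make this concrete, here is a minimal sketch in Python of how a network’s data model could encode such a high bar for bans. All names here (BanRecord, CourtOrder, CommunityNotesVerdict) and the supermajority threshold are hypothetical illustrations, not a description of any existing system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CourtOrder:
    jurisdiction: str
    docket_number: str
    decision_url: str  # public link to the ruling

@dataclass
class CommunityNotesVerdict:
    note_id: str
    # Fraction of raters across different viewpoint clusters who found
    # the ban justified; requiring a supermajority keeps the bar high.
    cross_viewpoint_agreement: float

@dataclass
class BanRecord:
    account_id: str
    court_order: Optional[CourtOrder] = None
    community_verdict: Optional[CommunityNotesVerdict] = None

    def is_valid(self) -> bool:
        """A ban of a human participant stands only with a court order
        or a strong community verdict, never at operator discretion."""
        if self.court_order is not None:
            return True
        if self.community_verdict is not None:
            return self.community_verdict.cross_viewpoint_agreement >= 0.9
        return False
```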
Second, networks must provide extensive tools to facilitate moderation by participants. This includes providing full API access to allow third-party clients, supporting account identity and post authorship assertions through digital signatures to minimize impersonation, and implementing at least one “Community Notes” like system for attaching information to content. All of this is to enable as much decentralized avoidance of chaos as possible, starting with maintaining a high level of trust in the source and quality of content.
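As an illustration of the authorship assertion idea, here is a minimal sketch using Ed25519 signatures via the Python cryptography package. What exactly gets signed and how public keys are published are assumptions made for the example; a real scheme would sign a canonical serialization of the post, including timestamp and account identifier.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The author holds a private key; the matching public key is published
# on their profile so any client, including third-party ones built on
# the open API, can verify authorship.
author_key = Ed25519PrivateKey.generate()
author_public_key = author_key.public_key()

post = b"Moderation should be decentralized wherever possible."
signature = author_key.sign(post)

# Verification fails for forged or tampered content, which makes
# impersonation detectable by anyone.
try:
    author_public_key.verify(signature, post)
    print("Authorship verified")
except InvalidSignature:
    print("Possible impersonation: signature does not match")
```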
Third, clients must not display content that has been found to violate a law, whether through a “Community Notes” process or by a court. This should also allow for injunctive relief where a court has ordered it. Clients must, however, display a placeholder where that content would have appeared, with a link to the reason (ideally the decision) on the basis of which it was removed. This will make transparent the extent to which court-ordered content removal is taking place.
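On the client side, the placeholder rule could look something like the following sketch (the Post fields and render function are hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    post_id: str
    body: str
    # Set when the content was found unlawful by a court or a
    # "Community Notes" process; links to the decision.
    removal_reason_url: Optional[str] = None

def render(post: Post) -> str:
    """Never silently drop removed content: show a placeholder linking
    to the decision so removals remain visible and auditable."""
    if post.removal_reason_url is not None:
        return f"[Content removed: see {post.removal_reason_url}]"
    return post.body
```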
What about liability? Social networks and third-party clients that meet the above criteria should not be liable for the content of posts. Neither government nor participants should be able to sue a compliant operator over content.
Social networks should, however, be liable for their owned and operated recommender algorithms, such as trending topics, recommended accounts, algorithmic feeds, etc. Until recently social networks successfully claimed in court that their algorithms are covered by Section 230, which I believe was an overly broad reading of the law. It is interesting to see that a court just decided that TikTok can be held liable for suggestions its algorithm surfaced to a young girl, which resulted in her death. I have an idea around viewpoint diversity that should provide a safe harbor and will write about that in a separate post (related to my ideas around an “opposing view” reader and also some of the ways in which Community Notes works).
Getting the question of moderation on social networks right is of utmost importance to preserving progress while avoiding chaos. For those who have been following the development of new decentralized social networks such as Farcaster and Nostr, some of the ideas above will look familiar. The US should be a global leader here, given our long history of extensive freedom of speech.