GamerGate has laid bare a fundamental fault line for online discourse.
A. Words matter. Anyone dismissing tweets that threaten violence with a variation of “sticks and stones may break my bones but words will never hurt me” is trivializing the power of words. We all know that words can cause immediate emotional harm. And beyond that immediate effect, words lead to thoughts, and thoughts (can) lead to actions. I am not claiming some deterministic causality between a specific tweet and a specific action, but if this relation didn’t exist in the aggregate, we wouldn’t see oppressive regimes trying so hard to control what people can say (see B). Recipients of online threats suffer an immediate emotional toll and may face physical, financial, and other danger down the road.
B. Pseudonymous and anonymous expression are essential to the functioning of society. There are a great many power imbalances in the world, whether between oppressive regimes and their citizens, companies and their employees, or social group insiders and outsiders. Requiring all expression to be identified with the real name of its author would dramatically curtail expression and hence further entrench existing power imbalances.
One can have discussions about the details of each of these, but I am convinced that they are both foundational. That poses a major challenge for online discourse, since pseudonymous and anonymous expression make verbal violence and abuse much easier.
Whenever you have a fundamental conflict, I don’t think there are any perfect answers. Everything will involve some degree of trade-off. So what is to be done?
1. We can and should create systems that make anonymous and pseudonymous expression possible. Ideally these are systems that are completely decentralized and hence not controlled by any government or corporation.
2. Corporations operating centralized systems should reduce verbal violence and abuse on their systems. Some, like Facebook, may choose to do this via real identity. Others, like Twitter, could use some combination of human flagging and machine learning to combat verbal abuse and violence. When I say “should” here I mean by force of competitive pressure from participants (see the next two points).
3. We should work on new systems that let us discover and appreciate each other’s humanity. For instance, video provides much more emotional connection to someone else than a short text. We also know that how systems are seeded, and what values they reflect in their features and governance, has a great impact on actual behavior. So much still remains to be built, and we should not take the existing systems as having exhausted what can be done and how we can behave.
4. Wherever we can, we should all contribute to reducing power imbalances. We can do this through individual actions, but we also need public policies. For instance, a basic income guarantee would go a long way toward reducing the power imbalance between corporations and employees (including new labor marketplaces and their freelancers), and the same goes for people trapped in abusive relationships. Similarly, the right to be represented by a bot — which I previously called the “right to an API Key” — would greatly reduce the power imbalance between centralized networks, such as Facebook and Twitter, and their participants.
The combination of these four efforts will result in more and more human attention being spent in positive environments (#2–4) while leaving plenty of room for free expression to hold power in check (#1).
What should we avoid? Giving government more power to regulate speech. No good will come from that, much as it may seem like a convenient and quick fix. If you skipped #4 above, you should read it now to see what government can and should do.