Free Speech in Online Communities
Mumsnet had (has?) a transphobia problem.
A small minority of members made consistently offensive comments.
After toeing the free speech line resulted in a severe backlash, Mumsnet published a revised moderation policy with a view to removing hurtful and offensive posts.
How can you encourage conflicting viewpoints while protecting minorities who may be targeted through those viewpoints?
The worst approach is to wait until a problem arises before taking action.
If you only react when people declare offense, you'll end up with a policy slanted towards whichever groups complain the loudest.
Worse yet, offensive to whom? How many people have to declare they are offended before you take action? 1? 100? 1000? What happens when people are offended by fairly innocuous opinions that were never intended to offend?
A similarly bad approach is waiting for bad publicity before taking action. That shows members you don't care about the issue; you only care about how you look.
There tend to be four broad approaches:
1) The Free Speech Approach
This is typically where all posts tolerated by law are also tolerated within the community. Reddit, Twitter, and (to a degree) Mumsnet have abided by this approach. The problem with the free speech approach is that it attracts people booted off every other site. Be the last bastion of free speech only if you're willing to endure non-stop criticism and bad publicity.
2) The ‘One Click Away’ Approach
One of our clients recently took this approach with political discussions. They moved all political discussions to a private category with a public password and only lightly moderate posts there. It's an awful place to be, but no one has to visit it. This lets all other discussions continue more positively. The downside is you could be providing shelter for extremists within your community.
3) The ‘Values’ Approach
You declare your values and use your judgment to enforce them. The specifics of how rules are enforced are kept vague, but moderators are empowered and trusted to protect minorities and remove speech they consider hurtful. The downside of this approach is that it leads to considerable inconsistency and outrage from members who feel they have been treated unfairly, and it still doesn't answer the key questions: what's offensive, to whom, and when?
4) The ‘Strict Rules’ Approach
The final approach is to clearly define what's offensive and then enforce those rules ruthlessly. Facebook largely follows this approach with its moderators. You define specifically what's offensive and what isn't. You give examples, you try to be consistent, and you adapt the rules as you go. The downside is this doesn't allow for nuance, and the more consistently you try to enforce any rule, the more glaring the inconsistencies become. You also need to be prepared to accept some criticism when people disagree with your rules.
There isn’t a universal solution and some solutions only work with communities up to a specific size.
As you grow, you’re almost certainly going to need firmer values and solutions you can work with. But decide your policies and approaches early. Don’t leave problems to fester under an illusion of free speech.