A year ago, I was invited on Al-Jazeera to explain why Facebook was so secretive about their rules.
The answer is that they’re not. Nor is Twitter, Snapchat, LinkedIn, or any other large organization. Some form of terms and conditions, community standards, or rules has been publicly available since the very earliest days of each platform.
They are secretive about how the rules are enforced.
For example, violent content is banned, but what counts as violent content is left to Facebook’s discretion.
Hate speech is banned but what counts as hate speech is only loosely defined. Racism is clearly wrong, but what happens when it’s within a broader context, perhaps as a joke by a member of the same minority? Who gets to define what’s a harmless joke and what’s a cruel joke?
There’s a good reason why you don’t explain how rules are enforced. It would tell bad actors exactly how to beat the system.
In my earliest gaming community, we once banned swearing and used a language filter to enforce it. Members naturally began to use “Fook”, “f|_|ck”, or “FCUK” instead. We clamped down on those too, and they just used rhyming words instead (“you plucker!”).
Imagine this at a global, far more sophisticated scale. Add in some machine learning and you can see the problem.
Once people know the tripwires, they can easily avoid them and still have the same harmful impact.
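The gaming-community story above can be sketched in a few lines. This is a toy illustration, not any platform’s actual filter; the word list and function name are invented for the example:

```python
import re

# Hypothetical blocklist, standing in for a real banned-word list.
BANNED = {"fook"}

def naive_filter(message: str) -> bool:
    """Return True if the message should be blocked."""
    words = re.findall(r"[a-z]+", message.lower())
    return any(word in BANNED for word in words)

print(naive_filter("you fook!"))     # True  - exact match is caught
print(naive_filter("you f|_|ck!"))   # False - symbol substitution slips through
print(naive_filter("you plucker!"))  # False - rhyming slang slips through
```

Every published rule is a tripwire like the entries in `BANNED`: once its exact shape is known, trivial variations route around it while the harmful meaning survives intact.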
It’s far better to train moderators well, help them get it right 99% of the time, and do a better job of addressing the perceived inconsistencies.