Mike's Guide to Censorship
ver.1 - 2019 Aug 6th
Don't like censorship at all? Read the blurb at the bottom.
RULE #1: Censor, don't ban.
Recognize that banning and deplatforming have multiple effects. On the plus side, banning...:
- ...prevents the widespread dissemination of hateful ideas, which might entice new
people into a hateful ideology.
- ...sends a signal that you will not tolerate such ideas. This may head off potential
future attempts to spread hateful ideas on your platform.
On the minus side, banning...:
- ...radicalizes users via the significant negative emotional response that comes from
social rejection. Everyone has a deep emotional need to be accepted by their peers,
and rejection is devastating. Many of the people who get banned are already expressing
bitterness born of rejection and discontent with society, and that expression of
bitterness is often the proximate cause of the ban. Banning someone's account gives
them a significant additional emotional push in the wrong direction. As such, it
directly feeds the emotions that psychologists have identified as a primary motivator
among mass shooters.
- ...radicalizes users by driving them into pockets of selection bias.
Banned users don't simply cease to exist. They turn elsewhere, often to isolated
pockets of other users previously banned, and due to selection bias most people in
these isolated pockets lean towards extreme ideas. Places like Gab, BitChute, 4chan
and 8chan contain pockets of extremism via selection bias: they are among the only
places where extremists are permitted, not because those platforms court or want
them (nor are these platforms primarily extremist, a common misunderstanding).
In these very biased pockets,
their ideas get amplified and normalized, radicalizing them even further.
- ...causes a community backlash and may cause a significant loss of users
sympathetic to those being banned, or to the principle of free expression itself.
- ...has a chilling effect. The signal that suppresses future hateful posts also
suppresses posts that don't quite meet that threshold, and so the Overton window
of acceptable speech steadily narrows, like a boa constrictor tightening its grip.
- ...ultimately divides society.
Due to the danger and significance of the detrimental effects of banning users, it is
recommended to never permanently ban a person (account), but rather to shift the focus
away from the person and towards the objectionable material itself by censoring specific
posts as required. To manage repeat offenders, throttle their ability to post using
temporary bans whose duration grows exponentially with each offense.
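As a minimal sketch of such an exponential throttle (the base duration, doubling factor, and function names are illustrative assumptions, not a prescribed policy):

```python
from datetime import datetime, timedelta

# Illustrative policy: the first offense earns a 1-hour temporary ban,
# and each subsequent offense doubles the duration. Numbers are
# assumptions; tune them to your community.
BASE_BAN = timedelta(hours=1)

def ban_duration(offense_count: int) -> timedelta:
    """Temporary ban length for the nth offense (1-indexed)."""
    return BASE_BAN * (2 ** (offense_count - 1))

def banned_until(offense_count: int, now: datetime) -> datetime:
    """Timestamp at which the user may post again."""
    return now + ban_duration(offense_count)
```

Because the duration doubles each time, a persistent offender is effectively silenced for long stretches without ever being permanently rejected, which keeps the focus on behavior rather than identity.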
RULE #2: Engage ideas whenever possible
The appropriate response to objectionable material depends on the severity:
- For ideas presented in a way that indicates the presenter of the idea is sincere and
willing to engage in a discussion, openly counter the ideas with better ideas. All
listeners can be edified in this way. This is good for society. Do not fear bad
ideas; think of them as opportunities.
- For ideas which appear to not be open for discussion, potentially censor the content
according to your policies. However, a better solution is described in rule 3 below.
- Illegal content should be immediately removed and dealt with in accordance with the law.
RULE #3: Respect and empower your users
Moderation is a very difficult business. It requires significant manpower, and will
never be viewed as having done its job well, because humans disagree sharply about
what should be censored, often along political lines.
Consider mechanisms whereby users can select what material they wish to see, such as
options to enable/disable/blur NSFW content. Present users with warnings and
disclaimers about the kind of content they are choosing to see, so they can
retreat if they so choose.
The following proposed example system of moderation respects users as much as possible:
- Moderators can tag posts with tags like 'nsfw', 'spam', 'hate speech', 'personal attack', etc.
- Users can choose which kinds of posts they wish to see (for example, someone might want
to filter spam, gore and nsfw posts but to show everything else).
- Users can subscribe to moderators (e.g. I trust Susan's opinion, I don't trust Mark's
opinion). A libertarian moderator and a critical theorist moderator will have very
different ideas of which posts should be tagged as 'racist'.
- Users can choose to be moderators (within reason of technical limits), in case none
of your organization's provided moderators suit a certain sub-community's ethic.
- Posts are only taken down if they are illegal. Even spam can remain (unless that
presents an undue technical burden).
- In a system like this, users have control over their own experience.
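The filtering step above can be sketched as follows. This is a hypothetical illustration of the tag-and-subscribe model, assuming each tag records which moderator applied it; the data shapes and names are assumptions:

```python
# Hypothetical sketch: a post is hidden for a given user only when a
# moderator that user trusts has applied a tag that user has chosen
# to filter. Untrusted moderators' tags have no effect.

def visible_posts(posts, trusted_moderators, filtered_tags):
    """posts: list of dicts like {'id': 1, 'tags': [('susan', 'spam')]},
    where each tag is a (moderator, label) pair.
    trusted_moderators / filtered_tags: sets chosen by this user."""
    visible = []
    for post in posts:
        hidden = any(mod in trusted_moderators and label in filtered_tags
                     for mod, label in post['tags'])
        if not hidden:
            visible.append(post)
    return visible

# Example: this user trusts only susan and filters only 'spam', so a
# post tagged 'spam' by mark (untrusted) remains visible.
posts = [
    {'id': 1, 'tags': [('susan', 'spam')]},
    {'id': 2, 'tags': [('mark', 'spam')]},
    {'id': 3, 'tags': []},
]
shown = visible_posts(posts, {'susan'}, {'spam'})
```

Note that nothing here deletes a post; the same post can be visible to one user and hidden from another, depending entirely on whom each user trusts and what each chooses to filter.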
RULE #4: Leave Authoritarianism to the Authorities
The system described above does not satisfy the authoritarian desire to control what kinds
of content other people are allowed to see. This is by design. Such an authoritarian stance
is only appropriate for parents to take with their own children, or for society at large to take
via the legal system, or for an individual to take over their own property. As the controller
of a web system you are an authority over it and can do what you like, but I strongly
caution against authoritarian measures for all of the reasons listed above, for the satisfaction
of your user base, and for the stability of society.
Don't Like Censorship at all?
On a society-wide level, I am a staunch supporter of free speech, with the standard
exceptions for speech that directly causes damage, such as libel and incitement to
violence. One must be held responsible for the consequences of one's speech. As such, I
prefer the American treatment of speech over the British one.
But on a platform level, censorship is critical. If you allow everything that is legal,
you will have to allow spam. Are you fine having your system overwhelmed with spam?
Probably not. Once this sinks in, hopefully you can move past the idealism of absolute
free speech at the platform level.