Censorship Is Mostly About Money
Social Media Content Moderation Decisions Are Largely Driven By Cash Considerations, Not Morality
Substack rolled out Notes this week, their new microblogging service. They say imitation is the sincerest form of flattery and so Twitter’s designers and engineers must feel quite flattered.
As Twitter replacements go, it’s not bad. It has a lot of the same look, feel and functionality, and it even has an algorithm to feed content to my gaping maw, something lacking from my current favorite microblogging platform, Mastodon.
But I didn’t come here today to talk about how Notes works. I came here to talk about censorship or, if you prefer, content moderation. As I’ve discussed before, content moderation is difficult work.
Substack CEO Chris Best went on the Decoder podcast, where Nilay Patel asked him the most important question you can ask about a new social media platform: how did they plan to deal with all the racists?
Substack has been under attack since its inception for its commitment to free speech and open discourse. So his response to these inquiries, that the free speech principles that guide their blogging platform would carry on into their new social network, was not unexpected. Neither was the general losing of shit over this revelation by Nilay and a lot of others.
Over at Techdirt, Mike Masnick argues that Notes risks becoming The Nazi Bar if they don’t ramp up their censorship (excuse me, content moderation) game. He argues that many social media sites, like Twitter, began with a strong commitment to free speech and later realized that it was unworkable.
But is that true? Twitter was big on open discourse until a few years ago and it was hardly the Nazi Bar. Even now, with Twitter’s content moderation policies outsourced to a random number generator, the place is hardly overrun with either literal or figurative Nazis. As long as the servers stay up and Elon doesn’t ban you for dissing his hairstyle, it’s still pretty usable. So what’s going on here?
The truth is, from a user perspective, kicking out the bad guys doesn’t make a big difference in the individual user experience. You can always block whoever annoys you. Some super-helpful people even compile lists of people they suggest you block and share these lists freely. I, myself, am on at least some of these lists for having followed the wrong person or liked the wrong tweet. From a user perspective, bad posters are mostly a solved problem.
So, who cares about this stuff? Advertisers do. They care a lot about jerks on social media because the last thing corporate overlords want is an ad for their Yummy Cola showing up next to some bit of vile racism. Even worse if some helpful soul screenshots that juxtaposition, shares it out to their followers and has it go viral.
This is what happened to YouTube in 2017, when some of the world’s biggest brands found their ads playing next to racist, homophobic and anti-Semitic content. We can imagine that the same issues were in play in June of 2019 when Reddit purged a huge number of problematic (or “problematic”) subreddits. Reddit went from a platform where you could freely post hateful, racist screeds and stolen celebrity nudes to a place where the incorrect opinion on certain topics can result in subreddits being banned with no warning.
So, will Substack give in and crank up the moderation on Notes? Maybe a bit. The point stands that a social media platform is somewhat different from a blogging platform, and it’s not unreasonable that they may want to get ahead of things.
But they don’t have to. Unless they want to sell ads.