I don't know if this is what OP meant, but I really like your interpretation
Mods exist and can ban/lock/block people and content, but users can see everything that was banned, removed, or locked, as well as the reason why: what policy did the user violate?
I think the only exception would be actually illegal content; that should be removed entirely, but maybe keep a note from the mods in its place stating "illegal content".
That way users can actually scrutinise what the mods do, and nobody has to wonder whether the mods removed a post out of bias or for legitimate reasons. Opinions are never entirely erased: they remain readable, you just can't respond to them.
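The proposal above can be sketched as a small data model. This is a hypothetical illustration (the `Post` class and `moderate` function are mine, not any real platform's API): the post body stays readable, replies are locked, the violated policy is public, and illegal content is the one case where the body itself is blanked.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    post_id: int
    author: str
    body: str
    locked: bool = False                  # True once moderated: no new replies
    removal_reason: Optional[str] = None  # public reason, or None if untouched

def moderate(post: Post, reason: str, illegal: bool = False) -> Post:
    """Lock the post and attach a public reason; blank only illegal content."""
    post.locked = True
    post.removal_reason = reason
    if illegal:
        # The one exception: the content is gone, but a note remains in its place.
        post.body = "[illegal content removed]"
    return post

p = moderate(Post(1, "alice", "a rule-breaking take"), "Rule 3: harassment")
print(p.body, "|", p.removal_reason)  # body still readable, reason visible
```

The key design choice is that moderation adds metadata rather than deleting data, so anyone can audit the mods after the fact.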
In addition, what about crap floods? If I submit half a billion posts, do you really want that handled by moderation?
As a server operator I've seen how bad the internet actually is, and that may be something the user base doesn't have to experience directly. When 99.9% of incoming attempts are dropped or banlisted, you start to learn how big the problem can be.
Spam can work the same way — that's how our email spam filters work. To use the OP's "censorship vs moderation" dichotomy, the current "censorship" regime would be like if your email filter marked an email as spam, and not only did not give you the option to disagree with the filter (i.e. mark as "Not Spam"), but didn't even give you the option to see the offending message to begin with.
Spam may still leak into our inboxes today, but the level of user control over email spam is generally a stable equilibrium. The level of outrage around spam filters (and to be clear, there are arguments to be made that spam filters are increasingly biased) is much, MUCH lower than that around platform "censorship".
Spam is advertising, right? That doesn't need special protection. Flooding is like the heckler's veto, so that could also be against the rules; it doesn't need special protection either.
As to moderation, why not be able to filter by several factors, like "confidence level this account is a spammer"? Or perhaps "limit tweets to X number per account", or "filter by chattiness". I have some accounts I follow (not on Twitter, I haven't used it logged in in years) that post a lot, I wish I could turn down the volume, so to speak.
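The filters suggested above can be sketched as a client-side pipeline. This is a hypothetical illustration (field names like `spam_score` and the function `filter_feed` are assumptions, not any real platform's API): each post carries a spammer-confidence score, and the reader sets their own spam cutoff and per-account post cap.

```python
def filter_feed(posts, spam_threshold=0.8, max_per_account=5):
    """Drop posts from likely spammers and cap posts per account.

    posts: list of dicts with 'account', 'spam_score', and 'text' keys.
    """
    per_account = {}
    kept = []
    for post in posts:
        # Filter by "confidence level this account is a spammer".
        if post["spam_score"] >= spam_threshold:
            continue
        # "Limit posts to X number per account" -- turn down chatty accounts.
        count = per_account.get(post["account"], 0) + 1
        per_account[post["account"]] = count
        if count > max_per_account:
            continue
        kept.append(post)
    return kept
```

The point is that these are reader-side knobs: the same feed can look different to different users without anyone's post being removed.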
What is spam... exactly? Especially when it comes to a 'generalized' forum. I mean would talking about Kanye be spam or not? It's this way with all celebrities, talking about them increases engagement and drives new business.
Are influencers advertising?
Confidence systems commonly fail across large generalized populations with focused subpopulations. Said subpopulations tend to be adversely affected by moderation because their use of communication differs from the generalized form.
That spam is advertising does not make all advertising spam.
We already have filters based on confidence for spam via email, with user feedback involvement too, so I don't need to define it, users of the service can define it for me.
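The feedback loop described here can be sketched in a few lines. This is a deliberately naive illustration (the `FeedbackFilter` class is mine, and real email filters use far richer signals than a per-sender vote ratio): each "Spam"/"Not Spam" click nudges a confidence score, so users collectively define what spam means.

```python
from collections import defaultdict

class FeedbackFilter:
    """Toy spam filter trained purely by user 'Spam'/'Not Spam' reports."""

    def __init__(self, threshold=0.5):
        self.votes = defaultdict(lambda: [0, 0])  # sender -> [spam, not-spam]
        self.threshold = threshold

    def report(self, sender, is_spam):
        """Record one user's verdict on a message from this sender."""
        self.votes[sender][0 if is_spam else 1] += 1

    def spam_score(self, sender):
        spam, ham = self.votes[sender]
        total = spam + ham
        return 0.0 if total == 0 else spam / total  # unknown senders pass

    def is_spam(self, sender):
        return self.spam_score(sender) >= self.threshold
```

Note the "Not Spam" path: the user can always see and overrule what the filter flagged, which is exactly the control the email analogy above is pointing at.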
Simply put, we keep putting more and more filtering on the user with complete disregard for physical reality, and we ignore the costs.
The company that provides the service defines the moderation because the company pays for the servers. If you start posting 'bullshit' that doesn't eventually pay for the servers and/or drives users away, money will be the moderator. There are no magic free servers out there in the world with unlimited space and processing power.
> Simply put we keep putting more and more and more filtering on the user
Who would be put upon by this? The average user doesn't have to be; they could use the default settings, which are very anodyne. The rest of us get what we want, which is what the article stated. Who's finding this a burden?
As to the reality of things, Twitter's just been bought for billions and there's plenty of bullshit being posted there. That's the reality, and several people who've made a lot of money by working out how to balance value and costs think it can do better.
The old Something Awful forums did something similar. If someone posted something that was unacceptable, the comment would generally stay and the comment would get a tag saying “the user was banned for this post”. They also had a moderation history so you could go back and see mod comments on why they gave bans/probations.
Here is the thing: this has been discussed on the fediverse before, and the general consensus is that if a post is deleted by a mod or similar, it is "gone". They record a log of the deletion, but not the content, because the content should not exist.