I agree. I get that free speech is important, but humans on the internet are vile. That is a fact and it hasn't changed in the decades of online communities we have trialed.
Only communities small enough, or moderated enough, to not be interesting to a troll or nefarious person are spared.
The idea of a completely self-governed haven of mass free speech is a wonderful one, but no community large enough stays uncorrupted. It has never worked.
It is the ideals and application of those ideals through moderation that make any community bearable, just like in real life.
If I am to be part of a community, I would rather it be moderated; otherwise the people of the internet ruin all things in time.
I just want to have useful conversations, not circlejerk over freedom of speech while being interrupted by adolescent screaming.
Just to clarify, I believe free speech communities should exist and I am glad they do. I just find that the trolls inevitably take over, and that's no fun. I feel like reddit is currently battling this, and their decision seems to be to hold course, moderate more, and appeal to advertisers.
I disagree. The beauty of reddit is that it has subcommunities. That way, like-minded people can flock together and even enforce strict censorship rules that make their community better, and reddit as a whole can still be a bastion of free speech (because they allow such diverse communities). Too bad that's not how it actually is...
That is the idea, definitely. It mostly works too: just like you said, it's a diverse group of subcommunities, each with its own rules. But they found that platform-wide free speech meant some truly nefarious subcommunities could exist on their infrastructure, and those subcommunities didn't self-regulate enough. Quite reasonably, they have decided they don't want that and have stepped up the platform-wide censorship.
The contention seems to be where they draw the line, and just how free their version of free is.
I have an idea. Current machine learning techniques should be advanced enough to detect trolls. What if we had a community that is completely moderated by an unsupervised AI?
The good thing is that the AI can be completely open: How is it trained? What are the parameters? This AI can still have bias, but that bias will be obvious to anyone joining the community.
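To make the "open moderator" idea concrete, here is a minimal sketch (entirely hypothetical, with made-up toy data) of a bag-of-words Naive Bayes classifier for flagging troll comments. The point is not that this toy model is good enough for real moderation, but that every parameter it learns (the word counts and class priors) can be published and inspected, so any bias is visible to the community:

```python
# Hypothetical sketch: a fully inspectable "troll comment" classifier.
# All training data and learned parameters (word counts, class priors)
# are plain data structures anyone in the community could audit.
import math
from collections import Counter

def train(examples):
    """examples: list of (text, label) with label in {"troll", "ok"}.
    Returns per-class word counts and class totals - all parameters open."""
    counts = {"troll": Counter(), "ok": Counter()}
    totals = Counter()
    for text, label in examples:
        totals[label] += 1
        counts[label].update(text.lower().split())
    return counts, totals

def classify(text, counts, totals):
    """Pick the class with the highest Laplace-smoothed log probability."""
    words = text.lower().split()
    scores = {}
    for label in counts:
        vocab = len(counts[label]) + 1
        n = sum(counts[label].values())
        score = math.log(totals[label] / sum(totals.values()))
        for w in words:
            score += math.log((counts[label][w] + 1) / (n + vocab))
        scores[label] = score
    return max(scores, key=scores.get)

# Toy training data, invented purely for illustration.
data = [
    ("you are an idiot and everyone hates you", "troll"),
    ("go away nobody wants your garbage opinion", "troll"),
    ("thanks for the detailed explanation", "ok"),
    ("interesting point, here is a counterexample", "ok"),
]
counts, totals = train(data)
print(classify("your opinion is garbage idiot", counts, totals))  # -> troll
```

Of course, a real troll adapts to whatever the published model penalizes, which is exactly the weakness the reply below points out.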
> Current machine learning techniques should be advanced enough to detect trolls.
So your idea to counteract people playing psychological games on others is to put something without the common sense of a three-year-old in charge of moderation. That's just glorious.