It's not "want": Twitter (or YouTube, or Facebook, etc.) couldn't afford human content moderation in a million years because of the sheer firehose of content. They'd literally have to employ everyone on the planet to get enough moderators in every language for every tweet, post, and video on the platform.
And then only one company would have enough moderators. Moderators are a finite resource, there aren't even enough humans alive to moderate all content on all platforms.
But then the law goes "well, figure something out, you have to moderate", and now we've guaranteed by law that whatever solution gets implemented to follow the letter of the law is going to be shit, because by definition it's going to be automated content moderation. And given the volumes involved, it'll be modern neural-net-based moderation, because that's the only technology that has even remotely shown an over-50% success rate.
You don't need human eyes on every single tweet; you just need enough people to review the flagged content. You could still have algorithms flag things and rank them by confidence level. You could also hide the content first and queue it for human review instead of auto-banning the user. It's a lot of work, but I feel like there's a lot more that could be done, something like the sketch below.
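Roughly this shape, where the model only decides ordering and whether a post stays visible in the meantime, and a human still makes the final call (the class names and the threshold here are invented for illustration, not any platform's actual pipeline):

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class FlaggedPost:
    # Store negative confidence so the highest-confidence flag pops first.
    neg_confidence: float
    post_id: str = field(compare=False)

class ReviewQueue:
    """Hide flagged content and queue it for human review,
    ranked by the classifier's confidence."""

    def __init__(self, hide_threshold: float = 0.8):
        self.hide_threshold = hide_threshold
        self._heap: list[FlaggedPost] = []

    def flag(self, post_id: str, confidence: float) -> bool:
        """Queue the post; return True if it should be hidden pending review."""
        heapq.heappush(self._heap, FlaggedPost(-confidence, post_id))
        return confidence >= self.hide_threshold

    def next_for_review(self) -> str | None:
        """Hand the human moderator the most confident flag first."""
        return heapq.heappop(self._heap).post_id if self._heap else None
```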
There are under a billion tweets per day. If you need to employ every human alive to moderate that, you're saying it takes about 9 working days to moderate a single tweet.
Even if a moderator had to approve every tweet before it went live (a terrible idea, and yes, very expensive), I think a moderator could do a dozen per minute, rather than one every 72 hours like your logic implies.
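To make the arithmetic in both claims explicit (the figures are rough assumptions: about 900 million tweets per day, 8 billion people, 8-hour workdays):

```python
tweets_per_day = 900_000_000           # "under a billion tweets per day"
population = 8_000_000_000             # "every human on the planet"
hours_per_workday = 8

# Taking the hyperbole literally: person-days available per tweet.
days_per_tweet = population / tweets_per_day            # ~8.9 working days
hours_per_tweet = days_per_tweet * hours_per_workday    # ~71 hours, i.e. the "72 hours"

# The counter-claim: a dozen tweets per minute per moderator.
tweets_per_moderator_day = 12 * 60 * hours_per_workday  # 5,760 tweets per workday
moderators_needed = tweets_per_day / tweets_per_moderator_day

print(f"{days_per_tweet:.1f} days/tweet; {moderators_needed:,.0f} moderators needed")
# -> 8.9 days/tweet; 156,250 moderators needed
```

So taking "every human" literally implies roughly 72 working hours per tweet, while the dozen-per-minute rate implies a staff in the low hundreds of thousands: still enormous, but finite.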
Are we taking "every human on the planet" seriously instead of reading it as hyperbole for "we would need orders of magnitude more people than are qualified for that job"?
And humans aren't robots: you can't moderate dozens of tweets a minute and still meaningfully moderate. You can do that for maybe a few minutes before any pretense of moderation has been replaced by mindlessly clicking "okay" until the light turns on and the pellet-dispenser goes "ding!"
My frustration isn't really rooted in legality and has more to do with my personal beliefs about tech and unregulated capitalism.
I've worked in tech a bit - I know how this goes. Some company wants a massive valuation by hitting DAUs in the millions, but you can't do maintenance/support for that many users in a single org even if you wanted to. As an engineer, I can totally relate to the feeling of "fuck support - that sounds awful", and I struggle to blame _them_ because, like I said, even if they tried they couldn't possibly cover that much work given the inability to scale their organization.
Psychologically? It's pretty easy to get pissed off at these companies when this happens. These companies are growth-obsessed, and the ostensible philosophy is always "making the world a better place". STFU and do your jobs, or don't build a product that can't scale and then pretend it's scaling just fine.
Your reply assumes that these businesses must continue to exist at all costs, effective moderation be damned. If it’s true that they cannot afford good moderation, then maybe it’s not a sustainable business model?
I know online content platforms are wildly different than other industries, which is why these problems exist at all. We don’t seem to have good solutions yet. But we wouldn’t have any issue declaring that any other industry that couldn’t afford to do the “quality control” well enough isn’t a sustainable industry.
It does not: it assumes that corporations have their own interests in mind, and are kept alive as long as they are profitable for (most of) the owners.
Whether we, the people who have to suffer them, want them to exist or not does not particularly enter into this. If that's the part we want to take aim at (and maybe we should? or maybe we shouldn't?) then that's an important but completely separate issue.
What if there is no solution? In my opinion Twitter, YT, FB and others are, at this moment, simply too big for this world and our society. We've created something that should not exist and we have no way of controlling it and no way of incorporating it into our society and our minds without causing a great amount of damage and introducing a lot of unfairness and inequality.
There are other areas of life where the “firehose” is simply against the law because of safety concerns. We might decide at some point that one Facebook was too many and get off this merry-go-round.
Why would crowdsourcing be more reliable than the initial NN filtering? And of course, consider the selection bias in who signs up: do you think reasonable folks would volunteer to moderate Twitter, or only the people who see it as a way to manipulate what content shows up on the platform?
They volunteer to vote just fine. Could a moderation system be built on that?
I mean, you vote. And the people you vote on also vote. And those you rate highly can maybe be trusted to vote accurately, and so on. A lot could be done with that voting data.
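One hypothetical way to use that voting data: weight each user's vote by a reputation score they earn by agreeing with verdicts that trusted voters converged on. A toy scheme, not anything any platform actually runs:

```python
def score_post(votes: dict[str, int], reputation: dict[str, float]) -> float:
    """Sum each user's +1/-1 vote, weighted by their reputation.
    Unknown users start with a small default weight."""
    return sum(v * reputation.get(user, 0.1) for user, v in votes.items())

def update_reputation(votes: dict[str, int], verdict: int,
                      reputation: dict[str, float], lr: float = 0.05) -> None:
    """After a post's final verdict (+1 keep / -1 remove) is settled,
    nudge reputation up for users who agreed and down for those who didn't."""
    for user, v in votes.items():
        step = lr if v == verdict else -lr
        reputation[user] = max(0.0, reputation.get(user, 0.1) + step)

# Example: two trusted voters outweigh three fresh accounts brigading.
reputation = {"alice": 1.0, "bob": 0.9}
votes = {"alice": -1, "bob": -1, "sock1": 1, "sock2": 1, "sock3": 1}
verdict = -1 if score_post(votes, reputation) < 0 else 1   # -> -1 (remove)
update_reputation(votes, verdict, reputation)
```

The obvious failure mode is a clique voting itself trustworthy, which is exactly the manipulation worry raised above.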
And maybe you could even get them to vote twice or something.
What we're shooting for is totally user-driven (because moderation can't be trusted) and low-effort (because users are lazy).
Well done, us.