> These private companies should be considered access points to the social graph, not owners of it, and therefore regulated as such.
Okay, but how? Every time I see calls for regulation, I don't see any specifics.
Regulation isn't a magic wand that solves all problems you don't like while keeping the parts you want to keep. We got regulation for cookies, and now we're all doomed to click cookie banners on every website we visit forever.
What exact regulations would you apply to Twitter to solve this problem without forcing the companies out of business? Any solution that ends up requiring Twitter to fend off constant legal battles from people angry about being banned or requires scores of humans to moderate content to some government-stated standards or face expensive fines just doesn't work. If US websites were suddenly subject to onerous legal standards that weren't required elsewhere, companies would move their headquarters to other countries ASAP.
Agreed, dealing with bots, fake accounts, troll farms, and the scum of the universe actually posting child abuse, revenge porn, or other such things is a hard problem, especially at the scale of Twitter.
Designing and deploying mechanisms that are effective at preventing those while producing zero false positives is very hard.
Having humans in the loop at that scale is very hard.
It's all a tricky balance. You can demand zero false positives, but it's futile; in practice there will still be some. You can force them to manually verify each post, but humans also produce false positives, and this might come at a cost Twitter can't afford. It's complicated.
What's less complicated is running your own website where you can be your own moderator...
I think an alternative is to give people some rights. Like establish a category of things that is always allowed. Then use the normal court system. Of course, it's a terribly slow and expensive appeal process, but also the best, and publicly funded.
Similar to a case where you claim you've been refused a job or access to a business because of a protected characteristic, for example. You'd need to sue.
Then pass a law that says any company with a user count exceeding 100,000 must not prevent access or disable an account unless the user in question breaks the law on the platform.
Then establish a simplified claims court, where for a nominal fee, say £500, you could apply to have your case reviewed by an independent judge.
If you lose, you forfeit the funds, and have to pay the costs to the company you claim against, capped at a reasonable fee. Say another £500.
If you win, the company pays the costs, nominal damages, and must also reinstate your account.
If you cannot run your business at scale without externalising the costs on innocent users through false positives, then the business shouldn't be running at all.