Endless reminder that having multiple layers of moderation protects free speech, it doesn't restrict it.
You do not want the government to be the sole arbiter of what content should be online. That is exactly how you end up with laws like SESTA/FOSTA.
Our government exists to set a baseline of unacceptable speech that private services can build on top of. As we move further up the stack to the network level, and then the hosting level, and then the forum level, we allow more moderation -- each level refines its definition of acceptable content a little, and then the next level builds on top of that.
In this case, I actually do agree that Kiwi Farms probably crossed that government baseline; it was such an egregious case that it probably should be addressed in law in some way. But in general it is a bad idea to say that we're going to solve every decision about what content is and isn't acceptable by hauling someone in front of a judge. That's a recipe for chilling speech, not expanding it.
> Endless reminder that having multiple layers of moderation protects free speech, it doesn't restrict it.
Perhaps. I'm skeptical of concentrations of power wherever it is: government, Cloudflare, Facebook, etc. At least the former is theoretically accountable for choices.
Also, Cloudflare asserts their position is that they largely do not want to restrict speech beyond that government baseline and they won't act themselves against speech. Here they claim they are forced to (and they probably were).
> I'm skeptical of concentrations of power wherever it is: government, Cloudflare, Facebook, etc.
Not to hammer the point too hard, but de-concentrating power is the exact reason why it is better to have moderation decisions across multiple layers of the network stack rather than in level 0 (the government).
Forum messes up on moderation? Not a big deal.
Web host starts making bad decisions? Tons of options.
Cloudflare banning you? Tougher, but there are multiple CDN services; if Cloudflare becomes evil it's not necessarily the end of the world.
ISP banning you? Now we start getting into dangerous territory; there are fewer options available to services, and if moderation decisions are made poorly, that can have effects across the entire network for everyone.
The government prosecuting you? This is level 0 of the network stack.
The way that we guard against concentration of power is by de-concentrating it. Cloudflare (and to be fair, other large Internet companies too) are arguing for the opposite of that. In the specific case of Kiwi Farms, maybe this example is so egregious that it does make sense to have some new laws. I kind of agree with that. But no good law will be enough on its own to get rid of Cloudflare's responsibility; the only law responsive enough and fast enough to do that would be one that violated free speech rights.
> Not to hammer the point too hard, but de-concentrating power is the exact reason why it is better to have moderation decisions across multiple layers of the network stack rather than in level 0 (the government).
I'm not disagreeing with your entire argument, just that portion.
Multiple layers of moderation are only safer for free speech to the extent that none of them has too central a role and there's some degree of visibility into what is happening.
One of the things that has made social media so toxic to speech is that it has A) gathered so much of the "share" of being a conduit of speech at scale, and B) created a false feeling of consensus by tilting playing fields in various ways without the tampering being obvious.
I heavily agree with you that centralization on a single layer is problematic for free speech.
I suspect where I differ from the CEOs of companies like Cloudflare/Facebook, is that I think the solution to that isn't to get rid of moderation, but rather to enforce antitrust and break up their companies. :)
Facebook in particular has this problem; it's constantly asking the government to tell it what to do because it doesn't want to be in charge of speech, and yet it has no problem buying competitors and trying to take over markets. It makes me wonder how concerned about speech these companies actually are, since they had no problem growing their companies to this size and putting themselves into situations where their moderation decisions carry so much weight.
> I suspect where I differ from the CEOs of companies like Cloudflare/Facebook, is that I think the solution to that isn't to get rid of moderation, but rather to enforce antitrust and break up their companies. :)
I'm not sure whether Cloudflare deserves antitrust action. But, yes, Facebook is concerning.
Of course, you've got to acknowledge the flip-side. Sometimes it should be left up to the government. If your internet service is a natural monopoly (perhaps augmented with protections of a franchise agreement from the government), it's especially problematic for them to be making moderation decisions.
> it's constantly asking the government to tell it what to do because it doesn't want to be in charge of speech
This is a bad thing, according to you? Should Facebook instead make those decisions on its own?
The thing is, no matter what Facebook decides, there will be criticism. If they make the decisions on their own, bad. If they ask the government to decide what speech is lawful and what isn’t, bad. If they don’t block fake news, bad. If they block fake news and realise months later it was real, bad.
And your genius solution is to break up the company. But any network, regardless of size, will have this issue. The rise of TikTok makes this very obvious. There’s tonnes of misinformation on TikTok, but it doesn’t get the same coverage because that’s not what aligns with the NYT’s priorities. TikTok gets around the content moderation problem by simply saying and doing nothing, hoping no one notices.
So what’s your solution to TikTok? Break it up as well? Into what pieces?
The problem with people who come up with simplistic, unrealistic solutions to hard problems is that when the obvious flaws are pointed out in their thinking, they’ll double down.
The weight of any moderation decision is directly proportional to the size of the platform making that decision. This is a generally well understood principle in a lot of free speech circles; I'm surprised that, of everything I've written about Kiwi Farms here, this is the thing that is getting the most pushback from people.
The principle behind breaking up platforms is that individual moderation decisions do become harder the more people they impact. It's also not just a free speech thing; this is the same reason why it's dangerous to have a browser monoculture. If I tell you that having one company in charge of the entire web makes their individual decisions about the web more impactful and more dangerous, that's something you understand, right?
Same deal for moderation.
Of course, see mlyle's other sibling comment -- sometimes we genuinely can't do anything about a natural monopoly and we just need to recognize what they are. But in instances where we can, decentralizing power decreases the overall risk of moderation mistakes for the entire network.
> Also, Cloudflare asserts their position is that they largely do not want to restrict speech beyond that government baseline and they won't act themselves against speech. Here they claim they are forced to (and they probably were).
CloudFlare acts against speech all the time. They'll sell you a service to screen the speech of others and then pass it onto you or not, at their decision.
CloudFlare's own terms of use for their Email Forwarding product are very clear that they will squelch your speech as well, in many conditions that come nowhere near "organizing an international manhunt to intimidate a minority": https://www.cloudflare.com/supplemental-terms/#email-routing
They should stop talking about this like it's "pure speech" because it's not that at all, and even to the extent that it is, they already limit actual "pure speech" in many other scenarios not nearly as threatening as this.
OK, you're willfully missing the point because we're talking about the position relating to Cloudflare's security services, not hosting or other products that have a more restrictive TOS.
As far as they do mention activity, they do say they ban content related to activity that is, for example, "libelous". So they'll block you for publishing insults about someone, without any further malicious activity.
They also say that they ban content used as part of malware command and control, which seems to cover spamming, meaning that they should have no problem blocking spammers trying to use their "security protection" service.
Of course it turns out I don't even have to use the analogy with spam because CloudFlare's own post that you linked to clearly states they can remove access to content that is "... harmful, or violates the rights of others, including content that discloses sensitive personal information, incites or exploits violence against people or animals ...".
That's literally been KF's modus operandi for years now. Unless CF changed their terms very recently, that behavior of KF has always been proscribed. Yet CF saw fit in their discretion to make a conscious choice to continue aiding and abetting KF in its campaign of doxxing and incitement of violence, something far worse than libel or C2.
> As far as they do mention activity, they do say they ban content related to activity that is, for example, "libelous".
You're again missing the distinction between their hosting policy and their security product policy. This is the important distinction that I first pointed out to you 2 comments ago, and that the document I posted 1 comment ago explains clearly.
> Hosting products are subject to our Acceptable Hosting Policy. Under that policy, for these products, we may remove or disable access to content that we believe:
...
> has been determined by appropriate legal process to be defamatory or libelous.
...
> Our conclusion — informed by all of the many conversations we have had and the thoughtful discussion in the broader community — is that voluntarily terminating access to services that protect against cyberattack is not the correct approach.
“They'll sell you a service to screen the speech of others and then pass it onto you or not, at their decision.”
The problem with your logic here is that you’re considering the voluntary filtering of messages by a party as being the same as stifling someone’s ability to say something. The filtered party can still say what they want but the intended recipient should always have the ability to ignore that if they so choose.
“CloudFlare's own terms of use for their Email Forwarding product is very clear that they will squelch your speech as well”
Controlling what gets sent out by their email service is more a question of legal liability than free speech. They are not limiting anyone's ability to speak freely within the confines of the law here.
To make a stronger argument, maybe you need a definition of free speech stronger than what is defined by law in order to prove any violation on CF's part.
In the case of KF, CF has only suspended them on what they could identify as unaddressed legal violations. This is fundamentally different from revoking services to silence unsavory takes.
I also imagine the doxxed information on the platform (KF) is removed after a time so attacking the whole platform at this point just seems like an effort to stifle a community with subjectively unpleasant ideologies.
"Go start your own" is not a great response to someone being concerned about corrosive effects of the concentration of market power -- especially when those concentrations are brokering critical speech and political discourse.
Whoops, you started your own; now I see all the methods of payment you use have been shut off by their various vendors. Oh, you used crypto? Hope you know every single address that has interacted with a sanctioned address and never accidentally accept payment from them...
Have you ever considered that if nobody wants to touch your content - nobody wants to even consider allowing it over their network - then it might actually be your content that is the problem?
Have you ever considered that you're straw-manning and not engaging with what I've actually said?
I'm not supporting Kiwifarms having a platform.
I'm saying speech being effectively squelched by a small number of powerful parties is problematic. If the small party is the government, this is obviously problematic. If Facebook is a huge part of people's discourse, and subtly tilts the playing field in various ways, this is problematic, too.
I can insulate myself (mostly) from the effects of Facebook's curation. But there are still profound social costs.
I feel like it is thoroughly explained above, and my other counterparts in the conversation understand the point.
I am also not sure you're conversing in good faith. You're tossing out pithy one-liners that demand greater effort to respond to than to say them. This was also my experience a long time ago when we used to discuss things on IRC (including, I believe, this exact topic).
Again, there's a difference between your free speech being "squelched", and no-one wanting to entertain your nonsense, and you haven't really explained why you think there isn't.
> Okay, but there is no "concentrated power". You are just as free as anyone else to host KF.
We're not talking about hosting KF. There's lots of hosts. And Cloudflare was not hosting KF, but instead providing DDoS protection services.
But there are approximately 2-3 services that can reject DDoS at high scale. Or maybe slightly more. This is right at the threshold of concern.
Here, I think the decision that was made was a good one, but at the same time a very small number of unaccountable parties making this kind of determination is worrying.
So either don't host stuff that gets you DDoSed, or work out a way to spin up something else that copes with rejecting DDoS at scale.
Either way, if you're saying something so reprehensible that no-one will allow you to use their platform to say it, maybe you should look at what you're saying.
It's really at the point where I need to repeat the same thing:
> > > Many parties deciding independently whether to "entertain my nonsense" is good. One critical party in the path (governmental or commercial) is bad.
>Endless reminder that having multiple layers of moderation protects free speech, it doesn't restrict it.
My issue is that the Kiwi post in question - which (to my reading) was a very VERY stupid bomb “joke” obliquely referencing the Belfast Troubles - appears to have been quickly moderated and the user banned. Which is, I thought, how this was all supposed to work.
The screenshot going around Twitter of the idiotic post was tweeted out within literal minutes of said post being made. I have no idea how long it took the KF moderators to delete the post and ban the user but, from a perusal of the following pages in that thread, it doesn’t seem like it was up very long.
So is moderation an issue? It doesn’t seem to be. Perhaps that post was the final straw, but CloudFlare is framing their action as having to step in and “moderate” specifically because of THAT post - and yet the post in question had already been (correctly) nuked from orbit by the KF mods.
Edit: here’s where I do the obligatory “I didn’t vote for Trump, however” mea culpa: I do not have a KiwiFarms account and honestly I find it to be fairly distasteful in a 2004 FYAD sort of way.
One thing I don't understand: if Kiwifarms is subject to very big, very expensive DDoS attacks - and I've seen no one denying that it is, that's the whole thing Cloudflare is needed for, after all - why would we even think an illegal threat on Kiwifarms originated with a regular Kiwifarms user? It seems a lot cheaper to make an account and post the illegal comment than to run a DDoS operation.
No Kiwifarms account here either, but I have read it and I do appreciate that some of the people wanting them shut down are... not very nice people themselves.
You might be right. Posted by a KF user regarding the threat:
"It's a 2020 account that wasn't active till a month ago with 1 post in the CWC forum and the other 42 in the keffals thread. The post was deletedly nearly instantly, yet within 10 minutes of it being posted Keffals had contacted CF, CF pulled the plug, and articles (which you can find in A&N right now) were being posted. Also it's notable that Keffals removed the quote/reply portion of the post which he accidently revealed before indicating he has an account here. This was so obviously coordinated, it glows more than nuclear blast."
Anecdote: in a Discord server I was one of the moderators at, we had a user post porn images from OnlyFans while the moderators were asleep, then report the server to Discord for hosting stolen content. The server got deleted by Discord. The user's account did not.
> You do not want the government to be the sole arbiter of what content should be online.
No, we want the government to clandestinely meet every week with representatives of major internet companies and instruct them who to ban and what information to suppress, while pretending it's independent action of the same companies driven by their love of free speech. Or maybe we don't want that, but who cares - it's what we've got.
This is kind of exactly what I mean when I say that people haven't really thought this through.
You intend this to be a gotcha, but yes, unironically getting pressured by a political representative has fewer free speech implications than the government openly threatening to throw people in prison. It does have implications; it's not ideal. But are you really arguing that the government leaning on people is worse than it would be for them to just outright force people to censor content?
I've brought up SESTA/FOSTA a few times already, but they're kind of an evergreen example. The government has been pressuring companies to deplatform sex workers for ages, but SESTA/FOSTA were still a worse outcome. I don't want the government trying to do run-arounds to the First Amendment in the first place, but if you're drawing a comparison then the world where they were privately pressuring companies was less censorious than the world where they started openly threatening website operators with felonies.
> are you really arguing that the government leaning on people is worse than it would be for them to just outright force people to censor content?
No, I am not arguing that the government asking Zuckerberg for a regular friendly chat where it tells him who to ban and he complies is worse than the government shooting Zuckerberg in the head as a traitor and nationalizing Facebook. The latter would be worse. But both are very bad and should not happen in free democratic society where freedom of speech is valued.
> if you're drawing a comparison then the world where they were privately pressuring companies was less censorious than the world where they started openly threatening website operators with felonies.
It's the same world. If the operators would not comply "voluntarily", that's exactly what would happen. But censorship by its nature does not like exposure, so the less overt the means that can be used, the better. If they can do it without loud clashes, just by everybody "consenting" to it "privately" - much better. If somebody dares to step out, the pressure gets increased, up to, ultimately, the use of violence if necessary. That has happened many times to journalists who dug in the wrong places. So far none of the companies has been defiant enough to warrant that level of pressure - usually there's always somebody at the lower levels who can help with the problem, like CF, or Amazon, or Google - but we're just getting ramped up. We'll get to felonies eventually. Unless we manage to stop it somehow.
So, does Cloudflare in this blogpost actively asking governments to take a more active role in moderation decisions across the board make it more likely for the scenario you're describing to happen, or less likely?
We can disagree about which outcome is worse or about whether they're equivalent, but other than that disagreement it doesn't sound like you're arguing that Cloudflare is being prudent or helping advance freedom of speech when it asks governments to make these decisions for it.
> But are you really arguing that the government leaning on people is worse than it would be for them to just outright force people to censor content?
Not the poster, but-- I'm not so sure either way. Both are pretty bad. The government convincing private parties to do their bidding while acting like it's just the private sector making choices blinds us all to what's happened, and gives the illusion that the decision to squash the speech is a popular, voluntary one by individual actors.
So, the government forcing it is directly more harmful but at least it is visible.
> The government has been pressuring companies to deplatform sex workers for ages, but SESTA/FOSTA were still a worse outcome. I don't want the government trying to do run-arounds to the First Amendment in the first place, but if you're drawing a comparison then the world where they were privately pressuring companies was less censorious than the world where they started openly threatening website operators with felonies.
Passing SESTA/FOSTA didn't require any "help" from free speech advocates. The government has had a longstanding policy of going overboard against prostitution and prostitution-adjacent material since long before Section 230 or SESTA/FOSTA existed. Legislating run-arounds of the First Amendment (e.g. the Comstock Laws) and pressuring private industry (e.g. the Hays Code) have been go-to strategies since the country's founding, if not before. The solution has always been to fight it out in the courts (as in Ashcroft v. Free Speech Coalition) or find ways around the letter of the law (Backpage pre-2018).
Pushing the issue to government at least provides consistency, rather than leaving the issue to fairweather service providers and perfidious content policies.
> at least provides consistency, rather than leaving the issue to fairweather service providers and perfidious content policies
Once again, I think this is a perfect example of what I mean when I say that people who advocate for more government involvement haven't thought about this issue enough.
Consistent censorship results in more censorship than you would see with inconsistent censorship by fairweather services.
If you want evidence of that, consider that Cloudflare dropped a number of sex sites specifically after SESTA/FOSTA was passed, and not before.
Of course, it would be better to have neither situation, but an inconsistent patchwork of censorship is obviously less censorship than a consistently applied standard that even free-speech-absolutists like Cloudflare have to follow.
> or find ways around the letter of the law (Backpage pre-2018).
Once again, light legislation leads to more wiggle room for companies to interpret the law, which tends to result in less censorship overall. As proven by Backpage pre-2018.
You can still fight inconsistent censorship in the courts. You can still have laws struck down. You can still work to change public perception of censored speech or normalize it. Unless you're aiming for an acceleration of censorship (which is usually a bad strategy), "at least" and "provides consistency" shouldn't be chained together in the same sentence. If you believe that something is a negative outcome, a consistently negative outcome is worse than an inconsistently negative outcome.
This isn't just a free speech thing, it's just a general principle that accelerationists don't always completely grasp: the scenarios where accelerationism works to produce preferable outcomes are kind of narrow and rare. I don't want to get stabbed at all, but I prefer a world where I might get stabbed over a world where I definitely will get stabbed. Making the stabbings more consistent isn't an improvement.
Well, the courts' view is that the government pressuring a private entity to censor is no different from the government censoring by itself.
And I would argue the lack of transparency and ability to hide the true driver of the censorship is far worse than if the government just comes out and does it themselves.
If you think US federal government has "little to no power" over the company whose main business, headquarters and most of the workers are all located in the US - you really misunderstand just how vast US federal government powers are.
I trust you can find many more links about this topic. As a side note, this is the part we have just learned. Is that all of it? FBI just told us it routinely instructs social media companies about which content they'd like suppressed. And, as we know, they get their wishes.
> what consequences did they threaten them with if they didn't
How would I know? I wasn't there. I know which consequences the US Federal Government can visit on you if it really hates you, and that's a whole lot of bad consequences. How it went in those meetings - I have no idea. Maybe they didn't even need to threaten - though they certainly did in public: the President accused Facebook of "killing people". Do you think, if the Supreme Commander of the US Army and the head of the US Federal Executive tells you you're killing people and need to stop it, that's not something you need to think really, really hard about?
> Just as a reminder, we do not live in the USSR
I know, I've been there. We're not. But we're inching closer and closer to there. When it becomes obvious, it'll be too late to complain - by then, any complaint outside the boundaries of your private kitchen will land you in big trouble. Better complain while it's still allowed.
And yet, none of this information was remotely suppressed, which suggests that the government is largely toothless with regard to these requests. Perhaps corporations acquiesce due to a gentleman's agreement, or even because they think it's the right thing to do, in which case your beef should be with them more than the government. On the whole, this feels like a "think about it, man!" kind of argument to me.
Also, these sources seem sketchy at best. Do you have reporting from a reputable newspaper? I'm not saying this didn't happen, but the way this reporting is presented definitely doesn't pass my sniff test.
FWIW, I believe that the government shouldn't be threatening corporations to censor things, but I also don't think that's what's happening here. (Though I could be wrong — waiting to read some credible investigative journalism about it.) I also don't know what the precedent is for this kind of public-private coordination.
In any case, the information still gets out, whether on social media or elsewhere.
> none of this information was remotely suppressed
But of course, state censorship is rarely 100% airtight. Nor does it need to be - it only needs to hinder the information enough to leave those who dissent unable to change anything, and to give those who are willing to delude themselves plausible deniability (thanks for providing an example of the latter point). In the USSR, which you previously mentioned, a lot of people knew what was going on. A lot of people listened to Western "voices" and read "prohibited" literature. And talked about it - in the confines of their kitchens. They couldn't do anything more. The KGB was powerless to eliminate the "voices" and the samizdat - but it was powerful enough to keep them from having any effect for quite some time.
> Perhaps corporations acquiesce due to a gentleman's agreement,
There's no such thing as "gentleman's agreement" with the federal government that can destroy your business and your life. It's like a mafia boss "asking" you for a "favor". You both understand it's not "favor" and he's not really "asking". "Or else" doesn't need to be said explicitly - everybody knows it. But nevertheless, it has been said explicitly many times, so to believe there can be some kind of "gentleman's agreement" is naive bordering on willfully blind.
> because they think it's the right thing to do
I'm sure some think it's the right thing to do to suppress dissent against the government, because the government is only acting for our own good and thus everyone who dissents is evil, an extremist, a terrorist. In fact, we've heard the government explain it to us on multiple occasions. That's not an excuse.
> Also, these sources seem sketchy at best
Come on, not this BS. Just read the freaking emails, they are right there. If you are going for "unless The Pravda publishes it, it's all libelous lies and I'm not going to read it" - you are either grasping at straws or are willing to blind yourself for partisan reasons. I can lead you to sources, I can't make you read them - if you are willing to crimestop on it, go ahead. It's still not mandatory, but many are already using it at full force - they are only willing to think about subjects pre-approved by their betters and only consume information pre-processed by the approved sources, which never would deliver anything unexpected or diverging from the prescribed doctrine. Your choice.
> I believe that the government shouldn't be threatening corporations to censor things, but I also don't think that's what's happening here
It's not "threatening", it's plain telling them now. We're way past threatening - we're in the place where the government just tells, and they jump.
> waiting to read some credible investigative journalism about it
Because you are going to ignore people who are actually willing to investigate things, and only believe "reputable" ones - i.e. ones who by definition are part of the system that implements the censorship - you're going to be waiting for a long time. About as much as Soviet citizen would wait for Pravda to publish genuine critique of the Communist Party and its General Secretary.
> In any case, the information still gets out, whether on social media or elsewhere.
The information gets around even in North Korea. That's not a reason to become one.
> If you are going for "unless The Pravda publishes it, it's all libelous lies and I'm not going to read it" - you are either grasping at straws or are willing to blind yourself for partisan reasons.
It's called vetting your sources. Also, funny you should mention Pravda, when Zero Hedge is actually pretty close to that caliber of publication from what I can tell.
> Just read the freaking emails
It's not about the e-mails, but the context around them. I can't trust a far-right rag that peddles conspiracy theories to provide analysis with any degree of nuance, and without omitting key facts. "Doing your own research" will more often than not just lead you into the dark, unless you have training and experience to select good sources, weed out BS and half-truths, and follow up on leads where necessary.
> Because you are going to ignore people who are actually willing to investigate things, and only believe "reputable" ones - i.e. ones who by definition are part of the system that implements the censorship
How are reputable investigative journalists "by definition" part of the system that implements censorship? There has been plenty of reputable investigative journalism of government wrongdoing over the past few years — even in the "MSM". Unless you believe that the government and media act as one giant, unanimous bloc? That's a bit crazy.
> There's no such thing as "gentleman's agreement" with the federal government that can destroy your business and your life.
Can you cite any examples of the government crushing a private company in recent times due to not acquiescing to their demands? Based on your tone, you seem to believe that the federal government is in a position to do something drastic like imprison a CEO or revoke a corporate charter when faced with resistance. I don't think that's remotely plausible, unless we're talking National Security Letters or something. (Which are a big issue, but not directly relevant here.)
Thanks for illustrating how the censorship reaches its goals. "Reputable" sites won't publish anything that the government disapproves because they have "gentleman's agreement" and you're not going to read sources they say are "right wing rags", because they are full of "conspiracy theories", as "reputable sources" tell you. So nobody needs 100% censorship - you'll censor yourself the rest of the way. Just trust the experts and be happy.
Do we still call it “speech” if it’s a threat of physical violence? This seems even clearer when there is an established history of such threats being actualized.
Edit: Maybe I’ll make this personal. I’ve been a victim of both verbal and physical bullying. At some point words cross a boundary from speech to violence. You can even see this with the audio simulations used to convey what schizophrenia is like. I’d say speech crosses the boundary into violence when it hurts another person and cannot be “muted” by the other person. E.g. doxxing someone: once it’s on the internet, it’s out there for all time.
>Endless reminder that having multiple layers of moderation protects free speech, it doesn't restrict it.
Cloudflare claims to be an infrastructure company. Now we're somehow discussing "multiple layers of moderation". This was fast. No limiting principles in sight either.
> Cloudflare claims to be an infrastructure company.
Very obviously Cloudflare is operating at a higher level of infrastructure than ISPs or the government.
If they actually believe that they are infrastructure that people have a human right to access and that is so fundamental to the Internet that they should be treated as level 0 infrastructure, then they should consider dissolving the company and forming a public org instead.
Otherwise, yes, of course Cloudflare should have stricter standards. Even under Net Neutrality (which I support), ISPs have more moderation power than the government does. Banks arguably have far too much moderation power (I do think people should have a right to banking access), but I don't know anyone who would argue that banks should have no moderation powers at all; it would make it impossible for them to prevent fraud or abuse if that were the case.
Cloudflare obviously should not have as strict moderation as a web forum, but this isn't a binary choice. The limiting principle here is having multiple layers of infrastructure. It's choosing not to have a single company in charge of DDoS protection for 20% of the web.
“You do not want the government to be the sole arbiter of what content should be online”
That’s exactly what I think should be the case. The US government is supposed to reflect the will of the people and having a representative democracy is a way to achieve that decentralization. If the power structures that arise from this model threaten this process then the first goal should be fixing it rather than introducing a new process where a smaller ideological group gets to harass those within companies into acquiescing to their moral guidelines.