> When it starts to get dangerous, yes. People are inspired toward violence when they believe...
I see this argument a lot, but it fails to address the clear distinction between beliefs and actions. If people are actually violent, we have clear laws to deal with those actions.
If the argument is that certain beliefs shouldn't be allowed because they could be construed as "inspiring violence", then I'd love to hear about how tolerant you are towards Islam's idea of jihad or countless others who believe violence is justified in circumstances that you disagree with.
>I see this argument a lot, but it fails to address the clear distinction between beliefs and actions. If people are actually violent, we have clear laws to deal with those actions.
Why outlaw threats and fighting words then? They aren't violence.
Some of us want to stop easily predictable violence before it gets to the point of actual violence.
>If the argument is that certain beliefs shouldn't be allowed because they could be construed as "inspiring violence", then I'd love to hear about how tolerant you are towards Islam's idea of jihad or countless others who believe violence is justified in circumstances that you disagree with.
It is curious that you use Islam as your example here. Various religions preach violence; the Old Testament establishes the death penalty for breaking the Sabbath. What matters is the actual practice and how likely those beliefs are to inspire violence. The QAnon conspiracies are more dangerous in this regard than thousand-plus-year-old religions.
> Some of us want to stop easily predictable violence before it gets to the point of actual violence.
Are you being serious? I honestly can't tell. This has played out in countless movies and books, and the result is never good. It has also played out in real life, and the result is even worse.
> What matters is the actual practice...
Bingo! Sounds like maybe you're beginning to see the error in trying to police thoughtcrime. It's the actions that matter, not the beliefs alone.
Yes, I am serious. The problem is there is no clearly delineated line between "thoughtcrime" and plain old crime prevention. Where is the line for you when a threat of violence is equivalent to violence? When does a thought become a plan? Threats are just words, so I imagine I can threaten to kill you. What if those threats come through deliberate and premeditated actions, like mailing you a death threat? Is it any different if I tell other people to attack you? Those are just words, right? Is it different if I pay them? Can I brandish a knife if I am 20 feet away from you? I don't pose an immediate threat in that instance. Can I pull a gun on you without any fear of reprisal? That isn't a direct act of violence yet either. Do I need to pull the trigger before you respond?
> Where is the line for you when a threat of violence is equivalent to violence?
The line is "imminent lawless action" [1], with case law clarifying that "advocacy of illegal action at some indefinite future time" is not considered "imminent" (and therefore protected free speech). It's a pretty clear line, and one that most of the censored material being discussed objectively does not cross.
Google, Twitter, Facebook, etc. are within their rights as private companies to enforce content rules as they wish, but these recent censorship actions have strong implications as to their protections under Section 230, and are alarming insofar as they represent a trend that crosses the line of free speech protections normally recognized by the government and content platforms.
> Google, Twitter, Facebook, etc. are within their rights as private companies to enforce content rules as they wish, but these recent censorship actions have strong implications as to their protections under Section 230,
No, they don't; 230 exists to promote censorship, not to bar it.
> and are alarming insofar as they represent a trend that crosses the line of free speech protections normally recognized by the government and content platforms.
They aren't the government, and there has never been a set of free speech protections “normally recognized by content platforms”, especially since 230 was adopted specifically to remove legal disincentives to active moderation.
I never stated or implied that Section 230 barred censorship. It does, however, protect service providers from the liability that a publisher would take on for publishing content that otherwise should be censored. As these companies voluntarily embrace more censorship, they are calling into question their status as "service providers" since they are effectively operating as publishers; i.e., not protected under 230.
> there has never been a set of free speech protections “normally recognized by content platforms”
I agree; legally there hasn't been anything like that, but in the past, those platforms were demonstrably more reluctant to censor political content (e.g., views that didn't align with the company's political views) because they knew that more active involvement might jeopardize their classification as neutral platforms (along with their protections under 230 as described above). In effect, they stayed out of politics not by law, but out of fear of being forced to censor all content if they became "publishers". Now that machine learning has made the censoring part easier, they're less concerned about that happening. However, at the moment they want to have their cake and eat it too – controlling content as they wish while also enjoying the protections of 230.
>I never stated or implied that Section 230 barred censorship. It does, however, protect service providers from the liability that a publisher would take on for publishing content that otherwise should be censored. As these companies voluntarily embrace more censorship, they are calling into question their status as "service providers" since they are effectively operating as publishers; i.e., not protected under 230.
No. That's not what section 230 says.
There is no distinction in section 230 between "platform" and "publisher."
This has been noted and detailed repeatedly in this discussion.
Please see this[0] which will explain, in explicit detail, why you are wrong about section 230.
The objections you're raising (and repeated on sites like the one you posted) are a matter of interpretation of the law, and people on both sides of the political spectrum are now realizing that the law needs clarification. It is not a settled matter by any means, and our lawmakers are still debating the issue.
When a company like Twitter censors the president of the United States, while also embedding their own editorial comments over the content he posted, those actions could easily be seen as falling outside 230 (even if courts haven't decided that in the past). No one denies the fact that the internet today is very different from when 230 was drafted, and from a moral standpoint, we absolutely need more clarification codified into the law.
If your town's public square were seized by one of the richest companies in the world, and they began exerting political control over who was allowed to speak in the town square, it would certainly raise some red flags and likely encourage legal changes (even if, for a time, it was perfectly legal).
The 230 debate isn't even the core of my argument (if you read my previous comments). The point is, whether through legal means or simply by way of market pressure, we should not be allowing these companies to control the political discussion in such heavy-handed ways. Diversity of opinion is diversity, and we need more of it - not less (it's ironic how some push so hard for diversity, yet seem to think we can't handle it when it comes to speech).
I'm sure it's hard to imagine, but if they started silencing liberal views, there's no doubt there would be an uproar among Democrats. Apart from any legal changes that may come, we vote with our clicks and platform usage, and there's a growing number of people who are tired of these political censorship games, so they're leaving for other platforms with less political bias. As censorship increases, that will likely accelerate.
> The objections you're raising (and repeated on sites like the one you posted) are a matter of interpretation of the law,
No, they are a matter of clear and unambiguous historical fact.
> and people on both sides of the political spectrum are now realizing that the law needs clarification.
No, subsets within each major party are adopting preferences for regulation whose purposes are opposed to those for which CDA Section 230 was originally adopted. We could debate the merits of that, but it is simply factually wrong to claim that these moderation actions conflict with Section 230's protections or purpose, when both the plain text and the legislative history of Section 230 show they are exactly what the law was adopted to remove barriers to.
>I'm sure it's hard to imagine, but if they started silencing liberal views, there's no doubt there would be an uproar among democrats. Apart from any legal changes that may come, we vote with our clicks and platform usage, and there's a growing number of people who are tired of these political censorship games, so they're leaving for other platforms with less political bias. As censorship increases, that will likely accelerate.
Please remember that Section 230 doesn't just apply to the big players. It applies to any internet resource that allows third-party content. Including any site that you may host/own.
I suggest you actually read the sharp end of Section 230, subsection (c)(1), under which pretty much all litigation around it has been resolved. I present it here for your review[4]:
"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
That's it. Full stop.
I don't care about platforms like Twitter, Facebook, YouTube, etc. I don't use them (well, okay, sometimes I listen to songs on YT and once in a while I'll dig up an amusing or enjoyable clip from a movie or tv show) because I find their business models personally offensive.
But you attack Section 230 at your own (and everyone else's) peril. And there are a bunch of reasons for this.
The impetus for Section 230 came out of the court decision in Stratton Oakmont v. Prodigy[0], where the court ruled that if Prodigy did any moderation at all, they were then liable to be sued for third-party content they hosted.
But that didn't just apply to Prodigy. It applied to any connected device that hosted any content, whether that content originated from the owner or a third-party.
Try to imagine what the world would look like under such a legal regime:
A site like HackerNews, if they (as they do now) allowed the upvote/downvote/flag moderation system, would be liable to be sued for just about any post that someone didn't like, or for a submission that wasn't sufficiently up or down voted.
In such an environment, HackerNews (and every single other website, mailing list, Usenet group, Mastodon instance, Github repo, etc., etc., etc.) would be liable to be sued for just about anything that anyone posted if they did any form of moderation (like blocking spam, porn or any content unrelated to the purpose of the site).
If there was no Section 230, you could be sued if you hosted a mirror of the lkml[1] list and someone didn't like a snarky response from Linus about a rejected patch merge request.
You could also be sued just for forwarding an email that contained statements that someone didn't like. In fact, Section 230 protections stopped just such a lawsuit[2] in 2006.
Companies with deep pockets like FB, Twitter, YT, Reddit, etc., have the resources to fight most such lawsuits, but what about sites like HackerNews?
Do you really think we'd be having this pleasant conversation right now if YC could be sued for any post or submission on this site?
YC would run for the hills, because they don't want that sort of liability. If they moderated anything (and that includes user up/downvotes/flags), they could be sued for any content hosted here. The only alternatives they would have would be to shut down or not make or use any moderation tools at all.
Which would quickly turn this site into a cesspit of spam, porn, irrelevant postings and other garbage (essentially, 4chan/8chan/8kun).
Do you have a github repo? If there were no Section 230, and you blocked even one PR that contained spam, porn, discussions about placentas and/or other irrelevant content, you are now liable to be sued for any statements made by others in that repo.
As such, the result of removing Section 230 protections would create two kinds of Internet resources:
1. Sites which do not allow any third-party content;
2. Sites which allow all third-party content without any limit (think gay, midget furry porn plastered all over a knitting website)
And so, no. I wouldn't mind at all if a particular site moderated in favor of a political (or any other) view with which I disagree. If I don't like it, I'll go elsewhere.
Because of all this, I say that Section 230 is essential to free speech, not a hindrance to it.
> It applies to any internet resource that allows third-party content.
Also to users, on sites where user action can affect the visibility of other content. Were 230 not in place, users making use of such features (not just site operators) could face civil liability.
I didn't read far enough back beyond the parent comment, so apologies if this is out of context.
But I don't think the goal should be to attack 230; a more desirable goal would be to stop platforms like Twitter, Facebook, and YouTube from acting like publishers. They are simply abusing 230 privileges while still acting like a publisher with editorial muscle.
>But I don't think the goal should be to attack 230; a more desirable goal would be to stop platforms like Twitter, Facebook, and YouTube from acting like publishers. They are simply abusing 230 privileges while still acting like a publisher with editorial muscle.
The term "publisher" has no legal meaning in the context of section 230.
I (and at least a half-dozen other folks) have explained this repeatedly in this discussion.
I won't do so again, but in the interest of expanding knowledge, I'll point you over here[0] so you can understand the deal as it stands.
If you (or anyone else) would like to see changes to Section 230, that's perfectly fine with me. I suggest you write your congressperson/senators and demand the changes for which you advocate.
That said, what you are describing is not the law as it is now. Whether you (or I for that matter) agree or disagree, that's irrelevant to current jurisprudence.
But we have ways to change our laws and we should take advantage of them where we feel it appropriate.
> As these companies voluntarily embrace more censorship, they are calling into question their status as "service providers" since they are effectively operating as publishers; i.e., not protected under 230.
230 was expressly adopted to let service providers (and users!) of interactive computer services take actions that would otherwise make them publishers, without the liability that goes with that, with regard to content that is created by someone else. That's its whole purpose. Key operative text: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” and “No provider or user of an interactive computer service shall be held liable on account of [...] any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected”.
I wasn't asking you a legal question. We all know that QAnon isn't literally illegal. I was asking you a series of moral questions, many of which can't be answered with "imminent lawless action". For example, is it considered a "thoughtcrime" if the danger isn't imminent? If someone is working on detailed plans to kill the president, but the plan would take multiple years, should this person be stopped or should they be allowed to continue their plans until the danger is imminent?