Curious if you feel torn supporting the US highway infrastructure? It can clearly be used to traffic drugs, humans, black-market weapons, etc. It can be used to flee justice and evade police, and it abets vehicular manslaughter. The list goes on and on. Is it even controversial to support the highway system as is? Do we lose sleep over it?
I feel like we've all been a bit brainwashed by the government into the notion that "free speech" must have limits. I very much doubt that that is true. I think the speech part should always be 100% free. Of course any crimes that derive from it are and will always be fully enforceable. I just question whether the speech itself should be viewed as illegal, or as something that should be regulated.
Obviously all of the insidious planning and hatred that presumably occurred on Parler is abhorrent. I think I can hate all of those things without believing that the site should be censored.
"Brainwashed" is not a very honest way to frame this discussion. You can be convinced of something without being brainwashed.
I genuinely believe that free speech should have limits in order to maintain the cohesion of our societies and protect people from mobs. I don't think I've been brainwashed into it, I've just seen what unbridled and unchecked hate speech can lead to.
Of course there's the problem of where the line should be drawn and who should draw it, but in order to have this discussion we need to move away from these strawmen (strawpersons?) and accept that maybe people just have convictions they haven't been brainwashed into.
After all, I'm sure you wouldn't be very happy if I erected billboards along the highway featuring hardcore pedopornography with your face photoshopped in. One way or another we all have limits to what we consider acceptable expression; it's all about figuring out how this should be codified and enforced.
And I want to add that having taboo topics and forms of expression is probably a good thing overall. For our lives to have meaning we need "sacred" things to protect, things to fight against, things to think about. We need to be able to shock, we need to be able to be transgressive, to make revolutions and counter-revolutions, to express frustration.
The belief that free speech must have limits is necessarily equivalent to the belief that a large part of the people are stupid and must be protected by the smart people, by preventing them from hearing anything that might influence their feeble minds and make them act wrongly.
Maybe this belief about most people being stupid is correct, and therefore free speech must indeed be limited, but I do not see any of the advocates of limiting free speech having the courage to say what they really think to the faces of those whom they want to protect.
I think it just comes down to Popper, who puts it elegantly enough:
> ... In this formulation, I do not imply, for instance, that we should always suppress the utterance of intolerant philosophies; as long as we can counter them by rational argument and keep them in check by public opinion, suppression would certainly be most unwise. But we should claim the right to suppress them if necessary even by force; for it may easily turn out that they are not prepared to meet us on the level of rational argument, but begin by denouncing all argument; they may forbid their followers to listen to rational argument, because it is deceptive, and teach them to answer arguments by the use of their fists or pistols. We should therefore claim, in the name of tolerance, the right not to tolerate the intolerant.
>The hobbit is simply embarrassed into compliance by his elven betters. The ideas he believes become a dangerous mental disease. This diagnosis is written into history. The sooner he gives up this nonsense, the better. To help convince him, we'll make this idea quasi-illegal. The sooner he gives it up, the less his life will suffer. Eventually he can be fired for staying an idiot. Everyone will agree that he deserved it.
>This is Popper’s paradox of tolerance. Popper discovers that every real regime must have the apparatus of the Inquisition in its back pocket. If it hesitates to deploy its intellectual rack and thumbscrew, it will be replaced by a regime with no such qualms.
>Popper, read logically, advises the Nazis to repress the Communists, the Communists to repress the Nazis, the liberals to repress both and both to repress the liberals. From his “open society” he comes all the way around to Hobbes, Schmitt and Machiavelli. Next he will tell us, in Esperanto, that “the earth is nothing but a vast bloody altar.”
Popper's point is that if someone wants to curtail your right to free speech, you have the right to curtail theirs. On the other hand, if a political opponent respects your right to speak freely, you should do the same for them. It's not really a paradox; it's about symmetry.
By analogy, imagine if someone commits murder. Would putting them to death as punishment also be murder? No. You have the right to life as long as you respect other people's right to the same.
The problem always begins with grey and ends in black. Who defines when, where, and how someone is curtailing another's right to free speech? There is always asymmetry in power.
This is correct. You can't have democracy without free speech. If you don't trust your fellow citizens to be able to deduce the truth, you shouldn't trust them to cast their own vote.
This is the most unfortunate consequence of the idea that free speech must be limited.
If it is accepted that some people cannot be trusted not to do wrong things when others tell them to, then an unavoidable consequence is that those people must also be denied the right to vote: if they can be convinced by lies to do something very wrong, like commit violence, then it is even more certain that they can be convinced by lies to make a minor mistake, like casting a wrong vote.
Any proposal to deny the right of voting to stupid people, or to give different weight to the votes, depending on the "intelligence" of the voters, would rightly generate huge protests.
However, any proposal to restrict free speech without also restricting the right to vote is logically inconsistent, even if many seem not to notice this.
> If you don't trust your fellow citizens to be able to deduce the truth...
Do you trust people who live in an echo chamber overrun with disinformation to deduce the truth about the outcome of the US election? If so, can you speak to the mechanism by which such people can determine the truth? And could you speak to the empirical failure of this population to discover the truth?
With almost every technological advance, destructive power arrives long before the protective powers. It's much easier to destroy something with a nuclear weapon than it is to build a nuclear power plant. Likewise, we arrived at muskets before the combustion engine. Disinformation is much cheaper (and more profitable for media companies surviving on outrage-driven clicks) than delivering self-verifiable empirical information. This will change in time.
There is no single criterion that can be used to judge how democratic a country is.
Many European countries have more restrictions on free speech than the USA, so yes, they are less democratic by this criterion.
By other criteria, e.g. by evaluating how many abusive laws they have that favor a few rich individuals who own large companies over the majority of citizens, most European countries are more democratic than the USA.
The same conclusion follows from other criteria, like how easy it is for most citizens to access education or health services.
>The belief that free speech must have limits is necessarily equivalent to the belief that a large part of the people are stupid and must be protected by the smart people, by preventing them from hearing anything that might influence their feeble minds and make them act wrongly.
You're leaving out some very well-established restrictions on free speech, including slander, libel, copyright infringement, obscenity, privacy violation... in short, absolutism hasn't been a prevailing philosophy for centuries. This sudden resurgence of it feels like a refusal to engage with the very real, very difficult debate on what speech deserves censorship.
It doesn't have anything at all to do with intelligence, but with the observed consequences of certain kinds of speech, for example lies that manipulate people's emotions. This is not unique to "stupid people."
Some other poster already mentioned that what you list are actions that are punishable by various laws, at least in most countries.
There is a huge difference between punishing someone for something already done, e.g. slander or libel, and denying him access to publication media because you believe that in the future that person might say something that might have who knows what effect on other people, who might commit crimes.
I completely agree that whoever abuses the free speech right to do something punishable by law must be judged and punished if found guilty.
On the other hand, I do not agree with any of these "deplatforming" actions based on vague beliefs about the future actions of some people.
If Trump or anyone else is expected to commit a speech crime, then watch him and, as soon as he commits it, fine him or arrest him.
If he has already committed such a crime, then also fine him or arrest him.
Otherwise, "deplatforming" him has no basis in facts.
> denying him access to publication media because you believe that in the future that person might say something that might have who knows what effect on other people, who might commit crimes.
Is this the case here?
The statements in question have already been made. Typically, deplatforming happens after a violation has already occurred, which seems to be the case here, unless I am misunderstanding.
In the case of banning someone like Trump from Twitter, where that is justified by many previous misleading messages, I agree that this is right.
However, this discussion thread started about the actions done against Parler.
I had not previously heard about Parler, but I understand that the efforts to stop its activity are based on claims that it does not perform adequate censorship of the content published there, unlike Facebook or Twitter.
If there are people who have published there things that are punishable by law, they should be punished. If Parler itself has done something illegal, then it should be punished too.
However, if some private companies sabotage Parler based on the fact that Parler does not have the same censorship rules as themselves, then that is clearly wrong.
From what I have heard here, it might be good if Parler disappears, but I cannot accept that the end justifies the means.
And of course, those advocating limiting speech are sure that they are not among the stupid. They're always advocating limiting someone else's speech, not their own, because they're smart people who are not fooled by the wrong things, and who listen to and believe the right things.
> Maybe this belief about most people being stupid is correct, and therefore free speech must indeed be limited, but I do not see any of the advocates of limiting free speech having the courage to say what they really think to the faces of those whom they want to protect.
I don't know about this part. The narrative seems to be more about protecting the vulnerable from the stupid, not so much protecting the stupid from themselves.
Nobody can be so vulnerable to the words of other people that they will do obviously bad things, unless they are stupid.
Normal people are vulnerable to lies only in the sense that, when presented with deliberately false information that they cannot verify immediately, they may trust the liar and make a wrong decision to do something they cannot yet know is right or wrong, e.g. paying a high price for something cheap, or falling victim to some other kind of fraud.
Only someone stupid will beat someone or burn a house because of some false accusations.
All the arguments for these deplatforming actions were that the people who would have heard the propaganda of those now denied access would have been easily convinced to do stupid things.
> but I do not see any of the advocates of limiting free speech having the courage to say what they really think to the faces of those whom they want to protect.
It's not them I want to protect (though I don't have any explicit desire for them to _not_ be protected). It's me I want to protect. Your right to swing your fists ends at the tip of my nose, and your right to yell "Fire!" or "Stop the steal" or "Storm the Bastille!" are likewise constrained when they infringe on my rights.
The practical implementation and realisation of rights is always a trade-off of rights vs rights. What is under discussion is where the balance of those trade-offs lies.
Having said that, there is a very strong case to be made that we need to address people's propensity to listen to, invest in, and act on obvious bullshit (e.g. flat-earthers, reptilians, etc.). More than education is required. My brother-in-law - a high-functioning, tertiary-educated small business owner and nice guy - is a dyed-in-the-wool conspiracist, believing the most outrageous things. Having a rational discussion with him has not budged him from his beliefs one iota. I believe it's a psychological condition as common as depression or anxiety.
There are no easy answers nor quick fixes for this problem.
>The belief that free speech must have limits is necessarily equivalent to the belief that a large part of the people are stupid and must be protected by the smart people, by preventing them from hearing anything that might influence their feeble minds and make them act wrongly.
I don't see why this is true; intelligent people can be harmed by speech just as much as stupid people can. Everyone can certainly be harmed by the immediate follow-on effects of speech. Some words can harm in ways that it is unreasonable to expect people to guard against, or that are impossible to guard against at all.
The scholarly literature on speech, harm, and legality has dozens of such examples.
>but I do not see any of the advocates of limiting free speech having the courage to say
I can courageously say now that I'm not in favor of restrictions on speech for reasons of "stupidity" but rather the demonstrable harm speech can cause.
I don’t agree: perfectly rational, otherwise smart people can be duped by lies; to suggest otherwise is to deny the evidence of the entire advertising industry’s existence. We need to protect everyone from predatory actors, propaganda, and lies, irrespective of their intellect, because we are all susceptible.
I'm not sure you have actually shown the equivalency.
But if you did, how is that different from worker protections, consumer protections, environmental protections, mandatory seatbelt laws, or a million other laws?
> Maybe this belief about most people being stupid is correct, and therefore free speech must indeed be limited, but I do not see any of the advocates of limiting free speech having the courage to say what they really think to the faces of those whom they want to protect.
Maybe people don't actually espouse that stated position because it's a strawman.
Intelligent people can be fooled and manipulated without being stupid - they have been for ages. What's different now is the speed and concentration of misinformation.
Platforms of mass misinformation and manipulation are curious beasts, and susceptibility to radicalization != stupidity. Something somewhat novel appears to be happening due to the new ways we communicate, and it's not unreasonable to suggest that "something" should be done about it.
edit: it seems people have misunderstood my comment as taking a position. I'm taking issue with the idea that "free speech doesn't need limits" followed by a listing of limits applied to free speech. If we can't agree that such limits even exist, and that perhaps they're necessary, any discussion below is fruitless.
I think parent was pointing out that the question of "where should the line be drawn and who should draw it" has already been settled. Those things are already illegal, so we don't need to impose further restrictions on speech in order to prevent those things.
> I think parent was pointing out that the question of "where should the line be drawn and who should draw it" has already been settled. Those things are already illegal, so we don't need to impose further restrictions on speech in order to prevent those things.
I'm skeptical that such a line can ever be truly "settled." Sure, it can be settled in a particular social and technological context, but when those latter things change, the line may need to be adjusted.
See Trope 9: "This speech may be protected for now, but the law is always changing."
TLDR: yeah, the law can change, but it's highly unlikely because "the United States Supreme Court has been more consistently protective of free speech than of any other right, especially in the face of media sensibilities about "harmful" words"
> See Trope 9: "This speech may be protected for now, but the law is always changing."
It's worth noting that no legally forbidden censorship has been happening with regards to the recent insurrection against congress.
> TLDR: yeah, the law can change, but it's highly unlikely because "the United States Supreme Court has been more consistently protective of free speech than of any other right, especially in the face of media sensibilities about "harmful" words"
But that's not a fixed fact of nature, it's a reaction to a particular social and technological context.
For instance, if someone discovers an idea that instantly turns 10% of those who hear it into murderous zealots (sort of like the poem in "The Tyranny of Heaven" by Stephen Baxter), that idea is going to be censored hard, and the Supreme Court will be like "yup, ban it."
Likewise, if some social change or technology renders the legal regime the Supreme Court has created a cause of serious dysfunction, then the Supreme Court is going to have to change that regime to accommodate it. Idealism's great, but not when it doesn't work.
> no legally forbidden censorship has been happening with regards to the recent insurrection against congress.
Yes, this is true as far as I am aware as well. But I find myself in a conundrum; had this "inciting" speech taken place in the town square or a public park, much of it likely could not have been censored because it would have been protected by the 1st amendment and the last 100 years of case law. Where, then, is the town square and public park of 2021?
Despite the fact that the legal protections of public speech haven't changed much in decades, the practical protections of public speech (as I discuss in greater detail in [1]) have indeed been eroded, because social media platforms and, apparently, web hosting and device makers are now the arbiters of the vast majority of speech. Free speech that only applies where virtually no one can hear you is a very limited free speech indeed.
The town squares and public parks are still there.
The existence of Twitter, Facebook, etc. has accustomed people to the ability to air their opinions globally free of charge; however, that is a very novel phenomenon. It's hard for me, having come of age in the 1980s/1990s, to see this as an inalienable right.
Of course they are still there! But the conversations increasingly aren't taking place there. If free speech is essential to a liberal democracy, we're moving to a place where the majority of speech takes place in non-free-speech areas, and that does not bode well for the health of our democracy.
> Of course they are still there! But the conversations increasingly aren't taking place there. If free speech is essential to a liberal democracy, we're moving to a place where the majority of speech takes place in non-free-speech areas, and that does not bode well for the health of our democracy.
You have a right to speak, but not a right to reach.
The majority of speech always took place in areas with some kind of limits. For instance: in some guy's tavern or in the pages of a local newspaper.
Furthermore, what's happening to Parler could also be conceived as a kind of self-defense exception: the factions it embraced have recently attempted to literally attack (in the name of a selfish demagogue) the heart of the liberal order that enables free speech, and they cannot be tolerated if toleration is to survive.
> If free speech is essential to a liberal democracy, we're moving to a place where the majority of speech takes place in non-free-speech areas, and that does not bode well for the health of our democracy.
The majority of conversation in democracies has always taken place in private venues that were free to control who had access, and absolutely did so based on political viewpoint.
That these private spaces are now virtual rather than physical doesn't change the essence of that fact.
>the question of "where should the line be drawn and who should draw it" has already been settled
I think it should at least be a line open to challenge without the challenger being accused of being brainwashed. If that line cannot be questioned, we're veering into dogmatism. There are very few good reasons for a special guarantee of free speech (versus, say, a special guarantee to be able to eat fries) which stand up to closer scrutiny.
The only convincing reason for a constitutional guarantee of freedom of speech is mistrust in the government, but again, that depends on where you draw the line. Food regulation is arguably just as important in our lives, but few mistrust the FDA enough to call for its abolition, or to propose a constitutional amendment banning all regulation of food.
This isn't a matter of what the law is, it's a matter of what the law should be - whether it's a constitutional law or not.
no it is not. Saying “covid is a hoax” should be protected by free speech because it is an expression of (stupidly false) opinion. Saying “we storm the Capitol at 8:00am on Jan 6” is a call to violent action, not an idea, thought, or opinion, and obviously must be taken down ASAP
> no it is not. Saying “covid is a hoax” should be protected by free speech because it is an expression of (stupidly false) opinion.
But I shouldn't be obligated to let someone put a sign saying that on my lawn, nor should I be obligated to remain friends with someone who is pushing that lie.
Most of the people who are complaining about free speech being limited are really arguing for things like the above.
You preventing signs on your lawn is fine; you preventing specific messages on systemically important communications infrastructure you happen to own is not. It’s the same reason that AT&T was heavily regulated back when it carried 90% of telecom traffic, before it was ultimately broken up via antitrust.
You running a corner store the way you want is fine, you running the only store in the country the way you want is not.
> You preventing signs on your lawn is fine; you preventing specific messages on systemically important communications infrastructure you happen to own is not. It’s the same reason that AT&T was heavily regulated back when it carried 90% of telecom traffic, before it was ultimately broken up via antitrust.
You know, you don't need AWS to run a website, right? Similarly, newspapers have often been local monopolies, but as far as I know, they've always been able to decline to publish a letter to the editor.
Whoever wants to stick a sign on my lawn is going to come up with some rationale to force me to do it, but that doesn't mean it holds any water.
We got to the latter because of years and years of the former.
"You're allowed to talk people into believing that they need to violently rebel, but you're not allowed to actually do the rebelling" is not a particularly reasonable position.
I'm working on a blog where users will post about their experience with a particular drug and its side effects. Since I am paying for hosting and I created the blog, I will NOT allow any pseudoscience. Am I limiting free speech? No.
There is a good reason Twitter, Facebook, and YouTube remove certain content. They have the right to remove whatever they want.
All speech is free speech; avoid hyperbole here, because it doesn't help. Your examples are both kinds of speech that people think should be limited. Trying to discard one as not speech, rather than focusing on the question at hand of what speech should be limited, does nothing but rile up those who disagree with your examples.
If someone publishes fake news about vaccines, it then takes a lot of effort for people to keep explaining to others why it is not true. It is harmful to society, and unfair: it takes less time to invent a new hoax than to fact-check it.
Just like littering on the ground is considered an offence, so should publishing fake news be. It doesn't hurt any one person, but it hurts society.
There are also objective criteria for determining whether something is fake or not, so it is possible to create laws that forbid it without limiting freedom of opinion.
Why is everyone debating whether there should be limits on free speech when that is irrelevant? Free speech is something provided by the government, not by private companies. Any private company, such as a restaurant, can throw you out for any reason outside of discriminating against a protected class.
What seems to be under attack here is the right of individual companies and people to decide who they wish to work with. Everyone who is criticizing big tech for choosing not to work with certain people is forgetting that that same principle can be applied to them. Do you want to be forced to work with companies you abhor?
> What seems to be under attack here is the right of individual companies and people to decide who they wish to work with. Everyone who is criticizing big tech for choosing not to work with certain people is forgetting that that same principle can be applied to them. Do you want to be forced to work with companies you abhor?
You mention protected classes in your first paragraph, but then act like it's self-evident that it's bad to "force people to work with (and serve) people they abhor". What else is the concept of a protected class if not this?
It's clear that we already don't have full freedom of association, and the question is where the line should be drawn. When people talk about big tech regulation, it's undergirded by many of these platforms' unique amount of market power. This isn't a novel concept; utility companies are an example of a natural monopoly: benefiting from scale, considered critical infrastructure, and legally prohibited from cutting off power to its customers, even if they don't like their politics. The topic under discussion here is whether the "new public square" (or things like payment infrastructure!) are considered critical enough to society that we want to protect access to them.
I'm constitutionally (not "Constitutionally") averse to ill-considered regulation, and most of the conversation by government about tech regulation is pants-on-head stupid. But the dissonance between the two paras in your comment is a good indication that this discussion isn't nearly as simple as you're framing it.
> The topic under discussion here is whether the "new public square" (or things like payment infrastructure!) are considered critical enough to society that we want to protect access to them.
That's one core question. Another is whether it should be up to these companies to police their own platforms. Inciting violence is illegal. They're banning people and platforms inciting violence.
"Repeal section 230" seems to be about making these companies responsible for policing their own platforms. When people incited violence/genocide on Facebook in Myanmar, some held Facebook partially responsible. Now, people are inciting violence on Facebook in the US, and it's still an open question whether Facebook should be held liable.
When people incited violence/genocide on <radio> in Rwanda, some held <radio> partially responsible. Now, people are inciting violence on <radio> in the US, and it's still an open question whether <radio> should be held liable.
There are important differences, but the parallels between the Rwandan genocide and the growth of talk radio in the US in the 90's have always struck me as interesting.
That being said, I think that the new public square argument is strong, and if we're going to have internet monopolies, then they probably need to be regulated similarly to the utilities.
Alternatively, they can be broken up. I don't think the current state is sustainable over the longer term.
I agree that this is a limit on corporations’ free speech. Monopolistic corporations do have a well founded legal limit to their free speech rights.
For example, it’s a form of free speech for Microsoft to decide how they write their own software. One of those decisions was to bundle a free web browser in with their OS and tie the OS function tightly together with that browser. Microsoft Corporation was almost broken up by the government because they did that.
The issue that Greenwald is raising is similarly rooted in antitrust:
> If one were looking for evidence to demonstrate that these tech behemoths are, in fact, monopolies that engage in anti-competitive behavior in violation of antitrust laws, and will obliterate any attempt to compete with them in the marketplace, it would be difficult to imagine anything more compelling than how they just used their unconstrained power to utterly destroy a rising competitor.
Freedom of speech is not absolute. Just like individuals’ free speech rights have limits such as incitement to violence, corporations also have limits to their “free speech” rights based in anti-trust law and anti-racketeering laws in how they can attack potential competitors.
It isn't really free speech; you are right. It is more of an antitrust issue: that a couple of companies could get together to completely ban another one. We should consider whether too much power has been concentrated in the hands of a few tech companies, if their content moderation policies can so easily be misinterpreted as free speech issues.
That the outcome here is banning a community that was apparently mostly used for hate speech (never actually checked it out) is... maybe a red herring? I mean, they obviously didn't build these massive companies with the primary goal of banning niche hateful websites.
If we were to, say, break up the social media and internet infrastructure giants, then this sort of website would probably be able to persist by hopping from host to host until it found one without any morals. But we could consider whether losing the ability to perform this kind of deplatforming would be worth it, in exchange for a much more competitive marketplace.
Keep in mind that sites like Parler are not actually banned. They could simply hook up their own computer to the internet and run their site if they wished. No one has a natural right to use a convenient service like AWS, and if AWS refuses to do business with you, there are hundreds of other hosting companies to choose from.
Ultimately, if not one of the hundreds of hosting companies out there wants to work with you, that should be a very strong indication that the community is not something we want. But if you really really want this community anyways, just hook up your computer to an internet connection and host the site yourself.
> Free speech is something provided by the government, not by private companies
It is not provided by the government. Congress is prohibited from passing laws that abridge freedom of speech; Congress is not the fountain that free speech springs from.
It is perfectly relevant to discuss freedom of speech in contexts where someone else might be doing the abridging besides the U.S. Congress.
So when a politician or pundit cherry-picks one sentence out of a larger statement and spins that to imply something other than what the speaker meant, perhaps even the complete opposite of what he meant, is that "fake news" or is that "opinion?" And who decides?
However, I want to repeat what I have already replied to another similar post.
All these speech crimes should be punished according to the law, as soon as they are committed.
Restricting someone's speech by denying them access to publication media, just because it is believed that they might commit some speech crime in the future, is clearly an arbitrary and baseless restriction of the right to free speech.
The problem is people are idiots who will believe someone who calls themself Q and claims the deep state is trying to take down the president. Once you convince people of that you don't need to use illegal speech to inspire violence, they're already inspired.
If I threaten to kill you, or commit fraud, in person, you call the police, give them what information you have about me, and ideally I get a knock on the door. If I do it online, well, you don't have much recourse.
If the author of such an online message cannot be identified, then the recourse is what is already common practice, to delete the offending message or possibly to replace the deceiving information with correct information.
If the author can be identified, which is frequently true, then it should be the same for online as for in person.
If you want to frame it that way then fine by me: "there are no limits to free speech, but there are limits to what can be described as free speech". I'd argue that it's effectively exactly the same problem seen from a slightly different angle.
Saying things like "I support Nazis" could be considered a valid political opinion protected by free speech in some countries and illegal hate speech in others.
I'm not American and I don't have a strong opinion on the events you refer to (I actually had to read the replies to understand what you were getting at), so I definitely would've told you exactly the same thing in June of 2020. Feel free to ask me again whenever you see fit.
Protesting anything is fine. It's how far you take that protest that is the problem. When does a protest become a riot?
When you block traffic?
When you enter a secured space?
When you break into a federal building?
When you set fire to a federal building?
When you set fire to cop cars?
When you break windows of local businesses?
When you loot local businesses?
When you spray paint hate speech?
When you threaten cops families with death?
When you throw fireworks at people?
When you throw Molotov cocktails at people?
These all occurred in large numbers between the death of George Floyd and the Capitol riot. How many times did big tech step in and stop the coordinating efforts for those protests/riots?
Hm. I think a lot of people reasonably draw the line on speech somewhere after the protests & property damage that happened during 2020, but before action coordinated to take control of the seat of government/potentially kidnap or kill elected representatives.
Or, people are just inconsistent and not thinking about things beyond their politics.
People will praise the Arab Spring organizing on Twitter without considering the implications for the events at the Capitol on Jan 6th.
People are fine with Parler getting banned by all their vendors for not moderating violence and threats. But people would lose their minds if the same thing happened to Facebook for its failure to moderate violence around the Rohingya genocide.
> People will praise the Arab Spring organizing on Twitter without considering the implications for the events at the Capitol on Jan 6th.
The reason why someone might hold these competing beliefs is simple: they strongly value democratic institutions. Violence, in the name of promoting democratic institutions, and ideally expanding human rights, is justifiable. Violence in the name of authoritarian insurrection is not.
Now, of course this gets really tricky, because many people on Parler, and in the capitol riots, fully believed that they were protecting democracy from massive voter-fraud. No clear answer to address that issue, but it is something that democratic societies will need to reckon with. How does one preserve democratic ideals (including promoting free speech, to whatever extent possible), while still maintaining a healthy society that doesn't tear itself apart?
> People will praise the Arab Spring organizing on Twitter without considering the implications for the events at the Capitol on Jan 6th.
This is a great point. It's also key to consider that some of the groups that praised the Arab Spring were the Obama State Department which was led by Hillary Clinton at the time.
It appears the threshold is "support violent insurrection in other countries but stamp out the discussion of it here".
Or, perhaps, "support violent insurrection after peaceful protests against authoritarianism, human rights violations, political corruption have failed, when there is no further peaceful opportunity for opposition."
(The United States had an election, right? One with no more than the usual, minor, issues, right? One where legal actions were taken and weighed appropriately, right? One where one specific loser seems only to be complaining about losing, right? One where all of the other contemporaneous votes were not objected to, right? One that will be revisited in 2 to 4 years, right?)
Serious question: Which of the lawsuits went into discovery and were heard to weigh those claims? I'd love to read the details as that could dispel rumors and bs.
"In this case, the district court issued an emergency temporary restraining order at the plaintiffs’ request, worked at a breakneck pace to provide them an opportunity for broader relief, and was ready to enter an appealable order on the merits of their claims immediately after its expedited hearing on December 4, 2020. But the plaintiffs would not take the district court’s “yes” for an answer. They appealed instead. And, because they appealed , the evidentiary hearing has been stayed and the case considerably delayed. For our part, the law requires that we dismiss the appeal and return the case to the district court for further proceedings."
Georgia's a pain in the butt. Apparently, their official court documents site wants $.50 / page for the PDFs of filings. That's not happening.
Many of the court records can apparently be found on democracydocket.com, but my browser is complaining about the site. Sorry.
Trump v. Kemp, 1:20-cv-5310 (N.D. Ga.) (https://www.courtlistener.com/recap/gov.uscourts.gand.285271...) (from https://www.brennancenter.org/our-work/court-cases/voting-ri...) is interesting, though. The plaintiff's first claim is that the election was not conducted in accord with election laws established by the GA legislature. The court decides "Therefore, Plaintiffs Electors Clause claim belongs, if it belongs to anyone, only to the Georgia General Assembly" and since none of the plaintiffs are members of the assembly, they don't have standing. (That's rather fine logic chopping, but....)
The rest of the ruling seems to be that the governor and secretary of state of the state are not the ones legally responsible for verification of ballots, so the second claim cannot apply to them.
Ok, now I've been sucked in. I started from https://en.wikipedia.org/wiki/Post-election_lawsuits_related..., which lists a number of the cases, mostly in federal court, and mostly (I think) appeals, which don't deal with matters of fact. I'm having to dig through those to the original cases, in state courts.
For example, Bowyer et al. v. Ducey et al. is currently unresolved (https://www.govinfo.gov/app/details/USCOURTS-azd-2_20-cv-023... is the latest federal dismissal), but the dismissal refers to Ward, CV 2020-015285 (Ariz. 2020) (Doc. 81-1), and has a short explanation of the ruling that goes into the evidence.
So, that brings me to Ward v. Jackson et al (CV2020-015285) (https://www.clerkofcourt.maricopa.gov/records/election-2020/...). https://www.clerkofcourt.maricopa.gov/Home/ShowDocument?id=1... is the minutes of the first evidentiary hearing and https://www.clerkofcourt.maricopa.gov/Home/ShowDocument?id=1... is the minutes of the second evidentiary hearing and ruling. (The minutes don't include the evidence, just who gave testimony and what the evidence is.) The ruling is 1) Background, 2) The Burden Of Proof In An Election Contest ("The Plaintiff in an election contest has a high burden of proof and the actions of election officials are presumed to be free from fraud and misconduct."), 3) The Evidence Does Not Show Fraud Or Misconduct (see below), 4) The Evidence Does Not Show Illegal Votes, 5) The Evidence Does Not Show An Erroneous Vote Count, and 6) Orders.
P: "A.R.S. § 16-672(A)(1) permits an election contest “[f]or misconduct on the part of election boards or any members thereof in any of the counties of the state, or on the part of any officer making or participating in a canvass for a state election.” Plaintiff alleges misconduct in three respects. First is that insufficient opportunity was given to observe the actions of election officials."
C: "The observation procedures for the November general election were materially the same as for the August primary election, and any objection to them should have been brought at a time when any legal deficiencies could have been cured."
P: "Second, Plaintiff alleges that election officials overcounted mail-in ballots by not being sufficiently skeptical in their comparison of signatures on the mail-in envelope/affidavits with signatures on file."
C: "Maricopa County election officials followed [the Secretary of State’s Election Procedures Manual, with multiple verification steps] process faithfully in 2020. Approximately 1.9 million mail-in ballots were cast and, of these, approximately 20,000 were identified that required contacting the voter. Of those, only 587 ultimately could not be validated.[...] The Court ordered that counsel and their forensic document examiners could review 100 randomly selected envelope/affidavits to do a signature comparison. [...] Of the 100 envelope/affidavits reviewed, Plaintiff’s forensic document examiner found 6 signatures to be “inconclusive,” meaning she could not testify that the signature on the envelope/affidavit matched the signature on file. She found no sign of forgery or simulation as to any of these ballots. Defendants’ expert testified that 11 of the 100 envelopes were inconclusive, mostly because there were insufficient specimens to which to compare them. He too found no sign of forgery or simulation, and found no basis for rejecting any of the signatures. [...] None of them shows an abuse of discretion on the part of the reviewer. Every one of them listed a phone number that matched a phone number already on file, either through voter registration records or from a prior ballot. The evidence does not show that these affidavits are fraudulent, or that someone other than the voter signed them. There is no evidence that the manner in which signatures were reviewed was designed to benefit one candidate or another, or that there was any misconduct, impropriety, or violation of Arizona law with respect to the review of mail-in ballots."
P: "Third, Plaintiff alleges errors in the duplication of ballots. Arizona law requires election officials to duplicate a ballot under a number of circumstances. One is where the voter is overseas and submits a ballot under UOCAVA, the Uniformed And Overseas Citizens Absentee Voting Act. Another is where the ballot is damaged or otherwise cannot be machine-tabulated."
C: "The Court ordered that counsel could review 100 duplicate ballots. Maricopa County voluntarily made another 1,526 duplicate ballots available for review. [...] Of the 1,626 ballots reviewed, 9 had an error in the duplication of the vote for president. Plaintiff called a number of witnesses who observed the duplication process as credentialed election observers. There was credible testimony that they saw errors in which the duplicated ballot did not accurately reflect the voter’s apparent intent as reflected on the original ballot. This testimony is corroborated by the review of the 1,626 duplicate ballots in this case, and it confirms both that there were mistakes in the duplication process, and that the mistakes were few. When mistakes were brought to the attention of election workers, they were fixed. The duplication process prescribed by the Legislature necessarily requires manual action and human judgment, which entail a risk of human error. Despite that, the duplication process for the presidential election was 99.45% accurate. And there is no evidence that the inaccuracies were intentional or part of a fraudulent scheme. They were mistakes. And given both the small number of duplicate ballots and the low error rate, the evidence does not show any impact on the outcome."
The Arizona Supreme Court decision (https://www.clerkofcourt.maricopa.gov/Home/ShowDocument?id=1...) makes for good reading. It's pretty clear, with a summary of the evidence (for the parts that were appealed). Weirdly, it takes the three claims in reverse order. I particularly liked the statements that the Secretary [of State, of Arizona] represented an error of 0.37%, while the appellants say it represented an error of 0.55%; the trial court accepted the appellants' number, but it and the Supreme Court note that, extrapolated to the total number of duplicated ballots, that doesn't come close to what would be required for a recount. The appellant offered no evidence that the 1,626-ballot sample was inadequate. The court accepts that there were irregularities, but holds that they did not even render the result uncertain.
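The error and accuracy figures are easy to check. Here's a quick sanity check of the numbers quoted from the ruling (my own arithmetic, not anything from the filings):

```python
# 9 duplication errors were found among the 1,626 ballots reviewed
# in Ward v. Jackson; check the rates quoted in the rulings.
errors = 9
reviewed = 1626

error_rate = errors / reviewed   # the appellants' figure
accuracy = 1 - error_rate        # the figure quoted in the trial ruling

print(f"error rate: {error_rate:.2%}")  # 0.55%
print(f"accuracy:   {accuracy:.2%}")    # 99.45%
```

So the appellants' 0.55% is just 9/1,626, and the court's "99.45% accurate" is its complement.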
So, there's one. I'll go fishing for more.
From what I know about law (NOT A LAWYER!), they're probably all in state courts---elections are state procedures---and thus not reported like the federal cases (which seem to fall into two classes: appeals, which deal with laws and procedures, not evidence, and those that were dismissed due to lack of standing, since state procedures are a matter for state laws).
Let's talk AI. In this case, Clark county used a machine to sort mail-in ballots and to do a first pass validation of signatures. For 453,248 ballots. For reasons, including that the signature exemplars from the DMV were less than 200DPI, 70% of the ballots were viewed as sketchy, while 30% were found machine-okey-dokey and passed on without further review. The complaint wants those 130,000 votes invalidated.
The interesting thing is that of the 70% that required manual review, 1-1.5% were initially rejected due to signature mismatch. Most were "cured" by voters (presumably by officials actually contacting the voters), leaving 0.3% to be completely rejected. The court notes that Washoe county (not using the magic machine) had a pre-cure reject rate of 1.53%. [So, if I have my sums right, the complaint wants 130,000 (30% of mail-in votes) plus some unknown number (greater than 1%) of the remainder invalidated, full stop, because between 1,400 and 6,900 were invalid. Or something.]
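For what it's worth, here's my back-of-the-envelope version of those sums (the percentages are the ones quoted above; the 130,000 is the complaint's figure, which straight arithmetic puts a bit higher):

```python
# Rough check of the Clark County mail-in numbers quoted above.
total_mail_in = 453_248

machine_passed = 0.30 * total_mail_in   # passed the signature check with no manual review
manual_review = 0.70 * total_mail_in    # flagged as "sketchy" and reviewed by hand
final_rejects = 0.003 * manual_review   # the 0.3% ultimately rejected after curing

print(round(machine_passed))  # 135974 -- the complaint's "130,000"
print(round(final_rejects))   # 952 ballots finally rejected
```

So the complaint wants roughly 136,000 machine-approved votes thrown out over a process that, where humans did check, ended up rejecting on the order of a thousand ballots.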
The complaint also includes Nevada's electronic voting machines ("not less than 1000 illegal and improper votes" counted and "not less than 1000 legal and proper votes" not counted). And no less than 15,000 illegal and improper mail-in votes from out-of-state. And the USPS, which was "directed" to deliver mail-in ballots where the addressee was deceased, moved, or had no known affiliation with the address. No less than 500 votes from dead people. There are also allegations of fraud in vote counting and observation. (It's a laundry list.)
According to the court, the Agilis sorting machines were determined to be ok in a previous case, Klaus v. Cegavske (which I have yet to look up).
As far as evidence, the court ordered the contestants to disclose all witnesses and evidence they intended to use by 5:00 pm Nov. 25. The contestants did not issue their first deposition notices until Nov. 27. Therefore the contestants' evidence consists of non-deposition witness declarations, with no cross-examination. The court considered these hearsay.
Michael Baselice offered expert evidence on the incidence of illegal voting based on a phone survey. He was unable to identify the source of his data and conducted no quality control. Similar questions faced Jesse Kamzol's analysis of various databases of voters. Scott Gessler's report lacked citations of facts and evidence and did not include any exhibits in support of his conclusions; his conclusions are based on a handful of affidavits. The complainants' experts were found to be of little or no value, but were not excluded from consideration, although given little weight.
The defendants provided a stack of testimony that court judged credible (including Dr. Michael Herron and the president of the company manufacturing Agilis).
Based on Herron's testimony, the court finds no evidence of a higher rate of voter fraud associated with mail-in voting. He also found an illegal vote rate of 0.00054 percent between 2012 and the 2020 general election. Herron testified that the contestants' implied double voting rate was 89 times larger than a conservative academic estimate. He finally testified that the contestants provided no persuasive evidence that fraudulent votes affected the presidential margin of 33,596 votes. (Gessler also testified that he had no personal knowledge of any voter fraud.)
The court found that the record does not support allegations of problems with provisional ballots.
The court found that the Agilis machine did not accept any signatures that should have been rejected and that the record did not support a finding that ballots with improper signatures were counted.
And on, and on. The court finds that the record does not support the allegations of the complainants, hearsay declarations included, under "any standards of evidence". The contestants failed to meet their burden and the case was dismissed.
J claims that election workers coached voters and that she was instructed not to ask voters for photo ID. However, J does not name the location, give the number of incidents, or name the employees. J never told a supervisor of the incidents nor took steps to address them. (J only came forward after reports of Biden winning.) J also claims that, at the TCF center, she was directed not to compare signatures and to "pre-date" absentee ballots received at TCF on Dec. 4. The State Elections Director, CT, answers that signatures were previously verified at the Detroit Election Headquarters and that the "pre-dating" involved completing a data field inadvertently left blank during that earlier process. (I wonder if Michigan uses the envelope/affidavit and anonymous ballot approach like Arizona, which ideally would have stripped off the signature part before vote counting, for privacy. Michigan may need to do some work if vote counters can match signatures with ballots.)
State Senator RJ wasn't there and makes claims based on other affidavits.
AS was a Republican challenger who didn't attend the training. AS claims out-of-state license plates brought "tens of thousands" of ballots in at 4:30 am, and that every ballot after that was for Biden. CT responds that rental trucks with out-of-state plates were used, that all ballots were brought in the same way, that the number of ballots AS claims is speculation, and that Biden received 220,000 more votes.
DG claims large numbers of ballots were delivered in unsealed containers. Plaintiffs never cited any legal requirement that the containers be sealed.
PC claims that computers were connected to the internet, based on an icon on one of the computers, but provides no other evidence. CT asserts that only the computers that needed to be connected were. (The Court notes that CT, in a Facebook post prior to the election, claimed that Democrats were using COVID to commit election fraud (they see you when you're sleeping, they know when you're awake; I'm getting punchy here), and that this predilection to see fraud undermines his claims.)
MC was an IT consultant from Dominion Voting Services at TCF. MC claims to have witnessed "nothing but fraudulent events take place," including tabulating machines that jammed a lot (?) and a cover-up of the loss of vast amounts of data. No one else corroborates MC's claims, and by "no one" the court means the other complainants.
Ex-Assistant Attorney General ZL claims that he was mistreated, that ballots were processed without confirmation of eligibility, that he was unable to observe because he was required to stand 6 feet away, and that he was excluded from the room after leaving to get something to eat because he was a Republican. However, two Democratic observers were also excluded, with the reason being the maximum occupancy of the room. Further, as mentioned above, voting eligibility was determined elsewhere. A large monitor was provided to allow observation at a safe distance. ZL also did not file any complaints at the time.
There's a bit about injunctive relief, irreparable harm, and legal remedy. (Michigan law provides for the Secretary of State and county clerks to audit races, but that doesn't seem satisfactory for the plaintiffs.)
The court finds that the plaintiffs have legal remedies and suffer no harm without an injunction, but that the defendants would be harmed by an injunction, as well as the public interest. The plaintiff's affidavits are contradicted by the State Election Director, whose account is corroborated by five other affidavits. An injunction and independent audit is denied.
This one is kind of interesting, even though the "independent audit" thing seems only like some kind of delaying tactic, rather than trying to get the election discarded.
On one hand, there's the assumption that election procedures and officials are legal and fair, and the further consideration that a ruling for the plaintiffs would be a big deal. (Judges apparently don't like to be backed into a corner and forced to make rulings that result in big deals, particularly on short notice. The phrase "judicial activism" shows up, for example.) On the other, these are serious accusations that are being considered seriously.
In this case, the court decided that the plaintiffs had other options they could use, and that the affidavits were not credible enough to override other considerations.
It's been a decade, I think people's opinions of the Arab Spring have been revised since then. The Arab Spring worked out best for the actual country it originated in, Tunisia.
But you know that people only think about the narrative. If it's BLM, everything goes. If it's the other side, it's evil and has to be stopped. And the best thing is that they are completely oblivious to their double standards.
That is one perspective, although it’s very limited in its nuance. A lot of people supported BLM’s pre-violence protests because they wanted police held accountable. And a reasonable person can discuss whether the violence would have escalated if the police hadn’t been so aggressive.
Compare that to the Capitol insurrection, where the goal was to overturn the results of an election. To overturn the government. Where the people inciting the violence were in the same tent.
There was never good intent on the side of the insurrectionists, and they escalated to violence on their own.
> where the goal was to overturn the results of an election
If that was really the goal you'd think they would have gunned up a little more than they did. Instead, very few people were actually armed, so that seems rather disorganized for an actual coup.
> A lot of people supported BLM's pre-violence protests because they wanted police held accountable.
So protesting against police violence makes it OK to loot stores and burn cars that have nothing to do with it? How is that EVER OK? If you condemn violence you need to condemn violence no matter who instigates it, not only when it's convenient to do so. Because this is exactly what was happening 6 months ago: people cheerleading the violence on Twitter if it was BLM related.
I do not think that it is very hard to determine when a protest becomes a riot, but I think that it is extremely difficult to determine who is guilty for transforming a protest into a riot.
I have no idea what really happened last year in the USA, because the truth cannot be discovered just from video broadcasts on TV or the Internet.
Nevertheless, I have seen much more closely a large number of peaceful protests in other countries, which eventually became riots.
However, it became clear later that, in most or all cases, the transformation of the protests into riots was done by undercover police agents or secret service agents who had infiltrated the protests, in order to discredit them so that their demands could be ignored and their organizers punished.
> These all occurred in large numbers between the death of George Floyd and the Capitol riot. How many times did big tech step in and stop the coordinating efforts for those protests/riots?
They did close accounts that called for violence. I have literally seen that, on both Twitter and Facebook. Not perfectly, but they did not refuse to delete tweets or whole accounts.
There are dozens of accounts for groups in Portland that aren't specifically calling for violence because they use code words. Despite violence happening constantly for 5+ months at the events being organized...
I haven't actually seen any proof that the Capitol riot was anything other than a protest that got out of hand (like what was described every time there was violence and riots after BLM protests across the country). It only takes a few dozen agitators to get a mob mentality going.
Facebook suspended #WalkAway, a group of 500K people that joined to support leaving the Democrats because they were being alienated by their policies (their words, not mine). No threats, no violence. Straight up deleted the group with no recourse by the organizers. All Facebook said was that the page allegedly ran afoul of “hateful, threatening, or obscene” content, but no proof was actually given.
Because they didn't call for a violent overthrow of the government but for a peaceful jazz fest like they had 9 years prior. They are calling for an Occupy Wall Street 2.0.
> Fifty days — September 17th to November 3rd.
>
> Let us once again summon the sweet, revolutionary nonviolence that was our calling card in Zuccotti Park.
If your stated intent explicitly calls for ‘non-violence’, I expect this doesn’t violate ToS despite potential inferences from the sensational branding.
I never heard of this before so I don’t even know what happened on Sept. 17. Was it violent?
Parent was obviously referring to the 34 deaths, theft, forcing people to comply with demands (raise your fist or face the mob), and millions in private property damage due to the BLM rioting.
Still, it's not relevant because they weren't exercising freedom of speech, just incitement of violence. Same as the people at the Capitol.
The large Black Lives Matter protests all over the country were overwhelmingly lawful and peaceful. The main exceptions were the scenes in many places of cops beating the shit out of people, etc. https://www.nytimes.com/2020/06/02/style/police-protests-vid...
Did you read your Wikipedia list? We have a whole bunch of people shot by cops, a few looters shot by store owners, people shot in unrelated murders that happened near protests, people run over by cars that drove into crowds, some people shot when groups of armed racists started gunfights with groups of armed antiracists, etc.
This list does not at all support the thesis that organized BLM protests were intentionally violent.
* * *
Yeah, there was a time that a group of white BLM sympathizers heckled another white BLM sympathizer who was eating at a restaurant table on the sidewalk, and the heckling was caught on video. The people involved are obnoxious jerks (organizers and most others in the BLM movement also agree they are jerks).
Similar heckling by all sorts of groups of jerks happens all over the country on a regular basis. For example, a bunch of MAGA folks were following and heckling Lindsey Graham at an airport a few days ago.
But you really think heckling at a restaurant should be compared to an armed mob breaking into the Capitol building, chanting for the Vice President's execution and for the overthrow of the US government, beating cops to death, ransacking offices, stealing sensitive national security materials, and literally shitting all over?
“If this country doesn’t give us what we want, then we will burn down this system and replace it. All right? And I could be speaking figuratively. I could be speaking literally. It’s a matter of interpretation,”
> Hawk Newsome has no relation to the Black Lives Matter Global Network (“BLM”) founded by Patrisse Cullors, Alicia Garza, and Opal Tometi — and is not the “president” of BLM or any of its chapters. Only BLM chapters who adhere to BLM’s principles and code of ethics are permitted to use the BLM name. The reason for this is simple: unaffiliated uses of BLM’s name are confusing to people who may wrongly associate the unsanctioned group and its views and actions with BLM. As BLM has told Mr. Newsome in the past, and as is still true today, Mr. Newsome’s group is not a chapter of BLM and has not entered into any agreement with BLM agreeing to adhere to BLM’s core principles.
If you look hard enough you can find unaffiliated yahoos of every ideological persuasion and self-proclaimed identity (libertarians, stoics, Christians, vegans, Canadians, chess players, computer programmers, stamp collectors, minivan owners, ...) spouting militant nonsense. Such statements should be condemned (and have been by those nonviolently protesting police brutality), but cannot be taken as sweeping evidence that everyone with similar self-proclaimed ideology or identity is a supporter of hateful violence.
If you intend to apply this kind of standard, then surely every organization that calls itself "conservative" should be similarly held responsible for the actions of the MAGA insurrectionists, right?
Conservative groups still exist outside Parler. And they can and do coordinate there.
The same platforms were closing accounts calling for violence and preparing it during BLM protests. The difference is that while people on Parler claim that violent subgroups don't represent all Trump supporters, platforms that don't allow calls for violence are not good for them.
"Of course there's the problem of where the line should be drawn and who should draw it"
That's not "a problem", that's the only problem.
Of course hatred and bigotry and false scientific claims and calls to violence, etc., are negative and of course we wish they would vanish.
But a Ministry of Truth would be worse.
I am willing to build and maintain mental, emotional and psychological armor against very negative, harmful speech if it helps avoid erecting a Ministry of Truth.
But... we did that. And the answer arrived at through representative democracy thus far is:
(1) For matters where no legal liability, or only civil liability (except for sex trafficking, and copyright law which has its own special rules) would be involved, mostly leave it up to the free discretion of each online provider to determine and address unwelcome content.
(2) a whole bunch of crime-specific rules in criminal law, including (relevant to recent events) an absolute prohibition against knowingly providing any good or service (with narrow medical and religious exceptions) that will be used in “terrorism” offenses.
> I feel like we've all been a bit brainwashed by the government in our notion that "free speech" must have limits. I very much doubt that that is true. I think the speech part should always be 100% free.
I think you should be free to say what you want and to think what you want, I also think a privately owned space has the right to remove people who are saying things they don't want in that space.
Here are three examples:
(1) You own a bar and someone comes in and starts calling your patrons racial slurs, can you throw them out?
(2) You start a social media company and somehow a large contingent of your initial user group turns out to be a hate group. Shouldn't you be allowed to remove the group and their hateful content? Do you really want to be REQUIRED to leave it on the site unless it is breaking a law?
(3) You start a social media company and it grows to the size of twitter. Your site is one of the most visited sites on the internet, and is getting overrun with hate speech. Don't you want to be able to remove that?
Are you fine with one and two but not three? Where's the line? If you want to argue that Facebook and Twitter are utilities and should be regulated as such, what do they get in return? Don't forget, utilities are often government sanctioned monopolies or near-monopolies in "exchange" for all of their regulation.
Yes. Implicit in these "free speech" arguments is the idea that the government should be able to force private companies to publish user content that violates their policies. This is the sort of thing that the 1st Amendment is actually supposed to protect us from.
In all the above cases (person spouting epithets at your bar, social media users posting hate on your website) these are people with whom you have no contract. They are there at your permission, as long as they behave according to your standards.
When you rent space to someone and they start using it in a way you don't like, maybe even specifically violating their lease, you can throw them out, but it becomes a legal process called eviction. You can't just put their stuff on the sidewalk and change the locks without going through that process. This is how the game is played when you get into that business.
Maybe that is the part that's missing with the AWS/Parler situation. AWS doesn't want them, but they leased space and services to them and there is a contract. Breach of contract is not something that either party to the contract can determine, because they both have conflicts of interest. If we had a judge review the contract and approve the eviction, at least there would be a lot less basis to claim that they are acting capriciously or out of bias.
Amazon is not a utility; they are a private company which leased Parler "space" in an unregulated industry. The contract that Parler agreed to has an Acceptable Use Policy which is very broad. If Parler believes AWS breached their contract, they can sue just like anyone else in a contract disagreement.
Maybe in the long term we'll look at hosting more like real estate, and it will have more laws and regulations about what providers can and cannot do, but I doubt it, and I'm willing to bet the trade-offs that come with that would be truly terrible. Imagine how many things wouldn't get off the ground if contracting for hosting were even 1/3 as involved as the real estate process.
More importantly, hosting isn't a monopoly. There's no government-provided moat that justifies that level of regulation. The way I see it there are only two things that are utilities in the internet space: ISPs (both consumer-facing and interconnects) and DNS, and DNS is arguable.
Landlords are also private, and infinitely farther away from being a monopoly, but they are still not allowed to just throw people out because they don't like their politics. There's a process.
It's complicated. If you own a giant bar, should you be able to close down a tiny bar next door because there is hate speech inside, because you happen to be friends with the electric company?
Parler wanted to open a new platform and attract its own users. Only incidentally was it (like everything else these days) dependent on a number of other services to work.
If you own a building and find out that the owners of a bar that rents space in your building are allowing a terrorist group to plan an insurrection, are you allowed to cancel the lease and evict them? Sure seems like a breach of lease to me.
A lease is just a contract. It can specify conditions for termination. Without reading the contract, it's impossible to know if it is being breached or not.
> If you own a giant bar, should you be able to close down a tiny bar next door because there is hate speech inside.
No, and I didn't suggest that.
> electric company
Regulated. The electric company is a regulated monopoly. Hosting companies aren't. If an ISP had banned traffic from Parler, that would be an issue, it is a regulated monopoly. If Amazon shuts them down, there's no issue, it is an unregulated service provider.
I'm not sure this is a good metaphor. None of Apple, Amazon, or Google (no matter how hard they try) is a social media company. None of them is in direct competition with Parler, and shutting it down won't increase their market share one iota. None of the social media companies are banning people because of things they said on Parler, and I doubt that the pressure applied to Apple/Amazon/Google came from outside the companies; this is most likely the result of engineers on the AWS team pressuring their bosses, and it snowballing.
Thought exercise: what if say, Twitter, wanted to put one of its competitors out of business, and decided to engage in mass creation of accounts/content on that competing platform with the intention of violating their ToS and getting the platform kicked off of their hosting provider. Is this a viable business strategy now? Heck, is this even illegal?
When Parler became a liability for any company associated with it, to their shock, it turned out no company wanted to be associated with it. In a world where people "vote with their wallets" companies like Amazon, Google and Apple would prefer to avoid giving people a reason to do just that.
I don't understand the shock and surprise. No US company is going to choose anything over their own bottom line. Certainly not for a site as small and niche and literally riddled with hate speech as Parler.
Parler and its customers can say whatever they want to whomever they want. Can they force Amazon to take their money? Absolutely not. Should they be able to? No: forcing Amazon to host Parler would be a violation of Amazon's own right to free speech.[0]
From Parler's point-of-view it would be unfortunate if they tied themselves to AWS specific infrastructure. There's absolutely no way that they now have some kind of "right" to be hosted by Amazon. Also, it's just poor planning on their part.
Well, it's been proven that at #3 your site has the ability to sway elections in democratic countries and help topple authoritarian regimes. So yes, the line is somewhere between #2 and #3.
>"Obviously all of the insidious planning and hatred that presumably occurred on Parler is abhorrent. I think I can hate all of those things without believing that the site should be censored."
I have yet to see any evidence that the capitol protest was planned primarily on Parler. I have seen plenty of evidence that it was planned primarily on Facebook (that has since been deleted/hidden by Facebook). You'd think if the goal was to punish or curtail such events Facebook would be getting at least similar treatment as Parler.
Difference being that Facebook moderates content. This was the Apple complaint against Parler. Parler has publicly touted itself as the 4chan/8chan of social media apps. It is more that the culture of Parler is being rejected by the App Store gate keepers and not so much the vehicle enabling it.
Parler absolutely moderated content - many went on and posted something left leaning and had their content quickly removed. It was moderated by ideology instead of by any attempt at "decency" though.
It's standard practice for many political forums: if you don't espouse the same beliefs, you will get banned.
Their counter argument is that if you bring up conservative view points, the liberal echo chambers ban you. So they should be able to do it in their free speech spaces.
This also unfortunately hides the fact that hate speech, dog whistles, saying that COVID is a hoax, pushing for falsehoods and getting upset about not being able to do so, is why you get banned.
> This also unfortunately hides the fact that hate speech, dog whistles, saying that COVID is a hoax, pushing for falsehoods and getting upset about not being able to do so, is why you get banned.
I fail to see how this is any different from any other social media site.
Again, NOTHING that Parler did is any different from any other social media platform that is a total cesspool of what you just described. The difference is that the speech was predominantly conservative in nature. Which leads me to believe the decision to remove the app was purely political, and that is an incredibly dangerous precedent to set.
There are plenty of individuals claiming they were banned for posting left content or disagreeing with right wing content. Hard to know how much is trolling or not, but I think that's kind of the point. If it is the home of free speech, who is Parler to determine their intent?
> Still, there is a set of community guidelines and a user agreement, which prohibits deliberately obscene usernames, pornography, and threats to kill others. Meaning even Parler’s free speech absolutists have some vague rules for what they deem as too offensive. “When you disagree with someone, posting pictures of your fecal matter in the comment section WILL NOT BE TOLERATED,” wrote Matze during a consequential exchange on his site, shattering the hopes of conservatives and libertarians everywhere who dream of a social media site with a completely laissez-faire ToS.
Because all of the people who defended Twitter from suppressing conservative voices told them if they don't like it, they can start their own network.
Which is exactly what they did.
Now THOSE people who told them to start their own network so they could do as they please, are up in arms because they didn't moderate their content enough for their liking?
While I like the highway analogy, it only works if the places where "free speech" is being conducted are US-owned, public infrastructure.
A closer analogy would be private roads. If Amazon owned a series of private highways used solely for shipping goods, would we care if they stopped transporting items they didn't agree with on those Amazon owned highways?
Forgive me if I'm repeating a common refrain here, but the words we say on Twitter, Parler, Facebook, and even HN aren't "free", spoken in a public place. They're owned by Twitter, Parler, Facebook and HN. Those companies can choose to do whatever they want, for better or worse.
Honestly my road analogy was more about reflecting on the notion of limits on free speech in general. It was not meant to be compared to the specific issue of AWS & Twitter. It was meant to draw our attention to the fact that we wholeheartedly endorse many systems (e.g. roads) that absolutely facilitate immoral and criminal activity. And that's ok to do. Thus I claim that it's similarly ok to endorse absolutely free speech without limits, despite the immoral & illegal activity that it might incite.
Roads encourage bank-robbers. Honestly who would rob a bank if you could only flee on foot? It's ok though that it encourages and facilitates bank robbers. We should not close the roads because of it.
We need to be OK that certain things (free speech) can have huge negative effects and criminal elements.
Generally speaking, there are two ways to determine a truth: either from experimentation and results, or from first principles.
From the experimentation side of things, you have Canada, Australia, Denmark, France, Germany, India, South Africa, Sweden, New Zealand, and the United Kingdom (I'm probably missing some), all of which impose restrictions on free speech while still having mostly free speech rights. You can even include the US as well, since it does have restrictions; they're just more relaxed, and seem to be enforced mainly when financial damage from libel can be proven.
Very broadly, the countries on that list take the position that most speech is free (especially speech that criticizes the government and ruling class) but some speech is restricted (especially hate speech, speech that implies violence toward others, speech that targets minority groups, and defamation and libel), and that seems to work pretty well in practice. At least, those countries have had stable social and economic environments, and seem to offer their citizens good opportunities and a good standard of living in general.
So from the experimentation side of "truth seeking", I'm not seeing an argument that absolutely all speech should be free, always, no matter the circumstances or the intent of the speech.
Now, unfortunately, we don't have a good experimental example of "all speech goes". Maybe the US is the closest to it, and that seems to be causing quite a lot of social and economic instability, for now at least. But I'd say it's too soon to conclude anything on that front.
The other approach to "truth seeking" is from first principles. The theory around free speech comes from the liberal progressive thinkers of the Enlightenment, so turning to them for first principles makes sense. From my research into it (and I welcome you to do your own), there seems to be no winning theory. All agree that speech against the government should be free, but how far to take other speech in other circumstances is not clear. It is also debatable whether the government should be free to criticize groups of citizens, because that can enable top-down propaganda and repression, which free speech is trying to protect against. Most theories recognize the "risks" of unrestricted free speech, but some hold that the benefits of free speech against authoritarianism and majority rule are worth it, while others think it is possible to draw a line that protects against those while mitigating the risks of unrestricted free speech.
It seems some of the thinkers who are pro unrestricted free speech also assume the system provides people with an education that allows them to identify fake and manipulative ideas and distinguish them from legitimate ones.
So the first-principles outlook also seems inconclusive, in my opinion.
That leads me to conclude that mostly free speech is good, and that fully free speech might also be good, but the latter hasn't yet been demonstrated well enough to really know; this uncertainty could resolve with fully free speech turning out either worse or better than mostly free speech.
> > If you said "Once a company like that starts moderating content, it's no longer a platform, but a publisher"
> I regret to inform you that you are wrong. I know that you've likely heard this from someone else -- perhaps even someone respected -- but it's just not true. The law says no such thing. Again, I encourage you to read it. The law does distinguish between "interactive computer services" and "information content providers," but that is not, as some imply, a fancy legalistic ways of saying "platform" or "publisher." There is no "certification" or "decision" that a website needs to make to get 230 protections. It protects all websites and all users of websites when there is content posted on the sites by someone else.
> To be a bit more explicit: at no point in any court case regarding Section 230 is there a need to determine whether or not a particular website is a "platform" or a "publisher." What matters is solely the content in question. If that content is created by someone else, the website hosting it cannot be sued over it.
Edit: BTW, I'd suggest reading that whole article. Section 230 has been misrepresented by politicians and the media fairly regularly, and this piece does a nice job of laying out the current state of the law and its interpretation and application.
I didn't read the comment as an opinion. What they wrote is a common misunderstanding of section 230 that, these days, is being promulgated by defenders of Parler.
But right now, it doesn't matter if I disagree or not. US law makes it very clear what responsibilities and rights publishers have.
Title 47 U.S. Code § 230 explicitly states that publishers are not liable for the content that their users post, with some minor exceptions related to sex trafficking.
They are also allowed to restrict whatever they like, whether it's constitutionally protected or not.
> Title 47 U.S. Code § 230 explicitly states that publishers are not liable for the content that their users post, with some minor exceptions related to sex trafficking.
No, it doesn't.
It states that online systems with user-generated content (and users of such systems) aren't treated as publishers of what their users post, with some major exceptions: civil liability related to sex trafficking, and all criminal liability regardless of subject matter. Civil liability not deriving from status as a “publisher” is also, on its face, unaffected, though some courts have controversially applied 230 to immunize sites against notice-based civil liability that would apply to them as distributors, even if they aren't considered publishers.
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
It also says other things that I neglected to state, most importantly, that section 230 does nothing to change criminal law, so it's also fair to call me out on that.
No, it doesn't say they won't be liable for user content, it says they won't be considered the publisher. There is liability for content that is tied to being a publisher, and there is liability that has other bases. On its face, 230 says nothing about liability on other bases (as noted in GP, some courts have also used it to provide immunity from liability as a distributor, but that is controversial and not stated in the text.)
230 does protect platforms from liability for what their user base posts. Having run forums and chat servers for a long time, I can attest to the experience of having to moderate content and of receiving legal complaints. There are two major factors that people are conflating in these discussions. First, there is the direct legal aspect of hosting illicit content. The platform is covered if it makes an effort to remove illicit content AND does not itself encourage the illegal behavior. So, for example, if users who also hold admin roles create sub-forums that promote illegal behavior, and the platform does not warn/ban those admins, it may eventually fall outside the protection of section 230.
Then there is the acceptable use policy of the hosting providers (DNS, servers, CDN, app store). This is entirely outside of 230. If a provider gets enough complaints, it may eventually see your site as a risk and choose to terminate your account to protect the image of its business. Providers do not want their reputation tarnished, as it will affect their profits. I think that is totally fair. If you want to run a site that is likely to provoke an emotional response from the public, then in my opinion it is best to find a hosting provider that accepts the risk in a contract. The contract should state what is expected of you, what you expect of them, and what happens if the contract is terminated, such as off-boarding timelines. Smaller startups are at higher risk, as the provider has less to lose by booting them off its infrastructure.
Where I believe this issue has gone sideways is in what the industry considers an appropriate method of moderation. The big platforms like Facebook, Twitter, and Apple use automated systems to block or shadow-ban things they consider a risk to their company or their hosting providers. This leads to people fleeing those systems for the smaller startups that do not yet have these automated moderation and shadow-banning systems, and that is what happened with Parler and a handful of other newer platforms that wanted to capture all the refugees of the big platforms. A similar thing is happening with that alternative to YouTube, but I can't remember what it's called. Bitchute?
Another potential problem that may confuse the 230 discussion could be that many powerful politicians and corporate leaders use the big platforms like Twitter and Facebook. They and big lobbyists and investors may have some influence over the behavior of these platforms and may be able to tell them to squash the sites that do not follow the automated version of banning and shadow-banning. Does that create echo chambers? Is that what is happening here? Not sure. If so, I predict it will push many people under ground and that is probably not great for agents that would like to keep an eye on certain people.
One of Greenwald's observations was that no planning occurred on Parler, but rather tended to occur on FB. If you'd used Parler before it went down (I poked it months ago, before all this madness, and found it to be sorely lacking), you'd notice that it's a shoutcasting platform like Twitter, and is wholly unsuitable for any kind of event planning. At most, you could give messages saying to prepare for X at event Y, which is 'planning' of a sort, I suppose.
Planning also occurred on Facebook, Twitter, and Reddit. Violent threats and incitement to violence occur on these platforms all the time. Some of this content is moderated. Much of it is not.
> Planning also occurred on Facebook, Twitter, and Reddit.
Sure.
And that may indicate problems with those platforms. At a minimum, though, those platforms reacted against continued use by the same terrorists once it was unmistakably, publicly, concretely clear that they were a deadly serious threat.
I don't think he said that. He said that many of those arrested were not active Parler users, and that significantly more planning happened on Facebook and Twitter.
He provides some cursory anecdotal evidence to that effect in the article, at least. My only addition is observing the nature of the platform itself as also making it an unlikely venue for that activity. But people have used platforms for entirely unsuitable purposes before...
You need a license to legally use the US highway system. It is patrolled by police who can stop you at any time and request your documents. You can have that license revoked for a variety of reasons.
The highway has a crapton of regulations around how you can use it. Speed limits, rules against drunk driving, driving only in 1 direction, requiring your lights be on (in some places during the daytime).
Nobody (almost) complains about limits on "free driving."
Would the highway not be better equated to the internet itself?
With AWS, Google Play store, iOS store as toll roads (Pennsylvania Turnpike, Golden Gate Bridge etc...) and Parler, Facebook, HN etc... as car brands?
Manufacturers can be forced to take all of their cars off the road for repair. Take Toyota or Waze.
You might have an argument that iOS taking Parler off the store is an issue because you can't sideload, but you can directly install APKs onto Android. You can self-host Parler with physical servers. I guess I'm less bothered by this than a lot of folks because one of my rules is basically "Be nice around other people's things and ask before you touch". AWS and the Android + iOS stores are other people's things. And Parler poked at a sore spot: being used to plan attacks.
You can't post pirated content or child porn online - because you're either directly engaging in or enabling criminal behavior.
If you're promoting armed, violent protests and insurrection - that is also a crime.
And sure, this is happening to a small degree on Twitter and FB - but they make some attempts to stop it, and it's not the main value proposition of the platforms.
The problem with Parler is that this was always where it was headed. It was built to serve people who would use it for this, and a significant portion of the content created and consumed was about this.
There is also legitimate content available on Kick Ass Torrents. But the majority of the consumption is for things that are illegal in the US. So it gets the same treatment as Parler.
> If you're promoting armed, violent protests and insurrection - that is also a crime.
You are conflating Parler with its users.
> this is happening to a small degree on Twitter and FB - but they make some attempts to stop it
From the article:
> And contrary to what many have been led to believe, Parler’s Terms of Service includes a ban on explicit advocacy of violence, and they employ a team of paid, trained moderators who delete such postings. Those deletions do not happen perfectly or instantaneously — which is why one can find postings that violate those rules — but the same is true of every major Silicon Valley platform.
You've made some good arguments throughout this thread, but this one in particular is disingenuous. You can't market yourself to people who were deplatformed specifically for inciting violence and then credibly feign surprise when those people begin inciting violence on your platform. By the time the limited number of moderators get around to deleting or hiding posts the damage is done, and everybody knows it.
1. You either are unaware of the meaning of the word "disingenuous", or you know my own intentions better than I do.
2. Did Parler express surprise that some of its users attempted to incite (and in some cases succeeded in inciting) violence on their platform?
3. Again your tendency towards superlative undermines the discussion, but "everybody knows it" and "the damage has been done"? This is a very strong statement indeed, claiming that you have knowledge that Parler's moderation has been so ineffectual that every user on their platform is able to view all inciting content before it is taken down.
> You either are unaware of the meaning of the word "disingenuous", or you know my own intentions better than I do.
I'm simply crediting you with the intelligence and experience to understand that what someone says publicly is not always in line with their actual goal. Therefore, by pretending that Parler's terms of service represent their actual intentions despite evidence to the contrary, I believe you are being disingenuous.
> Did Parler express suprise that some of its users attempted to (and in some cases succeeded) incite violence on their platform?
I have no idea. It doesn't matter. "I didn't think the leopards would eat my face!" is not a credible expression of surprise when you invite a bunch of leopards into your home and set them loose.
> Again your tendency towards superlative undermines the discussion, but "everybody knows it" and "the damage has been done"?
I don't think it's undermining the discussion to assume a certain level of conversational and contextual shorthand. "Everybody" does not mean literally every person on earth, it means "people with interest and experience in these matters". I apologise if English is your second language or similar - I'll try to be clearer in future.
> This is a very strong statement indeed, claiming that you have knowledge that Parler's moderation has been so ineffectual that every user on their platform is able to view all inciting content before it is taken down.
Not every user needs to have viewed content for that content to be damaging. However, the more people that see damaging content the more damaging it is. Most social media platforms expose more recent content to more users, so damaging content will do most of the potential damage within a short time. Therefore, platforms with actual intent to reduce damage will need to remove problematic users from that platform while also employing a highly effective moderation team to identify new damaging content as quickly as possible.
It stands to reason that a platform that only wanted to look like they were reducing damage could employ an ineffectual moderation team to remove content only after the majority of the damage was done. I suggest that's what happened here - it seems clear that large amounts of inciting content were available for long periods of time (hours/days).
> I feel like we've all been a bit brainwashed by the government in our notion that "free speech" must have limits. I very much doubt that that is true.
I very much disagree. Hate speech is just one example. "Speech" designed to terrorize, threaten, and incite action should not be "free".
> Of course any crimes that derive from it are and will always been fully enforceable.
By this logic no one who is purposely inciting anything is even liable for the actions that they cause. "Leaders" will never face punishment, because they only say things, right?
The line (or at least one of the lines) comes when speech is no longer an expression, but an instruction. It may be a tough line to draw, but that doesn't mean that the line shouldn't exist somewhere.
"hate speech" is a term that was coined in recent years to silence people. The only things that are dangerous are when people actually act or are explicit and then we deal with them. This isnt Minority Report. Free speech is meant to be nearly absolute and protect unpopular opinions. The only restrictions are explicit threats (someone is literally saying they are going to kill someone and names said person) Today the interpretation has morphed into whatever is unpopular or is 3 steps away from being an actual threat.
> "hate speech" is a term that was coined in recent years to silence people.
Americans have been struggling with hate speech and censorship for well over a century. The most obvious example is the censoring of the film The Birth of a Nation back in the late 1910s, but there are examples even further back in US history.
Our interpretation of "freedom of speech" (both philosophically and as protected by the First Amendment) isn't immutable and has changed since the Bill of Rights was adopted. Prior to the 1950s, the supreme court upheld the censorship of books and film for reasons that we now interpret as unconstitutional, and censorship remained the law in many states for decades after.
I would argue that our current expectation for "free speech" and this idea that it is "nearly absolute" is far more liberal than what we've seen through most of American history.
I'm not sure what you mean by this, but free speech in America certainly did not start out absolute. Our constitution endorsed slavery, and slaves had no freedom, including speech. The constitution only prevented the federal government from restricting speech. The Supreme Court has both expanded and restricted free speech depending on the ruling.
You need to actually read up on the history of the Enlightenment and the philosophers who actually devised the notion of free speech and how they envisioned its use.
In no way does it even reflect the possibility of something like Parler being used to amplify the sort of messaging it does.
We have court opinions on what free speech means in this country, and the Supreme Court has made the heavy decisions on that. It's not a distorted wish list of what each individual wants it to be.
Those court opinions all say it's absolutely fine under the First Amendment for a private business to refuse to do business with another business that it finds objectionable, so I'm not sure what you're getting at.
Let's set aside the distinction between free speech rights guaranteed by the First Amendment vs. the broader free speech ideal that's foundational to our society, since it isn't even needed: as suggested by the objections from Germany, France, and Mexico noted by Greenwald, these corporations are effectively acting as the government, so the reasoning behind the First Amendment's existence directly applies here.
(This is not an endorsement of the delusional presidential behavior that created the leadership vacuum filled by the corporations.)
The irony of referencing Germany as an example when Germany has an explicit ban on speech that is anti-constitutional is pretty high here.
It would be government overreach to tell the platforms that they're required to host whatever speech is posted to them, not the other way around. As I've mentioned in other comments, there are more ways to communicate now than at any other time in history. Facebook and Twitter do not have monopolies on speech, nothing on the internet does.
The issue is that the corporations are acting as the government. As you noted, Germany restricts speech more than the US for obvious historical reasons. So the fact that even their government objects to this behavior is evidence against your position, not mine.
In your linked comment, all of these governments recognize (3) as a scenario that should be governed by laws (which vary between countries).
> In your linked comment, all of these governments recognize (3) as a scenario that should be governed by laws (which vary between countries).
So why not 2? When does 2 become 3?
> As you noted, Germany restricts speech more than the US for obvious historical reasons. So the fact that even their government objects to this behavior is evidence against your position, not mine.
The German and French governments object to the US not having laws that require this behavior and leaving it in the hands of private companies, sure. And I object to the US having censorship laws, and would rather private entities be able to make the decision for themselves: the direction of MORE freedom of speech.
If this was primarily about "[objecting] to the US not having laws that require this behavior", their emphasis would not have been on the platforms being out of line.
> as suggested by the objections from Germany, France, and Mexico noted by Greenwald, these corporations are effectively acting as the government
To elaborate on the other user, Germany and France both ban holocaust denial and "hate speech", which would include much of the content on Parler. Mexico's speech laws are less clear, but if my reading is correct the constitution allows regulation of hate speech. And in practice, speech in Mexico isn't protected from the government or cartels.
So France and Germany could be seen as asking the US government to take a stronger stance on hate speech. This would, of course, require a constitutional amendment, at which point anything goes. Excluding that, what you see (and will continue to see) is corporations stepping in to ban hate speech because the government is restricted from doing so.
Did you actually read what Merkel or AMLO said? The first sentence of your second paragraph ("So France and Germany could be seen as asking the US government to take a stronger stance on hate speech.") can be immediately verified to be false in the sense that you are stating it (as justification for the platforms' behavior).
> Asked about Twitter's decision, Merkel's spokesman, Steffen Seibert, said social media companies "bear great responsibility for political communication not being poisoned by hatred, by lies and by incitement to violence."
> He said it's right not to "stand back" when such content is posted, for example by flagging it, but qualified that the freedom of opinion is a fundamental right of "elementary significance."
> Seibert said the U.S. ought to follow Germany’s example in how it handles online incitement. Rather than leaving it up to tech companies to make their own rules, German law compels these companies to remove possibly illegal material within 24 hours of being notified or face up to $60.8 million in fines.
[0]
You mean verified to be correct as confirmed by her spokesperson who released the initial statement. (Seibert released the initial statement, as can be seen here[1])
So yes, the statement can be seen as saying two things
1. Twitter is too powerful and needs to be regulated
2. The US needs stronger regulations on hate speech.
"Seibert said the U.S. ought to follow Germany’s example in how it handles online incitement. Rather than leaving it up to tech companies to make their own rules, German law compels these companies to remove possibly illegal material within 24 hours of being notified or face up to $60.8 million in fines.
""This fundamental right can be intervened in, but according to the law and within the framework defined by legislators — not according to a decision by the management of social media platforms," he told reporters in Berlin. "Seen from this angle, the chancellor considers it problematic that the accounts of the U.S. president have now been permanently blocked.""
I don't think this contradicts what I've said. One can conclude from this statement both that Merkel believes Twitter needs to be regulated, and that the US needs stronger speech regulation in general. (also I'll note that what Twitter did isn't actually illegal in Germany, there's no law that compels social media companies to host people)
The primary message from all of these governments is that the platforms are out of line.
Your original comment was in support of/elaborating on "It would be governmental overreach [to set the rules]." That original comment had an overly permissive rule in place of what I've bracketed, but that's beside the point since all of these governments are specifically objecting that the recent actions are problematically restrictive.
> The primary message from all of these governments is that the platforms are out of line.
Yes, and one reason for that, as stated by Merkel, is that the US doesn't have a democratic framework for managing hate speech, because such a framework is illegal under the First Amendment. And her statement suggests that the US adopt a more German framework for adjudicating such speech, so that corporations don't need to make their own rules.
Your claim is that Twitter is "effectively" acting as the government. That's not true under a significant amount of law and precedent. (There are cases where private entities were found to be acting as a government, and, importantly, using government-like force to suppress speech; see Marsh v. Alabama.)
In fact, one could argue that by censoring speech, Twitter is explicitly not acting like the government, because Twitter is taking action the government cannot.
Hate speech was very much an issue even before 1947.
Libel laws alone put paid to your second sentence.
Free speech is not meant to be nearly absolute; it is not even meant to be largely absolute. Copyright alone would be incompatible with such a strength of freedom.
None of what you just wrote is actually true; it is meant to be nearly absolute. Everything you just listed is a specific legal carve-out. We start with absolute and insert very specific exceptions through court cases, which are narrow. Copyright just happens to be one of them.
"Hate speech" is not a recent term, unless you mean that it was developed in the past century. In the United States, hate speech was recognized as a limitation on the First Amendment in 1942.
Can you point to the court case (one that was not subsequently overruled)? The reason I don't believe this is that it would be extremely arbitrary. If hate speech is in fact banned, it likely has a very narrow and explicit definition in the ruling, most likely circling back to direct threats of harm.
Chaplinsky v. New Hampshire found that a law forbidding abusive speech in public was constitutional because the words in question "by their very utterance inflict injury or tend to incite an immediate breach of the peace."
We're so far down in the comments I forget who's arguing for what but I don't think that case is relevant.
That case gave rise to the "fighting words" doctrine or test which has (thus far) only been applied (successfully) to speech that is so vulgar or offensive as to provoke violence. A good representative example is the recent Twisted Tea smackdown video that made the rounds. If a police officer had swooped in and arrested the instigator before he got hit the arrest likely would have been kosher under the doctrine because the n-word is generally so vulgar you don't expect to be able to use it in that manner and not start a fight. Basically it's used to justify arresting someone for speech so inflammatory that even though you are not picking a fight someone is inevitably gonna pick a fight with you whether you want one or not.
I can't think of any case where it was used to prosecute someone for calling for "adjacent to violence" type behavior. I am unaware of any cases (that have not been overturned) where fighting words doctrine was used as a justification for suppressing political speech. If you know of any examples I'd be interested to read them.
There's sort of a catch to the way you're framing the question. Instances of speech that are forbidden are generally not considered political speech, even if there are political issues involved. For example, if I threaten the President because of his policies, there is obviously a political angle to that, but it is also a fairly uncontroversial felony.
For a concrete example of nominally political threatening speech that was found not to be protected, see Planned Parenthood v. American Coalition of Life Activists, where it was found that certain anti-abortion ads were not protected by the First Amendment because they reasonably caused the targeted doctors to fear for their safety. I don't believe they explicitly invoked the "fighting words" doctrine, but it relied on the same principle as Chaplinsky — that if speech has the effect of creating a real-life threat, the speech isn't necessarily protected by the First Amendment.
(To be clear, the point wasn't that Chaplinsky is the be-all-end-all, just that the concept of "hate speech," where the consequences of some speech make it unworthy of free speech protections, is not a recent invention.)
> that if speech has the effect of creating a real-life threat
>(To be clear, the point wasn't that Chaplinsky is the be-all-end-all, just that the concept of "hate speech," where the consequences of some speech make it unworthy of free speech protections, is not a recent invention.)
Yes, but those consequences must be imminent and unlawful.
The point of Chaplinsky is that it's a real-life, in-the-moment affront to civility so offensive that it's bound to cause a fight.
Planned Parenthood v. American Coalition of Life Activists is about specific threats, e.g. "I'm going to specifically kill you".
Both examples of non-protected speech fall under totally different doctrines (I forget how far back the specific credible-threat doctrine goes and what its real name is, but it's really old, older than Planned Parenthood) that were more or less condensed into the "imminent lawless action" test established in Brandenburg v. Ohio.
It's going to be very hard for anything that is published in an asynchronous medium that isn't a direct call to lawless action by someone who can credibly get people to pull it off to fail the test because in order for the lawless action to happen people must do things that would be premeditated crimes on their own. There is no current US court doctrine for limiting free speech with regard to hate unless the content and situational details add up to something that fails the Brandenburg test.
Chaplinsky v. New Hampshire. I don't know much about this one (which is limited to a single state), but 1) it can be overturned by the Supreme Court, 2) if it's an old ruling there may be an updated ruling, and 3) I have not seen the parameters that define abusive speech; it's probably quite specific.
By this logic you're saying that advocating and inciting genocide is just an "unpopular opinion" that deserves to be protected. You know, because it's not against a specific, named person.
Actually, that would be a direct threat of harm, which I specifically mentioned several times. So no, I'm actually not saying that. I'm not sure how it's even possible you arrived at that conclusion.
If the primary use and purpose of highways was to commit crimes, and the people running them refused to do anything about it, then yes I'd question them or at least their management.
I think when speech starts to risk real harm to others, we need to start thinking carefully about it. It's not so clear cut, but I think that if someone threatens to harm someone with some degree of seriousness, society should be able to act before that harm occurs.
With Parler (and to be fair, Twitter), I see it creating more radicalization, which very directly creates a risk of harm to others as we saw play out on the 6th. And I don't think we should tolerate it, or we're stuck just treating symptoms rather than causes.
For the analogy to work it's not even enough for the roads to be used to commit crimes. The roads would have to be continuously re-routing you from your intended destination and taking you down roadways filled with signs inciting violence, cult indoctrination, and lies about reality.
The algorithmic curation of all social media platforms that is intentionally built to assault users with the most distasteful, extreme lies (because it's good for engagement!) is the real problem in my view. If every social media platform stopped all algorithmic curation/recommendation and simply presented a chronological list of updates from people you follow (and did not recommend who to follow), then I think the bulk of the problem goes away.
I have no problem with free speech (even abhorrent speech). But I have a problem when a person's online experience is controlled by algorithms specifically designed to ratchet up the garbage and inundate people with hateful rhetoric.
It's pretty telling that "64 percent of people who joined an extremist group on Facebook only did so because the company’s algorithm recommended it to them" according to facebook's own research into divisiveness.
https://www.theverge.com/2020/5/26/21270659/facebook-divisio...
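Purely as an illustration of the distinction being drawn above (the names and the engagement metric here are hypothetical, not any platform's actual API), the two feed models can be sketched as:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: float   # seconds since epoch
    engagement: float  # e.g. likes + shares + comments (hypothetical metric)

def engagement_ranked_feed(posts):
    # Curated feed: surfaces whatever maximizes engagement,
    # regardless of when it was posted or who posted it.
    return sorted(posts, key=lambda p: p.engagement, reverse=True)

def chronological_feed(posts, following):
    # Plain feed: only accounts the user follows, newest first,
    # with no recommendation or re-ranking step.
    return sorted(
        (p for p in posts if p.author in following),
        key=lambda p: p.timestamp,
        reverse=True,
    )
```

The first function will happily promote a stranger's inflammatory post over a friend's mundane one if it scores higher; the second is a pure filter-and-sort with no editorial signal at all, which is the behavior the comment above is advocating.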
With all the discussion about Section 230, could such opaque algorithmic curation constitute a form of editorial control, not unlike that of a publisher? Could we reform Section 230 in a way that is pro-user, so if a website wishes to be a "platform" they would have to make their raw feed available to the user, or if they provide algorithmic curation, it's transparent to the user how information is prioritized? Could we clarify the distinction between platform and publisher?
> Could we reform Section 230 in a way that is pro-user, so if a website wishes to be a "platform" they would have to make their raw feed available to the use
That's not reforming 230, that's abolishing it and repudiating its entire purpose. Enabling host action to suppress perceived-as-undesirable content, without increasing host liability for content not removed, was the purpose of 230.
> Could we clarify the distinction between platform and publisher?
The distinction in 230 is crystal clear: to the extent content items are user-generated, the online service provider (and other users), even if they may have the power to promote, demote, or suppress the content, are not publishers or speakers, period, the end.
The source (whether it is the user that is the source, or the service provider for its internally generated, first-party content) is the publisher or speaker.
Wow. That is an amazing statistic. I find it hard to accept that the average person would be influenced by social media to that extent, but that type of study result is undeniable.
I don't always go 85 in a 55 but when I do I get passed by a cop going 95.
Unequal enforcement is already very much a thing on the roads. Ever heard of a "fishing stop" or "driving while black"? Getting cut less slack than a boring sedan or a work truck is one of the few things red Porsche drivers and 30yo shitbox drivers have in common.
That's some sneaky language but this isn't a philosophical hypothetical. Parler hosted people calling for specific acts of violence and did hardly anything to moderate them. Those people were almost entirely right wing and you can see for yourself looking through the data dumps provided recently by a hacker or looking up articles about Parler.
If you have evidence of some other app hosted on AWS that hosted left-wing groups calling for violence and applauding it when it materialized, but was not shut down, please show everyone. I won't even go to the extreme of saying it has to be at the same level or quantity of violent speech we saw on Parler.
Actual terrorism? Al-Qaeda and ISIS are active on Twitter. There is content still up calling for genocide against certain ethnicities. Real genocide and terrorism. Not the hyperbole in U.S. politics.
U.S. politicians were actively egging on protestors and calling for violence around the country this summer. Where do you draw the line? It's cool if one side does it but not the other?
There's clearly an uneven application of their moderation policies. And, they are afforded legal protections as platforms under the assumption / intent that users create the content and they stay out of curation. IMO, they aren't being equitable with enforcing their own rules and should lose status as platforms. Because clearly they are opinionated in their enforcement of the ToS.
When it became undeniable that the traffic was connected to actual terrorism, other sites acted swiftly to cut it off. Parler did not.
Now, it's arguable that the other sites were knowingly facilitating crime and just hoping to escape consequences because no one was going to make a big deal of it, and that they only cut it off because the risk of that strategy increased after the Capitol attack. But while that may paint the past actions of the other firms in a worse light, it doesn't paint Parler's actions before it was cut off by other suppliers in a better one.
> When it became undeniable that the traffic was connected to actual terrorism, other sites acted swiftly to cut it off.
Remind me when the politicians who promoted the Antifa and BLM riots and actually set up funds so the protestors would be bailed out had their accounts "indefinitely suspended" for doing just what you're describing as the reason Parler got shut down.
Free Speech does have limits though. If you're incarcerated for a felony you can't vote, if you go to a mall and start yelling obscenities you can be removed, if you make youtube videos on how to create pipebombs the US President can kill you in a drone strike without a public trial.
The thing I think people miss when making the "this is an assault on free speech!" is that they think it's becoming a gray area, when in fact it has always been a gray area.
> I think the speech part should always be 100% free.
I don't think a claim like this is meaningful without a precise definition of "speech". And, consequently, I think you'll find any attempt to define which things are not "speech" ends up being functionally equivalent to dialing back from that theoretical 100%.
In general, you can't have 100% freedom over any finite resource. If there's only one dessert in the fridge, you and I cannot both have 100% freedom to eat it.
If you presume that any meaningful "speech" has some non-zero audience size, then speakers are competing for the finite attention of other humans. You can't have perfect freedom for that.
> Curious if you feel torn supporting the US highway infrastructure? It can clearly be used for to traffic drugs, humans, blackmarket weapons, etc. It can be used to flee justice, evade police, abets vehicular manslaughter, etc. The list goes on and on. Is it even controversial to support the highway system as is? Do we loose sleep over it?
IMO, this is literally a strawman argument. You're picking a very rare, extreme event and amplifying the importance of that event in an attempt to make an argument.
Following the implication of your argument (that we don't worry about a rare event in an otherwise good system), we shouldn't even bat an eye at Parler being removed.
It's by definition not a straw man. He's not misconstruing the parent's point; he's making a comparison, or extending it to another subject. It might be a comically bad comparison, but it's not a straw man.
I believe that there are two common and non-negotiable principles for any kind of freedom to apply:
1. Abuses and crimes should always be prosecuted. I have read lots of posts on Parler, and ALL the grounds for violent speech, radicalisation and terrorism apply to many of them. I've read posts inviting people to hang and quarter Democrats in the streets in front of their families, as well as posts inviting armed sedition against the institutions. Those who use this kind of language MUST be held accountable for their words, just like we'd hold ISIS supporters accountable for their words. It's not that, just because they're white and Christian dudes who look like us, we can condone them a bit more. And if a platform refuses to limit this language, then the whole platform must be taken down.
2. Your freedom ends where mine begins. You may be free to say whatever you want, but if that ends up doxxing information about me that I didn't want revealed, or spreading misinformation about me that ends up in death threats, then you are NOT free to do that.
Parler has failed to guarantee both the non-negotiable freedoms when it comes to building a sustainable free speech framework, therefore it must be taken down. I really fail to see any contradiction in this.
And keep in mind that the anarcho-liberalist vision of free speech is something that has arisen only in the past couple of decades. The founding fathers of the liberal school of thought (including Popper and Hayek), those who had REALLY seen how things in Europe ended up when unlimited freedom of speech is granted also to fascist jerks, were well aware that unconstrained freedom with no framework to contain fundamentalism is a threat to a tolerant society. "Being intolerant with the intolerant is a civic duty for a tolerant society that wants to preserve its values" (Popper)
> Your freedom ends where mine begins. You may be free to say whatever you want, but if that ends up doxxing information about me that I didn't want revealed, or spreading misinformation about me that ends up in death threats, then you are NOT free to do that.
You just described investigative journalism.
Doxxing is not itself a violation of your rights, it’s just taboo when it’s done in the small.
Only when it’s done with the intent of causing illegal harm, such as “X lives here, go kick his ass”, would it be a violation.
I was obviously not talking about rightful investigative journalism. I meant things like EnemiesOfThePeople (former link: https://parler.com/profile/EnemiesOfThePeople), which leaked home addresses, email addresses and phone numbers of any politician who opposed Trump's effort to overturn the election. This has nothing to do with journalism and everything to do with fascist ways of intimidating private citizens.
> I've read posts inviting people to hang and quarter democrats on the streets in front of their families, as well as posts inviting armed sedition against the institutions.
And I've read posts on HN saying we should hang and quarter Trump supporters; should HN be wiped from the face of the earth?
Took me about 30 seconds to find. But in general I will not do unpaid moderation work for ideological crusaders like dang. For example, I'm sure there will be a reallllly good excuse why this comment is actually okay. And I'll probably get flagged/moderated for good measure.
Oh, but for good measure...
"The radical right is a scourge ... They need to be repeatedly smacked down until normalcy is achieved."
And keep in mind that I'm just picking a random example from thousands of similar posts.
Radicalized Trump supporters were using Parler to share invitations to "do a bloodbath out of Democrat voters" and "burn them alive on the streets and throw them in wooden chips in front of their children", and you really have the courage to come here and say that WE are the violent ones over a whack-a-mole Nazi joke?
From the bottom of my heart, fuck you, you filthy fascist.
I've got absolutely nothing against the old Republicans, I've got absolutely nothing against conservatives, I've got many, many conservative friends as well. But I've got A LOT against the violent scum of the earth represented by people like you who try to play the role of the eternal victim after inciting Civil War and gratuitous gruesome violence against the political enemy!
The complete quote: "If we have to play an constant game of Whack-a-Nazi, I vote we whack as many Nazis as we can."
That is a reference to the game Whack-A-Mole. Literally, that would mean hitting them with a soft foam hammer.
"Oh, but for good measure... "The radical right is a scourge ... They need to be repeatedly smacked down until normalcy is achieved.""
owlbynight's entire quote:
"Our political representatives are corrupt and generally represent whomever gives them the most money, namely large corporations.
"We, the people, are represented through our wallets now by the corporations that control our politicians because social media has unionized us. We're able to use online platforms to leverage companies into giving us what we want socially by threatening them when they step out of line. The companies that led to Parler shutting down were acting on public sentiment as a boon to their brands, thus ultimately reflecting the will of the people.
"It's kind of like a single payer system for social justice.
"It's weird end run back to representation but I'll take it for now. The radical right is a scourge that, unchecked, will lead to us having no rights at all. They need to be repeatedly smacked down until normalcy is achieved."
I could be wrong, but I'm also not reading that as a call for violence, much less "hang and quarter". It's not a particularly attractive metaphor, though.
See what I mean? There's always an excuse for leftist calls for violence; whereas right-wing calls for peace like Trump's recent tweets are akshually dogwhistles for violence. I'm disgusted.
It just jumps so easily to your mind how to defend, defend, defend leftist violence; you don't even consider yourself doing it. You too have trained yourself well as an ideological crusader.
You are correct. I admit, as a Democrat, that I want to hit all of those on the right with a medium-sized foam mallet thing. I am ashamed of the violence in my soul.
I've always refrained from calling Trump's supporters Nazis because I'm always prepared for the backfire of "oh, leftists are fascist because they don't let us do and say whatever we want bla bla".
But your comment clearly outlines that you consider them Nazis, that you have no problems with it, and that even if they are Nazis they should not be targeted for their hate and bigotry.
"We really need to whack Joe Biden before he becomes president!"
That's okay, right? Because it's just a children's game, right? Or is it interpreted differently depending on who the target is? Nazis, whacking them is just a game. Leftists, whacking them is srs bsns?
> Or is it interpreted differently depending on who the target is? Nazis, whacking them is just a game.
Did you read the full quote or not? The context matters, not the targets (in this case, a singular target changes the context). There is only one Joe Biden, so the way you are using it has a different context. If it was about whacking lib-trolls from your newsgroup, that's different than specifying a person.
Maybe you aren't a native English speaker, but whack-a-mole is a common carnival game. That's the context in the quote YOU picked.
Can you show me where "whack-a-mole" is mentioned in the original comment? It's something that exists only in your mind to excuse a call for murder.
"Singular target changes the context". Okay, so "Man, Hitler sure was really good at playing whack-a-Jew, wasn't he?" This is okay by your "logic", right?
The context of the "whack Nazis" comment was the worst day of domestic terrorism and murder in American history. I don't think anyone was playing carnival games at the Capitol last week, but maybe we should go ask them? Maybe the whole thing was just a misunderstood carnival game!
I posted some examples in sibling comments, and the response was that leftist calls for violence are actually secret code about carnival games. So I'm not particularly inclined to post more.
And when it comes to doxxing of private individuals who simply opposed Trump's attempts to overturn the election, just search for EnemiesOfThePeople (it was also a very active account on Parler).
Nobody is proposing to remove the infrastructure itself.
It's a question of who gets to decide who gets to _use_ the infrastructure.
Imagine that you have a private company that manages all of the toll roads in a city. One day, this company decides that they no longer want John Smith to use their toll roads. John Smith is banned.
Maybe John is a terrible person. Criminal convictions, DUIs, whatever. Regardless of that, should a private company have unilateral right to ban a customer? With no recourse? No appeal, no accountability? There is no elected official to vote out of office if you don't like it. There's no appeals court to hear your claim. John is just banned. He now has to drive an extra 30 minutes every day because he can't use the high-speed toll roads to get to work.
Parler is problematic. For sure. And I'm a big believer in free speech, and that companies, in general, should be able to run their business however they want.
However, there are limits. A sandwich shop can't refuse to serve a customer because they are black, for instance. But cake shops can refuse to serve customers if they are gay, as we recently learned from Supreme Court cases.
I think that much of the issue here revolves around how much of a monopoly a company has. If my local sandwich shop doesn't want to serve me, because I'm a jerk, that's fine. I can just go to another shop down the street. I'm not that inconvenienced.
But these massive tech companies have enormous ecosystems. They dominate their industries, and are often the only really viable choice in their markets.
I see a constant stream of articles about YouTubers who build a massive business with millions of followers, and then one day, 'poof', Google kicks them off, and they have no recourse.
Or the guy on Facebook who spent $47 million in advertising over the years, and one day Facebook kicks him off, banned for life. No recourse, no appeal, no explanation, even.
Apple and Google have absolute say over their app stores and what is allowed. Companies can be ruined overnight because some algorithm tipped from "ok" to "not ok".
From my perspective, your analogy fits for the internet backbone, but not for the fact pattern under discussion here. To stretch the analogy, I think AWS would be more like a really big network of private distribution centers where client businesses can drop off and pick up goods. I think those distribution centers would be well within their rights to refuse to serve clients who are trafficking "drugs, humans, blackmarket weapons, etc".
If there were a US highway that was used primarily or disproportionately for crime, then the government would take some actions.
Examples of this in practice are the checkpoint around El Paso in West Texas that checks for all sorts of contraband. And the agriculture checkpoint on Highway 80 between California and Nevada.
In this analogy, Parler seems more like a single road used for lots of crime, while social media overall is the highway network that is more free.
I don't want to nitpick, but I've been through the checkpoint east of El Paso many times. It's deeply racist. Here is how every interaction I've had goes:
Checkpoint cop looks at the people in the vehicle, sees they are all white, and asks in a bored voice, "Is everyone an American citizen?"
Driver: “no, some of us are american and some are canadian.”
There purpose is supposedly to check for illegal cross border activity in the US and yet a car full of canadians doesn’t even blip their radar because it’s not actually about nationality it’s about race.
Which is all to say that i believe your comment about that checkpoint being about contraband glosses over the real motivations. In the dozens of times i’ve been thru there all i’ve ever been asked about is citizenship, and it’s never mattered what the answer is because i am white.
There aren't millions of Canadian citizens crossing the US-Mexico border illegally every year. There aren't tens of millions of Canadian citizens living illegally in the US.
They are trying to stop the 99.999% of illegal aliens who are from Mexico and Central America from crossing the border, not the random Canadian who is basically guaranteed to be entering legally.
Because it’s a car full of white people, the border patrol and you assume that they’re “very likely to be entering legally”. There is a similar ratio of Canadians living legally and “illegally” in the States as there is of Latinos living legally and “illegally”. But that aside, a car full of white people definitely gets treated differently at that checkpoint than a car full of Latinos - and nobody EVER asked about “contraband”, as originally suggested.
A better analogy would be landlord-tenant. AWS was the landlord here. Although they should have the right to evict tenants under certain circumstances, we might all be better off if these evictions were legal proceedings that could be documented and challenged in court.
Crossing a state line on a federal highway to commit an illegal act is a federal crime. Parler was given the opportunity to police itself, and they defiantly said no. What other choice did these companies have? They couldn't just ignore it, considering there was legitimate concern for the safety of other human lives.
> I feel like we've all been a bit brainwashed by the government in our notion that "free speech" must have limits
Convince me. I have a side project that's intended to foster free speech, but I disallow advocating harm toward identifiable humans, except through process of law. Should this limit really be removed?
We have free speech. That means I can speak freely, and not be compelled to repeat or amplify what you/they/govt want me to say.
Free speech is one thing, free amplification of speech at global scale is another
To the highway analogy: Yes, highways can be used for crimes. And there are restrictions on highways to prevent crime, enforced by everything including local police, county sheriffs, state police, border patrol, and national guard when necessary.
The Interstate highway system was built specifically for wartime transport of people and materials. One of the specifications was to be able to move a division coast-coast in 24 hours.
You can get away with small crimes on the highways.
However, if you try to wage your own war doing that, with significant numbers of your own fighters, you will be shut down pretty quickly.
Similarly, we have free speech.
I am also not required to amplify your speech. That would be compelled speech - your govt compelling me to speak what I do not want to say - just as bad as forbidding me to say what I want.
Similarly, nothing should require any hosting provider to carry the propaganda for someone else's war, when they do not want to be a part of the war (and make no mistake, what was being planned on Parler is nothing short of war). No hosting provider should be required to carry, or be prevented from carrying porn either.
Should the New York Times be required to carry David Duke's (fmr KKK Grand Wizard) screed on the benefits of racism, or should Fox be required to carry Bernie Sanders' latest speech?
This is no different from the press since Gutenberg.
If you want free speech, speak
If you want free amplification at scale, build your own press or find a friendly one.
A bit more detail on what I've read: the banking was designed for 130 mph, to be usable by high-speed convoy troop transports.
I'd expect your usage description to be the norm, but to fight an on-continent war, I'd expect that they'd close the roads and send through the convoys, and since 2899 mi / 24 h is 120.8 mph, it'd be doable.
The Cannonball Run is indeed approaching 24h: it seems to have just gotten down to 25:39[1]! Of course, they have to deal with normal traffic out of NYC and into LA, and avoid cops the whole time. With the cops clearing the roads and any street-legal car allowed, I'd bet under 17h would be the norm.
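As a quick sanity check on the arithmetic above (the 2899-mile distance, 24-hour target, 25:39 record, and 17-hour guess are the commenters' figures, not independently verified), a short sketch:

```python
# Average speeds implied by the coast-to-coast figures in the comments above.
# All inputs are the commenters' numbers, taken at face value.

distance_miles = 2899  # NYC to LA, per the comment

# Speed needed to cross in the stated 24-hour military target
avg_24h = distance_miles / 24
print(f"24 h target:   {avg_24h:.1f} mph")   # ~120.8 mph

# Speed implied by the ~25:39 Cannonball Run record
record_hours = 25 + 39 / 60
avg_record = distance_miles / record_hours
print(f"25:39 record:  {avg_record:.1f} mph")  # ~113.0 mph

# Speed a hypothetical 17-hour run (roads cleared) would require
avg_17h = distance_miles / 17
print(f"17 h run:      {avg_17h:.1f} mph")   # ~170.5 mph
```

So the 24-hour figure sits just below the claimed 130 mph banking design speed, while a 17-hour run would require averaging well above it.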
Counterpoint: suppose highways were the primary way to transport something obvious, like giant battle tanks that were being used to kill people and attempt to overthrow the government. If the checkpoints that were set up couldn't catch enough of them to remove the danger, would you support shutdowns of the highway infrastructure until the checkpoints could stop them?
But do you believe in free association? Is it okay that per Parler's CEO, banks and payment providers, law firms, and mail services have also cancelled on them?
But the US highway is policed (moderated), maybe too much (e.g. racial profiling in violation of a few constitutional amendments).
Whereas Parler intentionally created a system where there was virtually no moderation (or extremely biased moderation), and bragged about it as a core feature.
It would be as if the various law enforcement agencies tasked with keeping the drugs, trafficking, etc. you mention off the road were instead staffed entirely by a handful of those very same lawbreakers, who obviously vote in their own illegal interests.
And additionally, the creators of the highway spoke to the NYTimes bragging about their setup, maybe even telling the public about the specific highway routes these criminals travel.
> I very much doubt that that is true. I think the speech part should always be 100% free. Of course any crimes that derive from it are and will always been fully enforceable. I just question whether or not the speech itself should be viewed as illegal,
So would you say, for instance, that we should be able to do an unlimited amount of discussion, planning, and coordination of an elected official's death? And it's only when one person takes a concrete action towards the plan that anyone should be able to be arrested, and only that person? Because the rest is all protected speech?
Yes, this would certainly be naughty under current law, but perhaps not in a legal regime where "all speech is OK!" Note that the overt act need not be committed by the speaker, too.
There's even ambiguity about elements of this in current law. If one were to advocate for the violent overthrow of the government, and begin running training exercises to help people prepare to overthrow the government at some unspecified future date--- it is unclear whether this is protected by the First Amendment. SCOTUS mentioned -- but did not address -- this problem in Stewart v McCoy (2002):
... While the requirement that the consequence be “imminent” is justified with respect to mere advocacy, the same justification does not necessarily adhere to some speech that performs a teaching function. As our cases have long identified, the First Amendment does not prevent restrictions on speech that have “clear support in public danger.” Thomas v. Collins, 323 U.S. 516, 530 (1945). Long range planning of criminal enterprises–which may include oral advice, training exercises, and perhaps the preparation of written materials–involves speech that should not be glibly characterized as mere “advocacy” and certainly may create significant public danger. Our cases have not yet considered whether, and if so to what extent, the First Amendment protects such instructional speech. Our denial of certiorari in this case should not be taken as an endorsement of the reasoning of the Court of Appeals....
Not GP, but I would expect parts of the highway infrastructure where more crimes occur to be more closely monitored and controlled than others. I don't see how this is a black-and-white issue where you must either support or oppose the highway infrastructure.
And since you mention drug trafficking: personally, I think 100% of drugs should be legal, and we have been brainwashed by governments and the media about their effects. But I realize that is currently very much not the case where I live, so I have to be aware that my actions might have consequences, and I know I'm going to have a hard time convincing others of my view, so I might have to compromise to get anywhere.
This, in my opinion, should be the main focus of democracy: to continuously tweak the system towards what the current society agrees on in regards to living together, instead of insisting on ideals. It just seems to me that most democracies are not really fond of the idea of taking democracy actually seriously.
“And contrary to what many have been led to believe, Parler’s Terms of Service includes a ban on explicit advocacy of violence, and they employ a team of paid, trained moderators who delete such postings.”
It is a bad analogy to compare Parler with a highway.
If Twitter is a highway, Parler is a tunnel operated by drug cartels.
What do you think about the Silk Road? The onion website that operated strictly on Tor and was a marketplace used exclusively for crime? That is infrastructure too, right? Should it be legal? Fuck no.
Stop defending a lost cause. They got shut down because they tried to stage a coup by kidnapping senators in the Capitol.
They failed, and it is a good thing that they failed, and it is a good thing they got censored. And it is great that more companies decide to do the same.
Shut them down, all of them. Enough is enough. Have you ever been assaulted by a Trumper while minding your own business, just for being a minority? I have. These news make me happy. Adios, amigos... Your movement will never attain anything again.
> Curious if you feel torn supporting the US highway infrastructure? It can clearly be used for to traffic drugs, humans, blackmarket weapons, etc.
This is a good thought experiment actually. I think typical Americans support policing all roads and highways, especially where stretches of road are known to be frequented by bad actors. If private corporations owned highway infrastructure and unilaterally decided to shut down segments of road in order to stop those bad actors, it should raise a lot of questions.
I think this highlights the need for better legislation, not only to limit corporations' ability to shut down services, but to replace that with policing put in place by elected governance and based on laws that apply equally to everyone.
In your comparison it's about what CAN be done with the highway. For the comparison to hold true, it's more like we let them knowingly drive there while we're 100% aware of where they are at that moment. So we could have acted on it, but didn't, and watched them do it.
Threats, slander and misinformation were never part of free speech and never will be. Invoking "free speech" here is disingenuous.
To go back to your comparison: if Parler - or anything similar - were like the highway, we wouldn't know what was going on. But we do, and it's inciting violence, so it's basic human decency to stop it. Even apart from anything that a government would say. I don't get it, why are we still talking about this on HN.
> Curious if you feel torn supporting the US highway infrastructure? It can clearly be used for to traffic drugs, humans, blackmarket weapons, etc. It can be used to flee justice, evade police, abets vehicular manslaughter, etc. The list goes on and on. Is it even controversial to support the highway system as is? Do we loose sleep over it?
As we've learned from the now-settled case law on BitTorrent trackers & co, it is not sufficient for your infrastructure to make legal uses possible; it is necessary to show that those are the vast majority of its uses.
If there was a highway that was mostly used by drug cartels, blocking it would be a no-brainer. That's not an adequate comparison. The problem is really that this is uncharted digital territory and, as always, our laws are too outdated to fit a digital world. Facebook/Twitter have enough money to flood a competitor social network with nazi spam, to bribe journalists to write about it, and to push the competition to destruction, and still have Hacker News and Reddit applaud it. Not defending Parler, but that is a very possible scenario.
This is a bad analogy. The highway is heavily policed to combat those things, and they're nowhere near a central use-case.
I'm sure it was a small minority of Parler's activity that was death threats or planning/encouraging/inciting violence, but it seems like it was intentionally a safe-haven for those activities.
The highway has criminal activity, but is not a safe haven for criminal activity. Parler appears to have been to an extent beyond what is generally considered acceptable.
I'm perfectly fine with Parler being an outlet for conspiracy theories and such. Many people made good money off of Parler, and Parler made good money grifting off of delusional folks. All legal. Crazy? Probably. But definitely within the realm of "free speech".
The line gets drawn when a platform is used as a base of coordination to overthrow a legitimately elected government and threaten violence against people. Not sure why that's so hard to grasp.
Traffickers, when using the highway, do not set up HQs at major intersections and rest zones and multiply at will there. And if they did, they would get banned for some time.
I don't understand the comparison with US highway infrastructure. That infrastructure cannot be used to spread "crime" at rapid scale, leading to extreme degradation of society. The whole ISIS movement took advantage of lax enforcement and created a monster that will haunt us for decades. What's the equivalent for highway infrastructure? Are drug dealers using it to spread addiction at rapid scale?
> Curious if you feel torn supporting the US highway infrastructure? It can clearly be used for to traffic drugs, humans, blackmarket weapons, etc. It can be used to flee justice, evade police, abets vehicular manslaughter, etc. The list goes on and on. Is it even controversial to support the highway system as is? Do we loose sleep over it?
They get closed fast when the shit hits the fan though. So, where does the analogy leave us?
> Curious if you feel torn supporting the US highway infrastructure?
Are you equating social media with the US highway infrastructure? I have to disagree with you in that case. The Internet and ISPs are like the US highway infrastructure. Social media is like demanding your stuff be carried by a particular truck company.
If you want social media to be public infrastructure, maybe the government should start a social media company.
Unlike the US highway system, Parler was specifically set up to encourage - or at least provide a haven for - insidious planning and hatred. So I don't see how your argument is convincing.
Would you feel the same way if it had been Muslim terrorists in camo storming the Capitol, and they had organised on a Muslim site?
Because no matter how this is being spun, that is literally comparable to what happened last week.
There isn’t really anything related to free speech here. No one censored parler, let alone the government. Amazon and Apple didn’t even censor them, just refused to support their product because they failed to live up to the terms of service.
The only element of free speech ironically was that Parler was found to censor left-leaning and moderate messages in its forum.
Your comment could be so much better without the highway analogy. Now there are people expanding it, indulging in thoughts like "wouldn't the digital equivalent of Toyota be XYZ...".
I think comparisons suck, because people obviously come up with comparisons with the things that prove their point of view, while ignoring all other comparisons.
If the highway system had been created for the purpose of, or was mostly used for, trafficking drugs, humans, blackmarket weapons, etc., and the people running/building/profiting from the highways supported trafficking drugs, humans, blackmarket weapons, etc., then I would not support the highway infrastructure.
In Europe, we long ago accepted that uncensored hate speech, which in practice is almost always racial hatred, will lead to the eventual overthrow of democracies and therefore deserves to be censored.
Is preserving democracy worth putting limits on free speech? I'd argue absolutely. People are animals.
We have to consider the old adage about screaming "Fire" in a crowded theater.
Unfettered free speech would dictate we can all do that anytime we want because free speech has no limits.
Brandenburg v. Ohio (1969) limited the scope of banned speech to that which is directed to and likely to incite imminent lawless action. That is where the line in free speech is drawn.
The notion that a purveyor of technologies used to distribute incitements to imminent lawless action has no legal obligation regarding the consequences doesn't hold up. It is akin to saying a theater owner has no obligation to keep out a person who has repeatedly screamed "Fire!" and caused a stampede that injured people. And it has no ground at all to stand on if the theater owner actively pursues such people and promotes that they can do that in their theater.
In this case, Parler has essentially pursued and invited those who love to scream "Fire" and actively encouraged them to use its service to do that. And in fact they used it to organize a mob and help plan an insurrection.
And it did not matter to them that lies were being spread to fan the flames of hate, or who or how many might lose their lives as a result.
Parler is a prime example of the lowest form of capitalism. Little different than crack dealers. We cannot let them hide behind the noble goal of "Free Speech".
If there was a specific highway that was almost exclusively used by drug / human traffickers, and there was a mountain of evidence to prove that, it seems like we would really look into that road and add extra security or shut it down until we could get a plan together.
Not what I was referring to at all, but I see where you’re going.
I was just trying to make a point that this was removing a toxin from the app eco-system. Not a harmless player who did mostly good with a few “bad apples”
> I feel like we've all been a bit brainwashed by the government in our notion that "free speech" must have limits. I very much doubt that that is true.
Then why are you posting that on a heavily-moderated discussion forum and not 8kun?
Well, funny you mention 8kun specifically; it is down at the time of your comment. I find HN to be a good balance of moderation and open discussion. Like all platforms it has its biases, but people are relatively civil and open to discussion, which is commendable in the current climate of online discussion.
The analogy of the highway system is apt. I never thought of it like that.
That being said the Supreme Court has weighed in on what kinds of speech are protected under “free speech” and which aren’t. Overt calls for violence and such are not protected.
A highway doesn’t scale in the same way a social media site does. I disagree that this is brainwashing. We just don’t know how to cope with the internet’s scale and the answer is not clear..
Once you’ve lived in a totalitarian state, you instantly agree with the comment “speech should always be 100% free”. It is just too important a value to lose.
As someone who was born and lived in the USSR, it pains me so much to see so many Americans willing to give up or restrict free speech...
Let us be real. People who were using Parler were using it to plan violence. If not, nothing was stopping them from using FB or Twitter or some other social network. There is no special love for Parler's rights, except that it allowed illegal activities not covered by free speech. People who are fighting this are using free speech as a blanket to do what they wish.
Speech should have limits, but calling anything that took place on Parler automatically 'hate speech' so contemptible that it ought to result in banning, along with everything else on that site, is ridiculous. There have been few instances of truly censorship-worthy speech over the past year, from either left or right.
28% of Americans believe that Bill Gates wants to use vaccines to implant microchips in people - with the figure rising to 44% among Republicans. [1]
If a significant chunk of the population hesitates to get vaccines then it has consequences for all of us, regardless of our beliefs. Lies and misinformation spread through social media should be kept in check by patrolling, just like our highways are patrolled.
> Curious if you feel torn supporting the US highway infrastructure? It can clearly be used for to traffic drugs, humans, blackmarket weapons, etc. It can be used to flee justice, evade police, abets vehicular manslaughter, etc. The list goes on and on. Is it even controversial to support the highway system as is? Do we loose sleep over it?
The people responsible for those highways spend a lot of money preventing them being used for drug trafficking, blackmarket weapons etc. Parler actively refused to moderate right wing hate speech.
> Obviously all of the insidious planning and hatred that presumably occurred on Parler is abhorrent. I think I can hate all of those things without believing that the site should be censored.
I have to ask, where then do you want the line to be drawn? How detailed does the plan have to be before it's nipped in the bud?
> I feel like we've all been a bit brainwashed by the government in our notion
While I agree with the general sentiment of your comment, I wouldn’t say that the brainwashing has been on the part of the government. All of the efforts at limiting speech, at least recently, have come from private, left-leaning people and organizations. The general consensus seems to be that all speech that doesn’t endorse leftist political views is hate speech and should therefore be banned.
I went on Parler, which I had never heard of before last week, just to see what the big deal was. I saw nothing endorsing violence, planning attacks, conspiracy theories, or hate speech. I saw a modern looking Twitter clone with similar, mundane conversations. I suppose I didn’t see much leftist banter, but that was the only real difference between it and Twitter.
The idea that elite liberals with monopoly power colluded to strangle a site like this, simply because a percentage of its users likely voted a different way than they did, should be offensive to all Americans, regardless of political affiliation. It makes me fear for the future of democracy as a whole. Democracy cannot exist without the ability to debate.
AWS has terms of service that are legally allowed to be broad and ambiguous. Violation of these terms of service is grounds for removal from their platform. AWS has sole authority over the adjudication of such violations and they are under no obligation to inform the client as to the reason behind their decision.
If we want to limit the powers of these platforms, then Congress needs to pass laws limiting the scope of ToS and/or create a regulatory agency charged with adjudicating claims.
There's no analogs or moral arguments necessary. Just the legal ones. And Congress has failed to take any action to invoke legal authority over these platforms and their ToS. Thus, the government has minimal control here.
The argument is a poor one because it draws parallels between things that are not parallel. There is no indication that (A) it serves the public interest for the government to forcibly alter business decisions, nor that (B) there is an existing legal basis upon which to do so.
> Curious if you feel torn supporting the US highway infrastructure? It can clearly be used for to traffic drugs, humans, blackmarket weapons, etc. It can be used to flee justice, evade police, abets vehicular manslaughter, etc. The list goes on and on. Is it even controversial to support the highway system as is? Do we loose sleep over it?
I've seen this kind of argument all too frequently in the last week. While I don't think the People Who Decide what is taken down to be infallible, I do think that we're all capable of making reasonable decisions here. Taking down a hate speech site obviously doesn't mean we need to delete the internet, or iPhones, or whatever else Parler users have in common with every other person who uses the internet.
> I feel like we've all been a bit brainwashed by the government in our notion that "free speech" must have limits. I very much doubt that that is true. I think the speech part should always be 100% free. Of course any crimes that derive from it are and will always been fully enforceable. I just question whether or not the speech itself should be viewed as illegal, or something that should be regulated.
I strongly disagree. Apps and websites being taken down is not something new at all, and what happened to Parler is only notable because it impacted more people.
There are many sites that are IMO righteously taken down. Our conversation should not be "should big tech control what we see online", but should be "where do we draw the line?".