Remember that these individuals are usually hired and given responsibility based on their ability to win class-signalling games; they exist to justify the status quo.
If the status quo were more justifiable, their actual job responsibilities to other people within the organization (conflict resolution, scheduling, resource allocation, and personnel management) could be automated in a hot minute.
I don’t know about full automation, but I can confirm that in large Fortune 500 companies, Gartner holds a lot of weight. Most follow their recommendations. I’m guilty too of using that argument: “According to Gartner...” to give weight to my position when talking to C-level folks. I wish I didn’t have to, but lots of folks trust their recommendations.
I wonder how long until we see the first lawsuits for stock price manipulation using this feedback loop?
Imagine an unscrupulous executive making factually true statements engineered to mislead automated assessment algorithms so that they or their associates could buy the dip.
Imagine if an automobile manufacturer allowed you to configure the safety features of your car and had the defaults set to unsafe but convenient values to help sell vehicles... do you think the manufacturers should evade liability?
They absolutely do. In northern climates where there is snow and ice on the roads for 3+ months out of the year, a car MFG will GLADLY sell you a vehicle with sport/summer tires. If you try driving with those in the winter: at best you'll get stuck, at worst you'll slide through the first intersection you come to and die in a fiery crash.
Just about every business will have options that are a perfectly reasonable choice in some circumstances and really, really stupid in others.
I live in New York, where the speed limits on highways vary from 50 mph in NYC to 65 in the rural areas. My car is governed at 110 mph. Why? There is no reasonable scenario in which it would be smart to discover that.
Microsoft has billions of users. The security needs of the US Department of Justice are not the same as my mom’s real estate office.
When you use 3rd party IdP, for example, how does Azure MFA know what the app is?
The configuration described was not out of the box. Somebody decided to make it the way it was.
Most likely due to some physical (not legal) issue that would make it mechanically unsafe to operate the vehicle above that speed even in an otherwise appropriate location. (Or perhaps it's due to some obscure state law, or the manufacturer is just out to spoil your fun, or ... who knows?)
More generally, I agree with the point you make here about the responsibility to configure things correctly. However, it seems to me that Microsoft is also on the hook for failing to include the necessary context when an MFA request is sent. It's a bit like selling a car with seat belts that superficially appear to work but fail at the slightest provocation, no?
This is all the result of confusing 1st amendment rights with the right to access the audience that gathers at a particular URL.
What Taibbi is asking for is that the guy who tells you that drinking rat poison is good for you should be allowed an audience and that even putting a warning alongside the video would be an infringement on his rights.
Looking at the comments here, I have to conclude that HN is no longer on board with critical thinking, much less common sense.
People are making normative arguments, not legal arguments. We are aware that this is legal, just like it used to be legal to discriminate in hiring practices against black people.
Taibbi is suggesting that if you can't trust the people not to post errors and lies, how can you trust the oligarchs and officials? Especially if your access to alternative perspectives is limited. Read for yourself:
>> Cutting down the public’s ability to flip out removes one of the only real checks on the most dangerous kind of fake news, the official lie. Imagine if these mechanisms had been in place in the past. Would we disallow published claims that the Missile Gap was a fake? That the Gulf of Tonkin incident was staged? How about Watergate, a wild theory about cheating in a presidential election that was universally disbelieved by “reputable” news agencies, until it wasn’t? It’s not hard to imagine a future where authorities would ask tech platforms to quell “conspiracy theories” about everything from poisoned water systems to war crimes.
Those with platforms have always had the opportunity to lie to large groups, but extending that ability to every single person seems like an EXTREMELY BAD "solution."
Historically there's been a burden of proof for wild claims because it's been hard to get a huge mass audience. And people with those audiences were reluctant to repeat whatever wild bullshit was proposed to them if they couldn't vet it themselves.
If you didn't have your own credibility, you had to convince those who did to run your stuff. The cost of this is that it's slower to break things, and some stuff gets missed.
Unmoderated internet platforms with algorithmic jumps between otherwise-unconnected publishers let you borrow and hijack other people's credibility and platforms.
Why those platforms shouldn't be allowed to have editorial control - given that maintaining a certain reputation will still be critical for their long-term success - is beyond me and seems to have obvious un-American problems (infringement on their own private rights).
The trade-off being desired also seems fundamentally bad. More people being misled more quickly seems like a worse situation than slower breaking of news and the ability to suppress some stories, given that we were still able to break those stories you mention in the past. (Of course, I don't know what else might have been more widely reported in the past... I'm having to rely on a "we didn't feel like we were living in a totalitarian dystopia in the 60s-through-80s" assumption.)
The key word is "allowed". YouTube should be allowed to do everything they have the right to do. They have the right to stop providing all free services (unless they have contractual obligations). They have the right to ban all creators whose names start with "K". They have the right to add a 10-second delay to all page loads. They have the right to put Goatse on their homepage.
However, if they do any of the above things, the rest of us have the right to be disappointed, to think YouTube sucks, and to tell everyone else about it.
So, if they demonstrate that they have no respect for the principle of freedom of speech, we have the right to call them cowardly, un-American, probably unfair in their implementation, counterproductive even assuming their goals, etc.
> However, if they do any of the above things, the rest of us have the right to be disappointed, to think YouTube sucks, and to tell everyone else about it.
> So, if they demonstrate that they have no respect for the principle of freedom of speech, we have the right to call them cowardly, un-American, probably unfair in their implementation, counterproductive even assuming their goals, etc.
They aren't just saying it sucks; people and politicians are calling for a repeal of Section 230 of the CDA in a knee-jerk reaction.
They want to fundamentally shift the liability for user-created content online, effectively ensuring that hosting any speech becomes a massive liability for those without billions of dollars to comb through user uploads for illegal content.
As a business owner, I don't want to be raided by the FBI in the middle of the night and then go to prison because someone thought it would be funny to upload illegal content to my servers.
I am not a fan of repealing Section 230. I think it's actually a pretty inspired piece of law for its time.
But its original purpose was to remove civil liability for platforms making an imperfect but good-faith attempt to remove illegal content.
The farther we move away from the original motivating case, the less clear it is to me that Internet companies need or deserve the protection afforded to them under the auspices of section 230.
Well, Taibbi hasn't mentioned Section 230; I see only two other comments mentioning it. Also, I skimmed an article that says most people don't understand Section 230 (or the context around it—it provides immunity for certain things, and therefore you have to understand "immunity from what?"), so I would hesitate to say too much about it. It's entirely likely that there are some prominent partisans who claim to be in favor of free speech but don't have a principled stance on the subject (e.g. think flag-burning should be illegal), or who are as ignorant as I am on section 230 and less averse to recklessly advocating for political measures they don't understand.
At any rate, as I doubt you'll be surprised to hear, I am also not in favor of business owners getting raided by the FBI because users uploaded illegal content. That sounds like a mechanism for crushing small websites who can't afford their own legal department, thereby protecting large websites against competition.
You reference a totalitarian dystopia, and yet you are salivating for widespread censorship to be applied. The great thing about the internet is the freedom of communication, which broke the monopoly of mainstream media. If people like you have your way, the internet will be as censored as cable TV used to be, in your blessed utopia of the 60s to 80s. Were you alive back then? Have you heard of the Vietnam war? It's not an exaggeration to say your ignorance and stupidity are staggering.
> Those with platforms have always had the opportunity to lie to large groups, but extending that ability to every single person seems like an EXTREMELY BAD "solution."
Why is it bad? Before, only a few people could lie to everyone and keep the majority in the dark, because the majority lacked access to information that would expose the lies they were told. Now everyone's voice is amplified, and the people who used to have this power are upset because people believe things they don't want them to believe.
> Historically there's been a burden of proof for wild claims because it's been hard to get a huge mass audience.
This seems like a non sequitur. Historically it's been hard for most people to spread wild claims because they didn't have a platform. What burden of proof are you referring to?
> And people with those audiences were reluctant to repeat whatever wild bullshit was proposed to them if they couldn't vet it themselves.
Isn't it more likely that they were reluctant to repeat stories unless it benefitted them? Yellow journalism predates the internet by almost 100 years.
> Unmoderated internet platforms with algorithmic jumps between otherwise-unconnected publishers let you borrow and hijack other people's credibility and platforms.
Perhaps. I'm not sure that I could hijack the credibility of (for example) Dr. Fauci by retweeting him. It's more likely that he could voluntarily lend me his credibility by retweeting me.
> Why those platforms shouldn't be allowed to have editorial control - given that maintaining a certain reputation will still be critical for their long-term success - is beyond me and seems to have obvious un-American problems (infringement on their own private rights).
The argument is that they have become large and commonly used enough that they are akin to a public utility. This is an open question and I certainly don't have the answer. Think of it as if the interstate highway system was owned by Procter & Gamble and they began to limit access to the interstate for carriers who delivered their competitors' products, or refused to allow left-handed redheads to access the interstate. A lot of people would say that in that case it would be an appropriate use of the government's regulatory power to nationalize or break up the "P&G Interstate" for the public good. Other people would say that it was within their rights as property owners to decide who they sold roadway access to. You'd have a situation where people's interpretations of fundamental rights conflicted because of technological advancement.
> The trade-off being desired also seems fundamentally bad. More people being misled more quickly seems like a worse situation than slower breaking of news and the ability to suppress some stories, given that we were still able to break those stories you mention in the past. (Of course, I don't know what else might have been more widely reported in the past... I'm having to rely on a "we didn't feel like we were living in a totalitarian dystopia in the 60s-through-80s" assumption.)
Consider that Manufacturing Consent was published in 1988.
> The argument is that they have become large and commonly used enough that they are akin to a public utility.
While this is an interesting conversation, you don't even have to go this far. You can just argue that rebutting bad ideas is more effective than censoring them and a good video hosting platform should value open discourse, and so YouTube should try to be as content-neutral as possible. If you convince enough YouTube users that open discourse is more important than censoring perceived falsehoods, then it might make more sense for YouTube to commit itself to open discourse.
Should you be forced to publish and host things that you think are terrible lies? If so, why? If not, why should YouTube?
Not making a comment on the validity of their claims, just trying to understand what you are saying - a private organization should be legally compelled to spend money to host ideas that they think are harmful? How does that work in practical terms?
> Should you be forced to publish and host things that you think are terrible lies?
Should I be forced to pay for public schools if I don't have kids or disagree with what they are being taught? Should I be forced to pay for roads if they will be used to support activities I disapprove of? What if I passionately and sincerely disagree with what people are using the roadway access to facilitate? Should I be forced to subsidize activities that I reasonably believe are harmful to the environment? What if I can produce peer-reviewed academic evidence supporting my point of view? Should I be forced to hire a qualified individual at my company if I have an opening if I don't like their religious beliefs? What if their religious beliefs involve arranged marriage or female genital mutilation?
Hopefully the list of rhetorical questions serves the purpose of highlighting the fact that our society and our system of government already compels people to support things they oppose, including terrible lies, harm to people and the environment, and various forms of abuse. Yet we haven't chosen to abandon this form of government for anarchy. For this reason I am unmotivated by absolutist private property arguments when applied to this issue. Thank you for your excellent question as it draws attention to a central part of the issue.
> Not making a comment on the validity of their claims, just trying to understand what you are saying - a private organization should be legally compelled to spend money to host ideas that they think are harmful? How does that work in practical terms?
I don't have the answers here, I'm participating in the conversation with the aim of moving it forward. How does it work in practical terms to require that companies hire and serve people even when the owner doesn't like their religion, ethnicity, or sexual orientation? In theory we write laws on the basis of balancing the values we hold dear. In practice lobbyists donate money to lawmakers for influence and lawmakers compromise with each other on things that they think are going to be practical to enforce and get them re-elected or some other form of benefit. Perhaps not in that order.
Government schools are pretty bad despite the ever increasing funding afforded to them, and the teachers unions wield enormous power. If we are going to fund anything, it should be the students directly (via vouchers or whatever) not the systems.
I can hardly imagine that a government can really be held to account by its citizens if it also takes on the role of educating their children.
Thank you. This is exactly the basis upon which we justify the proposal to force YouTube to host content that their shareholders would prefer not to host. Or the alternative formulation, this is the basis on which we justify prohibiting YouTube from censoring political speech that their shareholders disagree with.
I think it's entirely reasonable for YouTube to be compensated if the polity concludes that they maintain a useful public service by providing a place for people to share videos, and they adhere to reasonable, objective guidelines on what content to permit and remove.
> A misinformed populace acts to destroy democracy.
Who is to be trusted with the authority to decide what counts as misinformation? Is it possible that a gatekeeper of information would find it easier to misinform the populace than a prominent person who had to contend with other dissenting voices?
It feels to me like both sides in this debate want to argue "of course it's patently obvious that I'm right". But services like YouTube and Facebook have no real historical precedents in terms of audience reach, and it's hard to think of a more rock-and-a-hard-place situation than choosing between "force a private company to treat their property like a public square no matter the cost to society" and "allow a private company to dictate what's allowed in a de facto public square."
So, yes, of course it's possible a gatekeeper would find it easier to misinform the populace. Over the long term, it's almost guaranteed. Yet it's also possible -- in fact, one can argue the probability is essentially 1.0 -- that refusing to have any gates will also misinform the populace. If we're looking for a blanket rule that will cover all possible situations in this new information reality, we're probably looking in vain.
The burden of proof lies with those who want to deny me rights, not those of us who want to maintain those rights. FB and YouTube have a right to remove content they don't want to host, just like me. If you want to take away my rights, you'd better come up with something really, really good.
That's not something legally recognized though. You can't say you base this on something when there is nothing equivalent. An education is one thing. It's something that's been recognized in court.
Now, if you want to claim that public sites like YouTube should be forced to host content they don't want to from people they don't want to do business with, you'll have to explain how that equates with your established rights in the US?
More to the point: You'll have to explain why you want to take away my rights?
We're having a discussion about what the law should be, not what the law is.
> You can't say you base this on something when there is nothing equivalent.
The above discussion demonstrates that our society already limits property rights in the service of an informed populace, which is relevant to the issue at hand.
> An education is one thing. It's something that's been recognized in court.
Courts have recognized ballot fraud in the past as well; it's not some outlandish theory. It's why people are supposed to maintain chain of custody of the ballots.
> you'll have to explain how that equates with your established rights in the US?
Corporations are chartered by the government in exchange for certain privileges and limitations. We already infringe on property rights because an informed population is a public good. QED there is a precedent for requiring a public corporation to host certain kinds of content. For example, foods are required to meet labeling requirements.
> More to the point: You'll have to explain why you want to take away my rights?
The right to remove content is potentially dangerous to democracy. See recent events, where a bunch of Trump supporters are ever more convinced that an election was stolen because people keep claiming there is no evidence, and YouTube has a policy where they state they won't allow anyone to post the evidence that the Biden camp says doesn't exist.
> my rights?
As a GOOG shareholder, you exchanged certain rights for other privileges when the government chartered your corporation and allowed it to be traded on the public market while limiting your liability for criminal acts that might be performed by Google employees. In a sense, you take the benefit of the profit while being legally protected from liability for the actions that might generate that profit. That arrangement comes at the cost of increased regulation by the government.
No, that would be the basis for the US to stand up its own video publishing platform. Forcing YouTube to host content it doesn't wish to is "un-American".
Is forcing Jeff Bezos to hire qualified black people and women un-American?
> that would be the basis for the US to stand up its own video publishing platform.
Why not just nationalize YouTube? As a corporation, Google already enjoys special privileges enshrined in law. Let's not pretend to be private-property absolutists when the government already limits the liability of shareholders for the actions that bring them profit.
Black people and women are humans with equal rights. I believe this is basic humanist morality, but also they are a protected class before the law that can't be discriminated against in many cases. A false video is just a false video. Being a liar is not a protected class.
Why not nationalize YouTube is irrelevant to the conversation, but the end result would be the same. The government would then be providing a self publishing platform.
A vast majority of the people who are too stupid to realize these questionable YouTube videos are hoaxes (and therefore need to be protected from them) went to public school.
A vast majority of the people who immediately recognize these YouTube videos are fake also went to public school. Something like ninety percent of America goes to public school, you can cite them as a vast majority for almost any American activity and probably be right.
This is a turning point for YouTube. YouTube was a medium of expression. In that sense your question is like "Why should the paper company allow ideas to be written on their paper that they disagree with?"
Of course YouTube does also take an editorial role. As both a platform and a publisher, we should allow them to stop giving recommendations for things that don't align with them politically.
By taking the more extreme step of taking down videos, they are balking on their self expressed purpose of being a platform for digital expression.
There are several people in my life who I completely disagree with politically. At the same time, I'm grateful that I can hear their perspective. YouTube is no longer supporting this dynamic.
I'm already seeing folks signing up for social networks that have less censorship, I think alternative video hosting platforms may increase too.
This doesn't even get into the really early viral videos like the charcoal briquettes lit with liquid oxygen, or the massive viewership of the comet Shoemaker-Levy 9 impact way back in the day.
Probably because it's dangerous to democracy to silence people who question the integrity of elections. It sends the impression that there is something to hide. And people who suspect that the election was stolen will take this as confirmation of that suspicion. Since they won't be allowed to use that platform to raise their suspicions, no one will be able to respond to them to allay those suspicions. Then, as time goes on, confirmation bias will lead them to become entrenched in this belief. Eventually this belief (that the elections were stolen) will lead to a perceived loss of legitimacy for the government, and consequently public servants will find it more difficult and dangerous to do their jobs.
>Probably because it's dangerous to democracy to silence people
YouTube is not silencing anyone. These people are free to espouse these views any way they please, just not on YouTube's private property. The New York Times is not a danger to democracy because they refuse to publish my article on why the earth is flat.
> YouTube is not silencing anyone. These people are free to espouse these views any way they please, just not on YouTube's private property.
It's dangerous to democracy for oligarchs to have this much control over the discourse.
Well, I could simply point to the entirety of Taibbi's article. Or even just the headline and sub-headline: "[It's] Un-American, Wrong, and Will Backfire. Silicon Valley couldn't have designed a better way to further radicalize Trump voters." Many of the points I might make, Taibbi did so in his article.
For the sake of novelty, I'll make a different point: I see one way in which YouTube may have promoted democracy: by making their odiousness more clear (and in a public, "everyone knows that everyone knows" way), they may have encouraged quicker production or adoption of alternative platforms. This seems unlikely, because websites like them have done lots of crappy stuff before, without usually causing much effect; but it is possible that this may be seen as enough of a "They've declared war on the entire right wing" to motivate a significant migration. Two partisan platforms is better than one, for democracy and just for competition. (Better yet would be either a platform that has made some kind of enforceable and very-painful-to-break commitment to neutrality, or some kind of decentralized system that no single company or party can decide to censor. We may get there eventually.)
"Corporationism is above socialism and above liberalism. A new synthesis is created. It is a symptomatic fact that the decadence of capitalism coincides with the decadence of socialism. ... Corporative solutions can be applied anywhere"
> This is all the result of confusing 1st amendment rights with the right to access the audience that gathers at a particular URL.
No one is confused. The article mentions neither the First Amendment nor the Constitution. It only calls the matter "un-American." This phrase is clearly a value statement, not some neutral and dispassionate assessment.
The line you draw to separate the government from the private sector is quite useful in other contexts, but does not affect the matter before us today. After all, in a democracy (which the Constitution aspires to be) the ideal of free speech itself must proceed from shared cultural values; if those values change enough, we might also expect the First Amendment to be repealed anyway.
Of course YouTube is not required to represent these purportedly-American values; just as surely, it may be criticized as "un-American" by those promulgating such values. A much more interesting argument would assess the extent to which the values in question are, in fact, American, whether YouTube's choices are more representative (surely there's a case for this) or simply more desirable (I note many here agree with them) -- and whether American (or "American") values can coexist with these YouTube values.
The words "free speech" and "1st amendment" are never used in the article. "Free press" is used only in passing. As far as I can tell, your mention of the 1st amendment is a complete non sequitur, and you are the only one doing the confusing.
The question is whether what YouTube is doing is good for society, and Matt Taibbi says no. He's criticizing YouTube for doing something he thinks is bad, since public criticism is sometimes effective at altering corporate actions. It's pretty annoying that so many people try to sidestep this topic by saying that the first amendment does not prevent private censorship. Pretty much everyone agrees with that.
And denigrating people you disagree with is unnecessarily inflammatory.
Is it legal in the US to encourage people to jump in front of a train?
If the answer is no, then telling people to drink rat poison is illegal. The legal system has the power to punish the uploader, and YouTube can remove it just as it does with copyright infringements.
If the answer is yes, then I would suggest changing the law. Pushing people to drink rat poison sounds awfully close to harassment, which is just the kind of speech the 1st amendment does not protect.
Should we censor them? Or perhaps just put up a warning sign that not everything on the Internet - like the physical world - is worth believing? Where do the guardrails end and censorship begin?
If you have a law and a politician seems to be breaking it, then the usual answer is to have a prosecutor look into it.
Politicians also hold what is called a position of trust, so in addition to legal enforcement there could also be a vote at that political level to remove said politician.
In addition, I would imagine that encouraging people to drink water poisoned with lead can put that politician in legal liability if anyone got sick.
I will add that the US has an odd legal system which potentially gives the president permission to ignore laws. The best way to fix that is to remove that exclusive protection, and if it's a constitutional problem, fix the constitution.
“A Massachusetts woman [Michelle Carter] who sent her suicidal boyfriend a barrage of text messages urging him to kill himself was jailed Monday on an involuntary manslaughter conviction nearly five years after he died in a truck filled with toxic gas.”
She appealed to the Massachusetts Supreme Court, and the conviction was upheld (unanimously).
> What Taibbi is asking for is that the guy who tells you that drinking rat poison is good for you should be allowed an audience
He should; it's the basis of democracy. Sure, rat poison is a bad example, but things are much less clear-cut if we speak about other topics, including covid19. So putting up warnings about these problems (potentially more than just a subtle note at the side) is reasonable; removing them is much less so (as long as they don't, e.g., tell people to go out on the street and kill all politicians or similar BS).
I mean, you start with removing 100% clear BS misinformation.
Then you remove misinformation which contains some truth (it's still misinformation).
And before you notice, unpleasant opinions are removed too (like, e.g., the claim that the CIA has had similarities to a terrorist organization in the past: factually right but misleading).
EDIT: Oh, and if you tell people to drink rat poison and they do, you should be held responsible for tricking people into killing themselves; still, that's not the same as censoring the content.
------------
An example from Germany is a very mixed movement called Querdenker. It clearly contains right-wing radicals, Nazis, covid deniers, QAnon believers, etc. BUT a not-small part of the core movement are none of these: people who believe covid exists and believe it's bad, but also believe that the decisions made by the German government do more harm than good. Yet they get frequently denounced and lumped in with all the problematic groups above, which has all kinds of problematic consequences (including making these people more susceptible to manipulation from those problematic groups). Now, removing (some of) these people's information sources, instead of adding information enlightening them about what is wrong with that information, only makes things worse. Furthermore, you can't censor them, as that will just push them to other information platforms, and if you continue to censor again and again you will end up with a censorship system no less powerful than China's... which I believe no one would want in a democracy. So I believe we will have to learn how to handle such information _without_ censoring information we believe is wrong/misleading.
Do you support the election that happened on November 3rd 2020? Then you must support the results of the election, whether you like the election results, or not.
THAT is the basis of Democracy: the peaceful transfer of power based on the votes. If you do not respect the votes, then Democracy falls apart. You literally cannot have a Democracy without votes (while we had a Democracy through WW2 despite the "Office of Censorship").
We have a very, very large group of people who are now using free speech to destroy our trust in the election. We are now left with a decision: Free Speech vs Election.
My gut says that elections are more important than free speech. Historically, we have had times without any free speech whatsoever (WWII / Office of Censorship). It's a luxury we can do without in times of crisis.
We cannot afford to lose faith in the election process. Period.
And yet the solution to a negative can still itself be the cause of equal or greater negatives. Hence why I was asking if you believe it is a NET gain. Because the Government cracking down on speech, even obvious lies, will erode some people's trust in the system as well.
> And yet the solution to a negative can still itself be the cause of equal or greater negatives.
Cracking down on anti-election rhetoric is an obvious net positive, and needs no further explanation.
> Because the Government cracking down on speech, even obvious lies, will erode some people's trust in the system as well.
Too late for that. Tens of millions of Americans have lost faith in the 2020 election. The only concern from my perspective is to stop the obvious bleeding: we must stop the false anti-election rhetoric before it poisons the minds of even more.
The lies are winning right now. Be it masks, election fraud, mail-in ballots, or whatever. My sister's father-in-law believes that COVID is a hoax, I have coworkers who don't think masks help, and my mom thinks Obama is a Muslim born in Kenya. I've seen enough lies, and I've lost faith that these people can have their vision cleared with the truth. My sister thinks the vaccine may hurt more than it may help and is avoiding it.
It's clear that misinformation is running amok and nothing is stopping it. The naive "debate with them" approach goes literally nowhere; have you ever tried?
>Cracking down on anti-election rhetoric is an obvious net positive, and needs no further explanation.
Yes, it does. Cracking down on speech would further enrage all those people, while also pissing off a great deal more. I think riots are a more likely outcome of your solution than anything positive.
And you are just happy to assert "riots" and leave the discussion off at that?
I mean, I can say "bogeyman" too and leave the discussion, pretending to have made a point. It will encourage socialism, or it will destroy freedom, or it will encourage censorship! (Etc., etc.) Come on, just invoking a bogeyman doesn't help anyone's point of view.
No offense to you personally, but a one sentence assertion is not an argument I'll take seriously. Your contribution to this discussion so far has been less than mine. How about you elaborate your points a bit more?
------
But whatever, I'll mirror you, so you see how stupid this gets if you just make one-sentence assertions of bogeymen.
Your point of view will destroy democracy.
Ball is in your court now. Figure out a way to elevate the discussion. I'm not doing the heavy lifting unless I see some work from you as well.
"Oh, and if you tell people to drink rat poison and they do, you should be held responsible for tricking people into killing themselves,"
So in your mind Google should assist people in tricking others into drinking rat poison by boosting the signal of their videos by hosting them on one of the world's most popular web sites, essentially making Google employees an accessory to murder, because the "basis of democracy" is assisting bad actors in tricking people into doing things like drinking rat poison.
That's certainly an interesting take.
In my view, if you host someone's web site where they trick people into drinking rat poison, that makes you morally culpable. If you have human decency, you try to avoid doing things like that.
>And before you notice, unpleasant opinions are removed too (like e.g. the CIA having had similarities to a terrorist organization in the past; factually right but misleading).
This is a slippery slope, and I'm not sure that it's justified theoretically. It's not as though this is precedent-setting - all platforms have excluded swathes of content for a very long time.
> What Taibbi is asking for is that the guy who tells you that drinking rat poison is good for you
No. That is not what Taibbi is saying. He articulated his point extremely well, he doesn't need your help with convoluted interpretations. You twisted his words sideways and upside down, then built irrelevant conclusions on top of your own nonsense, and diverted the discussion from the PROBLEM the original article was written about to your own misguided statement which bears no resemblance to Matt's argument.
> HN is no longer on board critical thinking
Indeed, otherwise your comment wouldn't be on top.
I must say though that there is a difference between putting a disclaimer next to a video to provide facts/context vs outright banning it from being posted.
Yes, it's not directly a First Amendment issue, but it's still a topic worth discussing, since sites like YouTube, Twitter, and Facebook constitute a lot of where discourse takes place these days.
I'm guessing there's a lot more deliberate troll farming hitting HN, and threads like this (and how far up this incoherent rant of a post went on the list) lend credence to that.
In case you didn't know, warfarin, a blood-thinning medication, is also used as a rat poison.
A guy telling people they should drink genuinely harmful poison is of course bad, but by analogy I would say many of us are concerned that YouTube's current trajectory would result in banning important information about the health benefits of Warfarin in their crude attempt to stop the people from taking the bad sorts of rat poison.
Not a single word in the article hints towards the 1st amendment. I suggest reading the article before acting all high-and-mighty about your "common sense."
People constantly conflate "freedom of speech" with the protection granted by the first amendment.
That's right up to and including Randall Munroe, who said "The right to free speech means the government can't arrest you for what you say"
That's not freedom of speech, that's the first amendment protections of freedom of speech... and it doesn't even adequately describe the protections. There are other things the government isn't allowed to do besides arrest you. They can't force you to say something, either. They can't prohibit certain opinions from being expressed in government owned or regulated channels.
And freedom of speech can be infringed on or limited by other entities than the government.
A better description would be "principle that supports the freedom of an individual or a community to articulate their opinions and ideas without fear of retaliation, censorship, or sanction." This means that if you're forced into silence by threat of retaliation from, say, a religious sect or political group in your area, it's still infringing on your freedom of speech even if they aren't part of the government.
Where is the line though? If I disagree with what you say am I allowed to call you an idiot? Am I allowed to no longer associate with you? Those could both be considered retaliation or sanction.
And am I required to publish your words even though I disagree with them? That would seem to infringe my freedom of speech.
You HAVE to make those determinations on a case-by-case basis.
Like all freedoms, the freedom of speech is slippery, difficult to define, and has fuzzy boundaries. It can never be absolute, because that would immediately raise contradictions. Where does your freedom end and mine begin?
In this particular case, how big does a company need to be before we start forcing it to host content that it doesn't want to host? When is it ok to force someone to provide a universal platform... to tie their hands so they can't moderate as they see fit?
Do I have a right to stand on your stage and shout whatever I want at your audience? Do you have a right to buy every stage and then exclude and censor people like me?
I'm struggling to understand the "1st amendment doesn't apply to private companies" argument. Are you saying that Google, of its own volition, independently of the Democratic Party, decided it was its ethical or patriotic duty to take this action? This is the company that built the platform that amplifies crazy ideas (many crazier than this) to hundreds of millions of people for profit. It's also well known that there's a revolving door between Google and Democratic administrations and campaigns. Or are you acknowledging that Google and the DNC have a symbiotic relationship and that this is a valid loophole the government can use to censor people?
You bring up the Democratic Party as if everyone who disagrees with you must be part of the exact opposition, and you come across as a conspiracy theorist because of it. It really shouldn't be the world's most surprising thing that, between two possible conclusions, people could land on opposite sides without being brainwashed. It's almost like the binomial distribution exists for situations like this.
You can't just say "It's well known" and then throw out some new conspiracy theory that I've never seen before.
You're not even to the "most people are familiar with your theory" part yet, much less the "I can act like everyone else thinks this is true, but won't admit it" part. Build up your brand a little first.
> Looking at the comments here I have to conclude that HN is no longer on board critical thinking much less common sense.
The book burners are running the show. And the complaints you see on hackernews are just people with no power or influence complaining about the censorship you support. Seems like a decision was already made by people much more powerful than us, and this is the path we are on. Free speech and freedom of the press are over. America is done. Nothing special about this place anymore.
I think this opinion is short sighted, because it assumes that we can trust Youtube (or other governing body) to rightfully and honestly decide which content is worth deplatforming. Isn't this idea central to the concept of freedom of speech?
I think the conversation is just a little more sophisticated than you imagine it to be.
And you can be generally in favor of free speech (!= the First Amendment) without accepting its more extreme interpretations. I mean, I'm guessing that you personally are too if you are like most people. For example, most of us would think that YT shouldn't censor either of two people saying (respectively) "Trump was elected in 2016 as a result of Russian disinformation" and "Trump was not elected in 2016 as a result of Russian disinformation."
The interesting question here is, "What speech should a privately-owned forum allow that its owners disagree with?"
Also this guy is not the same author some might have known a few years ago, he himself has been relegated to the regressive left Glenn Greenwald corner of the internet. Take his narrative with the proper grain of salt.
Why do you consider them part of the "regressive left"? Is it because you think they've behaved unethically, or are they hopelessly biased, or do you just disagree with them, or some other reason?
First, I looked up the term "regressive left" and you might want to know your use of the term doesn't match its general usage, which might make it confusing.
Secondly, I think you're doing yourself a disservice if you dismiss these voices and are critical of them but not of the mainstream pro-Democrat press. Matt Taibbi, for example, clearly dislikes Trump but also thinks the Democrats aren't responding to him well[1], which is an interesting perspective to understand if you are anti-Trump. If you are a liberal who fails to incorporate or even understand the criticisms of people who are close to being your allies, you may be less equipped to properly combat Trumpism.
Basically, it sounds to me like you are advocating ignoring these people because you disagree with them, which doesn't strike me as the best way to rigorously improve one's political perspective.
I remember the days on forums when people screaming "YOU'RE VIOLATING THE FIRST AMENDMENT!!!11" after the admin banned them were treated as the laughable jokes they were. Now there are supposedly serious political thinkers subscribing to this same idea: that the Constitution forces private parties to do business with you against their will. It's a serious lack of civic knowledge.
It's worth reading Barry Goldwater's opposition [0] to the Civil Rights Act of 1964, despite him claiming to be "unalterably opposed to discrimination or segregation on the basis of race, color, or creed, or on any other basis". His stance was exactly what you describe: that government did not have a right to force private parties to conduct business against their will.
There's a good-faith debate to be had about positive rights vs. negative rights; or the potential backlash from forcing individuals to do the right thing; or the usual right-libertarian arguments about the sanctity of property rights. But I'll bet dollars to donuts, that the vast majority of those cheering for tech media giants booting out those with verboten views, based on private property rights, would also be horrified at the idea of even questioning the CRA under the same logic.
You are aware of the difference between "what you are" and "what you say" are you not? CRA prohibits discrimination on the basis of "what you are". So it's not really the same thing at all.
I'm not claiming it's a fully apt comparison; rather, that if one supports a principle of "it is entirely out of scope for government to force private businesses to transact against their will", with no other qualifiers, that would necessarily preclude the CRA as well.
It's a different stance to say "the government is allowed to force transactions where one party is unwilling, but only where the unwillingness is related to identity rather than actions". (Though even that distinction can blur: a religious person banned for sharing "my faith teaches that life begins at conception" could hardly be blamed for interpreting the act as being based on their protected-class religious identity, rather than their speech as such.)
Looking at your comment, I have to conclude that you think you are smarter than 95% of the population and that they need to be treated like children.
You, of course, aren't dumb enough to drink rat poison because a video told you to, and therefore don't need this protection, but others are. No sir, the little people are too dumb, and need to be protected, guided, and fed a diet of information curated by people like you. The paternalism is as disgusting as it is arrogant.
Your kind of thinking is what created the War on Drugs. And you aren't even self-aware enough to realize it.
The first amendment right only bans the government from infringing on free speech. It doesn't say anything about private corporations. There are many practices by US companies concerning free speech that would be illegal in European countries.
Many European countries recognize freedom of speech as a basic right that everyone needs to respect; neither the government, companies, nor individuals can infringe on it.
However, these freedom of speech laws often exclude certain kinds of speech and governments have some power in defining these exclusions.
This distinction is what makes censorship such a big issue in the US. When censorship is dictated by an elected government, the people have some power over it. When censorship is dictated by a private company the size of Google, people have very little power over it.
The US constitution seems to be mainly about limiting the power of government, not about protecting people and society. As a result, companies have a lot more power and you get unaccountable decisions about what should be censored and what not.
Don't get me wrong, I think censorship is necessary, but you have to admit that there is a real danger when companies are in charge of it.
Well, of course billionaires don't directly exploit people. They just build systems and take advantage of large scale features of the social environment that exploit people.
It's completely different, morally.
I mean, you can't hold airbnb responsible for either the housing crunch or any part of the homelessness crisis. They just created a system that allows landlords to cash in on high-return short term rentals. The negative externalities created by the mass adoption of their platform cannot be landed on their moral accounting. It's totally obvious.
This pattern sometimes happens in other businesses: Banks with a culture of yelling at the compliance department. "Independent" Auditors getting pressured to sign off on the books of a company, etc. The pushback is not always entirely unjustified — I've been annoyed myself at how much easier a legal department finds it to say "no" than "yes" — but it's often a warning sign.
"The market can remain irrational longer than you can remain solvent." -- J.M. Keynes
Also applies to competitors of Venture funded vortexes like Uber/Lyft/Doordash.
Sure, your cloud kitchen with guest chefs and nifty social networking features might be a more sustainable model in the long run. But you'll never find out, because your effort will be murdered by Uber's marketing spend.
With this release Tesla has become a very attractive platform for assassins; if your target drives a Tesla it can be subverted and their death will look like an accident.
So the base salary may be misleadingly low. Especially since many police contracts have a 2x or even 3x payout for overtime under some conditions.
As with many statistics around policing there is a deliberate and calculated intent to muddy the waters and prevent effective policy discussion.