Florida governor signs bill to ban Big Tech 'deplatforming' (bbc.com)
64 points by dsnr on May 25, 2021 | 136 comments


> The legislation includes a clause that exempts a company "that owns and operates a theme park or entertainment complex".

Amazing.


Someone's angling for a presidential run in 2024, or seats in 2022.

Vague bills that are sure to be challenged in court are pretty standard partisan fare. They're essentially win-win.

If you pass it, and it's upheld, you get to trumpet your victory.

If you pass it, and it gets struck down by the courts, you get to rally your base with how the courts are thwarting your agenda.

Bonus points if it's on an ephemeral enough topic that it would be impossible for it to "fail" post-implementation.


As a FAANG employee, I'm excited about visiting our new theme park in Florida. Hopefully it'll be open in time for our next planning offsite.


Disney makes its own gravity in Florida. Ever wonder why Orlando, unlike most cities its size, has next to no strip clubs and doesn't even have a zoo? Because the Mouse doesn't want them in his backyard.


I lived in Orlando for 14 years, well within Disney's time-sphere of influence.

While I lived there, it had strip clubs. A fair number of them. One of the reasons they shut down was a massive roadwork project on South Orange Blossom Trail where a lot of them were. Many, many businesses shuttered due to the road work median limiting left turns, and the widening reducing available parking to next to nothing.


> Many, many businesses shuttered due to the road work median limiting left turns, and the widening reducing available parking to next to nothing.

Quite true. Yet, as you would expect, today's Orlando doesn't lack for restaurants, bars, gas stations, night clubs, etc. But one business category conspicuously failed to make a return.


That could also be, and there's not one shred of evidence that historically conservative FL had any less to do with it than Disney.


Nice strip clubs you’ve got there. It would be a shame if someone happened to start doing a bunch of road work that forced them to close. -°o°


Guess we'll be seeing a tiny two-bit "Facebookville" opening up sometime soon...


> Guess we'll be seeing a tiny two-bit "Facebookville" opening up sometime soon...

More likely an in-person VR space.


Can’t wait to play VR FarmVille! Maybe Zynga will make a comeback.


With blackjack and hookers!


There was a small amusement park near where I live that was open for many years, and finally closed for good around 2 years ago. I'd be okay with Facebook re-opening that one.


Nice Disney carve-out. I bet there was a specific phone call he received to insert that provision.

On a slightly-related note: I do feel that the rules for platform moderation should be different between platforms that are geared towards children vs adults.

I wonder if the future holds an internet that's heavily segregated between the two groups? Maybe everyone will auto-graduate from kidFacebook to adultFacebook when they turn 18?


> I bet there was a specific phone call he received to insert that provision.

I really don't think that would have been necessary.


Exactly. From politicians down to the lowest bureaucrats, few in government are so stupid as to need to be told who wants what. And those that are don't tend to last long.


I skimmed through the linked text (https://www.flsenate.gov/Session/Bill/2021/7072/Amendment/94...), but cannot find the exemption.


That looks like an amendment to the Bill and not the original Bill. It looks like the original Bill is here: https://www.flsenate.gov/Session/Bill/2021/7072/BillText/er/...

And the wording is on page 17: "The term does not include any information service, system, Internet search engine, or access software provider operated by a company that owns and operates a theme park or entertainment complex"


I did the same, similarly found nothing.


Facebook: "Yes we can ban you, Mr. President. We have a foosball table in the break room."


This may not be effective and may not stand up to a court challenge. But it's the first salvo.

Watch to see the kinds of arguments big tech is forced to make when they fight this law. Those arguments will be used against them later.

FB, et al., enjoy immunity (criminal and civil) from the illegal stuff their users post, and that is their weakness. There is nothing in the Constitution ensuring that they keep that immunity (or at least the First Amendment hasn't been interpreted to say that yet).

It's actually similar to how the PLCAA (https://en.m.wikipedia.org/wiki/Protection_of_Lawful_Commerc...) protects gun manufacturers from being sued into oblivion. Rights don't mean much when exercising them means you will get buried in frivolous suits.


Claiming they "enjoy immunity" against illegal stuff posted on their websites is like claiming that you "enjoy immunity" against illegal stuff that shows up in your physical mailbox.

It's not "enjoying"; it's sensical.


That's the same as the argument for the PLCAA, which has been under direct attack for a while and may get changed.

Laws can change, and section 230 is not the only possible "sensical" one.

The First Amendment is a big rock to hide behind. But there are other avenues of attack that are clearly possible.

Those attacks will happen when people feel that the spirit of free speech is truly being threatened, where politicians can't effectively reach even people who want to hear from them.

Edited for clarity.


I don't see it pointed out in this article, but the de-platforming ban appears to be limited to state political candidates:

"The new law will ... impose fines of $250,000 per day on any social media company that de-platforms candidates running for statewide office. It would also impose a $25,000 a day fine on companies that de-platform candidates running for local offices. ... DeSantis said any Floridian can block a candidate they don’t want to see online and it’s not up to Big Tech companies to make those decisions for the public."

Seems odd to take such a moral stand on this issue, but only offer protection to political candidates/politicians.

(from https://nbc-2.com/news/state/2021/05/24/desantis-signs-bill-...)


I don't think it's about morality. I think it's about preventing very large corporations from influencing elections.

If there was a healthier and broader marketplace of forums that people spoke on, I would agree that those private companies could do whatever they want.

...but since probably 99% of all political discourse is concentrated on a single digit number of social media platforms, then I think there's a systemic risk of election influence we need to protect ourselves from.


> If there was a healthier and broader marketplace of forums that people spoke on,

> 99% of all political discourse is concentrated on a single digit number of social media platforms,

Let's not blur discourse from the banned candidate/politician with discourse by the population at large.

Any candidate could be banned from Twitter, but that doesn't stop discussion about said candidate on any platform. The candidate can still respond on other platforms and subsequent discussion can still ensue.

I can only guess what Florida's motivation on this is, but I have a hard time believing that protection from election influence is the primary factor.


Twitter, FB et al are the de facto public square. Discussions on other platforms are futile in comparison.


Laura Loomer is one such candidate (almost -- she was a candidate for federal Congress).

She has been trying to fight the bans on her own, for a while.

https://loomered.com/

Do not know if the new law will help in her cases or not.

I am certain it will be challenged, however.

The legal battles this will spawn will be used to sharpen the legal language, evidence, etc.

Overall, the direction conservatives will take is arguing that political speech must be protected the same way religious speech is. So this is ultimately a federal-level battle.

Personally, it seems that pointing to how hotel owners once refused Black customers -- telling them they could build their own hotel -- and equating that to the left-leaning tech titans vs. conservatives, will not work.

Race, ethnicity, or biological gender is not chosen. But religion is.

With regard to the theme park protections, I guess theme parks are not speech platforms anyway, so a carve-out for one of the state's larger taxpayers is understandable.

This is a raging ideological war in the country that has been going on for more than a decade. Conservatives have felt on the defensive for some time; the time for questioning the legal weaponry that each side uses has long passed.

It will be interesting to see how investment banks with a large presence there react as well. They may threaten to leave (framing it as the demands of their clients) to apply their own pressure.

Everybody who has not yet taken a side will have to pick one.


I'm having a really hard time trying to understand the level of entitlement it takes to believe that you can have a service from a private for-profit business without paying for it, like Facebook, YouTube, etc., and that they are forced to give it to you without any say in it, no matter what, even if it's not advertiser-friendly and ruins their business.


It's the monopoly angle. You can agree or disagree, but it doesn't make sense to form an argument without addressing that.


It's the same concept as requiring companies not to discriminate in providing accommodations to members of protected classes, e.g. https://en.wikipedia.org/wiki/Civil_Rights_Act_of_1964. It's just a different class. The debate is about which classes it should be legal to discriminate against.


If someone is acting like a moron or saying harmful things, I should be able to kick them out of my business.

Saying whatever you want is not and should not be a protected class.


"Acting like a moron" is entirely subjective. You will inevitably find a way to apply it to anyone you disagree with. This justification would work just as well for other protected classes; it just so happens they've been set up to include your in-groups and not your out-groups.


What…? Are you claiming that membership in the existing protected classes is subjective?

It doesn’t matter that the definition of inappropriate behavior or speech is subjective. Each party will determine for themselves what they think is reasonable. As long as they aren’t infringing on any other rights or breaking other laws in the process, this is totally fine.


I can reasonably say that calling a cake shop that you know for a fact will not bake you a cake, not because you actually want a cake but for the purpose of getting them in legal trouble with the official inquisition, is "acting like a moron".

And yet I had to very very carefully word that for the sake of my own conscience because I know full well that HN will crucify me regardless on the theory that merely mentioning this topic amounts to homophobia.


If a shop owner tells a gay customer "I don't believe you about actually wanting [the product / service]", how do we know if the shop owner has legitimate reason to think this or just doesn't like gay people?

I'm not a legal expert, but I'm guessing that there would be need to be clear evidence of harassment from the customer (or something of that sort) in order for the shop owner to be able to refuse service.

Otherwise, any asshole could get around discrimination laws by simply claiming they don't think the customer actually wants the product. That would be a terrible situation.


I'm not the one claiming

> It doesn’t matter that the definition of inappropriate behavior or speech is subjective. Each party will determine for themselves what they think is reasonable. As long as they aren’t infringing on any other rights or breaking other laws in the process, this is totally fine.


I’m not following. What do you mean?


> The debate is about which classes it should be legal to discriminate against

I'm not so sure it is. You're allowed to refuse service to a black person, for instance—you just can't do it because they're black. But this law says you can't ban a politician at all, not merely because they're a politician. This law would be like saying you can't refuse service to a black person for any reason.


>Imagine if the government required a church to allow user-created comments or third-party advertisements promoting abortion on its social media page. - Steve DelBianco, NetChoice's chief executive

It sounds like he's arguing that the social networks are making editorial decisions, isn't that what they're desperately claiming not to do?


They are definitely trying to have their cake and eat it: either you are a media company and legally responsible for content, or you are a platform and then you leave people alone unless they are doing something illegal


> either you are a media company and legally responsible for content, or you are a platform and then you leave people alone unless they are doing something illegal

Says who?


I think many people agree that the reason phone companies are not held responsible for what people say on the phone is that they have no control over it, they don't make any editorial decisions. Newspapers on the other hand ARE responsible, because they decide what to print.

I can't really see how one could argue that Facebook or Twitter should be able to avoid being classified as a newspaper if they provide something very similar to a gossip magazine, where they can choose freely what to print and what not to. They can and do ban gossip and unfavourable information about politicians they favour, for example, and they could suppress bad press about people that pay them.

I don't think they should be able to do that, without any accountability. Do you? If so, why?


I see no reason to ban moderated online spaces. Why should only curated and unmoderated ones be allowed to exist?

However, I do agree that there's a discussion to be had about twitter, youtube etc due to their dominant positions within the market. But I don't think that just forcing these companies to host anything that isn't illegal is the right way to go, and would likely result in making them even more shitty than they already are.


My point wasn’t so much that you shouldn’t be able to moderate, but if you do then I think you’re a different kind of medium than a platform, more like a newspaper, and you should be treated like one and not enjoy the benefits of an unmoderated, unbiased “utility like” platform.


> if you do then I think you’re a different kind of medium than a platform, more like a newspaper

Disagreed.

Newspaper-like and unmoderated, unbiased “utility like” don't work as exhaustive categories.

If I run a BBS, an IRC channel, an internet forum, a subreddit, etc, I would like to be able to moderate without having to pre-approve all messages before publication. I also don't want to have to go to court because someone has posted something illegal which I did not remove in a timely manner, or get sued because I removed something that I thought crossed a line, but was eventually determined by some court not to.

I see nothing wrong whatsoever with online services open to the public enforcing house rules. It only becomes an issue when a single service grows so large that it starts to essentially subsume the public sphere. That might require some intervention, but please do so in a nuanced way that isn't based on some imaginary binary distinction between 'publishers' and 'platforms' that hasn't accurately described how online spaces work since the late 70s...


Maybe we need a new body of law to address this, but to date we have two categories in law: moderated and unmoderated. Somehow Facebook/Twitter/etc. are enjoying the benefits of both.


Where's the contradiction?


It doesn't to me, unless you are arguing that churches' decisions to remove offensive comments made by visitors to their social media page or blog should make them legally liable for whatever stays up?


Yes, seems obvious. Why shouldn't it?


So if I have an online magazine, I can't choose what kinds of authors I want to invite? How is that any different than reddit deciding what communities to have/not have...


I think the idea is that the tech giants are NOT publishers. If they were, they would be responsible for third-party content. So they can't really use that argument if they want to continue being exempted from responsibility.

https://en.wikipedia.org/wiki/Section_230


Literally read the link you just posted: the text of Section 230 makes absolutely no distinction between "publishers" and platforms. I really wish people would stop parroting this completely incorrect reading of the law.

Or take it from Mike Masnick:

> If you said "Once a company like that starts moderating content, it's no longer a platform, but a publisher"

> I regret to inform you that you are wrong. I know that you've likely heard this from someone else -- perhaps even someone respected -- but it's just not true. The law says no such thing. Again, I encourage you to read it. The law does distinguish between "interactive computer services" and "information content providers," but that is not, as some imply, a fancy legalistic ways of saying "platform" or "publisher." There is no "certification" or "decision" that a website needs to make to get 230 protections. It protects all websites and all users of websites when there is content posted on the sites by someone else.

> To be a bit more explicit: at no point in any court case regarding Section 230 is there a need to determine whether or not a particular website is a "platform" or a "publisher." What matters is solely the content in question. If that content is created by someone else, the website hosting it cannot be sued over it.

https://www.techdirt.com/articles/20200531/23325444617/hello...


The point that people are making is that they believe the "spirit of the law" is a conceptual split between platforms and publishers. The way people commonly think of telephones vs newspapers. Telephones are a platform for speaking, newspapers are a publisher with editorial control.

The discussion is not about whether or not social media companies have Section 230 protection but whether or not they should have it.


Yeah that Platform vs Publishers thing is one of the most misunderstood things among internet geeks - I was completely wrong (and perpetuated my wrongness to others!) about it through hearsay for two decades until last year:

https://www.techdirt.com/articles/20200531/23325444617/hello...

edit: Wow, 3+ of us jumped in with same link, Mr. Masnick has a following here :D


Ha - yep. Domain experts are very useful!



>I think the idea is that the tech giants are NOT publishers. If they were, they would be responsible for third-party content

In an ideal world we repeal Section 230 and the US government provides, free of charge, a small digital space for each citizen to say whatever they want in the context of free speech. There's no reason we should allow these tech monopolies to exist and maintain their fiefdoms of extrajudicial decisions about what is or isn't considered protected speech. If we want to assert that digital communication is a protected right, there's no other way.


>>maintain their fiefdoms of extrajudicial decisions about what is or isn't considered protected speech

Where do they do that?

My understanding of U.S.A. First amendment is that government won't infringe on your right of free speech. It doesn't say you can come into my home, workplace, store, or website and yell what you want on my property.

There's nothing "Extrajudicial" about it and it's not "Protected speech". It's just a business owner saying "you can say whatever you want, but not necessarily in my home, on a platform I built and pay for".

I think we've gone weird somewhere if we think Facebook == Public Square on any level. It's just somebody's storefront (and insert cliches about who / what is the product :)

In some ways, I am hoping tech giants do MORE weird policies / bans / things, so hopefully the general population sees it as the private, proprietary, closed property / platform that it is, not the friendly public gathering spot we somehow take it for. But I still don't think they are doing anything illegal/extrajudicial... it seems people feel they have strange rights or ownership on Facebook or Twitter, have some investment in the property, treat it almost as a mix of paid-for condo and public space (depending on what they need to assert at the moment)... and none of that is the case. At best, you are walking in a mall, and if you start yelling obscenities, pontificating, or even start advertising for different shops, etc., the mall security guard will escort you out, as is legal and proper. It's not the government infringing your constitutionally protected free speech, it's not extrajudicial, even if you don't like it and really really wanted to yell obscenities or market your business or promote your personal point of view in the mall.


Both the government and private companies can censor stuff. But private companies are a little bit scarier. They have no constitution to answer to. They’re not elected. They have no constituents or voters. All of the protections we’ve built up to protect against government tyranny don’t exist for corporate tyranny.

- Aaron Swartz


I agree with that statement 100%.

Our government, on some level, is supposed to be accountable to general public and there exists, in principle, a mechanism to hold them accountable.

Whether that mechanism is flawless or not, whether it works as well as we want it to, should not distract from the fact that there isn't even REMOTELY such a mechanism for big corporations. When people clamor for flawed government being replaced by companies, I shudder exactly for that reason. You're trading system with public protection/accountability that's flawed, to one where public protection/accountability do not exist, by design.

My post should not be mis-interpreted as being against government or regulation.

But I don't think Facebook is infringing upon my constitutionally protected free speech. If we want to regulate Facebook (and I think we should), we need to have a more thought-through reason, method, desired goal and tracking of outcomes.


> Where do they do that?

In their decisions about which comments to delete and which not; which users to ban and which not.


>My understanding of U.S.A. First amendment is that government won't infringe on your right of free speech. It doesn't say you can come into my home, workplace, store, or website and yell what you want on my property.

This is exactly my point. If we are going to classify online discussion on platforms like Facebook as being protected free speech, there is no way that we can allow those services to be in control of who gets to say what. The government has an obligation to provide an acceptable platform for citizens to have online discussions in that case which would actually serve as a digital "town square", protected by our first amendment rights. Anything else becomes a de facto infringement of those rights. And saying "well you can build your own website" is naive and missing the point. To put a barrier like that in front of the exercising of one's rights is tantamount to a digital Jim Crow.


>>If we are going to classify online discussion on platforms like Facebook as being protected free speech

But that is a massive "If" and I'm denying you the premise, until you convince me otherwise. I think you're starting from a certain point assuming everybody is there already.

Note, I agree with many of your detailed points later, such as "build your own website" missing the network effects of established platforms; but most fundamentally I think what you're taking for granted is the actual big important discussion to be had, e.g. "The government has an obligation to provide an acceptable platform for citizens to have online discussions" - I don't think Government has that obligation today, and if you / we think it should, then that's the discussion and lobbying to be had.

(Never mind how well I think a government-created platform with government-designed technical stack and UI, and with guaranteed free speech and no moderation whatsoever, will actually work in practice... to quote Frank Borman, what we have here is a failure of imagination :P )


While I avoid using the tech monopolies, no way would I participate in a state-run system.


I imagine the law's wording and scope has some issues. But addressing the situation where Google can render your phone, thermostat, etc, useless because you made a comment on YouTube is worthwhile.


> But addressing the situation where Google can render your phone, thermostat, etc, useless because you made a comment on YouTube is worthwhile.

That's a noble goal, but this is a clear culture war law that will crumble under any real scrutiny. Hitching your wagon to a law that's all about sticking your finger in someone's eye isn't going to help your cause. So instead, what could a good regulation look like?

Maybe something that says these providers are forced to moderate the user-facing pieces of their platform separately from the core infrastructure pieces of their platform: so you can be banned from posting to Facebook for whatever reason Facebook wants, but your existing "Login with Facebook" connections won't break.

Maybe toss in some data retention requirements as well – you can't post new photos, for example, and you have up to a year to pull the photos you've posted before Facebook is allowed to purge them.


What I wish to happen is for the big companies to tell you what exactly you did wrong and give you the opportunity to appeal.

Also, giants like Google should not break all your stuff if they decide you did something wrong on YouTube.

Your perspective changes when you are hit by this stuff. My son managed to get my PlayStation account suspended for 2 months (while I have a 1-year PS+ subscription active) with no right to appeal and no clear/exact reason for what happened.


That is the grey area Section 230 creates. Companies like Facebook argue that they are more like the printing press that prints the magazine, not the magazine publisher. The issue arises when they choose to sometimes police their content. But this would be true even of a printer: if a long-time client that sold cigarettes or something sent in ads targeting kids, it seems reasonable the printer could refuse service. So where do you draw the line? Section 230 says they are not responsible for doing that, so if they choose to enforce decency rules, are they then responsible for being watchdogged, or must they police all content?


It seems to me online platforms need to be addressed by modern laws rather than shoehorn laws that applied to different (old) media.

It’s hard to say what the right balance would be. But just imagine if instead of the US, we were in Belarus. Or any other country with autocratic tendencies. Would we want the media to be one sided even if it’s controlled by oligarchs who would have clear selfish agendas?

What do we want when the pendulum swings the other way? The same top down private decision on permissibility?

I think there is a reason the likes of Roger Daltrey, John Cleese and others are wary of tech censorship. If these tools had been available in the 70s and 80s the world would be a different place.


This law is largely a way for FL governor DeSantis to further solidify his conservative bonafides ahead of a likely 2024 run for president (assuming a certain former president doesn't enter the field). Whether it stands up in court eventually doesn't make the slightest difference.


I believe that's pretty much the essence of DeSantis' approach.

It would be nice to see rational, equitable solutions as to who can speak and where, though. It wouldn't be such a big deal if we were back in the age of 100,000 tiny blogs.

But in this age of a handful of big platforms determining which content and viewpoints are acceptable, there might be some moral hazard and negative potential for society. This is true no matter what your particular politics or religion might be and even if they are "on your side" today.


Actually, it failing in the courts provides even more fire for the campaign:

"Activist Judges!" as a rallying cry.


How can this possibly hold up in court? Doesn't "big tech" have a right to free speech?


> Doesn't "big tech" have a right to free speech?

If corporations had an unlimited right to free speech, the Civil Rights Act wouldn't be constitutional. Clearly, though, it is, so the courts have found that some limits on private businesses' ability to discriminate are legal.


I guess the question will come down to whose free speech rights are more important, the end-users' or the corporate entity's?


The 1st amendment in the US constitution prevents the government from stifling speech. Being banned on privately owned and operated social media is not a violation of your right to free speech.


The US Constitution only prohibits the government from inflicting cruel and unusual punishment. It says nothing about preventing a privately owned and operated company from doing so.


Yes. The constitution doesn't include Section 230 either. Other laws besides the constitution also regulate the activities of privately owned companies. This doesn't have anything to do with the government not being able to abridge the free speech of its citizens.



[flagged]


Please omit personal swipes from your comments here. Besides breaking the site rules (https://news.ycombinator.com/newsguidelines.html) and poisoning the forum, they have the side effect of discrediting the view you're arguing for—which isn't in your interests. And if your view is right, that means you're discrediting the truth as well—which isn't in any of our interests.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...


Depends what they want to claim to be. If they write, or if they orchestrate an ongoing discussion so that they de facto decide what is said, then they are "speaking".

They can do that, but not while claiming to be a platform.

Platform or publisher.


This is completely wrong. There is no distinction between platform or publisher in Section 230, no matter how often people repeat this ridiculous take.

https://www.techdirt.com/articles/20200531/23325444617/hello...


That's just a law. It's not in the Constitution.

Depending on how people see Big Tech, the law may or may not be changed. And the kinds of arguments FB, et al., make to defeat this law will affect how people see Big Tech.


Sure, but people are currently making lots of comments thinking the law, as written, makes some distinction between publishers and platforms. Which it doesn’t.


> Hello! You've Been Referred Here Because You're Wrong About Section 230 Of The Communications Decency Act

Did I mention Section 230 Of The Communications Decency Act?

Well no I didn't.

The concept is broader than any law. And you should not be getting legal advice from Mike at techdirt anyway.


That's really cool, but we're not talking what the law is, but what it should be.


What does free speech entail when a literal handful of unelected people control the modern public square?


I just can't take these people seriously.

They rallied hard against net neutrality because "the government can't tell private companies [ISPs] what to do"

Now they want the government to moderate speech on private platforms. That's hypocrisy that even a 2nd grader could point out.


Please don't use divisive terms like "they" as it turns things into an "us vs them" scenario rather than focusing on the issue itself.


Why not? Can you explain what "the issue itself" is in a way completely divorced from the people who have devoted themselves to _making_ it the issue?

Even if you can I don't see the value honestly.


Yes, you can talk about whether "deplatforming" users is good or bad without introducing politics into it. When someone turns it into an "us vs them" issue, it distracts and devolves the conversation.


Republican leaders have shown their hypocrisy for decades at this point. Say one thing, do another, perpetually. This is more evidence of the same.

It's most certainly a "they" thing.


You are turning the conversation into a political one when it doesn't need to be. The idea of banning "deplatforming" does not need to be married to a political party. We can have interesting conversations on this site without pointing the finger at a political party. This is exactly the type of messaging that promotes anger and hatred in our society. Not sure why HN doesn't have rules against this type of political agitation yet has rules against "nationalistic flamewar". Maybe dang can comment on this.


The issue we're talking about here is not an abstraction, but a specific law with real-world ramifications.

We've talked the abstraction to death; there is nothing new to contribute. This is an opportunity to apply it to an actual real-world instance.

That's inherently political because it was created politically. There is no way to ignore that it was crafted by one political party to the detriment of another. We can avoid naming them if you think that would help from getting people's hackles up, but it's also part of a pattern of behavior that goes far past this one instance.

Discussing this as an abstraction is a luxury for those who don't have to live with the consequences. Blaming those who object for promoting "anger and hatred", while giving a pass to those who pass the law designed to disenfranchise them, is inherently partisan.


brb creating 100 alts to spam every comment section on twitter with images of frogs and crows. they cant ban all of me for 14 days


Clearly violates the commerce clause. No chance of this sticking, even with the current courts.


Out of curiosity - is it possible for UPS/FedEx/USPS to stop one from sending mail through their services? I consider UPS/FedEx to be sort of the physical analog to Facebook and Twitter.


Sure for FedEx / UPS if you break the ToS. USPS might be a different story.


Really? Do you have any info on this? I know all three ban certain types of things from being sent in certain circumstances, but I've never heard of an individual being barred from their services.


In other news: the entirety of the FL government forgets about the 1st amendment.


Section 230 muddied the water. The asymmetry has been a disaster for society.

When social media companies assume legal liability for content on their platforms we can talk about the protections of the First Amendment and the limits that grew under law over the past 230 years; they were casually discarded over the past 20.


Good luck to them when this finally hits the Supreme court.


Maybe the goal is to force the Supreme Court to weigh in on the issue.


The goal is to rile up certain voters and be able to point to the law as proof of his convictions, even though the law will be struck down and is a grand waste of taxpayer resources, all from the party of “small government”.


The Supreme Court is now supposedly 6-3 and many red states are passing new and restrictive anti-abortion laws hoping they trigger a re-evaluation of Roe V. Wade. I see this bill as having a similar impetus to get the court to make a decision on Big Tech and Free Speech.


While I think this is just social signalling about the evil tech elite, It’s quite possible that the ultimate intention here is to make it to the Supreme Court and make new precedent. During ACB’s confirmation hearing Sen. Sheldon Whitehouse talked about how there are three major right wing NGOs that work together in both confirming federal judges (and Supreme Court justices), and in getting cases to them to be tried.

https://m.youtube.com/watch?v=cjcXVKg43qY


I've always found it funny how the most vocal detractors of deplatforming are generally right-leaning politicians and pundits, some of whom have publicly stated how much they like Ayn Rand.

The irony here is that Rand pretty explicitly supports the right to deplatform people in The Virtue of Selfishness. Up until recently it was a common attitude in conservative political circles that forcing someone to publish content they disagreed with was itself the violation of a right. See below:

"The right of free speech means that a man has the right to express his ideas without danger of suppression, interference or punitive action by the government. It does not mean that others must provide him with a lecture hall, a radio station or a printing press through which to express his ideas. Any undertaking that involves more than one man, requires the voluntary consent of every participant. Every one of them has the right to make his own decision, but none has the right to force his decision on the others."

I struggle to find a single source that the American right likes to cite which would go in the opposite direction, given that more right-leaning conceptualizations of rights tend to be about a "freedom from" as opposed to a "freedom to". Given this framework, social media companies have a freedom from providing a platform for ideas they find objectionable.


Ultimately, I think right-leaning politicians want to ban deplatforming because deplatforming works and they have the most to lose from it. I see this as practical political calculus and a fight for survival rather than some ideological exercise.


Absolutely, that's what it actually is. That said, they do tend to sell it as an ideological exercise in "free speech suppression". Pointing this out is important anyway to make sure that people realize they're being bamboozled by some political strategy and not a genuine attack on their rights.


It is practical political calculus, but not necessarily because it works. It may be because objecting to deplatforming is more benefit than the actual deplatforming.

A prominent politician does not lack means for getting their message out. If anything, being banned from a social media site provides free airtime on conventional media as they talk about it. It's unclear whether the net effect is positive, negative, or otherwise.

But it is clear that riling up people with talk about "deplatforming" is pretty effective. And, at least for the past couple of decades, riling up their constituency has been a fairly effective political tactic.


> "Ultimately, I think right-leaning politicians want to ban deplatforming because deplatforming works and they have the most to lose from it."

If deplatforming worked, the LGBTQ rights movement would have failed because they were kept in the closet. Deplatforming does not work, particularly in an era where "The 'Net interprets censorship as damage and routes around it."


Censorship cannot eliminate an idea, but deplatforming succeeds at its goal of severely limiting the spread of ideas and controlling discourse. One can argue that Donald Trump has "routed around" being banned from Twitter and Facebook with his mini-blog on his personal website, but that is nowhere near as effective a communication tool as his old social media presence.

I also assert that "The 'Net" of today is a much different one than that inspired John Gilmore's quote, as non-censoring alternatives to Twitter, Facebook, and YouTube have failed to gain much traction.


So does deplatforming work or is it just a myth, like some people say?


This apparent contradiction is just a moderately clever way of using an ambiguity in the word.

If "deplatforming" is taken to mean "depriving people of an easy means to find and communicate to an audience" it can be an effective technique depending on context, alternative communication channels, etc.

If "deplatforming" is taken to mean "a novel social ill based on depriving people of their right to communicate" then yeah it's a myth.


That's basically the same thing. The purpose of doing the former is the latter, since everyone does it.

Free speech is like encryption at this point. Sure, it's not technically illegal, but if you actually try to do something for the public to use then the system will find some way to make it impossible and shut you down.


No, the second embeds two important differences from the first: that this is new (it's as old as writing at least) and that it's bad (a value judgement you can fall on either side of).


So to summarize, deplatforming is real, but it's a myth that it's a bad thing. Got it.


Correct!


So tell me, are you describing what other people's beliefs are, are you explaining your own beliefs or are you just trolling?


Well the first two at least. I'm describing my beliefs. They are also shared by a lot of people; y'all wouldn't have to yell about free speech so much if this view was unpopular.

I admit that I also enjoy riling up first amendment fundamentalists but it isn't my main goal and these are my sincerely held beliefs so I don't think really qualifies as trolling.


Fair enough, I already knew all of that, I just wanted to make sure you're not trolling me by saying things you don't actually believe.

Putting the potential straw man aside, wouldn't you say that your definition of a 'myth', whatever it might be, is ambiguous to the point where you can accuse anyone of spreading myths and disinformation, even if they're factually correct? For example, let's get everyone offended and say that someone could say that George Floyd getting killed by police is a myth, because he was killed for resisting and being a criminal, and it wasn't a bad thing at all. People cast a moral judgement on this event, so in a way it is a myth, is it not? Or is it when it's something that contradicts your own moral judgement?


Better than doing nothing... they should classify them as pipes, unable to hide content on their platforms (HN could benefit from that too, a lot, if not more).


I wonder how people who support deplatforming would react if their ISP cut their internet service?


Which is why ISPs should be common carriers.


People support or don't support things based on context and details. People don't simply "support deplatforming", and it's not hypocritical to support a ban from a social media platform in instance X but not support a ban from an ISP in instance Y. Analogies used in this way are lazy, and serve only to provoke. You should address the topic directly, rather than comparing it to something different.


That is a terrible example. Not having internet, water, or power is not the same as not having Twitter or Facebook, dude.

And besides that, if I have 20 options, as I do with social networks, and switching is as easy as creating a new account, I'll simply change providers or create another account in 3 seconds. This kind of apples-and-oranges argument is terrible.


"Is not the same not having internet, water, or power, than not having Twitter or Facebook dude."

20 years ago, we could have seen the same argument: that the internet is not like water or power.


If the government forces people to stay in their homes, say, because of a pandemic, and the only legal way to engage in the public square is Facebook or Twitter, then I'd argue it is a lot like a utility such as water or power.


There was nowhere in America where people couldn't go outside, or put up flyers for their website, or knock on neighbors' doors, or create their own neighborhood watch site, etc.


They can, for various violations of the terms of service.


I thought about this several years ago. TLDR: Every scenario has a spectrum; there is always a line to be drawn. When it comes to cutting off access to someone from Internet services, I believe that line is ISPs.

https://wannabewonk.com/gab-and-free-speech-on-the-internet/


I wonder if he'll also sign a ban on NFL deplatforming. Probably not.


This has nothing to do with censorship. He knows it won't survive a courtroom. This is just DeSantis doing the equivalent of virtue-signaling to his base.


I don't know - the only real counterargument I've seen is that Facebook and Twitter are private entities, so they should be able to operate however they like. I could use very similar reasoning to argue that the government has no mandate to break up monopolistic businesses, either.


Also in the news today:

Conservative senator Tom Cotton is showing a headline (I think from CNN) that bashes him for repeating the 'debunked' idea that the coronavirus may have come from the Wuhan lab. Of course that theory is very much alive and viable today.

Republican Rand Paul is upset that some songwriter made a Twitter post offering to buy drinks for someone who 'finishes the job his neighbor started', a reference to the neighbor who physically attacked Paul a few years back. Paul correctly points out that Twitter is supposed to have a policy against threats and violence.

So yes, there seems to be something there.



