Google and Section 230: US court ruling could “turn the internet upside down” (wsj.com)
32 points by cachehit on Jan 15, 2023 | 40 comments


I would like to see recommendation algorithms brought into this discussion.

On the one hand there's moderation to censor or remove offensive, dangerous, or disruptive content. I feel that Section 230 does a fair job addressing this.

On the other hand there are recommendation engines promoting outrageous content, putting things in front of everyone, in order to optimize for engagement and profit. Isn't this acting as a publisher, choosing what to promote, instead of showing people chronological, friend, or user-selected feeds? These companies control much of what is presented to people.

There have been massacres committed in various countries because big tech companies promoted certain messages based on their secret algorithms. I've been thinking that these algorithms promoting bad content (for profit) are a bigger deal than unpromoted (obscure) bad content.


Yup, I’m absolutely fine with algorithm suggestions not being covered under safe harbor rules; that’s where responsibility starts for the publisher.




I sometimes wonder, if we never had 230 to begin with, whether the internet would still be a decentralized collection of blogs and websites.


No. 230 is why sites like geocities and tripod and blogger and livejournal and so on could exist in the first place.


Those sites are the beginnings of the centralization I’m talking about.


Previously the liability was held by the ISP that hosted the individual website with objectionable content. Section 230 was written in response to a pair of lawsuits against ISPs, not against bigger webhosting firms like Geocities or Tripod. Without it there would be a strong incentive not to host anything by arbitrary unvetted customers.


The incentive would be to act as a dumb pipe, which absolved telecom providers from liability pre-230. (And still does today for unmoderated communications services)


There is no "dumb pipe" liability shield. There was no absolving of liability pre-230. And 230 — not some fairy tale alternative statute that mysteriously doesn't appear anywhere in the U.S. Code — is what provides that liability shield today, for moderated and unmoderated hosts and sites alike.


There sorta was - pre-230, the legal precedent was that "distributors" of content were absolved from any liability for it, while "publishers" of content were not. This was established in Smith v. California (1959) [1].

The issue was that in 1996, it was not yet clear whether ISPs, hosting companies, and web services counted as "publishers" or "distributors". Cubby v. CompuServe found that CompuServe was not liable for users' content because it didn't moderate (i.e. was a distributor), while Stratton Oakmont v. Prodigy Services found that Prodigy was liable because it did (i.e. was a publisher). Legislators felt that this set up a perverse incentive and so enacted 230 so that companies would not be penalized for good-faith attempts at moderation.

[1] https://en.wikipedia.org/wiki/Section_230#Background_and_pas...


> Cubby v. CompuServe found that CompuServe was not liable for users' content because it didn't moderate

That is incorrect. It was not liable because it didn't moderate (i.e., was a dumb-pipe distributor) and because it had no reason to know of the defamatory content.

That latter piece is fact-intensive and case-specific. Which means expensive litigation in every case. Which is not remotely the same as a liability shield.


This forum likely wouldn't exist if we never had section 230 to begin with. Remember the original reasoning behind it - a company that ran a completely unmoderated forum won a court case (because it wasn't moderated at all). A company that imperfectly moderated a forum lost.

Section 230 means you can have user-generated content that isn't perfectly moderated. The result of Section 230 being abolished isn't some panacea of decentralized blogs and websites, it's Usenet flamewars and 4chan.


Neither a completely unmoderated nor a perfectly moderated platform would have ever gotten to the size of the giant centralized platforms we have today. It just wouldn’t have been possible.

And I think a good bit of the problems on the internet today are because moderation is inadequate on these huge platforms, but they are passed off as safe places for everyone. It has blurred the line between trustworthy and untrustworthy; everything is now just ambiguously okay-ish.


No. Without 230 comments and forum posts would probably be on a decentralized network where only the author is liable.

As say a blogger, you would only promote 3rd party content you approved of.


This forum is pretty heavily moderated, and its usefulness is largely because it is moderated. Back in the early days of HN the comparison was Reddit, which was completely unmoderated at the time (they didn't have subreddits or mods yet), and Reddit was starting to become a cesspool of spam, low-effort posts, and offensive content. Many people moved over to HN simply because it avoided that. (And Reddit ended up introducing moderation a couple years later.)

Unmoderated forums work fine when they're a small group of people with a vested interest in preserving the norms of the community. They fail quickly if they can't establish barriers against the outside world that prevent people with no skin in the game from hijacking the community for their own agendas. The modern Internet is much more similar to the latter than the former.


I'm not saying it wouldn't be moderated, but that moderation would be opt-in and also decentralized. You could pick people/algorithms you trust most to filter your view of the conversation.


If those decentralized comments ever appeared together on the same webpage, e.g. in the "comments section" of that page, lawmakers would absolutely consider that webpage liable _even if the content wasn't served from there_.

Lawmakers don't care about the technical details.


Indeed, only the comments the author wants to promote should be embedded or linked on the page. Otherwise they should come out of band, either as a feature of the browser (or plugin) or another site.


I'm reminded of the short story "Unwirer" by Charlie Stross and Cory Doctorow, which imagines a counterfactual universe in which the internet was captured by corporate interests much earlier.

https://en.wikipedia.org/wiki/Wireless:_The_Essential_Charle...

https://craphound.com/unwirer/archives/000009.html


This doesn't look like an "overturn 230" situation.

Plaintiffs are saying that even though Google is protected when someone uploads a terrorist video, Google should _not_ be protected when Google itself _recommends_ a terrorist video.


The internet kind of sucks now; it went wrong somewhere along the way. Maybe turning it upside down and seeing where the pieces settle is a good thing. Just like those water/sand toys.


> In accepting the case, the Supreme Court has agreed to answer the question: Does Section 230 exempt interactive computer services from liability “if they specifically draw attention to information made available by another information content provider”?

On the one hand, yes, you should be responsible for your own editorial choices in the content you publish. On the other, legislative power to impose that responsibility should be leashed to the existing narrow categories of unprotected speech, under strict scrutiny.


What narrow categories are you talking about?


https://sgp.fas.org/crs/misc/IF11072.pdf

  obscenity
  incitement
  defamation
  fraud
  fighting words
  true threats
  speech integral to criminal conduct
  child pornography
I don't agree with all of these, but it's a significant limitation on a legislature to disallow legislation on other speech, and to judge restrictions on these categories strictly.


If anyone is interested in the plaintiff's argument, it is here [1]

Argument #1:

> The lower courts have mistakenly interpreted “publisher” to have its everyday meaning, referring to an entity or person in the business of publishing, and have at times compounded that error by insisting that section 230(c)(1) applies to virtually any activity in which such a publisher might engage, including making recommendations. But “publisher” in section 230(c)(1) is used in the narrow sense drawn from defamation law. If section 230(c)(1) is properly so understood, the imposition of liability based on a recommendation would not in every instance treat the defendant as a publisher within the meaning of that provision.

Argument #2

> the content at issue must have been provided by “another information content provider,” not by the defendant itself. Recommendations may contain information from the defendant, such as a hyperlink with the URL of material the defendant hopes the user will download, or notifications of new postings the defendant hopes the user will find interesting. The Ninth Circuit erred in holding that URLs and notifications are not information within the meaning of section 230(c)(1).

Argument #3

> the Ninth and Second Circuits erred in holding that section 230(c)(1) protects a defendant if it sends to a user content which the user did not actually request. A defendant is acting as the provider of an “interactive computer service,” and thus within the scope of section 230(c)(1), when it is providing “access...to a computer server.” A computer functions as a “server,” as that term is used in section 230, only when it is providing to a user a file (such as text, or a video), which the user has actually requested, or is performing other tasks (such as a search) at the request of the user.

Hey, Google may win this (I am pretty sure they will). But this claim that a loss would turn the internet upside down is histrionics. No, if Google loses, the internet will still be just fine:

> Thus, although some practices that might be characterized as recommendations could satisfy all three elements of the section 230(c)(1) defense, others would not.

> Search engines are in two important respects different from social media sites. First, search engines only provide users with materials in response to requests from the users themselves, and thus necessarily function as providers of interactive computer services. Second, although search engines provide users with hyperlinks embedded with URLs, those URLs are created by the website where the material at issue is located, not by the search engine itself.

[1] https://www.supremecourt.gov/DocketPDF/21/21-1333/247780/202...


Most of the articles on this site are unintelligible. Is it just posting LLM outputs?


(We've since changed the URL)


I’m a fan of a proposal by Vivek Ramaswamy, from his book Woke, Inc. Companies that want protection from liability for user content should not be able to censor any content not required by law. Companies that want to moderate content should not be afforded any liability protection.


That goes against what I thought the entire spirit and point of 230 was: that _some_ level of moderation or content policy, to any degree, did not imply full curation/approval of all the content, period.

It allows a community or website the ability to enforce against the most obviously problematic content while not forcing it to over censor against what might be reasonable discourse or simply emotionally heated or slightly factually incorrect things. Nor does it need to, e.g., investigate whether something potentially libelous is true or not; that becomes something for the other parties involved to work out on their own via traditional channels.


The problem has, in my opinion, become opposite to what Section 230 was meant to solve. The majority of online communication platforms are, in my opinion, “over censoring” exactly as you described. If they can be trusted to over censor, why can’t they be trusted to censor to the extent required to avoid liability? It’s obvious to me that we shouldn’t be expecting any company to have 100% correct censorship against illegal content, so we shouldn’t allow them to police other content to such a great degree.


>If they can be trusted to over censor, why can’t they be trusted to censor to the extent required to avoid liability?

Because the former is nearly impossible to avoid and the latter is nearly impossible to accomplish.


Extremely simple rules and policies make for great sound bites but they rarely handle complex problems well.

The more dogmatically you apply a simple sound bite solution, the worse complex problems become.


Mandatory read [1] whenever someone recommends easy hot takes to reform 230.

Also [2] is a good one for those who think that current platforms "overmoderate" content.

[1] https://www.techdirt.com/2020/06/23/hello-youve-been-referre...

[2] https://www.techdirt.com/2022/11/02/hey-elon-let-me-help-you...


Your first source talks about all the things 230 is and is not. It’s good information. But just because something is codified in law and decided in court cases does not mean that it is how the world ought to be. Agreed that under Section 230 as it is today, all the author says is true. But none of their content is about reforming or changing Section 230; it is all about interpreting it as it is today.

Regarding [2], it is wise (necessary?) for any online platform to obey laws in places where they operate. But when I read about censoring spam or hate speech, for instance, I can’t help but think of the scene in Game of Thrones where (forgive my poor recollection) a wildling tells Jon Snow how lacking in freedom he is despite thinking he is “free”. Anyone advocating for unabridged (legal) free speech ought to be aware of the fact that there will be speech they do not want to see on their platforms if they get it. That is the price of freedom.


This would destroy Hacker News, which employs heavy moderation.


It would be interesting to know what would happen to sites like Reddit where there is community moderation. It might still be possible, as it’s not the company doing it.


I don't know how it would be possible to distinguish between an unaffiliated volunteer and a company. They'd have to sign a statement; I don't see it being feasible. What I think would be a great step forward is moderation transparency, of which there is none on sites like Reddit (and it's much needed; their moderation in large communities is embarrassing, see the /r/art thing recently), but I guess that is also not without issues due to privacy regulations. What it is not possible to get away with is no moderation of user-generated content; it's only going to become more strict, with less leniency for platforms, not the opposite. See: EU proposals on the matter pending.


If they deem themselves fit to censor their users, why shouldn’t they be liable for the content their users produce?


Because the outcome informs the choice.

Regardless of the pithiness of the aphorism, if the result of its application is the destruction of a site like HN then it doesn't matter what the aphorism says - it's functionally a HN-destroying bomb. It's a deontological reversal. When someone points out the consequences of the action, the intentions necessarily become the realization of those consequences.

So why shouldn't they be liable? Because that liability is yoked directly to its destruction. They are inseparable.




