Wouldn't that also make it incredibly difficult for a new startup to invent a better and less harmful recommendation system?



What would a non-harmful or less harmful (than what?) recommendation system look like? What's the end goal of a recommendation system?


Pick any definition you like. If recommendation systems come with existential legal risks for a small company, then only the biggest companies can afford to run them.

Or think of it this way: How is Mastodon supposed to take on larger social networks without recommending people to follow? Should every Mastodon server operator be legally liable for recommending someone harmful?


My definition would be that they're all bad and there's no good use for them, because the end results are harmful: I view "more spend/engagement" the same way I would "more smoking". Any algorithm or curation, excluding perhaps sorting by latest or "most views" or something similar, would count as a recommendation as far as I'm concerned.

But I'm just not sure how or why online platforms get to have their cake and eat it too. If the NYT publishes a story claiming that eating Tide Pods is healthy and encourages kids and parents to do so, it gets sued. If Facebook creates an algorithm that causes the same or similar to happen, it gets a free pass. Platforms either have to be public speech forums where anyone can say anything as long as it isn't literally breaking the law, or they have to follow the same rules as other entities that curate content. If you want to say "why not both?" then that's fine, but you have to apply that to all entities, not just online ones.


You don't use the algorithmically generated and ranked HN homepage? You scroll through pages and pages of every new submission?
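
For what it's worth, the HN front page is itself the output of a recommendation algorithm, just a simple, non-personalized one. Here's a minimal sketch of the widely circulated approximation of its ranking; the gravity constant is an assumption, and the real implementation has additional penalties that aren't public:

    # Widely circulated approximation of HN's front-page ranking.
    # Gravity constant is assumed; HN's actual code has extra penalties.
    def rank_score(points: int, age_hours: float, gravity: float = 1.8) -> float:
        return (points - 1) / (age_hours + 2) ** gravity

    stories = [
        {"title": "A", "points": 200, "age_hours": 10},
        {"title": "B", "points": 40, "age_hours": 1},
    ]
    front_page = sorted(stories, key=lambda s: rank_score(s["points"], s["age_hours"]), reverse=True)

Every reader sees the same ranking, which is the relevant contrast with personalized feeds.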

If you said something libelous about me on HN, I can sue HN for publishing and promoting the comment?

A platform that "allows everything that isn't breaking the law" is a platform that is 99% spam.


I don't think there should be blanket immunity for anything simply because an algorithm did it. Let's just imagine there wasn't. Imagine that you could, in principle, sue a website over their promotion of illegal content. I would think on a fact-specific basis, HN would have a very good defence against liability even absent blanket immunity.

You could imagine the kind of elements that might matter for a fact pattern emerging from a deposition: revenue and size of the website, proportion of revenue directed towards moderation, percentage of requests identifying illegal material that get responded to, manner of response, tools provided to users, the types of content actually hosted on the site, the nature of the algorithm itself, and internal discussions about access to harmful content. HN is a text-based website (which also mitigates the harm claim), it gets maybe on the order of a few hundred submissions a day, the vast majority of possible harm comes when a submission touches a topic likely to cause legal issues, and in my experience such topics are typically flagged within minutes and removed quickly. There's no mechanism to directly message users, no mechanism to follow users, and no mechanism to change what you see on the front page based on what you clicked before. Everyone is exposed to the same content.

By contrast, think about the companies that are actually the targets of these lawsuits. I was at the TASM (Terrorism and Social Media) 2022 conference -- some of my research is adjacent to this, though I've never worked on terrorism and my social media work involves alt-tech stuff, not the big platforms -- where the keynotes were the European harm-policy leads for Twitter, Facebook, and YouTube. All of them made it clear their position was that it is incumbent on academics and government agencies to identify harmful content and work with social media, because every region has its own unique content challenges and it's not possible for tech companies to handle them at global scale.

A question was asked of the panel that went something like: "Meta was, as it admits, instrumental in spreading the violence-inciting messages that drove the anti-Rohingya pogroms in Myanmar. The defense is that Meta wasn't prepared for, or staffed for, detecting issues in a single small market, and things snowballed quickly. You could hire 100 full-time content people in that country for the salary of a single person sitting on this panel or a single SWE, so how could resource constraints be the issue?" The answer was "We're already devoting enough resources to this problem." I think that's an incredibly shitty answer, I think a deposition could surface exactly this kind of logic, and to me that would be a fact pattern supporting liability. I hope they get their pants sued off in every jurisdiction that allows it. It's clear an aversion to staffing is a huge part of the issue.

So from my perspective, I think the typical process for resolving civil liability + reasonable assumptions about how courts should interpret fact patterns is likely to get to outcomes I'm plenty happy with.

(In the two cases in front of SCOTUS right now, it seems unlikely that the victims, who have basically no evidence connecting the perpetrators of the violence to the social media services used, would win: the argument seems to be of the form "some terrorist killed my family member, and some other terrorists got radicalized online, ipso facto social media is liable". I don't think that's a winning case with or without s.230.)


> If you said something libelous about me on HN, I can sue HN for publishing and promoting the comment?

Why is it different when a "newspaper" does it, if HN is curating content algorithmically? I'm honestly not sure what the difference is.


If all that's allowed is "latest" or "most views", I will keep uploading my content to your platform and bot-voting/-viewing it to keep it at the top of everyone's feed.


These are solvable problems - for example, requiring registration before posting. But I'm not moved by the technical problem at all, because the technical problem isn't what's in question; what's in question is the algorithmic promotion and curation of content.
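
A minimal sketch of what registration buys against naive bot-voting, assuming a hypothetical one-vote-per-account rule:

    # Hypothetical sketch: registration-gated, deduplicated voting,
    # which blunts naive bot-voting of "latest"/"most views" feeds.
    voters_by_item: dict[str, set[str]] = {}  # item_id -> accounts that voted

    def cast_vote(account_id: str | None, item_id: str) -> bool:
        if account_id is None:
            return False  # anonymous votes rejected outright
        voters = voters_by_item.setdefault(item_id, set())
        if account_id in voters:
            return False  # duplicate votes from one account ignored
        voters.add(account_id)
        return True

This doesn't stop someone from registering a thousand accounts, but it moves the fight to account creation, where rate limits and human verification can live.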

Either way, I think we're going to see a big swing back to authoritative sources, because the very technical problems you mention will be exploited by new tools, and the already-meaningless content won't even be generated by humans. The Internet, in the sense of "publishing content", will become meaningless and unprofitable [1].

[1] Obviously there will exist use cases where this is not the case


'Registration', what does that mean exactly? Only people with government-validated IDs are allowed to post on the internet in the US? This sounds strangely in conflict with both the First Amendment and the historical use of anonymous publications, which is part of our national identity.

Really, everything you're saying doesn't have shit to do with authoritative sources, but with authoritarian ones. If you're a big, nicely identified company, or you're a member of "the party", you get to post permitted information. If you're not, well, better learn how to post those pics of your senator pulling some crap on the darknet.


> 'registration', what does that mean exactly? Only people with government validated IDs are allowed to post in the internet in the US?

You can just register anonymously, like you do on HN. Though for social media sites and the like, a "verified human" check seems like not just a good idea but ultimately the direction we'll go.

> Really everything that you're saying doesn't have shit to do with authoritative sources, but authoritarian sources.

You're really jumping the gun here, so I'm not going to respond to those points, since I wasn't making them.


What would it look like? It would look like a configurable search system with preloading of some choices (a rough sketch follows the list below).

You're watching Tie Your Mother Down, Queen, Rock in Rio 1985. Would you like to see (select as many as you want):

More videos by or about Queen

More videos from Rock in Rio 1985

More videos from 1985

More videos about Mothers

More videos tagged Live Concert

More videos tagged Progressive Rock

More videos tagged Rio de Janeiro
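
A minimal sketch of that idea, where "recommendation" is just explicit metadata filters the viewer ticks rather than an engagement-optimized ranker (the Video schema and sample data are hypothetical, lifted from the example above):

    # Hypothetical sketch: recommendation as user-selected metadata filters.
    from dataclasses import dataclass, field

    @dataclass
    class Video:
        title: str
        artist: str
        event: str
        year: int
        tags: set[str] = field(default_factory=set)

    def matching_videos(catalog, *, artist=None, event=None, year=None, tag=None):
        """Return videos matching every facet the viewer explicitly selected."""
        return [
            v for v in catalog
            if (artist is None or v.artist == artist)
            and (event is None or v.event == event)
            and (year is None or v.year == year)
            and (tag is None or tag in v.tags)
        ]

    catalog = [
        Video("Tie Your Mother Down", "Queen", "Rock in Rio", 1985,
              {"Live Concert", "Rio de Janeiro"}),
        Video("Bohemian Rhapsody", "Queen", "Live Aid", 1985, {"Live Concert"}),
    ]

    # Viewer ticked "More videos by or about Queen" and "More videos from 1985":
    queen_in_1985 = matching_videos(catalog, artist="Queen", year=1985)

Nothing is inferred from watch history; the viewer's explicit selections are the whole algorithm.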


Mothers like mothers in porn? Mothers giving birth? Grandmothers? What about animal mothers? What about fathers in videos with mothers? Are the mothers giving parenting or medical advice? Who is policing the filters and deciding what constitutes a video about mothers?

And either way, why bother? Just don't recommend content; there's no point except to drive engagement, which is fundamentally no different from what Facebook (or whoever) is doing.


I'm not sure. If the incumbents were barred from doing something they've invested a lot of time, money, and effort into, or if that thing were made legally riskier, I think that would open the door to competition.


Sure. The fact that you can't knowingly do business with known criminals or handle stolen money also makes it harder to start a new bank.


Google knowingly promoted extremist videos? Or did they take them down when made aware?


Knowingly: https://www.propublica.org/article/youtube-promised-to-label...

> YouTube decided against labeling 22 channels identified by ProPublica, but it's not entirely clear why.

https://www.propublica.org/article/how-china-uses-youtube-an...

> YouTube said the clips did not violate its community guidelines.

> The warehouse accounts on YouTube have attracted more than 480,000 views in total. People on YouTube, TikTok and other platforms have cited the testimonials to argue that all is well in Xinjiang — and received hundreds of thousands of additional views.

https://themarkup.org/google-the-giant/2021/04/08/google-you...

> [YouTube] even suggested videos for campaigns with terms that it clearly finds problematic, such as “great replacement.” YouTube slaps Wikipedia boxes on videos about the “the great replacement,” noting that it’s “a white nationalist far-right conspiracy theory.”

> Some of the hundreds of millions of videos that the company suggested for ad placements related to these hate terms contained overt racism and bigotry, including multiple videos featuring re-posted content from the neo-Nazi podcast The Daily Shoah, whose official channel was suspended by YouTube in 2019 for hate speech. Google’s top video suggestions for these hate terms returned many news videos and some anti-hate content—but also dozens of videos from channels that researchers labeled as espousing hate or White nationalist views.

> Even after [Google spokesperson Christopher Lawton] made that statement, 14 of the hate terms on our list—about one in six of them—remained available to search for videos for ad placements on Google Ads, including the anti-Black meme “we wuz kangz”; the neo-Nazi appropriated symbol “black sun”; “red ice tv,” a White nationalist media outlet that YouTube banned from its platform in 2019; and the White nationalist slogans “you will not replace us” and “diversity is a code word for anti-white.”



