I don't think there should be blanket immunity for anything simply because an algorithm did it. So imagine there wasn't: imagine you could, in principle, sue a website over its promotion of illegal content. I would think that, on a fact-specific basis, HN would have a very good defence against liability even absent blanket immunity.
You could imagine the kinds of elements that might matter in a fact pattern emerging from a deposition: the revenue and size of the website, the proportion of revenue directed towards moderation, the percentage of reports identifying illegal material that get a response, the manner of that response, the tools provided to users, the types of content actually hosted on the site, the nature of the algorithm itself, and internal discussions about access to harmful content. HN is a text-based website (which also mitigates the harm claim), it gets on the order of a few hundred submissions a day, the bulk of the potential harm comes from submissions touching on topics likely to cause legal issues, and in my experience such topics are typically flagged within a few minutes and removed quickly. There's no mechanism to communicate directly with users, no mechanism to follow users, and no mechanism to change what you see on the front page based on what you clicked before. Everyone is exposed to the same content.
By contrast, consider the companies that are actually the targets of these lawsuits. I was at the TASM (Terrorism and Social Media) 2022 conference -- some of my research is adjacent to this, but I've never done any work on terrorism, and my social media work involves alt-tech platforms, not the big social media companies -- where the keynote speakers were the European harm policy leads for Twitter, Facebook, and YouTube. All of them made it clear that their position was that it is incumbent on academics and government agencies to identify harmful content and work with social media, because every region has its own unique content challenges and it's not possible for tech companies to handle them at global scale. A question was asked of the panel that went something like: "Meta was, as it admits, instrumental in spreading the violence-inciting messages that drove the anti-Rohingya pogroms in Myanmar. The defense is that Meta wasn't prepared for, or staffed for, detecting issues in a single small market, and things snowballed quickly. You could hire 100 full-time content people in that country for the salary of a single person sitting on this panel or a single SWE, so how could resource constraints be the issue?" The answer was "We're already devoting enough resources to this problem." I think that's an incredibly shitty answer to the question, I think a deposition could surface exactly this kind of logic, and to me that would be a fact pattern supporting liability. I hope they get their pants sued off in every jurisdiction that allows it. It's clear that an aversion to staffing is a huge part of the issue.
So from my perspective, the typical process for resolving civil liability, plus reasonable assumptions about how courts would interpret fact patterns, is likely to produce outcomes I'm plenty happy with.
(In the two cases in front of SCOTUS right now, it seems unlikely the victims would win: they have basically no evidence connecting the perpetrators of the violence to the social media services in question, and the argument seems to be of the form "some terrorist killed my family member, and some other terrorists got radicalized online, ipso facto social media is liable". I don't think that's a winning case with or without s.230.)