
> How does your trivial removal automation distinguish between 'intimate depictions' and 'political imagery I dislike'?

It doesn't; however, you would immediately have a civil case against the person and/or their representative for their false claim. That's not spelled out in the bill but it should be obvious.

> The whole point of this discussion is that this is going to be used to censor everything

That's the claim. You may accept it without objection; I simply do not. Now I'm offering a slightly modified discussion. Is that alright?

> not just 'intimate visual depictions'.

I'm sure you would agree that any automation would obviously only be able to challenge images. This does create a vulnerability, to be sure, but I do not agree that it automatically creates the wholesale censorship of political speech that you or the EFF envision here.

It also makes an effort to be scoped only to sites that rely on user-generated content, effectively limiting it to social media platforms and certain types of adult content websites. Given their nature, it's already likely that these social media platforms do _not_ allow adult content and have well-developed mechanisms to handle this precise problem.

The bill could be refined for civil liberties' sake; however, in its current state, I fail to see the extreme danger in it.




>however, you would immediately have a civil case against the person and/or their representative for their false claim. That's not spelled out in the bill but it should be obvious.

Wow. Not sure if this is ludicrously bad faith or just ludicrously naive/ignorant/unthinking, but it's ludicrous either way. It's plenty to nullify everything else you attempt to say on the topic.


> however, you would immediately have a civil case against the person and/or their representative for their false claim.

Look how well that's worked against DMCA and YouTube copyright strike abuse. When the posts being taken down are not commercial, the difficulty of proving damages means the effectiveness of such a deterrent is minimal. The act could have put in some sort of disincentive against fraudulent reports, or even provided a way to find out why your content was taken down, but notably does neither.


> It doesn't; however, you would immediately have a civil case against the person and/or their representative for their false claim.

Great. So a motivated bad actor can send out 10,000,000 bogus takedowns for images promoting political positions and people they disagree with, and those images have to be taken down immediately. Then all 10 million people affected have to individually figure out who the hell actually submitted the takedowns, have enough money for lawyers and enough time to engage in a civil suit, and in the end they might get money, if they can somehow prove that taking down those specific images damaged them personally rather than just a cause they believe in. But will they get the images restored?

This just smacks of obliviousness and being woefully out of touch, even before we get to

> That's not spelled out in the bill but it should be obvious.

...which almost makes it sound like this whole thing is just an elaborate troll.



