
It couldn't, but what all of these companies could absolutely do is pair automated moderation with human review. When AI flags something as inappropriate, a human has to sign off before it's removed or the account is suspended.

The only exception would be when content matches a known hash for illegal content; then it should be removed automatically.
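
A minimal sketch of that two-track flow, in Python. Every name here is hypothetical, and SHA-256 stands in for the perceptual hashing (e.g. PhotoDNA-style matching) that real systems use for known-content detection:

    import hashlib
    from collections import deque

    # Hashes of known illegal content, e.g. from a shared industry
    # database (empty placeholder here, for illustration only).
    KNOWN_ILLEGAL_HASHES: set[str] = set()

    # Queue of AI-flagged items awaiting human sign-off.
    human_review_queue: deque = deque()

    def handle_flag(content: bytes, ai_label: str) -> str:
        """Route an AI-flagged item: auto-remove only on a hash match."""
        digest = hashlib.sha256(content).hexdigest()
        if digest in KNOWN_ILLEGAL_HASHES:
            # Known illegal content: the one case removed with no human gate.
            return "removed"
        # Everything else waits for a human reviewer to approve the action.
        human_review_queue.append((digest, ai_label, content))
        return "pending_human_review"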




How many automated moderation actions do you think Twitter performs in a day that would require such a review?

How many seconds might each human review take?

What false-positive rate is acceptable for that human review?
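
A back-of-envelope calculation makes the scale concrete. All inputs below are illustrative assumptions, not published Twitter figures:

    # Rough staffing estimate under assumed (not sourced) inputs.
    flags_per_day = 1_000_000             # assumed actions needing sign-off
    seconds_per_review = 30               # assumed time per human review
    reviewer_seconds_per_day = 8 * 3600   # one reviewer's working day

    reviewers_needed = (flags_per_day * seconds_per_review) / reviewer_seconds_per_day
    print(f"~{reviewers_needed:,.0f} full-time reviewers")  # ~1,042 under these inputs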


Is that an argument that they shouldn't do it, then? Because right now, if this article is to be believed, they don't do manual review even on appeals. That's obviously not OK.


As other commenters point out, that characterization is not accurate. There were most definitely humans involved, including on appeal.

Does that make things OK? I suppose that depends on whether you object more to the decision or the process.



