It couldn't, but what all of these companies could absolutely do is pair automated moderation with human review. When the AI flags something as inappropriate, a human has to sign off before the content is removed or the account suspended.
The only exception is when content matches a known hash for illegal material - then it should be removed automatically.
Is that an argument that they shouldn't do it, then? Because right now, if this article is to be believed, they don't do manual review even on appeals. That's obviously not okay.