
I just want to point out that AI-generated material is naturally a confirmation-bias machine. When the output is obviously AI, you confirm that you can easily spot AI output. When the output is human-level, you pass over it without a second thought. There is almost no everyday scenario where you are retroactively made aware that something was AI.


I've heard this called the toupee fallacy. Not all toupees are bad, but you only spot the bad toupees.


The vast majority of the time people question whether an image or a piece of writing is "AI", they're really just calling it bad, without realizing that simply calling the output bad would have the same effect.

Every day I'm made more aware of how terrible people are at identifying AI-generated output, but also how obsessed they are with GenAI-vestigating things they don't like or wouldn't buy simply because those things are bad.



