
The article that the author links to for "goth taxonomy" (^f "many kinds of goth") is pretty clearly composed of entirely lightly edited AI-generated images, and likely AI-generated text.

For example, the confusing "goth family tree" at the beginning of the article is clearly Dall-E/ChatGPT prompted with just that, with labels edited in after the fact in a way that makes the whole thing nonsense. For the "trad goth" picture further down, it's blatantly obvious it's Dall-E's cluttered style with meaningless lines. It's the exact kind of slop the article complains about, and I'd go so far as to say it's anti-information, because you can't trust any of it. It would be fine if the images were purely decorative, but for an article that purports to authoritatively describe visual aesthetics, handing that work off to a hallucination-prone AI renders it untrustworthy.

I assume this was the result of a 30-second Google search, but the author including that as a reference source is bad enough that it makes the rest of the article hard to take seriously.

edit:

This particular line jumps out at me:

>Particularly if the AI products assisting us now are successfully trained to not hallucinate anymore

I'm normally one to jump in and defend AI when someone dismisses it as useless because it sometimes hallucinates, because that claim could not be further from the truth. But if you think it doesn't hallucinate at all anymore, then your credibility when writing about AI is severely limited.
