
> The fact that some people can’t tell is actually scary.

It really is, and I see more and more of it in Reddit comments, and even at work.

I recently had some obvious AI writing sent to me by a lawyer on the other side of a dispute, and I was pissed. I don't mind if you want to use it to help you (I do myself), but at least have the decency to edit it so it doesn't read like ChatGPT trash.




> It really is, and I see more and more of it in Reddit comments, and even at work.

I have a morbid fascination with how bad Reddit has become. LLMs have supercharged the problem, but even before ChatGPT became popular Reddit was full of ragebait, reposts, lies, and misinformation.

The scary and fascinating thing to me is that so many people eat that content right up. You can drop into the front page (default subreddits or logged out), and anyone with a basic adult-level understanding of the world can pick out obvious lies and deliberate misinformation in many of the posts. Yet thousands of people in the comments are getting angry over obviously fabricated or reposted AITA stories, clear ragebait in /r/FluentInFinance, and numerous other examples. A lot of people love that content and can't seem to get enough of it.



