
Yes - I'm more hopeful that LLMs (or whatever replaces them over time) will become truthful and accurate sooner than we get rid of SEO spam. I would also expect SEO spam to get worse soon because of LLMs.



The incentives are very different. With SEO spam, you have a direct monetary incentive. More spam = more clicks = more money. With LLMs extracting information and answering questions without actually sending the user to the site ... what's the incentive to create low-accuracy content to be ingested, beyond "I want to mess with the data set"?

I'm sure a few people will go down that route, but it requires a lot of intrinsic motivation. That's a very different beast from a multi-billion dollar market drawing in millions of interested parties. You can concentrate on identifying the saboteurs and filtering them out, which feels easier, especially because that kind of "shadow-banning" is hard to impossible for them to detect.


On that note, where are they extracting new information from? If sites are never visited, people will stop creating them.




