
I guess you didn't take up my offer to search for how AI is killing traffic. There are numerous studies that repeatedly prove this to be true; this relatively recent article links to a big pile of them[0]. Why would anyone visit a website if the AI summary is seemingly good enough?

My issue with AI summaries is that they are not even remotely accurate, trustworthy or deterministic. Someone else posted this wonderful evidence[1] in the comments. LLMs are sycophantic and agree with you all the time, even if it means making shit up. Maybe things will improve, but for the last 2 years, I have not seen much progress regarding hallucinations or deterministic i.e. reliable/trustworthy responses. They are still stochastic token guessers with some magic tricks sprinkled on top to make results slightly better than last month's LLMs.

And what happens when people stop creating new websites because they aren't getting any visitors (and by extension ad-revenue)? New info will stop being disseminated. Where will AI summarize data, if there is no new data to summarize? I guess they can just keep rehashing the new AI-generated websites, and it will be one big pile of endlessly recycled AI shit :)

p.s. I don't disagree with you regarding SEO spam, hostile design, cookie popups, etc. There is even a hilariously sad website[2] which points out how annoying websites have become. But using non-deterministic sycophantic AI to "summarize" websites is not the answer, at least not in the current form.

[0] https://www.theregister.com/2025/07/22/google_ai_overviews_s...

[1] https://imgur.com/a/why-llm-based-search-is-scam-lAd3UHn

[2] https://how-i-experience-web-today.com/

edit: grammar





> My issue with AI summaries is that they are not even remotely accurate, trustworthy or deterministic.

Who cares if it's deterministic? Google changes its algorithms all the time; you don't know what its devs will come up with next, when they'll release it, when they'll deploy it, or when the previous cache gets cleared. It doesn't matter.


Haha, I suppose the problem is that LLM outputs are unreliable yet presented as authoritative (disclaimers do little to counteract the boffo confidence with which LLMs bullshit) — not that they are unreliable in unpredictable ways.

Presented as authoritative by its users, I mean. There are very obvious disclaimers and people just ignore them.

I'm well aware of the studies that "prove" that "AI" summaries are "killing" traffic to websites. I suppose you didn't consider my point that the same was said about snippets on SERPs before "AI"[1].

> My issue with AI summaries is that they are not even remotely accurate, trustworthy or deterministic.

I am firmly on the "AI" skeptic side of this discussion. And yet, if there's anything this technology is actually useful for, it's summarizing content and extracting key points from it. Search engines contain massive amounts of data. Training a statistical model on it that can provide instant results to arbitrary queries is a far more efficient method of making the data useful for users than showing them a sorted list of results which may or may not be useful.

Yes, it might not be 100% accurate, but based on my own experience, it is reliable for the vast majority of use cases. Certainly beats hunting for what I need in an arbitrarily ordered list and visiting hostile web sites.

> LLMs are sycophantic and agree with you all the time, even if it means making shit up.

Those are issues that plague conversational UIs and long context windows. "AI" summaries answer a single query, and the context is volatile.

> And what happens when people stop creating new websites because they aren't getting any visitors (and by extension ad-revenue)? New info will stop being disseminated.

That's baseless fearmongering and speculation. Websites might be impacted by this feature, but they will cope, and we'll find ways to avoid the doomsday scenario you're envisioning.

Some search engines like Kagi already provide references under their "AI" summaries. If Google is pressured to do so, they will likely do the same as well.

So the web will survive this specific feature. Website authors should be more preoccupied with providing better content than with search engines stealing their traffic. I do think that "AI" is a net negative for the world in general, but that's a separate discussion.

[1]: https://ahrefs.com/blog/featured-snippets-study/


Sorry, I didn't mean to discount your argument. I don't think SERPs are a valid comparison; SERPs vs. AI is for me apples vs. oranges, or rather rocks vs. turtles :)

BTW, your linked article/study doesn't support your argument - SERPs are definitely stealing clicks (just not nearly as many as AI):

> In other words, it looks like the featured snippet is stealing clicks from the #1 ranking result.

I should maybe clarify: I have been using LLMs since the day they arrived on the scene, and I have a love/hate relationship with them. I do use summaries sometimes, but I generally still prefer to at least skim TFA unless it's something where I don't care about perfect accuracy. BTW, did you click on that imgur link? It's pretty damning - the AI summary you get depends entirely on how you phrase your query!

> Yes, it might not be 100% accurate, but based on my own experience, it is reliable for the vast majority of use cases. Certainly beats hunting for what I need in an arbitrarily ordered list and visiting hostile web sites.

What does "vast majority" mean? 9 out of 10? Did/do you double-check the accuracy regularly? Or did you stop verifying once you reached the consensus that X/Y were accurate enough? I can imagine that, as a tech-savvy individual, you still verify from time to time and remain skeptical - but think of the 99% of users who don't care or won't bother, who just assume AI summaries are fact. That's where the crux of my issue lies: they are selling AI output as fact, when in fact it's query-dependent, which is just insane. This will (or surely already has) cost plenty of people dearly. Sure, reading a summary of the daily news is probably not gonna hurt anyone, but people have gotten/will get into trouble believing a summary for some queries, e.g. renter rights. I did this recently (a combination of summaries + paid LLMs) and almost believed the output, until I double-checked with a friend who works in this area. He pointed out a few minor but critical mistakes, which saved my ass from signing some bad paperwork. I'm pretty sure AI summaries are still just inaccurate, non-deterministic LLMs with some special sauce to make them slightly less sketchy.

> Those are issues that plague conversational UIs, and long context windows. "AI" summaries answer a single query and the context is volatile.

Just open that imgur link. Or try it for yourself. Or maybe you are just good at prompting/querying and get better results.

> So the web will survive this specific feature. Website authors should be more preoccupied with providing better content than with search engines stealing their traffic.

I agree the web will survive in some form or other, but as my Register link shows (with MANY linked studies), it already IS killing web traffic to a great degree because 99% of users believe the summaries. I really hope you are right, and the web is able to weather this onslaught.


Just to add fuel to the fire... AI output is non-deterministic even with the same prompt, so users searching for the same thing may get different results. The output is not just query-dependent.
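To make the non-determinism point concrete, here's a toy sketch (not any real model's API - just the standard sampling math) of how an LLM picks its next token. With temperature > 0, the same input can produce different outputs on every run; only greedy decoding (temperature 0) is deterministic, and as far as I know, hosted models typically sample rather than decode greedily:

```python
import math
import random

def sample_next_token(logits, temperature):
    """Pick a token index from raw scores.

    temperature == 0 means greedy decoding (always the argmax, deterministic);
    temperature > 0 means softmax sampling (stochastic, varies run to run).
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax with temperature (max-subtracted for numerical stability), then sample.
    scaled = [score / temperature for score in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

# Toy scores for three candidate tokens; token 0 is only slightly preferred.
logits = [2.0, 1.9, 0.5]
greedy = [sample_next_token(logits, 0) for _ in range(5)]
sampled = [sample_next_token(logits, 1.0) for _ in range(5)]
print(greedy)   # always [0, 0, 0, 0, 0]
print(sampled)  # varies from run to run
```

Same prompt, same model, different answers - by design. And that's before you account for the provider silently swapping model versions under you.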

> What does "vast majority" mean? 9 out of 10? Did/do you double-check the accuracy regularly? Or did you stop verifying after reaching the consensus that X/Y were accurate enough?

I don't verify the accuracy regularly, no. And I do concede that I may be misled by the results.

But then again, this was also possible before "AI". You can find arguments on the web supporting literally any viewpoint you can imagine. The responsibility of discerning fact from fiction remains with the user, as it always has.

> Just open that imgur link. Or try it for yourself. Or maybe you are just good at prompting/querying and get better results.

I'm not any better at it than any proficient search engine user.

The issue I see with that Imgur link is that those are not search queries. They are presented as claims, and the "AI" will pull from sources that back up those claims. You would see the same claims made by web sites listed in the results. In fact, I see that there's a link next to each paragraph which will likely lead you to the source website. (The source website might also be "AI" slop, but that's a separate matter...) So Google is already doing what you mentioned as a good idea above.

All the "AI" is doing there is summarizing content you would find without it as well. That's not proof of hallucinations, sycophancy, or anything else you mentioned. What it does is simplify the user experience, like I said. These tools still suffer from these and other issues, but this particular use case is not proof of it.

So instead of phrasing a query as a claim ("NFL viewership is up"), I would phrase it using keywords ("NFL viewership statistics 2025"). Then I would see the summarized statistics presented by "AI", drill down and go to the source, and make up my mind on which source to trust. What I wouldn't do is blindly trust results from my biased claim, whether they're presented by "AI" or any website.

> it already IS killing web traffic to a great degree because 99% of users believe the summaries. I really hope you are right, and the web is able to weather this onslaught.

I don't disagree that this feature can impact website traffic. But I'm saying that "killing" is hyperbole. The web is already a cesspool of disinformation, spam, and scams. "AI" will make this even worse by enabling website authors to generate even more of it. But I'm not concerned at all about a feature that right now makes extracting data from the web a little bit more usable and safer. I'm sure that this feature will eventually also be enshittified by ads, but right now, I'd say users gain more from it than what they lose.

E.g. if my grandma can get the information she needs from Google instead of visiting a site that will infect her computer with spyware and expose her to scams, then that's a good thing, even if that information is generated by a tool that can be wrong. I can explain this to her, but can't easily protect her from disinformation, nor from any other active threat on the modern web.



