
A key point from this article that I agree with strongly is that it is crucial everyone recognise we are currently in an AI bubble.

I often find people contest this with the non-sequitur of "No, it's not a bubble, there is real value there. We are building things with it". The fact that there is real value in the technology does not in any way contradict that we are in a bubble. It may even be supporting evidence for it. Compare with the dot com bubble: nobody would tell you there was no value in the internet. But it was still a bubble, a massive, hyper-inflated bubble. And when it popped, it left large swathes of the industry devastated, even while a residual set of companies carried on to build the "real" internet-based reworking of the entire economy, which took 10-15 years.

People would be well advised to have a look at this point in time at who survived the dot com bubble and why.



I agree, although bubbles don’t always have to pop in huge ways like the dot com crash did.

E.g. crypto displayed many, many characteristics of a bubble for a number of years, but the crypto bubble seems to have just slowly stopped growing, rather than popping in a fantastical way. (Not to say it still can’t, of course)

Then again, this bubble is different in that it has engulfed the entire US economy (including public companies, which is the scary part since the damage potential isn’t limited to private investors). If there’s even a 10% chance of it popping, that’s incredibly frightening.


I think this is a really insightful point. Even if we are in a bubble now, in the sense that current LLM technology (impressive though it is) does not quite live up to the huge valuations of AI companies, there is a plausible future in which we get enough technological progress in the next few years that the bubble never really pops and we are able to morph into a new AI-driven economy without a crash. There are probably good historical examples of this happening with other technologies, although it’s hard to identify them because in retrospect it looks like the optimists invested rationally, even though their bets maybe weren’t all that justified at the time.

I personally think a crash is more likely than not, but I think we should not assume that history will follow a particular pattern like the dot com bust. There are a variety of ways this can go and anyone who tells you they know how it’s all going to shake out is either guessing or trying to sell you something.

It is for sure an interesting time to be in the industry. We’ll be able to tell the next generation a lot of stories.


It's a good analysis.

For me the big concern is really the level of detachment from reality that I'm seeing around time scales. People in the startup world seem to utterly fail to appreciate the complexity of changing business processes - for any type of change, let alone for an immature tech where there are still fundamental unsolved problems. The only way for the value of AI to be realised is for large-scale business adoption to happen, and that is simply not achievable in the 2 years of runway most of these companies seem to be on.


Cryptocurrencies have survived and thrived, but anyone who went all-in on NFTs or blockchain gaming (or anything other than currency on the blockchain?) has been zeroed out.


> The crypto bubble seems to have just slowly stopped growing

Bitcoin is now worth 2.3 trillion dollars. The price graph looks like a hockey stick. For tokens in a self-contained ledger system.

You may be conflating hype and bubble.


Once there's a consensus around a bubble, hasn't the bubble already burst?


Not really. I searched up some articles from 1996/97; dot com was already considered a bubble in '97...

It had minor 15-20% corrections but kept rising for another year or two after that...

Bubbles are driven by irrational beliefs. And they wouldn't be irrational if we could understand them.

AI is surely a bubble, but it could go on for another 3-4 years until something, possibly unrelated to AI, pops it.


Everyone does NOT recognize it; just go on Twitter if you don't think so.


Agreed. I think most of the arguments are premised on AI getting infinitely better for some reason, without anyone outlining clear arguments addressing the inherent architectural limitations of today's LLMs. I have been in this space since 2021 and honestly, besides maybe voice and Gemini Deep Research, things aren't much better than GPT-4.


The fact everyone thinks we are in an AI bubble is practically proof we are not in an AI bubble.

The crowd is always wrong on these things. Just like everyone "knew" we were going into a deep recession sometime in late 2022, early 2023. The crowd has an incredibly short memory too.

What it means is that people are really cautious about AI. That is not a self-reinforcing, fear-of-missing-out, explosive bubble dynamic. That is a classic bull market climbing a wall of worry.


Technical ICs actually trialing the AI tools think we’re in a bubble. Executives, boards, directors, and managers are still tumbling head over heels down the mountain in a race to shovel more money into the fire, because their engineering orgs are not delivering results and they are desperate to find a solution.


This. Highly competent technical ICs in my circles continue to (metaphorically) scream at their juniors for submitting AI slop while being unable to describe what it's doing, why it's doing it that way, or how they could optimize it further, since all management cares about is "that it works".

Current models excel because of the corpus of the open internet they built off of (or rather, stole from). New languages aren't likely to see results as consistent as old ones, simply because these pattern matchers are trained on past history and not new information (see Rust vs C). I think the fact that nobody's minting billions turning LLMs into trading bots should be pretty telling in that regard, since finance is a blend of relying on old data for models and intuiting new patterns from fresh data; in other words, it directly targets the weak points of LLMs (their inability to adapt to real-time data streams over the long haul).

AI's not going away, and I don't think even the doomiest of AI doomers is claiming or hoping for that. Rather, we're at a crossroads like you say: stakeholders want more money and higher returns (which AI promises), while the people doing the actual work are trying to highlight that internal strife and politics are the holdups, not a lack of brute-force AI. Meanwhile both sides are trying to rattle the proverbial prison bars over the threats to employment real AI will pose (and the threats current LLMs pose to society writ large), but the booster side's actions (e.g., donating to far-right candidates that oppose the very social reforms AI CEOs claim are needed) betray their real motives: more money, fewer workers, more power.


> AI's not going away, and I don't think even the doomiest of AI doomers is claiming or hoping for that.

Is this the consensus on nomenclature? I thought "AI doomers" referred to people who think some dystopia will come out of it. In that case I've read a lot of text wrong.


At this point, my perspective is that the bubble talk has effectively boiled viewpoints into booster or doomer camps based solely on one’s buy-in to the argument that these companies have created actual intelligence that can wholesale replace humans. There doesn’t seem to be much room for nuance at the moment, as the proverbial battle lines have been drawn by the loudest voices on either side.


But it's not true that everyone thinks we are in an AI bubble.


Most people don't think they're in a bubble until it starts to pop.


So you seem to be opposing entropsilk's argument, same as I did.


We might be in a bull market. The question is for how long. I would guess less than a year considering the market-wide P/E.


Not really. Worked through the dotcom bubble. It was obvious to some people on the ground doing the work. It was obvious to some execs who took advantage of it. Feels similar. Especially if you are burning through tokens on Gemini CLI and Claude Code where the spend doesn’t match the outcomes.


I saw someone earnestly say that a business model with potential to generate actual revenue was no longer relevant, and companies need only generate enough excitement to draw investors to be successful because “the rules have changed.” At that moment, I saw that telltale soapy iridescent sheen. I’ve heard that before.

I’m worried that the US knowledge industries jumped the shark in the teens and have been living off hopeful investors who assume the next equivalent of the SaaS revolution is right around the corner, and that AI for whatever reason just won’t change things that much, or, if it does, that the US tech industry will fumble it, assuming their resources and reputations will insulate them from the competition, just like the tech giants of the 90s vs Internet startups. If that’s true, some industries like biotech will still do fine, but the trajectory of the tech sector, generally, will start looking like that of the manufacturing sector in the 90s.


There is absolutely FOMO. It's even being deliberately stoked. "AI won't take your job. People using AI will." This is this hype cycle's "have fun being poor."



