The (economic) AI apocalypse is nigh (pluralistic.net)
136 points by baobun 23 days ago | 188 comments


So I do think we're in a bubble, but I also remember when all the discussion around here was about Uber, and I read many, many hot takes about how they were vastly unprofitable, had no real business model, could never be profitable, and only existed because investors were pumping in money, and as soon as they stopped, Uber would be dead. Well, it's now ten years later, Uber still exists, and last year they made $43.9bn in revenue and net income of $9.8bn.


Oh dear, we are definitely in a bubble; it's just not the kind that bursts completely.

Back when everybody got into website building, Microsoft released a piece of software called FrontPage, a WYSIWYG HTML editor that could help you build a website, and some of its backend features too. With it you could create a complete website, with a home page, news pages, and guestbooks, with ease compared to writing "raw" code.

Nowadays, however, almost all of us are still writing HTML and backend code manually. Why? I believe it's because the tool is too slow to fit into a quick-moving modern world. It takes Microsoft weeks of work just to come out with something that poorly mimics what an actual web dev invented in an afternoon.

Humans are adaptive; tools are not. Sometimes a tool can beat humans on productivity, sometimes it can't.

AI is still finding its use cases. Maybe it's good at acting like a cheap, stupid, spying secretary for everyone, and maybe it can write some code for you, but if you ask it to "write me a YouTube", it just can't help you.

The problem is, a real boss/user will demand "write me a YouTube" or "build a Fortnite" or "help me make some money". The fact that you have to write a detailed prompt and then debug its output is the exact reason why it's not productive. The reality that it can only help you write code, instead of building an actually usable product from a simple sentence such as "the company has decided to move to online retail, you need to build a system to enable that", is proof of LLMs' shortcomings.

So, AI has limits, and people are finding out. After that, the bubble will shrink to fit its actual value.


This is fair, but it's also assuming that today's AI has reached its potential, which frankly I don't think any of us know. There's a lot of money being invested in compute and research by a lot of different players, and we could definitely see some breakthroughs. I doubt many of us would've predicted even the progress we've had in the last few years before ChatGPT came out.

I think the bubble will be defined by whether these investments pan out in the next two years, or whether we just get small incremental progress like GPT-4 to GPT-5, not by what products are made with today's LLMs. It remains to be seen.


I think Uber's profitability has also been achieved by passing what would be debt for a traditional taxi company (the maintenance of the fleet) onto the drivers. I think many drivers aren't making as much money as they think they are.


Did this change since Uber was created? Did Uber previously, back when people were making their "Uber is Doomed" comments, pay to maintain drivers' cars? If not, why bring it up?

This is a pattern where people have their pre-loaded criticisms of companies/systems and just dump them into any tangentially related discussion rather than engaging with the specific question at hand. It makes it impossible to have focused analytical discussions. Cached selves, but for everything.


Yes, Uber paid drivers more in the beginning, and even facilitated car loans for them lol


But did their business model require them to do that forever? That seems like something they can cut back on once there is a healthy supply of drivers in a market.


Yeah, I agree it was the plan from the beginning: use Saudi money to strangle competition and then get prices back to taxi level (or higher). I believe they partly succeeded by making a compromise here: they both cut the payments to drivers and increased prices.

The plan worked because in the bait-and-switch phase they were visibly cheaper, so over the last few years people's mental and speech model changed from "call me a taxi" to "call me an Uber". But at least in my local market, the price difference between a taxi and an Uber in 2025 is negligible.


A decade ago in NYC, they were giving out free rides left and right. I used Uber for months without paying for a single ride, then when they started charging, they were steeply discounted. I could get around for a little more than a subway fare.

Lyft did the same thing, got a bunch of free rides for a while with them, too.


What I think has never changed is that most people do not understand depreciation on an asset like a car, or how use of the vehicle contributes to that depreciation. People see the cost of maintaining a vehicle as something inevitable that they have almost no control over.


Yes, Uber was paying drivers more and that is how they were able to have good service.


Awesome, Uber is profitable by creating a society-damaging business model, which other companies have copied.

Phew, I'm so sad I was an Uber critic from early on...


I think the point is about Uber's profitability and not necessarily about their business practices or ethics, and we should be careful not to conflate the two. It is absolutely valid to criticize the latter, but that (so far) seems mostly orthogonal to the former.

Now, it is totally possible that their behavior eventually creates a backlash which then affects their business, but then that is still a different discussion from what was discussed before.


There is also a significant difference in insurance. Taxi companies usually have comprehensive insurance, hence the higher standards for drivers and vehicles (monitored and maintained), while Uber has a more differentiated model (part driver, part company, not monitored):

https://jjlegal.com/blog/rideshare-vs-taxis-understanding-ac...


This is underselling the Uber story to a degree. The original pitch for Uber was that their total addressable market was the entire auto industry, because people would start preferring taxis over driving. They are still chasing that, with similar stories now pushed to sell robotaxis.

Uber was undercutting traditional taxis through driver incentives and cheaper pricing. Many hot takes were about the sustainability of this business model without VC money. In many places this turned out to be true: driver incentives are way down and Uber pricing is way up.

That said, this is also conflating one company with an industry. Uber might have survived, but how many ride-sharing companies have survived in total? How many markets has Uber left because it couldn't sustain them?

In a bubble, the destruction is often that some big companies get destroyed and others survive. For every pets.com there is one Amazon. That doesn't mean Amazon is a good example to say the naysayers during the dot-com bubble were wrong.


Simplifying Uber's story to "pricing or more drivers" misses the most important part.

Uber was undercutting traditional taxis because, at least in the US, the traditional taxi was a horrible user experience. No phone app, no way to give feedback on the driver, horrible cars, unpredictable charges... This was because taxis had a monopoly in most cities, so they really did not care about customers.

The times when Uber was super-cheap have long passed, but I still never plan to ride regular taxis. It's Waymo (when available) or Lyft for me.


Well just look at the price of Uber and Lyft rides. I regularly had single-digit fares on both Uber and Lyft early on. Of course they were unprofitable then. Now that they have gained mindshare they have increased prices drastically.


Uber quoted $43.00 yesterday for a 23-minute drive from Park Slope to Brooklyn Heights in New York City, versus $2.90 for a 35-minute R train ride.

I am humbled by how myopic I was in 2010, cheering for a taxi-hailing smartphone app to create consumer surplus over ordering taxis by calling taxi companies.


Uber charged me $85 (plus tip!) for a 35 min ride from the airport. Yeah, my fond memories of those nascent rideshare apps are long gone...


Yellow cab is still more expensive and the cars are dirtier. I wonder why they don't try to compete on price at least with Uber.


It's been my experience (~4 years ago) that taxis were generally cheaper than Uber in New York, especially for anything like "get me to the airport", sometimes like $25 cheaper.


In my experience it's actually cheaper, at least for airport rides: $50 flat through the yellow cab app, with no surge and no tip when ordered through the app, compared to $65 at best, and sometimes well over double that during a bad surge.


Does Uber pay the same license and insurance fees yet?

Airport trips these days are often over $100 for me. What is crazy is yellow cab will take me to my area for $50 flat, tip included, through their app. Uber has exceeded even taxicab prices by this point.


>versus $2.90 for a 35 minute R train ride.

Comparing the two makes as much sense as comparing how a $500k Rolls-Royce and a $1k shitbox both get you from point A to point B.


Yeah, this keeps coming up, so I checked.

Uber sank less than $100 billion overall, over nearly two decades, before reaching profitability.

By analogy (which is basically anecdotal evidence but with cognitive rhyme) we should have profitable LLMs in about 320 years.
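
My guess at the arithmetic behind the 320 (the comment doesn't spell it out, and the AI capex total is my own assumption):

    # Back-of-the-envelope: assume time-to-profitability scales linearly
    # with cumulative cash sunk (the analogy, not a real model).
    uber_cash_sunk = 100e9   # dollars sunk over ~2 decades (per the comment)
    uber_years = 20
    ai_cash_sunk = 1.6e12    # assumed: rough total of announced AI capex
    print(uber_years * ai_cash_sunk / uber_cash_sunk)   # -> 320.0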


The story about Uber was that they were going to be unprofitable until they destroyed taxi services, then they were going to charge more than taxis and give less of a share to the driver.

Nobody is predicting that AI is going to do that. One thing I hadn't considered before is how much it was in Google's interest to overestimate and market the impact of AI during their antitrust proceedings. For the conspiratorially minded (me), that's why the bottom is being allowed to drop out of the irrational exuberance over AI now, rather than a couple of months ago.


> Nobody is predicting that AI is going to do that.

... Now that I hear it out loud, I can't help wondering whether it's something we should be thinking about.

Subsidization to destroy competitors followed by lock-in is obvious, but is there any way these systems could turn professionals into serfs?


it doesn't look likely that any particular AI service will have a moat. every time one of them does anything right now, there's a dozen competitors able to match it within months


I mean, hey, maybe they _will_ increase all those 10x programmers' bills to $20k per month. At least that would be funny.


Uber was unprofitable, and when it ceased to be unprofitable it ceased to be better.

They did manage to offload costs onto weaker actors, partly by simply ignoring laws and hoping it would work out for them. It did, but it was not exactly some grand inspiring victory, more a success of "some don't have to follow the law" corruption.


At the prices they were charging back then, that was indeed the accurate take. Of course prices rose, and a lot of middle- and lower-income riders were kicked to the curb in favor of those who can afford to blow another $60 per leg on a night out. I guess there turned out to be enough of them at scale.


That's not exactly world-changing money.

They found the niche and market to operate in and are running with it until the next thing “creatively destroys” their business model.

That’s a far cry from the multi-trillion dollar hype bubble surrounding AI.


It's a $200 billion company, roughly the valuation Anthropic is raising at.


The "hot takes" were that they were using investor money to illegally undercut the taxi industry until ride share had an oligopoly and that the government would stop them from breaking the law. I don't know why law enforcement is considered a hot take here, but I have a few guesses.


I'm 52. I experienced the dot-com bubble very up close. I was in the Raleigh-Durham area for most of it. There were hundreds of startups all over the area. Companies like Nortel were booming. IBM was booming. By 2003 it was all gone--Nortel was a shell of its former self, IBM laid off huge numbers of workers. There was suddenly a glut of office space all over. We moved out in 2006, but even around then there was still a glut of office space! I don't know if it ever recovered because it was so overbuilt.

I remember having lunch with a guy who was stubbornly holding on to his Nortel stock, which was worth mere pennies by like 2005 or so. Nortel employees not only lost their jobs, they lost their 401(k)s, which were all in company stock. Anyway, this guy was sure it was going to bounce back. I saw in like 2008 that Nortel finally closed its doors and the stock was delisted at $0. His dream was dead. I never worked for equity after that time period.

The enormous build out of data centers reminds me of that time period. Yeah, it's all going to collapse.


> Plan for a future where you can buy GPUs for ten cents on the dollar, where there's a buyer's market for hiring skilled applied statisticians, and where there's a ton of extremely promising open source models that have barely been optimized and have vast potential for improvement

This doesn’t square entirely with the earlier claim that AI companies have (and will continue to have) “dogshit unit economics”.

If you have a bunch of cheap “applied statistician” labor (kind of a reductive take, btw), cheap GPUs, and powerful open source models, it is a near certainty that companies would achieve favorable unit economics by optimizing existing models to run much more efficiently on existing GPUs.

I happily pay $20/month for Google One to use Gemini 2.5 Pro. I don’t really need it to be a whole lot better. It’s a great product. If they can deliver inference of that level with positive margin (and keep it ad free), it’s a viable business.

Investors will likely lose billions, if not trillions, but I don’t think the industry is inherently unprofitable - I just don’t think anybody has been incentivized to optimize for cost yet. Why would you, when investors continue to throw money at you to scale?


> I happily pay $20/month for Google One to use Gemini 2.5 Pro. I don’t really need it to be a whole lot better.

So you think those $20/month are generating profits?

Because Google is burning its own cash.


> If they can deliver inference of that level with positive margin (and keep it ad free), it’s a viable business.

That “if” is doing tons of heavy lifting here.


Either the AI hype will slow down and the market will crash.

Or AI really does deliver 100x productivity gains, fewer humans are needed, and you lose your job.

I don't see a positive in either of these scenarios…


2x productivity gain, jobs kept, more stuff?


Where's the evidence? This writing strikes me as purely belief-based. For all the overhypers of AI, you also get extreme skeptics like this, and neither side has good evidence. It's speculation. If he truly knew the future he'd short all the companies right before the collapse.


Here's some evidence:

Oracle's share price recently went up 40% on an earnings miss, because apart from the earnings miss they declared $455b in "Remaining Performance Obligations" (which is such an unusual term it caused a spike in Google Trends as people tried to work out what it means).

Of the $455b of work they expect to do and get paid for, $300b comes from OpenAI. OpenAI has about $10b in annual revenue, and makes a loss on it.

So OpenAI aren't going to be able to pay their obligations to Oracle unless something extraordinary happens with Project Stargate. Meanwhile Oracle are already raising money to fund their obligations to build the things that they hope OpenAI are going to pay them for.
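
To put that mismatch in numbers, a rough sketch using the figures above (the contract window is my assumption, reportedly around five years):

    # OpenAI's Oracle obligation vs. OpenAI's current revenue.
    obligation = 300e9   # owed to Oracle, per the RPO disclosure
    revenue = 10e9       # OpenAI's rough annual revenue (earned at a loss)
    years = 5            # assumed contract window

    print(obligation / revenue)           # 30.0 years of today's gross revenue
    print(obligation / years / revenue)   # 6.0x current revenue per year, just for Oracle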

These companies are pouring hundreds of billions of dollars into building AI infrastructure without any good idea of how they're going to recover the cost.


I'm slightly confused about how solid expected revenue has to be to count as RPO. Does this mean OpenAI actually signed a contract binding them to spend that $300 billion with Oracle?

The second interesting part is also the part you're assuming in your argument. Does it matter that OpenAI doesn't have $300 billion now, and doesn't have the revenue/profit to generate that much? Unless there are deals in the background that have already secured funding, this seems like very shady accounting.


If I earnt £10k a year from my job and was spending more than that, getting myself deeper into debt every year, I wouldn't go out and sign up for £300k of goods and services. But maybe that's just me.

I guess we'll find out.


It's a case of major FOMO. They would rather burn with the others who bet wrong than be the ones left behind.


> These companies are pouring hundreds of billions of dollars into building AI infrastructure without any good idea of how they're going to recover the cost.

Well... to be fair, it's only really Anthropic (and the also-ran set like xAI) that runs the risk of being over-leveraged. OpenAI is backstopped by Microsoft at the macro level. They might try to screw over Oracle, but they could pay the bill. So that's not going to move the market beyond those two stocks. And the other big player is obviously Google, which has similarly deep pockets.

I don't doubt that there's an AI bubble. But it won't pop like that, given the size of the players. Leverage cycles are very hard to see in foresight; in 2008 no one saw the insanity in the derivatives market until Lehman blew up.


Pre-banking 30 years of a customer's net revenue is eron-level accounting


> eron-level

Enron?


Elon?


Has Elon taken down one of the five major accounting firms?

https://en.wikipedia.org/wiki/Enron_scandal


There are various links in the article that have more information. Clicking these references will give the evidence for bad unit economics claims and whatnot.

As for predicting the moment, the author has made a prediction and wants it to be wrong. They expect the system will continue to grow larger for some time before collapse. They would prefer that this timeline be abbreviated to reduce the negative economic impacts. He is advising others on how to take economic advantage of his prediction and is likely shorting the market in his own way. It may not be options trading, but making plans for the bust is functionally similar.


The papers he linked all fail to support his claim. The first paper he linked simply counts mentions of the term "deep learning" in papers. The second surveyed people who lived in… Denmark, and tried to extrapolate that to everyone globally.

His points are not backed by much evidence.


The first link is a mistake. It's supposed to be the thing being discussed here: https://news.ycombinator.com/item?id=45170164.

The 2nd link seems reasonable to me? Why does a study about 25k workers in Denmark (11 occupations, 7k workplaces) not count as evidence? If there was a strong effect to be found globally, it seems likely to be found in Denmark too.

Also, what about the other links? The discussions about the strange accounting and lack of profitability seem like evidence as well.

If anything, this article struck me as well-evidenced.


Side note: If you're going to short an AI company (or really, buy put options, so you don't have unlimited downside exposure), I would suggest shorting NVIDIA. My reasoning is that if we actually get a fully automated software engineer, NVIDIA stock is liable to lose a bunch of value anyways -- if I understand correctly, their moat is mostly in software.

Wile E. Coyote sprints as fast as possible, realizes he zoomed off a cliff, looks down in horror, then takes a huge fall.

Specifically I envision a scenario like: Google applies the research they've been doing on autoformalization and RL-with-verifiable-rewards to create a provably correct, superfast TPU. Initially it's used for a Google-internal AI stack. Gradually they start selling it to other major AI players, taking the 80/20 approach of dominating the most common AI workflows. They might make a deliberate effort to massively undercut NVIDIA just to grab market share. Once Google proves that this approach is possible, it will increasingly become accessible to smaller players, until eventually GPU design and development is totally commoditized. You'll be able to buy cheaper non-NVIDIA chips which implement an identical API, and NVIDIA will lose most of its value.

Will this actually happen? Hard to say, but it certainly seems more feasible than superintelligence, don't you think?


NVIDIA is like the only company making money on the AI bubble, they're not the one I would choose to short.

Tesla is currently trading at 260x earnings, so to actually meet that valuation they need to increase earnings by a factor of 10 pretty sharpish.

They're literally not going to do that by selling cars, even if you include Robotaxis, so really it is a bet on the Optimus robots going as well as they possibly can.

If they make $25k profit per Optimus robot (optimistic), then I think they need to sell about a million per year to make enough money to justify their valuation. And that for a product that is not even ready to sell, let alone finding out how much demand there truly is, ramping up production, etc.

For comparison, the entire industrial robot market is currently about 500k units per year.

I think the market is pricing in absurdly optimistic performance for Tesla, which they're not going to be able to meet.
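
Roughly the arithmetic as I read it (the market cap and the "sane" multiple are my assumptions, not from any filing):

    # Tesla back-of-the-envelope: robots needed to justify the valuation.
    market_cap = 1.0e12         # assumed: roughly $1T
    target_pe = 26              # assumed sane multiple, i.e. ~10x more earnings
    profit_per_robot = 25_000   # the optimistic per-Optimus figure above

    required_earnings = market_cap / target_pe   # ~$38B/yr
    current_earnings = market_cap / 260          # ~$4B/yr at 260x earnings
    robots = (required_earnings - current_earnings) / profit_per_robot
    print(f"{robots / 1e6:.1f}M robots/yr")      # ~1.4M/yr, vs a ~500k/yr total market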

(I have a tiny short position in Tesla).


Tesla has been overpriced for ages though, correct?


If you told me all this today about Enron or FTX while they were still industry darlings, I for one wouldn't want to bet against them. For every FTX, where cooked books lead to epic failure, there is a Tether, where cooked books lead to accidentally unlocking an unlimited money tap through all sorts of dubious means.


Not an expert, but I'm convinced they will all pivot to military applications before they go bankrupt, and that will unleash a whole new type of hell


The new doctrine: drown the enemy in slop!


The attack of the slop did not take place.


The difference from the dot-com bubble is that the unprofitable companies are privately held, while the public companies are extremely profitable and have finally found something to soak up their ridiculous profits other than stock buybacks. How a crash affects anyone other than high-net-worth individuals with money tied up in VC funds is not explained.

Real estate and crypto on the other hand...


The real question would be the debt on the GPUs. If a hyperscaler puts $100B from profits into data centers and borrows $400B against the data centers (or their suppliers do!), then the buildout could be quite problematic.

If it's just profits getting invested, plus some VC exuberance… I don't actually know if it matters. If Zuck simply shut off the money spigot and never spoke about AI again… would anything actually happen?


Only Oracle has taken on debt so far, among the hyperscalers.


CoreWeave has taken on $11.2B in debt with interest rates ranging from 7% to 15%, paying $250M in interest on that debt last quarter against just $19 million in operating income. Half of their assets are GPUs, depreciating over six years.

(per Bloomberg)
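
Those figures imply an interest coverage ratio that is hard to look at:

    # CoreWeave's quarterly interest vs. operating income, per Bloomberg.
    interest = 250e6    # interest paid last quarter
    op_income = 19e6    # operating income last quarter
    print(interest / op_income)   # ~13.2: interest is 13x operating income
    print(op_income / interest)   # ~0.08 coverage (healthy firms are above 3)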


> How a crash affects anyone other than high-net-worth individuals with money tied up in VC funds is not explained.

That is even worse. There are many so-called "AI companies" burning through lots of tokens, and the majority of them are making lots of assumptions when they go to raise more money from VCs.

What if the VCs say no?

What if 90% of all these startups get competition from a frontier AI lab that undercuts them? We are seeing this with Cursor and Anthropic already.

What if early-stage startups cannot afford the $100K per H-1B hire anymore AND cannot hire remote overseas due to the HIRE Act?

Additionally, we are going to find out what does not mix well with AI, and that will inevitably cause a crash that could come unexpectedly.

> Real estate and crypto on the other hand...

At least we do *know* that both of them do not mix well together.

AI + layoffs + mortgages on the other hand...


>> A much-discussed MIT paper found that 95% of companies that had tried AI had either nothing to show for it, or experienced a loss

The paper they linked to just analyzed how many times “deep learning” appears in academic papers…

This is the proof that most companies unsuccessfully tried AI?


The link is wrong, I believe they meant to link this one: https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus...


I think they linked to the wrong thing. It’s been discussed several times here:

https://news.ycombinator.com/item?id=45170164


There's always somebody predicting an apocalypse. This guarantees that, regardless of what happens, somebody can claim they were right.

Which makes those predictions completely useless. You might as well read your horoscope.


The funny thing, though, is that on balance, since the launch of GPT to the public almost three years back, I feel it has delivered more than it has disappointed. I still think we should run into limitations soon, and maybe GPT-5 is an example of that.

Surely the tech still has a long way to go and will keep improving, especially as the money has attracted everyone to work on it in ways that weren't considered important till now, but the financial side of things has to correct a bit for healthy growth.


You've given the argument from fallacy, dismissing an argument without any reference to its content, but only to its conclusion. The existence of bad arguments for a proposition doesn't condemn all other arguments for the same.

The question is whether any particular argument is strong or weak. That's a matter of evidence and reasoning.


Still think a middle-of-the-road outcome is most likely here: a general-purpose technology that changes society but falls well short of the AGI hype.

As for the money side, I think it'll come. There is obvious utility (but not autonomy), and the economics of it will find their equilibrium. They always do.


I’m not convinced Unit economics is the right lens here given that it’s a general purpose technology.

For the very near term perhaps but the large scale infra rollouts strike me as a 10+ year strategic bet and on that scale what matters is whether this delivers on automation and productivity


They're all going to start selling ads, obviously.


Does seem likely for Google, though integrating ads into text in a durable way could be challenging.

If it's overt then it's easily filtered out; if it's baked in too deep then it harms response quality.


Is there going to be enough new ad spend to justify that model? Will everyone else spend even more on ads than they do now?


It doesn't need to be new ad spend. Just the possibility of the existing ad spend being up for grabs justifies all the capex so far.


Great points, but timing it can be very hard. It can last many more years, because this time they have a thing called the "money printer". When the crash happens, they will use it.

Yes, it prints whatever amount they want, even trillions. Magically(!)


Most people who try to time these things usually get it completely wrong and end up losing huge amounts of money. I just stay invested in the indexes and some long-term stocks; every time I try to predict something it goes badly.


Bear in mind that your index becomes more and more of those six or seven companies, the more they grow. I think they're over 30% of the market? So an index tracker is still greatly exposed to this.


I wish I could get an index without them, but it would probably have basically no growth - the rest of the market is struggling in comparison, right?


It exists: XMAG https://finance.yahoo.com/quote/XMAG/

> The index aims to provide a comprehensive and balanced representation of the U.S. equity market by including the largest 500 publicly traded equity securities, while specifically excluding the seven largest technology companies commonly referred to as the “Magnificent 7”.

Up 12% in the last year. Unfortunately, it's ten times as expensive (0.35%) as a straight S&P 500 ETF (e.g. VOO, 0.03%).
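
The fee gap compounds, though not catastrophically; a quick sketch assuming a 7% gross annual return (my assumption):

    # Fee drag of a 0.35% vs. 0.03% expense ratio on $100k over 20 years.
    for er in (0.0035, 0.0003):
        final = 100_000 * (1 + 0.07 - er) ** 20
        print(f"{er:.2%} ER -> ${final:,.0f}")
    # 0.35% ER -> ~$362k; 0.03% ER -> ~$385k: about $23k left on the table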


> When crash happens, they will use it.

You're suggesting that _governments_ will bail out the AI industry? I mean, why would they do that?


"If we won't build it, China will"

I am sure that you already heard this sort of argument for AI. It's a way to bait that juicy government money.


It's way too expensive. We bail out banks so the little people don't lose their shirts too. There's no equivalent in AI.


You are missing the point: once the AI companies go down, they will take down the S&P 500 too, so retirement accounts will be affected.


I think there's a bit of a difference between preventing a run on the banks and propping up the entire stock market for the sake of just a handful of companies that all have big enough pockets to fail.


The COVID crash, and what they did in response to that market crash, disagrees.


what are you referencing in particular?



sorry, wasn't that in response to a natural disaster? I don't see how we so easily sweep that into the same idea as "bail out".


I could see governments having a strong interest in bailing out Nvidia, Microsoft, etc at least?


Why would Nvidia be in trouble? They're selling shovels during a gold rush, and have slow-boated their scale-up - they're in no trouble.


Agreed, and I also feel like Microsoft is diversified enough that this would not bring them down.

Probably the hordes of startups would be most impacted. It isn't clear the government would bail them out.


Companies like Nvidia, Microsoft, Amazon and Google are going nowhere. Just their valuations will, in my opinion, take a massive dip. Which then will have all sorts of other effects.

They are not going to zero, but they can lose a lot from the current price.


Yeah, I agree with this. Maybe it's not obvious from my point. I suspect companies will have RIFs and haircuts but not need bailouts.


Nvidia might have a secondary crash if cheap GPUs flood the market. Or we get a resurgence of crypto mining, who knows.


If demand for compute and CUDA dropped suddenly, would they be okay going back to selling graphics cards?


Their profitability would shrink, but they'd only be in trouble if they were taking on debt to expand operations on the expectation of future growth. AFAIK one of the annoyances gamers have had with Nvidia is that after crypto, and now with AI, they've generally been very careful to control how they expand production, since they seem quite aware the party could stop any time. It certainly helps to have a lot of product lock-in - people will bear much higher prices to stay with Nvidia at this point (due to, as noted, CUDA).

Sure, the stock price wouldn't be to the moon anymore, but that doesn't materially affect operations if they're still moving product - and gaming isn't going anywhere.

The stock price of a company can crash without materially affecting the company in any way... provided the company isn't taking on expansion operations on the basis of that continued growth. Historically, Nvidia has avoided this.


Well they sell graphics cards now so yes. Why would they suddenly not be okay with selling graphics cards if the AI bubble popped?


I mean, sure we're in a bubble, but the trick is to call it, with that old Keynes quote about the market staying irrational longer than you can stay liquid.

But also: > "So, you're saying a third of the stock market is tied up in seven AI companies that have no way to become profitable and that this is a bubble that's going to burst and take the whole economy with it?"

> I said, "Yes, that's right."

that is something different in this case, isn't it? Those seven companies making up a third of the market do not need to become profitable; they are insanely profitable. Mostly they invest a lot in AI, but if that doesn't pay out, all but NVidia have their day job to go back to.


> I mean, sure we're in a bubble, but the trick is to call it, with that old Keynes quote about the market staying irrational longer than you can stay liquid.

It might be worth it just to call it now. All you really have to do is get out of the S&P, you don't have to get out of everything.


It can run for another year or two. Long time to sit on the sidelines with your cash inflating away.

This is the crux of bubbles - timing, and where you move assets so they have protection.


The things I'm trying to keep in mind, because it's hard to suppress the instinct to be reactive:

1. "Time in market beats timing the market."

2. When diversifying, your profession is already part of your portfolio.

There's also the political mismanagement of the United States, but that's a whole 'nother can of worms.


[flagged]


For a relatively new 80-day-old account, I thought you'd have much fresher material for jokes.

It's the year 2025 and we've had 9 ample months of seeing more second-term policies, tariff-tax-by-schizo-tweet, the steepest decline in the dollar in decades, etc. etc.

You want to see real delusion? Look for someone still trying to brush off critiques of this administration as just "being fashionable."


Is it that impractical that the GPUs would find another use case? If NVidia cut enterprise GPU prices by 10x (which they absolutely could!), the GPU+memory combo would be cheaper than all other compute for database-like operations, simulations, and whatever AI is left.


Are GPUs good for that? It seems like a very niche capability that can't easily be repurposed the way regular CPU compute could be.

I did a lot of projects with Kafka (big data) in cloud environments, and companies had big, pie-in-the-sky dreams that came crashing back to reality when they got the bill for compute services. It happened several times on projects I was on.


Older GPUs had nowhere near enough memory to be really interesting for big data crunching.

Now however, the biggest limit for AI workloads is GPU memory capacity (and bandwidth). The billions invested are going into improving this aspect faster than any other. Expect GPUs with a terabyte of ultra-fast memory by the end of the decade. There are lots… and lots… of applications for something like that, other than just LLMs!


What's OpenAI's and Anthropic's day job?


Which of the seven OP referred to are OpenAI or Anthropic?


Markets can remain irrational longer than you can remain solvent.


3% of world GDP (roughly $110 trillion, so about $3.3 trillion) is more than the $2 trillion needed to fund AI.


> Plan for a future where you can buy GPUs for ten cents on the dollar, where there's a buyer's market for hiring skilled applied statisticians, and where there's a ton of extremely promising open source models that have barely been optimized and have vast potential for improvement.

This actually sounds like a kinda cool outcome as long as you aren’t an applied statistician.


The AI bubble feels similar to the dot-com bubble. The pattern is the same, right? There's some tech, a hype wave that people try to surf (if you paddle fast enough, you can start something and cash out in time), and when the water recedes before the next wave, people/organizations either rode the wave, cashed out, and walked off the beach to their next thing, didn't make the wave and will try next time, or got pounded and their board broken. Or they ripped a great line, did a backflip like Medina, threw their hands in the air, and paddled back out to catch the next wave.

But you don't always know what the wave is going to look like when it's building. You just know there's a wave, and either you get on it or you don't. The connectivity of the web was obvious, the monetization wasn't super obvious (remember K-Tel anybody?). The power of modern AI is obvious, the monetization and final form of the tech isn't. I mean, using natural language is cool and all, and I think there is a lot of value in using models to help/assist engineering and other work, but I think it's too soon to say what the end game is.


I want the hype to die and the bubble to pop as much as Ed and Cory and everyone else writing about it, but right now it's just them recycling the same bad news and posting about it. I'd love to see some writing which looks at the factors that caused previous pops and lines them up with factors today, to try to determine what's actually coming. Clearly the market is irrational as hell right now, but seemingly very little is going to change that. The closest I've seen to what I'm looking for is the coverage over at Notes on the Crises[0], and he also seems bewildered.

Edit: I found this piece, which does look at historical bubbles/market events and tries to discuss how they play out in terms of timing [1]

0: https://www.crisesnotes.com/ 1: https://paulkrugman.substack.com/p/why-arent-markets-freakin...


Something of a logical leap here: if LLMs aren't capable of replacing workers and it's all lies, then what company is going to engage in mass layoffs without seeing results first? Sounds like companies that deserve to go away.


> If LLMs aren't capable of replacing workers and it's all lies, then what company is going to engage in mass layoffs without seeing results first?

We see companies lay off workers for all sorts of short-sighted reasons. They'll do mass layoffs to reduce labor costs for short-term profits and stock price increases, so the execs and shareholders can cash out. AI is just the current reason the executive class has decided to use for the layoffs they were going to do regardless.


Further: business and management are exceptionally fad-driven, for numerous information-theoretic reasons.

Performance is difficult to measure and slow to materialise. At the same time, everyone, especially senior leadership and managers, is desperately competitive, even where that competition is on the perception rather than reality of performance. There's a very strong follow-the-herd / follow-the-leader(s) mentality, often itself driven by core investors and creditors.

A consequence is a tremendous amount of cargo-culting, in the sense of aping the manifest symbols of successful (or at least investor-favoured) firms and organisations, even where those policies and strategies end up incurring long-term harms.

Then there's the apparent winner-take-all aspect of AI, which if true would result in tremendous economic power, if not necessarily financial gains, to a very small number of incumbents. Look at the earlier fallout of the railroad, oil, automobile, and electronics industries for similar cases.

(I've found over the years various lists of companies which were either acquired or went belly-up in earlier booms, they're instructive.)

NB: you'll find fad-prone fields anywhere a similar information-theoretic environment exists: fashion, arts, academics, government, fine food, wine collecting, off the top of my head. Oh, and for some reason: software development.


Yep, those are the companies that would go away.


LLMs are just a stock price preserving excuse to do layoffs from previous overhiring.


Yes. A lot of these people should have been laid off anyway. The Musk Twitter massacre taught everybody a lesson, and layoffs were hot before AI was even the main concern.

Also, the DEI massacre is probably going to develop (or has developed) into a full scale HR/Social PR massacre. Instead of getting yelled at for doing the wrong thing, better to do nothing but make more money. And a side-benefit is that firing all of those people makes it even easier to fire more people. (Is that the singularity?)

I don't doubt that some industries are going to be nearly wiped out by AI, but they're going to be the ones where it makes sense. LLMs are basically a super Google Translate, and translators and maybe even language teachers are in deep trouble. In-betweeners and special-effects people might be in even more trouble than they already were. Probably a lot more stuff that we can't even foresee yet. But for people doing actual thinking work, they're just a tool that feeds back to you what you already know in different words. Super useful to help you think, but it isn't thinking for you; it's a moron.


> for people doing actual thinking work, they're just a tool that feeds back to you what you already know in different words. Super useful to help you think but it isn't thinking for you, it's a moron.

Beautiful description of AI. It’s the tech equivalent of the placebo effect. It does truly work for some, until you look closely and it’s actually a bunch of hot air.

Is a placebo worth a trillion dollars?


Yeah exactly. The question should always be - are these layoffs incremental because of AI? If not, then they should not count in this kind of analysis.


> The Musk Twitter massacre taught everybody a lesson

Well, depends on which lesson. "The company can still run" or "we actually won't build anything new for years".

Twitter released a couple of things that were being worked on before the acquisition, and then nothing else (Grok comes from a different company which was later merged into it, and obviously had different employees).


> The Musk Twitter massacre taught everybody a lesson

That companies can be kept on KTLO mode with only a skeleton crew?

I think everybody knew that already. The hot takes that Twitter was going to disappear were always silly, probably from people butthurt that a service they liked was being fundamentally changed.


Or maybe companies are letting people go for other reasons and blaming it on AI?


> This isn't like the early days of the web, or Amazon, or any of those other big winners that lost money before becoming profitable. Those were all propositions with excellent "unit economics" – they got cheaper with every successive technological generation, and the more customers they added, the more profitable they became. AI companies have – in the memorable phraseology of Ed Zitron – "dogshit unit-economics." Each generation of AI has been vastly more expensive than the previous one, and each new AI customer makes the AI companies lose more money...

See, I think this is wrong. The unit economics of LLMs are great, and more than that, they have a fuckton of users with obvious paths to funding for those users that aren't paying per unit (https://www.snellman.net/blog/archive/2025-06-02-llms-are-ch...). The problem is the ludicrous up front over infestment, none of which was actually necessary to get to useful foundation models, as we saw with DeepSeek.


> infestment

So true.


That's a very serendipitous typo, I must say, LOL.


What percent of consumption will go to AI? For me, probably at least 10%. What percent of investment will go to AI? For me, another 10% probably. I mean, some of it will come from less consumption and investment in other things.


I disagree with Doctorow's advice in his conversation with a student at Cornell. You can prevent further misallocation of funds by agitating against "AI" usage in general. If you are at Cornell, organize meetings, protests, etc. against the dehumanization and decreased job prospects.

As a student, you have much more freedom to protest than as an employee, and that is where the resistance must come from.

We also need to take into account that while there is a bubble, most of the insane amounts of investment that were seen in headlines have not materialized.

Nvidia will crash, Tesla will crash (Optimus robot nonsense), but Microsoft and Google should be fine. If there is a bailout, protest again, preferably in physical space and focusing on economic topics rather than culture wars (which is what the politicians want you to focus on).


There’s a quote from Gil Luria (Managing Director and Analyst at D.A. Davidson) I love on this bubble, which I hope becomes part of the zeitgeist:

> No of course there isn't enough capital for all of this. Having said that, there is enough capital to do this for at least a little while longer.

https://www.wheresyoured.at/openai-onetrillion/


Can anyone give more than a hand-wavy explanation of how this crash will come about? This paragraph reads kind of like: companies not profitable, no more money coming in, ????, crash.

>> I firmly believe the (economic) AI apocalypse is coming. These companies are not profitable. They can't be profitable. They keep the lights on by soaking up hundreds of billions of dollars in other people's money and then lighting it on fire. Eventually those other people are going to want to see a return on their investment, and when they don't get it, they will halt the flow of billions of dollars. Anything that can't go on forever eventually stops.

How will this actually lead to a crash? What would that crash look like? Are banks going bust? Which banks would go bust? Who is losing money, why are they losing money?


See comments elsewhere in this thread. To cite one well-known recent example, Oracle stock went crazy recently (and Larry Ellison briefly became the world's richest person) after they disclosed in their earnings report that they are expecting something like $400b in revenue from serving OpenAI in the coming years. These overinflated expectations systematically multiply and propagate until you arrive at the situation we're in. As soon as that does _not_ happen and everyone realizes it, the whole house of cards comes crashing down, in a very 1929 sort of way.

This is the point TFA is making, albeit a bit hyperbolically.


It also happened in the dotcom bubble, telecom companies were providing leasing/loans to their customers to purchase more networking equipment.

Very similar to the circular funding happening between Nvidia and their customers: Nvidia funds investments in AI datacenters, which get spent on Nvidia equipment. Each step of the cycle has to take a cut to pay its own OpEx, so the money getting back to Nvidia diminishes with each pass through the cycle.


> What would that crash look like?

What it usually looks like when one of the valley's "revolutions" fails to materialize: a formerly cheap and accessible tech becomes niche and expensive, acres of e-waste, the job market is flooded with applicants with years of experience in something no longer considered valuable, and the people responsible sail off into the sunset now richer for having rat fucked everyone else involved in the scheme.

In this case, though, given the sheer scale of the money about to go away, I would also add: a lot of pensions are going to see huge losses; a lot of cities that have waived various taxes to encourage data-center build-outs are going to be left holding the bag, and possibly huge, ugly concrete buildings in their limits that will need to be demolished; and, a special added one for this bubble in particular, we'll have a ton of folks out there psychologically dependent on a product that is either priced out of their ability to pay or completely unavailable, and the ensuing mental health crises that might entail.


Isn't the thing that costs everyone an arm and a leg at the moment the race for better models? So all of the training everyone is doing to get SOTA in some obscure AI benchmark? From all of the analysis I've read, inference is quite profitable for the AI companies. So at least for the last part:

> we'll have a ton of folks out there psychologically dependent on a product that is either priced out of their ability to pay or completely unavailable, and the ensuing mental health crises that might entail.

I doubt that this will become true. If there's one really tangible asset these companies are producing, which would be worth quite a bit in a bankruptcy, it's the model architectures and weights, no?


> Isn't the thing that costs everyone an arm and a leg at the moment the race for better models? So all of the training everyone is doing to get SOTA in some obscure AI benchmark? From all of the analysis I've read, inference is quite profitable for the AI companies.

From what I've read: the cost to AI companies, per inference as a single operation, is going down. However, all newer models, all reasoning models, and their "agents" thing (which is still trying desperately to become an actual product category) require orders of magnitude more inference per request to operate. It's also worth noting that code generation and debugging, one of the few LLM applications I will actually say has a use and is reasonably good, also costs far more inference per request to operate. And that amount of inference can increase massively with a sufficiently large chunk of code you're asking it to look at or change.

> If there's one really tangible asset these companies are producing, which would be worth quite a bit in a bankruptcy it's the model architectures and weights, no?

I mean, not really? If the companies enter bankruptcy, that's a pretty solid indicator that the models are not profitable to operate, unless you're envisioning this as a long-tail support model like you see with old MMO games, where a company picks up a hugely expensive-to-produce product, like LOTRO, runs it with basically a skeleton crew of devs and support folks for the handful of users who still want to play it, and ekes out a humble if legitimate profit for doing so. I guess I could see that, but it's also worth noting that type of business has extremely thin margins, and operating servers for old MMO games is WAY less energy- and compute-intensive than running any version of ChatGPT post-2023.

Edit: Also worth noting specifically in the case of OpenAI are its deep and OLD ties to Microsoft. Microsoft doesn't OWN OpenAI in any meaningful sense, but it is incredibly core to OpenAI's entire LLM backend. IMO (not a lawyer), if OpenAI were to go completely belly up, I'm not even sure the models would go to any sort of auction, unless Microsoft was ready to just let them do so. I think they'd devour whatever of the tech stack is available, whole, without really spending much, if any, money on it, and continue running it as is.


everyone will lose money.


How? Everyone everyone?

How would some guy not invested in the stock market, building a house and working as a plumber, be impacted?


If his clients lose all their money in stocks or lose their jobs he also loses work. Depressions tend to impact everyone in the end as unemployment gets high enough



There is, on the other hand, a strong correlation between people saying the sky is falling and the sky not actually falling.

It makes one tempted to take "the sky is falling" as a buy signal.


There were Chicken Littles during the run-up to 2008, too. Actually, I think most were a little bit premature. I take articles like these as an early sell signal.


I think both are kind of inevitable with bubbles like the dotcom, 2008 and the present one.

You get a positive feedback loop where the industry hypes their thing, which causes the public to buy in, which causes stocks to go up and the industry to hype more and the public to buy more. Then people point out that prices are too high and capital is misallocated, but that doesn't stop the feedback loop, so it goes on longer.

It has to stop at some point though, often because the buyers run out of money to buy with.

Then it goes into reverse - falling prices put off buyers, the industry doesn't get new cash flow to pay for its commitments to GPUs/office leases/mortgages, and some of them go bust, which puts off buyers even more. Then eventually, after around three years, it levels out.

I'm a believer in the internet, housing and AI, but during the bubbles you get money misallocated on stuff like pets.com, or maybe OpenAI spending zillions on data centers to provide free idle chat to the public. Money on fundamental AI research is probably good, but all those data centers... dunno.


There is still a demand for these tools. I know they are useful to me. Do they make me more productive as a software engineer? Probably not, at least not significantly. But they're still useful, especially for little tools and one-off scripts which are not intended to become production code anywhere.

I also just enjoy using them for bouncing ideas off of them and doing sanity checks on all sorts of topics, personal and work-related. Sometimes they spark a better idea that I may not have had otherwise. I will still be using them after the bubble bursts.

That being said, I'm also fine if all the current AI companies implode and I'm just running an OSS model locally.


That's still not good enough. Being a mildly good idea assistant doesn't come close to paying for the investment spent on, and valuations of, these companies. It's all a bet on completely changing everything. And if that doesn't happen soon, the valuations are going to crash badly.

And since the entire US economy is being propped up by AI investing, it’s going to be a disaster.


The problem with this is that what you are using it for is not commensurate with what has been invested. In other words, to date, the investment has not been productive for society and doesn't seem to be on track to deliver what has been promised.


Automating "little tools and one-off scripts" doesn't pay for $40bn of extra data centers (and that's just one provider), though.


Yeah. For example, I use ChatGPT to refine my thoughts and dig deeper into ideas (to get my brain thinking by interacting with a machine with knowledge). But is this worthy of all the investment done? Absolutely not. Would I pay for this service? Nah. Personally, I am incredibly sensitive to changes in price - I wouldn't pay a penny, and thus my demand would disappear.

I'm sure there are others who find more value in it; however, I don't think that group of people is enough to get OAI to be free-cash-flow-to-the-firm positive any time soon. Note this is not accounting profit - FCFF takes reinvestment into account, and is the cash profit left after it.


What can we do? Short those companies.


Something something irrational something something solvent.


I was just thinking about this. Exposure from a short is theoretically limitless. Some sort of derivative would be a better approach? Asking for a financially clueless friend.


All I have to say is good luck outcompeting the big firms!


Buy puts


The financially illiterate don't understand that "going short" is not the simple reverse of "going long", and that there are more difficulties involved with borrowing stocks to short or with buying puts. Well, puts are easy to buy, at least, but the manner in which they decay makes it hard to win with that strategy; harder, in fact, than buying calls to go long. But yes, you can technically buy puts. You can also play Powerball.


If you buy a put option, you can’t lose more than you put in. The problem is still one of timing things right.
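
Right, the payoff at expiry is floored; the premium is the most you can lose. A minimal sketch with made-up numbers:

    # P/L of a long put held to expiry.
    def long_put_pnl(spot, strike, premium):
        return max(strike - spot, 0.0) - premium

    strike, premium = 100.0, 5.0   # hypothetical contract
    for spot in (60, 90, 100, 140):
        print(spot, long_put_pnl(spot, strike, premium))
    # 60 -> +35, 90 -> +5, 100 and 140 -> -5 (the premium, and no more)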

Check out /r/wallstreetbets for expert advice on this. /s


It's a matter of time until the implosion happens.

As a corporate finance and valuation geek, I'll warn you now: don't try to time mood and momentum. That's what is driving much of the valuations being thrown around.

If this blows up big time and it is found that the Big Tech firms were operating on lies and false hope, there will be consequences - in the form of shareholders demanding cash be returned and setting limits on the cash balances held by Google et al. Apple has been smart, staying out of this nonsense and not doing M&A.

Investing in projects with negative NPV destroys the wealth of shareholders.
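
For anyone rusty on the finance: NPV discounts a project's cash flows back to today, so a negative NPV means the project returns less than it costs in present-value terms. A minimal sketch with made-up cash flows:

    # NPV = sum of CF_t / (1 + r)^t, where cashflows[0] is the upfront outlay.
    def npv(rate, cashflows):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    # e.g. sink 100 now, get 20/year back for 5 years, at a 10% discount rate:
    print(npv(0.10, [-100, 20, 20, 20, 20, 20]))   # ~ -24.2: wealth destroyed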


No, what's driving much of the valuations is the biggest leap in human technology since the internet, and skyrocketing revenues as a result.

Private companies soaring to $100m ARR in 12 months is commonplace now. That's what's driving the valuations.


Revenues mean nothing without positive equity earnings, especially without a viable path to get there. Without a clear path, how do you justify the valuation? Lol.

Uber and Amazon had a very logical path to get there.

The reinvestment is so high that once you tack it onto the earnings, you're in a fat negative. What does that mean? You will eat into the cash balance and eventually have to go raise more.


> No what's driving much of the valuations is the biggest leap in human technology since the internet and skyrocketing revenues as a result

That is a 1999-like bubble, and how you get 75-90% of these companies crashing when the music stops.

> Private companies soaring to $100m ARR in 12 months is commonplace now. That's what's driving the valuation.

We don't even know if that is real to begin with. Even if it is, that revenue can be lost as quickly as it is gained.

This happened to Hopin and other companies who grew extremely quickly and then their valuations crashed.

The question you should be asking yourself is: even after looking at the competition, what are the retention and switching costs of these "$100m ARR in 12 months" companies if a competitor moves into their core business?


We don’t know how sticky that revenue is, or if it’s going to be a commodity in the long run. Similar things used to happen in ad-tech before investors got wise that there was no moat.


[flagged]


Lol, doesn't bother me. I know only 5% of people here are the real gems, and that's what I'm here for.

If you know of a better place on the internet LMK!


People who actually know things have better things to do than post here.


> but an AI salesman can 100% convince your boss to fire you and replace you with an AI that can't do your job, and when the bubble bursts, the money-hemorrhaging "foundation models" will be shut off and we'll lose the AI that can't do your job, and you will be long gone, retrained or retired or "discouraged" and out of the labor market, and no one will do your job.

Even if the big AI companies turn off their APIs, people will still be able to run local models, and some other, new business will spin up to run them as SaaS.


Isn't the training most of the cost? In that case the current models could have a very long lifetime even if new models are never trained again. They'll gradually go out of date, but for many purposes they'll still be useful, and if they can pull new info from the web they may stay relevant for decades. It's only if running the chatbots is not cost effective that everything halts, and my understanding is that inference is the relatively cheap part. Even now, older models are still being used.

Also, performance optimizations seem likely to soon reduce the need for data center build-out and bring costs down. It seems too soon to say where this is all going. Who even knows whether the GPU chips will improve dramatically or whether something else (more AI-optimized processor architectures) will replace them? It's true that right now it looks like a bubble, but the future is still very much in flux, and the value of the models already created may not disappear overnight.


> I firmly believe the (economic) AI apocalypse is coming. These companies are not profitable. They can't be profitable. They keep the lights on by soaking up hundreds of billions of dollars in other people's money and then lighting it on fire.

This is what I don't like. Debating in extremes. How can AI have bad unit economics? They are literally selling compute and code for a handsome markup. This is classic software economics, some of the best unit economics you can get. Look at Midjourney: it pulled in hundreds of millions without raising a single dime.

Companies are unprofitable because they are chasing user growth and subsidising free users. This is not to say there isn't a bubble, but it's a rock-solid business and there to stay. Yes, the music will stop one day, and there will be a crash, but I'd bet that most of the big players we see today will still be around after the shakeout. Anecdote: my wife is so dependent on ChatGPT that if the free version ever stopped being good enough, she'd happily pay for premium. And this is coming from someone who usually questions why anyone needs to pay for software.
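
For what it's worth, the "good unit economics, unprofitable company" combination is easy to reproduce on paper. A sketch with invented numbers:

    # Hypothetical blended economics of subsidising free users.
    paying_users = 1_000_000
    free_users = 20_000_000
    revenue_per_payer = 20.0  # $/month subscription
    cost_per_user = 1.5       # $/month serving cost, paid and free alike

    revenue = paying_users * revenue_per_payer
    cost = (paying_users + free_users) * cost_per_user
    print(f"${(revenue - cost) / 1e6:.1f}m/month")  # $-11.5m/month:
    # a healthy markup on every paying user, a blended loss overall.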


Generally I think the question is whether they actually are selling it at a markup. Inference is easier to reason about; I think the problem is financing training. The future value of inference seems to be massively overstated to justify present-day expenditures on training (particularly since the value of a training run today evaporates extremely quickly).
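
A back-of-the-envelope version of that point (every number invented): inference can carry a healthy markup while the whole operation still burns cash, because the verdict hinges on how long each expensive training run stays valuable.

    # Amortising training cost over a model's competitive lifetime.
    training_cost = 1000.0     # $m sunk into one model generation
    model_lifetime = 1.0       # years before the frontier moves on
    inference_revenue = 600.0  # $m/year
    serving_cost = 300.0       # $m/year of inference compute

    gross_margin = inference_revenue - serving_cost  # +300: a 2x markup
    net = gross_margin - training_cost / model_lifetime
    print(gross_margin, net)   # 300.0 -700.0
    # If the model stayed competitive for 5 years, the same training spend
    # amortises to 200/year and the very same business flips to +100.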


I largely agree. I don't think AI is ultimately useless, but I think it's about 10% as useful as the broader market seems to think it is. That said, every time I see another article like this, I think to myself "well it's not going to come crashing down any time soon". This is the nature of bubbles - they only collapse when nearly everyone is finally convinced they never will. Right now, stories about the AI bubble collapsing are everywhere, which means the time hasn't come yet.

I have an idea that the market may actually start to react positively to bad job numbers, as that could be taken as a signal that companies are shedding people to replace them with AI (even if that's not the actual reason for the bad numbers). If job numbers started suddenly improving and the unemployment rate dropping, it could be taken to mean that AI is not going to replace everyone after all.


> they only collapse when nearly everyone is finally convinced they never will

I get that rationale in some bubbles, as it means people are not parking their money as cash where they could buy the dip and support the market (tell me if I'm wildly off). But I think this case is different because there are actually VAST sums of money being spent on AI by some very big players who will need a return.


"So, you're saying a third of the stock market is tied up in seven AI companies that have no way to become profitable and that this is a bubble that's going to burst and take the whole economy with it?"

I said, "Yes, that's right."

Which companies are those?


Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia and Tesla.

I dispute the "no way to become profitable" claim (they're literally all profitable right now; it's the private ones that probably aren't). I do have other negative sentiments for most of them, but those are the seven that represent about that share of the S&P 500.


Right. They are also not "AI companies" in any meaningful way - only Nvidia's revenue and market cap is heavily driven by AI sales.

The author’s thesis seems to lack rigor.


> Further: the topline growth that AI companies are selling comes from replacing most workers with AI, and re-tasking the surviving workers as AI babysitters ("humans in the loop"), which won't work. Finally: AI cannot do your job, but an AI salesman can 100% convince your boss to fire you and replace you with an AI that can't do your job

This hits home. A lot of the claimed improvements due to AI that I see are not really supported by measurements in actual companies. Or they could have been just some regular automation 10 years ago, except requiring less code.

If anything, I see a tendency in companies, and especially AI companies, to want developers and other workers to work 996 in exchange for magic beans (shares) or some other crazy, stupid grift.


So what metric would you look at that would support the idea that AI is improving a company?


I guess anything other than just claims from people who have a stake in it?

If companies are shipping AI bots with a "human in the loop" to replace what could have been "a button with a human in the loop", but deploying the AI takes longer, then it's DEFINITELY not an improvement; it's just a pure waste of money and electricity.

Similarly, what I see that's different from the pre-AI era is way too many companies, in SV and elsewhere, staying roughly the same size and shipping roughly the same number of features as before (or fewer!), but now requiring employees to do 996. That's the definition of a loss of productivity.

I'm not saying I hold the truth, but what I see in my day to day is that companies are masters at absorbing any kind of improvement or efficiency gain. Inertia still rules.


So would lower headcount with stable or improving revenue be a metric you would look at?


If it's honest. A company claiming that they could fire X people due to AI right when customer service starts sucking? Then no.


>AI is the asbestos we are shoveling into the walls of our society and our descendants will be digging it out for generations

Seems a bit pessimistic. AGI may not be here next year to keep the bubble going, but it will probably arrive in the next decade or two and do much of the stuff advertised. It's like the dotcom bubble - much of commerce, banking and the like did move to the internet, but not until a while after the financial bubble burst.


> and when the bubble bursts, the money-hemorrhaging "foundation models" will be shut off

This is not a serious piece of writing.


Of course it isn't. It's a FUD piece.


Every day I see articles on HN discussing the AI bubble potentially crashing. The large number of such articles appearing daily is increasing my confidence that the AI space will be fine.


A good sign that an article is another pointless, naive AI doomerism piece is that they cite that atrocious MIT 95% "study".


A good sign someone is an AI huckster is they ignore the rest of the article and citations.


Plenty of hucksters around, and to them I say three words: show me the money (free cash flows).


Um, that's four words.

Not saying you're wrong, though...


The whole thing lost me at the claim that the Mag 7 are unprofitable, and at the fact that he was so sick of talking about AI that he decided to take his shot at making money by writing a book about AI.

So many AI hucksters these days



