Maybe this is the disconnect on the token side: you seem to think they can't keep improving margins to reach profitability; that margins are static and will only get worse, not better.
I think deepseek instead just showed they haven't really bothered yet. They'd rather focus on growing, and capital is cheap enough for these firms that optimizing margins is a relative distraction. Obviously they do optimize, but probably not at the expense of velocity and growth.
And if they do seriously want to tackle margins, they should pull a Groq/Google and go aggressively deep, e.g. fab something. Which... they do indeed fundraise on.
No, it feels more like the disconnect is that I think they're all compute-limited and you maybe don't? Almost every FLOP they spend serving a query at a loss is a FLOP they didn't spend on training, on research, or on queries whose data would have enabled better training.
Like, yes, if somebody has 100k H100s and is only able to find a use for 10k of them, they'd better find some scale fast; and if that scale comes from increasing inference workloads by 10x, there are going to be efficiencies to be found. But I don't think anyone has an abundance of compute. If you've instead got 100k H100s but demand for 300k, you need to be making tradeoffs. I think loss-making paid inference is fairly obviously the worst way to allocate the compute, so I don't think anyone is doing it at scale.
> I think deepseek instead just showed they haven't really bothered yet.
I think they've all cared about aggressively optimizing inference costs, though with varying levels of success. Even if they're still in a phase where they literally do not care about the P&L, cheaper costs are highly likely to also mean higher throughput. Getting more throughput from the same hardware is valuable for all their use cases, so I can't see how it wouldn't be a priority, even if the improved margins are just a side effect.
(This does seem like an odd argument for you to make, given you've so far been arguing that of course these companies are selling at a loss to get more scale so that they can get better margins.)
> - You are hyper fixated on tokens, and not that owning a large % of distribution lets them sell other things. Eg, instead of responding to my point 2 here, you are again talking about token margin. Apple doesn't have to make money on transistors when they have a 30% tax on most app spend in the US.
I did not engage with that argument because it seemed like a sidetrack from the topic at hand (which was very specifically the unit economics of inference). Expanding the scope will make convergence less likely, not more.
There's a very good reason all the labs are offering unmonetized consumer products despite losing a bundle on them, but that reason has nothing at all to do with whether paid inference is profitable or not. They're totally different products with different market dynamics. Yes, OpenAI owning the ChatGPT distribution channel is vastly valuable for them long-term, which is why they're prioritizing growth over monetization. That growth is going to be sticky in a way that APIs can't be.
I agree they are compute-limited and disagree that they are aggressively optimizing. Small teams keep demonstrating optimization opportunities all the way from the app layer through software to hardware, and deepseek was basically just one especially notable example of many. In my experience, each further level of performance takes a corresponding level of effort and adds complexity that slows everyone else down, so companies typically go slow-but-steady here, esp when ZIRP-style cheap capital rewards that (which is still effectively in place for OpenAI). Afaict OpenAI hasn't been pounding on doors for performance people, and generally isn't signalling they go hard here vs growth.
Re: stickiness => distribution leadership => monetization: I think they were around 80/20 on UI vs API revenue, but as a leader, their API revenue is still huge and still growing, esp as enterprises advance from POCs. They screwed up the API market for coding and some others (voice, video?), so afaict they are more like "one of several market share leaders" than "the leader". So the question becomes: why are they able to maintain high numbers here? E.g., is momentum enough that they can stay tied in second, and, if they keep lowering costs, stay there, and stay relevant for more vertical flows like coding? Does bundling UI into enterprise deals mean they stay a preferred enterprise partner? Etc.

Oddly, I think they're at higher risk of losing the UI market than the API market, because an organizational DNA change is likely needed as it turns into a broad GSuite/Office scenario vs simple chat (see: Perplexity, Cursor, ...). They have the position, but it seems more straightforward for them to keep it in API than in UI.