Capex was the theoretical moat, the same as for TSMC and similar businesses. DeepSeek poked a hole in that theory: OpenAI will need to deliver massive improvements to justify a $1 billion training cost against DeepSeek's reported $5 million, a roughly 200x difference.



I don't know if you are, but a lot of people are still comparing the cost of one DeepSeek training run to OpenAI's entire cost base.

The DeepSeek paper states that the ~$5M figure covers only the final training run, not development costs. And it doesn't include the estimated $1.4 billion cost of the infrastructure/chips DeepSeek owns.

Most of OpenAI's billion-dollar costs are in inference, not training. It takes a lot of compute to serve that many users.

Dario said recently that training Claude cost tens of millions (and that was a year earlier, so some cost decline since is expected). Do we have any reason to think OpenAI's runs were vastly different?


Anthropic's CEO was predicting billion-dollar training runs for 2025. Current training runs were likely in the tens to hundreds of millions of dollars.

Inference capex is not a defensive moat: anyone can rent GPUs and sell inference, with costs that scale linearly. A hypothetical $10 billion training run on proprietary data, by contrast, would be a massive moat. (Rough sketch of the unit economics below.)

https://www.itpro.com/technology/artificial-intelligence/dol...
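To make the moat argument concrete, here is a back-of-the-envelope Python sketch. Every number in it (the $2 per million tokens rental rate, the capex figures, the token volumes) is an illustrative assumption, not a figure from the thread: rented-GPU inference has a flat unit cost at any scale, while a fixed training cost only amortizes at enormous volume.

    # Illustrative unit economics: rented-GPU inference scales linearly,
    # so cost per token is identical at any size. A large fixed training
    # cost only fades into the unit cost at enormous serving volume.

    RENTED_GPU_COST_PER_M_TOKENS = 2.00  # assumed $ per 1M tokens served

    def inference_cost(tokens: float) -> float:
        """Linear scaling: doubling traffic doubles cost, unit cost is flat."""
        return tokens / 1e6 * RENTED_GPU_COST_PER_M_TOKENS

    def blended_unit_cost(tokens: float, training_capex: float) -> float:
        """Average $ per 1M tokens once the fixed training run is amortized."""
        return (training_capex + inference_cost(tokens)) / (tokens / 1e6)

    for capex in (5e6, 1e9, 1e10):       # $5M vs $1B vs hypothetical $10B run
        for tokens in (1e12, 1e14):      # 1 trillion vs 100 trillion tokens
            print(f"capex ${capex:.0e}, {tokens:.0e} tokens served: "
                  f"${blended_unit_cost(tokens, capex):.2f} per 1M tokens")

The rented-GPU term is the same $2 for every entrant at every scale, which is why inference capacity alone confers no moat; only the fixed training spend, if it bought a genuinely better model, creates a barrier to entry.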



