Isn't the thing that's costing everyone an arm and a leg at the moment the race for better models, i.e. all the training everyone is doing to hit SOTA on some obscure AI benchmark? From all of the analysis I've read, inference is quite profitable for the AI companies. So, at least for the last part:
> we'll have a ton of folks out there psychologically dependent on a product that is either priced out of their ability to pay or completely unavailable, and the ensuing mental health crises that might entail.
I doubt that this will become true. If there's one really tangible asset these companies are producing that would be worth quite a bit in a bankruptcy, it's the model architectures and weights, no?
> Isn't the thing that's costing everyone an arm and a leg at the moment the race for better models, i.e. all the training everyone is doing to hit SOTA on some obscure AI benchmark? From all of the analysis I've read, inference is quite profitable for the AI companies.
From what I've read: the cost to AI companies per inference call, taken as a single operation, is going down. However, all of the newer models, all of the reasoning models, and their "agents" push (which is still trying desperately to become an actual product category) require orders of magnitude more inference calls per request to operate. It's also worth noting that code generation and debugging, one of the few LLM applications I'll actually say is useful and reasonably good, also costs far more inference calls per request, and that number can balloon with a sufficiently large chunk of code you're asking it to look at or change.
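To make that concrete, here's a minimal back-of-envelope sketch in Python. All the numbers (call counts, token counts, per-token prices) are made up for illustration, not anyone's real pricing; the point is only the shape of the arithmetic, i.e. why a falling per-call cost doesn't guarantee a falling per-request cost once a request fans out into many calls:

    # Back-of-envelope sketch: every number below is a made-up assumption.
    # Provider-side cost to serve one user request =
    #   calls per request * tokens per call * price per token.

    def cost_per_request(calls, tokens_per_call, price_per_1k_tokens):
        """Hypothetical cost, in dollars, to serve a single user request."""
        return calls * tokens_per_call * price_per_1k_tokens / 1000

    # Plain chat: one model call per user request.
    chat = cost_per_request(calls=1, tokens_per_call=1_000,
                            price_per_1k_tokens=0.02)

    # Agent/reasoning workflow: per-token price halves, but the request
    # fans out into dozens of calls with longer contexts.
    agent = cost_per_request(calls=40, tokens_per_call=4_000,
                             price_per_1k_tokens=0.01)

    print(f"chat:  ${chat:.2f} per request")   # chat:  $0.02 per request
    print(f"agent: ${agent:.2f} per request")  # agent: $1.60 per request

With those (made-up) numbers, the per-token price halves and yet each request costs 80x more to serve, which is the pattern the analyses I've read keep pointing at.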
> If there's one really tangible asset these companies are producing that would be worth quite a bit in a bankruptcy, it's the model architectures and weights, no?
I mean, not really? If the companies enter bankruptcy, that's a pretty solid indicator that the models are not profitable to operate. Unless you're envisioning something like the long-tail support model you see with old MMO games, where a company picks up a hugely expensive-to-produce product, like LOTRO, runs it with basically a skeleton crew of devs and support folks for the handful of users who still want to play it, and ekes out a humble if legitimate profit for doing so. I guess I could see that, but it's also worth noting that type of business has extremely thin margins, and operating servers for old MMO games is WAY less energy- and compute-intensive than running any version of ChatGPT post-2023.
Edit: Also worth noting, specifically in the case of OpenAI, are its deep and OLD ties to Microsoft. Microsoft doesn't OWN OpenAI in any meaningful sense, but it is incredibly core to OpenAI's entire LLM backend. IMO (not a lawyer), if OpenAI were to go completely belly up, I'm not even sure the models would go to any sort of auction, unless Microsoft was ready to just let them. I think Microsoft would devour whatever of the tech stack is available, whole, without really spending much, if any, money on it, and continue running it as is.