Yes, but the amount they have invested in training Llama 3, even if you include all the hardware, is in the low tens of millions. There are a _lot_ of companies that can afford that.
Hell, there are nonprofits that can afford that.
Where are you getting that number? I find it hard to believe it can be true, especially if you include the cost of training the 400B model and the salaries of the engineers writing and maintaining the training code.
I mean, anyone can throw out self-evident truisms about how there will always be new models and always be a new top dog. It's a fine generic assumption, but I can make generic assumptions and general truisms just as well as the next person.
I'm more interested in working out, in specific terms, who we consider to be at the top currently, tomorrow, and the day after tomorrow, based on what has actually been reported so far. And interestingly, the process thus far hasn't been a regular rotation of temporary top dogs. It's been one top dog: OpenAI's GPT. I'd say it still is, and looking at what the future holds, it appears there may be a temporary interruption before it once again takes the top spot, so to speak.
That's not to say it'll always be the case, but that seems to be what our near-future timeline has in store based on the reporting, and piecing that near future together is what I'm most interested in.
But the fact that open models are beating the state of the art from six months ago shows just how little moat there is around AI.