This seems more like "LLM psychology" than evidence of a rolling model; in other words, I would take that prompt more as evidence that they don't want users interrogating the cutoff date than as evidence that they're somehow using a rolling model.
Nothing stops you continuously training a foundation model and serving checkpoints, but historically there were weird cliffs and instabilities where more training would make things worse rather than better. The trick is to introduce more data into the pre-training mix and keep training in ways that don't cause the model to regress. Presumably they've figured that out.
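To make that concrete, here's a toy sketch of what "keep training and serve checkpoints" could look like: each phase mixes freshly added data with replayed older data (to limit regression on old capabilities) and ends by saving a servable checkpoint. This is purely illustrative; the model, the autoencoding loss, and the REPLAY_RATIO are placeholders, not anything xAI has described.

```python
# Toy continual-pretraining loop: mix new data with replayed old data each phase,
# then cut a checkpoint that could be served. Everything here is a stand-in.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 32))  # stand-in for a foundation model
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

replay_buffer = [torch.randn(16, 32) for _ in range(8)]  # "older" pre-training data we keep revisiting

def make_fresh_batches(n):
    """Stand-in for newly collected data added to the pre-training mix."""
    return [torch.randn(16, 32) for _ in range(n)]

REPLAY_RATIO = 0.5  # hypothetical: half of each phase's batches come from old data

for phase in range(3):  # each phase ends with a checkpoint that could be served
    fresh = make_fresh_batches(8)
    n_replay = int(len(fresh) * REPLAY_RATIO)
    mix = fresh + replay_buffer[:n_replay]  # train on a mix, not on new data alone
    for x in mix:
        opt.zero_grad()
        loss = loss_fn(model(x), x)  # toy reconstruction objective standing in for next-token loss
        loss.backward()
        opt.step()
    replay_buffer.extend(fresh)  # new data becomes "old" data for later phases
    torch.save(model.state_dict(), f"checkpoint_phase_{phase}.pt")  # rolling checkpoint to serve
    print(f"phase {phase}: last loss {loss.item():.4f}")
```

The point of the replay mix is exactly the "don't cause the model to regress" part: if each phase trained only on the newest data, earlier capabilities would be more likely to degrade.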
It's probably enabled by the huge datacenter xAI has. Most AI labs haven't built their own datacenter, and have to choose between doing experiments on new architectures, serving live traffic and doing more training on their existing models. Perhaps xAI can do all three simultaneously.
I didn't watch the livestream, but some people in this thread said that Heavy is an orchestration of multiple Grok 4s; it would be interesting to see how that works.
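Pure speculation on my part, but one plausible reading of "an orchestration of Grok 4s" is a fan-out/fan-in pattern: sample several answers in parallel, then have one more call judge or synthesize them. The `call_model` function below is a hypothetical stand-in, not xAI's actual API.

```python
# Speculative sketch of a best-of-N orchestration: N parallel samples, then a judge call.
import asyncio
import random

async def call_model(prompt: str, temperature: float) -> str:
    """Hypothetical stand-in for one model completion (not a real API)."""
    await asyncio.sleep(random.uniform(0.1, 0.3))  # pretend network latency
    return f"answer to {prompt!r} at T={temperature:.1f}"

async def heavy(prompt: str, n: int = 4) -> str:
    # fan out: n independent samples at varied temperatures
    drafts = await asyncio.gather(*(call_model(prompt, 0.5 + 0.2 * i) for i in range(n)))
    # fan in: one final call acts as a judge/aggregator over the drafts
    judge_prompt = (
        prompt
        + "\n\nCandidate answers:\n"
        + "\n".join(drafts)
        + "\n\nPick or synthesize the best one."
    )
    return await call_model(judge_prompt, temperature=0.0)

if __name__ == "__main__":
    print(asyncio.run(heavy("why is the sky blue?")))
```

If that's roughly what Heavy is doing, it would also explain why it costs more and takes longer than a single Grok 4 call.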
I can’t help but call out that o1-pro was great: it rarely took more than five minutes, and I was almost never dissatisfied with the results given the wait. I happily paid for o1-pro the entire time it was available. Now, o3-pro is a relative disaster, often taking over 20 minutes just to refuse to follow directions and gaslight people about files being available for download that don’t exist, or to provide simplified answers after a 20-minute wait. It’s worse than useless when it actively wastes users’ time. I don’t see myself ever trusting OpenAI again after this “pro” subscription fiasco. Taking away a great model and forcing an objectively terrible replacement is definitely going the wrong way, when everyone else is improving (Gemini 2.5, Claude Code with Opus, etc.). I can’t believe Meta would pay a premium to poach the OpenAI people responsible for this severe regression.
I have never had o3-pro take longer than 6-8 minutes. How are you getting it to think for 20 minutes?! My results using it have also been great, but I never used o1-pro so I don't have that as a reference point.
This is not what Ira Glass meant by the taste gap. What he means, rather, is that taste is important: it’s what gets you into the field and what makes you stick around. Happy to be corrected on this.
Yes, that was the gist of Ira Glass's quote, but he also added that it's frustrating when you have taste and aren't yet creating work that lives up to it, and that as a young artist you should push through that phase.
Here is a copy-paste of the quote:
“Nobody tells this to people who are beginners, I wish someone told me. All of us who do creative work, we get into it because we have good taste. But there is this gap. For the first couple years you make stuff, it’s just not that good. It’s trying to be good, it has potential, but it’s not. But your taste, the thing that got you into the game, is still killer. And your taste is why your work disappoints you. A lot of people never get past this phase, they quit. Most people I know who do interesting, creative work went through years of this. We know our work doesn’t have this special thing that we want it to have. We all go through this. And if you are just starting out or you are still in this phase, you gotta know its normal and the most important thing you can do is do a lot of work. Put yourself on a deadline so that every week you will finish one story. It is only by going through a volume of work that you will close that gap, and your work will be as good as your ambitions. And I took longer to figure out how to do this than anyone I’ve ever met. It’s gonna take awhile. It’s normal to take awhile. You’ve just gotta fight your way through.”
― Ira Glass
I'm a bit torn about this. If it ends up hurting OpenAI so much that they close up shop, what is the incentive for another OpenAI to emerge?
You can spend years making a good product and getting breakthroughs, and all it takes is for Meta to poach your talent, and with it your IP. What do you have left?
But also, every employee getting paid at Meta comes out with the resources to start their own thing. PayPal didn't crush fintech: it funded the next twenty years of startups.
I can't explain why, but I don't think money is it. Nor can a new project or whatever be it either. It's just too small a value proposition when you are already at OpenAI making banger models used by the world.
According to reports, the comp packages were in the hundreds of millions of dollars. I doubt anyone but execs makes that kind of money at OpenAI; it's the sort of money you hope for from a successful exit after years of effort. I don't blame them for jumping ship.