The pattern I've noticed with a lot of open-source LLMs is that they tend to underperform the level their benchmarks suggest they should be at.
I haven't tried this model yet and won't be in a position to for a couple of days; I'm wondering if anyone has noticed the same with this one.
They're not that good at arb. Real arbitrage barely exists anymore, and even when it does, it's not Jane Street that closes it. They market-make, which is different.
I wonder if Sam did something in the name of his own philosophy that was financial suicide. Like vastly understating the costs of training/inference to the board, but justifying it to himself because it's all going toward building AGI and that's what matters.
But the fact that they fired him also means OpenAI's heavy hitters weren't that devoted to him either; otherwise they would all have left with him. Probably internal conflict, maybe between Ilya and Sam, with everyone else predictably taking Ilya's side.
> Like vastly understating the costs of training/inference to the board, but justifying it to himself because it's all going toward building AGI and that's what matters.
Sounds like SBF
What is the purpose of this 'AGI' again? Won't it just end up controlled by the military and cause problems for humanity if it's that amazing?
It's true, at least anecdotally. Relatedly, there's a running joke in some corners of the industry about the types of exotic derivatives so complex and esoteric that "only French banks" trade them.
> He was so good at finance that at his previous job at a hedge fund, he supervised its biggest day of losses in company history.
Small, irrelevant detail, but it nags me nonetheless: Jane Street is not a hedge fund and doesn't remotely operate like one. They're a prop market-making firm.
Jane Street's version of this was absolutely intentional.
[1] https://www.reuters.com/article/business/high-frequency-trad...