
You're thinking of Optiver's The Hammer, though every HFT firm has basically done or is doing something like this. [1]

Jane Street's version of this was absolutely intentional.

[1] https://www.reuters.com/article/business/high-frequency-trad...


Ah, my mistake. Optiver was The Hammer. The Tower regulatory action was for spoofing.

https://www.cftc.gov/PressRoom/PressReleases/8074-19


> Since puppies turn into full grown dogs quite quickly, how often do you suggest I replace the puppy?

You must complete the mission before the puppy becomes a dog. Otherwise you must wait 14 years until you can get another puppy.


> You must complete the mission before the puppy becomes a dog.

The 'mission' being ...? One-night stand? Fling? Established relationship?

Because people who find puppy ownership attractive probably won't stay with someone who re-homes the animal once they get what they really want.


Or, you can have two dogs!


The pattern I've noticed with a lot of open-source LLMs is that they tend to underperform the level their benchmarks say they should be at.

I haven't tried this model yet and won't be able to for a couple of days, so I'm wondering if anyone else gets that feeling with this one.


I agree. Good human friends should provide a mix of positive and negative feedback, giving a good gradient to train one's behavior on.

AI seems like just a positive feedback loop.


If that's what you're after, though, you can ask it to do exactly that: challenge your views and question your assumptions.


They're not that good at arb. Real arb doesn't even really exist anymore. Even when it does, it's not JS that closes it. They market make, which is different.


Because it's lying to the client?


And why is that bad?

With your mindset, Windows would have next to no backwards compatibility, for instance.


I wonder if Sam did something in the name of his own philosophy that was financial suicide. Like vastly underestimating the costs of training/inferencing to the board, but justifying it to himself because it's all going towards building AGI and that's what matters.

But them firing him also means that OpenAI's heavy hitters weren't that devoted to him either; otherwise they would all have left with him. Probably internal conflict, maybe between Ilya and Sam, with everyone else predictably siding with Ilya.


> Like vastly underestimating the costs of training/inferencing to the board, but justifying it to himself because it's all going towards building AGI and that's what matters.

Sounds like SBF

What is the purpose of this 'AGI' again? Won't it just end up controlled by the military and cause problems for humanity if it's that amazing?


Maybe I should clarify that I don't think it's all that matters, but that Sam might think that.


It's true at least anecdotally. Relatedly, there's a running joke in some corners of the industry about the types of exotic derivatives so complex and esoteric that "only French banks" trade them.


1/100th of a pede


Pedes must be crazy, a million legs like that


> He was so good at finance that at his previous job at a hedge fund, he supervised its biggest day of losses in company history.

Small, irrelevant detail, but it nags me nonetheless. Jane Street is not a hedge fund. They don't remotely operate like one. They're a prop market-making firm.

