
Source?



> I wouldn't call o1 a "system". It's a model, but unlike previous models, it's trained to generate a very long chain of thought before returning a final answer

https://x.com/polynoamial/status/1834641202215297487


That answer seems to conflict with "in the future we'd like to give users more control over the thinking time".

I've gotten mini to think harder by asking it to, but it didn't produce a better answer. Though now I've run out of usage limits for both of them, so I can't try any more…


I'm not convinced there isn't more going on behind the scenes, but influencing test-time compute via the prompt is a pretty universal capability.


Not in a way that's effectively used: in practice, all of the papers using CoT compare against a weak baseline, and the benefits level off extremely quickly.

Nobody except recent DeepMind research has shown test-time scaling like o1.


I've been telling Claude to give me not the obvious answer. That drives thinking time up, and the quality of the answers is better. Hope it helps.
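
For what it's worth, a minimal sketch of that kind of prompt wrapper (the function name and wording here are my own, not anything official from Anthropic; the wrapped string would just be sent as the user message of any chat API):

```python
def deeper_prompt(question: str) -> str:
    """Wrap a question with an instruction that discourages the
    obvious first answer, nudging the model to reason longer."""
    return (
        "Before answering, reason step by step and explicitly reject "
        "the first, most obvious answer that comes to mind.\n\n"
        f"Question: {question}"
    )

# Example: build a wrapped prompt for a sample question.
prompt = deeper_prompt("Why is the sky blue?")
```

No guarantees it scales the way o1's trained chain of thought does, but it's a cheap thing to try with any chat model.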



