
Be that as it may, good long-context-window models are not a mirage. By, say, late 2027, once the LLM providers figure out that they're using the wrong samplers, they will work out how to give you 2 million output tokens per LLM call that stay coherent.
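For readers unfamiliar with what "samplers" refers to here: it is the decoding step that picks each next token from the model's probability distribution. The comment doesn't say which sampler is "wrong," so as an illustration only, here is a minimal sketch of one common choice, nucleus (top-p) sampling, in Python; the vocabulary, cutoff p, and temperature are hypothetical.

    import numpy as np

    def top_p_sample(logits, p=0.9, temperature=1.0, rng=None):
        # Sample one token id, keeping only the smallest set of tokens
        # whose cumulative probability exceeds p (nucleus sampling).
        rng = rng or np.random.default_rng()
        scaled = logits / temperature
        probs = np.exp(scaled - np.max(scaled))
        probs /= probs.sum()
        order = np.argsort(probs)[::-1]        # most likely tokens first
        cdf = np.cumsum(probs[order])
        keep = np.searchsorted(cdf, p) + 1     # how many tokens survive the cutoff
        kept = order[:keep]
        kept_probs = probs[kept] / probs[kept].sum()
        return int(rng.choice(kept, p=kept_probs))

    # Toy example: a 5-token "vocabulary"
    logits = np.array([2.0, 1.0, 0.5, -1.0, -3.0])
    print(top_p_sample(logits, p=0.9))

Whether a different sampler would actually keep multi-million-token outputs coherent is the commenter's speculation, not something this sketch demonstrates.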


