
Most attention implementations can work across an arbitrarily long context.
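To see why, here's a minimal numpy sketch of vanilla scaled dot-product attention (names and shapes are illustrative, not any particular library's API): nothing in the math caps the sequence length, it just gets quadratically more expensive.

```python
import numpy as np

def attention(q, k, v):
    """Vanilla scaled dot-product attention.
    No parameter here depends on sequence length."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
d = 64
for seq_len in (16, 4096):  # same code, very different context lengths
    q, k, v = (rng.standard_normal((seq_len, d)) for _ in range(3))
    out = attention(q, k, v)  # shape (seq_len, d); cost grows O(n^2)
```

The O(n^2) score matrix is exactly where the serving-cost problem below comes from.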

The limiting factors are typically:

1. Latency/throughput requirements for model serving, which become hard to meet beyond a certain context length.
2. The model has to be _trained_ to use the desired context length, and training becomes prohibitively expensive at longer contexts.

(2) is a big enough problem that some popular open source models claiming to support long contexts are in fact trained on shorter ones, and rely on "context length extension" tricks like YaRN to coax the model into handling longer contexts at inference time.
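The core idea behind these tricks is to rescale rotary position embeddings so unseen positions map back into the trained range. A hedged sketch of the simplest variant, linear position interpolation (YaRN itself is more refined, scaling each frequency band differently and adjusting attention temperature; the numbers below are made up for illustration):

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0, scale=1.0):
    """RoPE rotation angles for each (position, frequency) pair.
    scale < 1 squeezes inference-time positions back into the range
    seen during training -- plain position interpolation. YaRN refines
    this by interpolating per frequency band instead of uniformly."""
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)
    return np.outer(positions * scale, inv_freq)

train_len, infer_len = 4096, 16384  # hypothetical lengths
angles = rope_angles(np.arange(infer_len), dim=64,
                     scale=train_len / infer_len)
# Position 16383 now gets roughly the angles the model saw for
# position ~4095 during training, so attention patterns stay familiar.
```

The trade-off is that positions are packed more densely than the model was trained on, which is why extended models often lose some fine-grained recall at long range.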


