
Correct, but to be fair to readers (like me), the use of the term "infinite-length inputs" is misleading.

Still, really interesting work. The most salient bit is the discovery shown in Figure 2, summarized as:

> (1) The attention maps in the first two layers (layers 0 and 1) exhibit the "local" pattern, with recent tokens receiving more attention. (2) Beyond the bottom two layers, the model heavily attends to the initial token across all layers and heads.

> surprisingly large amount of attention score is allocated to the initial tokens, irrespective of their relevance to the language modeling task, as visualized in Figure 2. We term these tokens "attention sinks". Despite their lack of semantic significance, they collect significant attention scores. We attribute the reason to the Softmax operation, which requires attention scores to sum up to one for all contextual tokens. Thus, even when the current query does not have a strong match in many previous tokens, the model still needs to allocate these unneeded attention values somewhere so it sums up to one. The reason behind initial tokens as sink tokens is intuitive: initial tokens are visible to almost all subsequent tokens because of the autoregressive language modeling nature, making them more readily trained to serve as attention sinks.
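
A toy illustration of that Softmax point (the numbers are made up, not from the paper): because the scores must sum to one, a query with no strong match still has to park its attention mass somewhere, and a head can learn to park it on the always-visible first token.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    # Hypothetical raw attention logits for one query over 6 context tokens.
    # No key is a strong match, yet the output must still sum to one:
    weak_match = np.array([0.2, 0.1, 0.0, 0.1, 0.0, 0.1])
    print(softmax(weak_match))   # roughly uniform, ~0.15-0.19 each

    # A head that has learned to dump the leftover mass on token 0 (the "sink"):
    sink_head = np.array([4.0, 0.1, 0.0, 0.1, 0.0, 0.1])
    print(softmax(sink_head))    # ~0.91 of the attention lands on the first token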

StreamingLLM is basically a "hack" that fixes this odd behavior when we go around butchering the LLM's attention window.
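
Concretely, the "hack" amounts to a KV-cache eviction policy: always keep the first few "sink" tokens plus a sliding window of recent tokens, and drop everything in between. A minimal sketch of that policy (the function name and sizes are mine, not the paper's API):

    def streaming_keep(cache_positions, num_sinks=4, window=1020):
        # Keep the first `num_sinks` entries and the most recent `window`
        # entries of the KV cache; evict the middle.
        if len(cache_positions) <= num_sinks + window:
            return cache_positions
        return cache_positions[:num_sinks] + cache_positions[-window:]

    # Example: a 10-token cache with 2 sinks and a window of 4.
    print(streaming_keep(list(range(10)), num_sinks=2, window=4))
    # -> [0, 1, 6, 7, 8, 9]

(The paper also assigns positional encodings based on position within the cache rather than in the original text, which this sketch ignores.)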

This actually isn't the first time cracks have been shown in the use of softmax, and it makes me wonder whether a different function might be better if we want context-length-flexible LLMs.
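
One proposed alternative in that direction is Evan Miller's "softmax off by one", which adds 1 to the denominator so a head with nothing worth attending to can output near-zero attention everywhere instead of being forced to put the leftover mass on a sink token. A quick sketch:

    import numpy as np

    def softmax_one(x):
        # exp(x_i) / (1 + sum_j exp(x_j)): the extra 1 behaves like a virtual
        # key with logit 0, so the outputs no longer have to sum to one.
        m = max(0.0, x.max())          # shift for numerical stability
        e = np.exp(x - m)
        return e / (np.exp(-m) + e.sum())

    # With no strong match anywhere, attention can now go to (almost) nothing:
    print(softmax_one(np.array([-4.0, -5.0, -4.5])))   # all entries near zero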


