
Anyone can make a long context window. The key is if your model can make effective use of it or not.


The number of times I know my instruction is in the context, yet it's forgotten, is countless at this point. My experience, both as a clinical psychologist and as a developer, is that there is a convergent trend in how I speak to clients and to AI. I can see much of my therapist's approach in how I try to highlight the important things to focus on to make progress. Often it's about helping the client articulate and understand what's important to them and how they rank those priorities. The same applies to AI.

It feels obvious now that the problem with attention and context is the lack of hierarchy, of levels of importance. We have, probably on a biological basis, three types of memory: short-term, intermediate, and long-term. Long-term memory is what you reach with MCP, web search, and RAG. Short-term memory is the current response, and intermediate memory is the current context. Once I assume this, it makes perfect sense in my interactions with an agent where it falters and what it forgets, in exactly the same way as people. It feels more and more like talking to a human, with the same weaknesses in logic, reasoning, and focus.


I came here just to complain about that :-) All LLMs I've used seem to give more weight to things at the beginning of the context window and omit many details. E.g., I tried this simple thing: pasted a friend's CV and my own into Gemini and asked it to recommend topics for a joint conference presentation. The results depended greatly on the order in which the CVs were pasted.


The middle tends to be underweighted. The beginning and end get more attention.


That's because when they say "long context window" they're lying and they actually mean that they support a long input prompt that is still compressed into a small context window. (Typically by throwing out tokens in the middle.)

An actually large context window is impossible due to how LLM attention works under the hood.


Mamba-2 enters the chat.


There are “needle in the haystack” benchmarks for long context performance. It would be good to see those.


These aren’t really indicative of real world performance. Retrieving a single fact is pretty much the simplest possible task for a long context model. Real world use cases require considering many facts at the same time while ignoring others, all the while avoiding the overall performance degradation that current models seem susceptible to when the context is sufficiently full.
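For reference, the basic needle-in-a-haystack setup is trivial to reproduce yourself. A minimal sketch (the filler text and `call_model` are placeholders for whatever corpus and API client you actually use):

```python
def build_haystack(needle: str, filler: str, n_filler: int, position: float) -> str:
    """Bury `needle` at a relative `position` (0.0 = start, 1.0 = end)
    among `n_filler` copies of filler text."""
    chunks = [filler] * n_filler
    chunks.insert(int(position * n_filler), needle)
    return "\n".join(chunks)

needle = "The secret code is 7341."
prompt = build_haystack(needle, "The sky was gray that morning.", 200, position=0.5)
# response = call_model(prompt + "\n\nWhat is the secret code?")
# Score: does "7341" appear in the response?
```

Sweeping `position` from 0.0 to 1.0 is what exposes the "lost in the middle" effect the parent comments describe, but as noted, a single retrieved fact says little about multi-fact reasoning over the same context.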


I agree, retrieving a single fact is necessary but not sufficient.


How do they make the context window longer? (serious question, I want to learn how this works)


You literally just shift the window over to the next token once you reach the maximum number of tokens you want in the context window. That's purely an inference-time change, NOT something you retrain for (so it's only limited by memory now).

This has obvious issues, since you're now losing information from the tokens that fell out of the window, which becomes significant if your context window is small in comparison to the question/answer you're working on. That's why companies try to offer stupidly large context windows. The problem is they're not training on the large context window; they're training on something smaller (2048 tokens and up). Because of how attention is set up, you can train on a small context and extrapolate to a much larger number of tokens: models are trained with RoPE, which encodes words by their offset to neighboring words rather than by absolute position. This lets us generate with 2x, 3x, 10x, even 100x the tokens we trained on with some form of consistency, BUT it still causes a lot of consistency issues, since the model approaches more of a "this was trained on snippets but not the entire thing" situation, where it has a notion of the context but not fundamentally the entire combined context.
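The "shift the window over" part is just a fixed-size buffer that evicts the oldest tokens. A toy sketch (whitespace "tokenization" and the tiny window size are obviously illustrative):

```python
from collections import deque

MAX_CTX = 8  # toy context length; real models use thousands of tokens

window = deque(maxlen=MAX_CTX)  # oldest tokens fall off automatically

for token in "the quick brown fox jumps over the lazy sleeping dog".split():
    window.append(token)

print(list(window))  # the first two tokens have been evicted
```

This is exactly where the information loss comes from: anything that scrolled past `MAX_CTX` tokens ago simply no longer exists for the model.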


That's a very basic way to keep the LLM inferring past the context window size (there are better, smarter ways), but that's not at all what the question was, which is how they train a 2M-token window. My understanding, at a basic level, is that you need corpora that are >2M tokens in length as training data, which is where the problem comes in: there's only so much long-form content, and it's swamped by all the smaller stuff. I think there are probably tricks now, but I suspect it's still largely an open problem.


AFAIK nobody does that. They train on much, much shorter text but use tricks in the position-encoding step that the LLMs can extrapolate from, like RoPE and YaRN.
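The property RoPE relies on, that attention scores depend only on relative offsets, can be sketched in a few lines. A toy 4-dimensional example, not any model's actual implementation:

```python
import math

def rope_rotate(vec, pos, base=10000.0):
    """Rotate consecutive pairs of `vec` by position-dependent angles,
    as RoPE does to queries and keys before the attention dot product."""
    out = []
    for i in range(0, len(vec), 2):
        theta = pos / (base ** (i / len(vec)))  # lower frequency for later pairs
        c, s = math.cos(theta), math.sin(theta)
        x, y = vec[i], vec[i + 1]
        out += [x * c - y * s, x * s + y * c]
    return out

q = rope_rotate([1.0, 0.0, 1.0, 0.0], pos=5)
k = rope_rotate([1.0, 0.0, 1.0, 0.0], pos=3)
# The dot product of q and k depends only on the offset 5 - 3 = 2,
# not on the absolute positions 5 and 3.
```

Because rotations compose, the score between positions (a, b) equals the score between (a+t, b+t) for any shift t, which is what extension schemes like YaRN exploit when stretching the position range at inference time.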


AFAIK (which isn't much), it definitely helps to train on longer sequences even with RoPE/YaRN, and it's needed if you care about long-context performance (and not just long-context capability).


no one makes effective use of long context.


It's not the most energy-efficient workflow, but I work on relatively small codebases, and I made a tool that lets me dump all of it into an LLM with a single copy/paste. This works surprisingly well with Gemini 2.5 Pro (1,000,000-token context).

The only real mistakes it makes are some model-specific quirks, like occasionally stripping out certain array index operators. Other than that, it works fine with 150,000-token conversations. I've gone up to 500,000 with no real issues besides a bit of a slowdown. It's also great for log analysis, which I have maximized at 900,000 tokens.
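A tool like that can be surprisingly little code. A rough sketch, not the commenter's actual script, with an assumed 4-characters-per-token estimate and illustrative file extensions:

```python
from pathlib import Path

def dump_repo(root: str, exts=(".py", ".js", ".ts")) -> str:
    """Concatenate every matching source file under `root` into one blob,
    with a header line per file so the model can tell files apart."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in exts:
            parts.append(f"===== {path} =====\n{path.read_text(errors='ignore')}")
    blob = "\n\n".join(parts)
    # Very rough token estimate: ~4 characters per token.
    print(f"~{len(blob) // 4:,} tokens")
    return blob

# blob = dump_repo("path/to/repo")  # then paste `blob` into the model
```

The token estimate matters mainly for deciding up front whether the dump will fit the model's advertised window.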


Long context window = huge amounts of vacant VRAM = our servers are fucking empty


But isn't context window dependent on model architecture and not available VRAM that you can just increase or decrease as you like?


Most attention implementations can work across an arbitrarily long context.

The limiting factors are typically:

1. Latency/throughput requirements for model serving, which become challenging to fulfill at a certain context length.

2. The model has to be _trained_ to use the desired context length, and training becomes prohibitively expensive at larger contexts.

(2) is even a big enough problem that some popular open source models that claim to support large context lengths in fact are trained on smaller ones and use "context length extension" hacks like YaRN to trick the model into working on longer contexts at inference time.


The model will use the full context if it's been designed well, but you can still increase the size of the window on models where it hasn't. It's just pointless. People who don't know much about LLMs will still think "bigger number is better" though.


No they can't; attention is an O(N^2) algorithm, so just fitting the context in memory is a challenge.
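To put numbers on the N^2 claim: naively materializing the full N-by-N score matrix at long contexts is enormous. The head count and precision below are illustrative assumptions, and fused kernels in the FlashAttention style exist precisely to avoid ever storing this matrix:

```python
def attn_matrix_gib(n_tokens: int, n_heads: int = 32, bytes_per_score: int = 2) -> float:
    """Memory for one layer's raw n x n attention scores, in GiB,
    if they were all materialized at once (naive implementation)."""
    return n_tokens ** 2 * n_heads * bytes_per_score / 2 ** 30

print(f"{attn_matrix_gib(128_000):.0f} GiB")  # ~977 GiB for a single layer
```

Even with memory-efficient kernels, compute still scales quadratically with context length, which is why serving very long contexts stays expensive.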

And sure maybe not 2mil of it is usable, but they're reliably pushing the frontier here.


If a model is not making use of the whole context window - shouldn't that be very noticeable when the prompt is code?

For example when querying a model to refactor a piece of code - would that really work if it forgets about one part of the code while it refactors another part?

I concatenate a lot of code files into a single prompt multiple times a day and ask LLMs to refactor them, implement features or review the code.

So far, I never had the impression that filling the context window with a lot of code causes problems.

I also use very long lists of instructions on code style on top of my prompts. And the LLMs seem to be able to follow all of them just fine.


I don't think there are any up-to-date leaderboards, but models absolutely degrade in performance the more context they're dealing with.

https://wandb.ai/byyoung3/ruler_eval/reports/How-to-evaluate...

>Gpt-5-mini records 0.87 overall judge accuracy at 4k [context] and falls to 0.59 at 128k.

And Llama 4 Scout claimed a 10 million token context window but in practice its performance on query tasks drops below 20% accuracy by 32k tokens.


That makes me wonder if we could simply test this by letting the LLM add or multiply a long list of numbers?

Here is an experiment:

https://www.gnod.com/search/#q=%23%20Calcuate%20the%20below%...

The correct answer:

    Correct:    20,192,642.460942328
Here is what I got from different models on the first try:

    ChatGPT:    20,384,918.24
    Perplexity: 20,000,000
    Google:     25,167,098.4
    Mistral:    200,000,000
    Grok:       Timed out after 300s of thinking
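A probe like this is easy to generate with a known ground truth. A sketch with illustrative numbers, not the ones behind the link above:

```python
import math
import random

rng = random.Random(42)  # fixed seed so the probe is reproducible
numbers = [round(rng.uniform(1.0, 3.0), 3) for _ in range(20)]
truth = math.prod(numbers)

prompt = ("Calculate the product of the following numbers. "
          "Do not use a calculator. Do it in your head.\n"
          + "\n".join(str(n) for n in numbers))
# Compare each model's answer against `truth`; exact digits rarely match,
# so relative error is the fairer metric.
```

Scoring by relative error also makes the results above comparable: ChatGPT's answer is off by under 1%, while Mistral's is off by an order of magnitude.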


> Do not use a calculator. Do it in your head.

You wouldn't ask a human to do that, why would you ask an LLM to? I guess it's a way to test them, but it feels like the world record for backwards running: interesting, maybe, but not a good way to measure, like, anything about the individual involved.


I’m starting to find it unreasonably funny how people always want language models to multiply numbers for some reason. Every god damn time. In every single HN thread. I think my sanity might be giving out.


A model, no, but an agent with a calculator tool?

Then there's the question of why not just build the calculator tool into the model?


Since grok 4 fast got this answer correct so quickly, I decided to test more.

Tested this on the new hidden model of ChatGPT called Polaris Alpha: Answer: 20,192,642.460942336

Current gpt-5 medium reasoning says: After confirming my calculations, the final product P should be 20,192,642.460942336

Claude Sonnet 4.5 says: “29,596,175.95 or roughly 29.6 million”

Claude haiku 4.5 says: ≈20,185,903

GLM 4.6 says: 20,171,523.725593136

I’m going to try out Grok 4 fast on some coding tasks at this point to see if it can create functions properly. Design help is still best on GPT-5 at this exact moment.


Isn't it that LLMs are not designed to do calculations?


They are not LMMs, after all…


Neither are humans.


But humans can still do it.



