At least in 3.5 it's very noticeable when the context drops. They could use summarization, akin to what they already do when detecting the topic of the chat, but applied to question-answer pairs in order to "compress" the information. But that would require additional calls to a summarization LLM, so I'm really not sure it would be worth it. Maybe they just drop blacklisted tokens or filler snippets like "I want to", or replace phrases like "could it be that" with "chance of".
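
For illustration, here's a minimal sketch of what that two-level scheme could look like; everything in it is hypothetical: `summarize` stands in for the extra summarization-LLM call, the phrase table is made up, and the token count is just a whitespace heuristic:

```python
# Hypothetical sketch: summarize old QA pairs when over budget,
# plus a cheap phrase-level filter for filler text.

FILLER_PHRASES = {
    "could it be that": "chance of",
    "I want to": "",
}

def cheap_compress(text: str) -> str:
    """Phrase-level compression: drop or shorten low-information snippets."""
    for phrase, replacement in FILLER_PHRASES.items():
        text = text.replace(phrase, replacement)
    return " ".join(text.split())  # collapse whitespace left by deletions

def compress_history(qa_pairs, token_budget, summarize):
    """Summarize the oldest QA pairs until the history fits the budget.

    `summarize` is an injected callable (question, answer) -> str that
    would wrap the additional summarization-LLM call mentioned above.
    """
    entries = [f"Q: {q}\nA: {a}" for q, a in qa_pairs]

    # crude token estimate: whitespace-split word count
    def cost(items):
        return sum(len(e.split()) for e in items)

    i = 0
    while cost(entries) > token_budget and i < len(entries):
        q, a = qa_pairs[i]
        entries[i] = "Summary: " + summarize(q, a)  # one extra LLM call per pair
        i += 1
    return [cheap_compress(e) for e in entries]
```

The phrase filter is nearly free, which is probably why something like it would be tried first; the summarization path only kicks in once the cheap pass isn't enough, keeping the number of extra LLM calls down.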