Well, mostly, but they can generate more state that pushes the old state out of context.
If an LLM were sufficiently trained to roll forward and correctly set the current state of some registers written into the conversation...? I wouldn't trust it, though; it leaves too much to chance.
I make mistakes trying to keep track of things too, so I end up reaching for tools as well.
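A minimal sketch of what I mean by "using tools", assuming you have some chat API to plug in (the call is stubbed out here as a hypothetical `fake_llm`): keep the registers in ordinary program state, apply updates deterministically in code, and re-render the state block into every prompt, so nothing depends on the model rolling old state forward or keeping it in context.

```python
from typing import Dict

def render_registers(regs: Dict[str, int]) -> str:
    """Render current register state as a block the model sees fresh each turn."""
    lines = [f"{name} = {value}" for name, value in sorted(regs.items())]
    return "Current register state:\n" + "\n".join(lines)

def apply_update(regs: Dict[str, int], name: str, value: int) -> None:
    """Deterministic 'tool' update: the program, not the model, owns the state."""
    regs[name] = value

def build_prompt(regs: Dict[str, int], user_turn: str) -> str:
    # State is re-injected every turn, so it can never be pushed out of context.
    return f"{render_registers(regs)}\n\nUser: {user_turn}"

def fake_llm(prompt: str) -> str:
    # Placeholder for whatever chat completion call you actually use.
    return f"(model would see this context)\n{prompt}"

if __name__ == "__main__":
    registers = {"r0": 0, "r1": 0}
    apply_update(registers, "r0", 42)  # tool-side write, not a model "memory"
    print(fake_llm(build_prompt(registers, "what is r0 + 1?")))
```

The model still has to read and use the state correctly, but at least the state itself can't drift or fall out of the window.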