I'm pretty sure one could formulate far more than 25k words' worth of propositions whose truth you could judge correctly. That capacity comes from your long-term memory.
The GPT context string is closer to short-term memory, and by that measure 25k words is far more than a human can hold.
But a human author can offload much of that storage to long-term (or some intermediate) memory.
In principle, GPT should be able to do so too, by retraining the model on the text it has just written. That way it might be able to write texts that are billions of words long, but at a much greater cost in compute, since it would require one fine-tuned instance of the model per book being written.
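For what it's worth, here's a rough sketch of what that loop could look like, using GPT-2 via Hugging Face transformers purely as a stand-in for "a GPT-style model"; the chunk size, learning rate, and step counts are arbitrary placeholders, not anything OpenAI actually does:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the point applies to GPT-style models generally
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

book = "Chapter 1.\n"  # the text written so far

for _ in range(3):  # each iteration extends the book by one chunk
    # 1. Generate the next chunk from recent context only
    #    (the context window plays the role of short-term memory).
    model.eval()
    recent = tokenizer(book[-2000:], return_tensors="pt")
    with torch.no_grad():
        out = model.generate(
            **recent,
            max_new_tokens=200,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id,
        )
    new_tokens = out[0][recent["input_ids"].shape[1]:]
    chunk = tokenizer.decode(new_tokens, skip_special_tokens=True)
    if not chunk.strip():
        continue
    book += chunk

    # 2. Fine-tune on the chunk just written, pushing it into the weights
    #    (the long-term memory). This is the expensive per-book step.
    model.train()
    batch = tokenizer(chunk, return_tensors="pt")
    for _ in range(2):  # a few gradient steps per chunk
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

print(len(book.split()), "words written so far")
```

The cost argument falls out of step 2: every chunk of every book triggers its own gradient updates on a private copy of the weights, which is why this scales so much worse than just sampling from a shared frozen model.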