Sure, but humans can't do it at nearly the rate GPT can, and GPT will never apply critical thought to the memes it digests and forwards on, while humans sometimes do.
We are talking about a model that, at its core, predicts the statistically likely next word in a sentence based on an existing corpus. That gives it the ability to find and summarize existing content related to a prompt far beyond what any human could, but I still see no critical thinking there.
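For what it's worth, the statistical core being described looks roughly like this toy bigram sketch in Python. The corpus and the raw bigram counting here are illustrative assumptions; models like GPT use a learned neural network over tokens rather than frequency counts, but the autoregressive "sample the next word from a distribution" loop is the same in spirit:

```python
import random
from collections import Counter, defaultdict

# Hypothetical tiny corpus, just for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = bigrams[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a continuation one word at a time, feeding each choice back in.
word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat the"
```

The point of the sketch is the loop at the bottom: each new word depends only on a distribution conditioned on what came before, which is the sense in which the model "makes statistics of the next word."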
This isn't exactly accurate. It's not creating one word at a time; that's an illusion created by the way the text is rendered on the screen. If it worked that way, it would be impossible for it to produce code that compiles, for example.