Yes, the theory isn't new, but getting experimental evidence is difficult in neuroscience. Here they used GPT-2 to generate quantitative predictions, which obviously wasn't available a decade ago. That let them extend experiments from highly artificial signals to natural language.
I feel like it's one of the things you learn in an intro psych/neuroscience course. Also that predictions are a way to deal with timing issues between body parts and the lag between your limbs and sensory perception? Kind of like the netcode for an FPS.
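For anyone who hasn't seen the netcode analogy: FPS clients hide latency by extrapolating from the last known state instead of waiting for the server, then blending in the real state when it arrives. A minimal sketch of that idea (names and numbers are just illustrative, not from any real engine):

```python
def predict_position(last_pos, velocity, latency):
    """Extrapolate an entity's position across network latency
    (simple dead reckoning, as in FPS client-side prediction)."""
    return tuple(p + v * latency for p, v in zip(last_pos, velocity))

def reconcile(predicted, authoritative, blend=0.5):
    """When the authoritative (server) state finally arrives, blend
    toward it rather than snapping, so the correction is less visible."""
    return tuple(p + (a - p) * blend for p, a in zip(predicted, authoritative))

# Last server update: at (10, 5), moving (2, 0) units/s; 100 ms of lag.
pred = predict_position((10.0, 5.0), (2.0, 0.0), 0.100)   # roughly (10.2, 5.0)
# Server later says the entity is actually at (10.4, 5.1):
corrected = reconcile(pred, (10.4, 5.1))
```

The claimed parallel to the brain is loose but real: both systems act on a forward prediction now and correct it when delayed ground truth comes in.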
Somehow, I doubt that my limited understanding of the human mind was a decade ahead of state-of-the-art science.