Well said. I'd add that Sam Altman, CEO of OpenAI, also holds the same opinion. He has ulterior motives for everything he says, of course.
Every incremental improvement in utility of these tools requires exponentially more parameters and training data. Compare GPT4's 1.76 trillion params to GPT3.5's 175 billion; nearly exactly 10x more. It's reasonable to say that getting the same relative improvement again will require another 10x parameters, and about 10x as much training data. GPT4 supposedly cost something like $20 million to train, so we're talking $200 million. We're talking entire data centers here. And the bigger problem is that GPT4 was already basically trained on the entire Internet (as far as we know). New information is being created all the time, but it will take many years -- decades, even -- before we have 10x as much useful information on the internet. And more and more of that information is regurgitated hallucinations from GPT4 itself and similar models.
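For what it's worth, the back-of-envelope arithmetic above can be sketched out directly. All the figures are the rough public/rumored estimates already cited, not official numbers:

```python
# Back-of-envelope LLM scaling arithmetic.
# All inputs are rumored/public estimates, not official figures.
gpt35_params = 175e9    # ~175 billion parameters (GPT-3.5)
gpt4_params = 1.76e12   # ~1.76 trillion parameters (GPT-4, rumored)
gpt4_cost = 20e6        # ~$20 million training cost (rumored)

# How big was the last jump?
scale = gpt4_params / gpt35_params  # ~10x

# Naive projection: the next same-sized relative improvement
# costs roughly scale * previous cost (assuming cost tracks params).
next_cost = gpt4_cost * scale

print(f"param scale-up: {scale:.1f}x")
print(f"projected next training cost: ${next_cost / 1e6:.0f} million")
```

The "cost scales linearly with parameter count" assumption is itself generous; in practice, training compute grows with params times tokens, so 10x params and 10x data would push the naive estimate closer to 100x the cost.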
I'm not saying AI is ending -- there are lots of other avenues to explore. But I am saying we're hitting the limit of "just make it bigger" for LLMs.
Those tend to happen decades apart in a single field. And often the breakthroughs come before the end of the line is actually reached, so we skip ahead a few decades.
Now, the GP didn't lay out the reasoning for why he thinks so. Without further context, there's nothing there requiring you to agree.