Keeping track of different usernames is hard, and I can't blame you for confusing me with the other guy :)
Snark aside, I think he does have a point: AFAIK, we don't know what intelligence is, so it's hard to make any argument about fundamental differences between "true" intelligence and LLM intelligence. I do feel like there should be something else. I saw somebody describe their mind as consisting of a "babbler" and a "critic" (basically a GAN). The LLM would be the babbler, with the critic not yet implemented, and that sounds intuitively right to me. Then again, my intuition could be completely wrong, and further scaling may get us to human-level intelligence; I haven't seen any solid counterarguments yet. Even the biggest LLM believers don't deny that it isn't exactly the same thing as a human brain; the question is whether it captures the gist of it.
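To make the babbler/critic analogy concrete, here's a toy sketch (all names and scoring are made up for illustration; a real system would pair an LLM as the babbler with a learned critic/reward model):

```python
import random

def babbler(prompt, n=5):
    """Propose n candidate continuations freely (here: random word salads)."""
    rng = random.Random(0)  # fixed seed just to keep the toy deterministic
    words = ["cat", "sat", "mat", "the", "on", "a"]
    return [" ".join(rng.choice(words) for _ in range(4)) for _ in range(n)]

def critic(candidate):
    """Score a candidate; this toy critic naively prefers ones containing 'the'."""
    return candidate.split().count("the")

def babble_and_criticize(prompt):
    """Generate candidates with the babbler, keep the one the critic likes best."""
    return max(babbler(prompt), key=critic)

print(babble_and_criticize("The cat"))
```

This is essentially best-of-n sampling with a scoring model bolted on, which is about the closest existing practice to "LLM as babbler, critic pending."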
Never have I said such a thing. Also, BPE isn't a natural limitation of transformer-based models; it's a tokenization trick to save compute in LLMs.