I don't see what the Chinese Room argument has to do with this (even assuming it makes sense at all). You said that LLMs are just token predictors, and I fully agree. You didn't add any further qualifier, such as a limit on their ability to predict tokens. Is your earlier definition not enough, then? If you want to amend it to something like "just token predictors that nevertheless will never be able to successfully predict tokens such as...", please go ahead.