If you read some of the studies of these new LLMs, you'll find pretty compelling evidence that they do have a world model. They still get things wrong, but they can also correctly identify relationships and real-world concepts with startling accuracy.
They fail at _some_ arithmetic. Humans also fail at arithmetic...
In any case, is that the defining characteristic of having a good enough "world model"? What distinguishes your ability to understand the world from an LLM's? From my perspective, you would prove it by explaining it to me, in much the same way an LLM could.