This assumes our models perfectly model the world, which I don't think is true. I mean, we straight up know it's not true - we tell models what they can and can't say.
I guess it's a matter of semantics, but I reject the notion that it's even possible to model the world with perfect accuracy. A model is by definition a distillation, and if it isn't, then it's not a model at all - it's the actual thing.
There will always be some lossiness, and with it, bias. In my opinion.