And, like the philosopher's stone, it does not exist. Remember the "Map vs Territory" discussion: you cannot have generic maps, only maps specialized for a purpose.
That's essentially the No Free Lunch (NFL) theorem, right?
The thing about the NFL theorem is that it assumes an equal weight, or probability, over every problem/task. Under that assumption, no search/learning algorithm can outperform another when performance is averaged over all tasks. But (and this is purely my intuition) the problems humans actually want to solve are a very small subset of all possible search/learning problems, and this imbalance allows us to find algorithms that work particularly well on the subset we care about.
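For reference, the formal version from Wolpert and Macready (1997) says that for any two algorithms $a_1$ and $a_2$, summed over all objective functions $f$:

$$\sum_f P(d_m^y \mid f, m, a_1) = \sum_f P(d_m^y \mid f, m, a_2),$$

where $d_m^y$ is the sequence of cost values observed after $m$ evaluations. In other words, averaged over every possible $f$ with equal weight, no algorithm is distinguishable from any other.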
Coming back to representation and maps: human understanding is a good example. A human worldview is itself a map of reality. It models certain facts of the world well and other facts poorly, and it is optimized for human cognition. But it is still broad enough to be useful for a wide variety of problems; if this map weren't useful, we probably wouldn't have evolved it.
The point is, I do think there's a philosopher's pebble, and I do think there are a few free bites of lunch. Both can be found in the discrepancy between all theoretically possible tasks and the tasks we actually want to do.
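To make that concrete, here's a toy sketch (the setup is my own invention, not from any paper: a 4-point domain, boolean outputs, and two fixed query orders standing in for "algorithms"). Averaged over all 16 possible functions, the two orders tie exactly at every budget, which is the NFL result in miniature. Restricted to a structured subset (monotone functions), one order clearly wins:

```python
from itertools import product

def best_after(order, f, m):
    # Best value observed after m evaluations, querying points in `order`.
    return max(f[x] for x in order[:m])

def avg_best(order, functions, m):
    return sum(best_after(order, f, m) for f in functions) / len(functions)

# All 2^4 = 16 boolean functions on a 4-point domain: the NFL regime,
# where every function is weighted equally.
all_fs = list(product([0, 1], repeat=4))

# A structured subset: monotone functions, i.e. all the 1s sit on the right.
monotone = [tuple(int(i >= k) for i in range(4)) for k in range(5)]

left_to_right = [0, 1, 2, 3]   # "algorithm" A: fixed query order
right_to_left = [3, 2, 1, 0]   # "algorithm" B: the reverse order

for m in range(1, 5):
    print(f"budget m={m}:",
          f"all functions A={avg_best(left_to_right, all_fs, m):.2f}",
          f"B={avg_best(right_to_left, all_fs, m):.2f} |",
          f"monotone A={avg_best(left_to_right, monotone, m):.2f}",
          f"B={avg_best(right_to_left, monotone, m):.2f}")
```

Over the full function class the two orders are indistinguishable (0.50, 0.75, 0.88, 0.94 for both), but on the monotone subset the right-to-left order dominates at every budget. The free lunch lives entirely in the restriction to structured problems.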
Yes. All too easily we forget that the maps are not the territories.
LLMs are amazing: we are creating better and better hyperdimensional maps of language. But until we have systems that are more than crystallized maps of the language they were trained on, we will never have something that can really think, let alone AGI or whatever new term we come up with.