Hacker News

And like the philosopher's stone, it does not exist. Remember the "map vs. territory" discussion: you cannot have generic maps, only maps specialized for a purpose.


That's essentially the No Free Lunch (NFL) theorem, right?

The thing about the NFL theorem is that it assumes an equal weight or probability over each problem/task: it's impossible to find a search/learning algorithm that outperforms another when averaged over all tasks. But (and this is purely my intuition) the problems humans want to solve are a very small subset of all possible search/learning problems. That imbalance is what allows us to find algorithms that work particularly well on the subset of problems we actually care about.
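To make the "averaged over all tasks" claim concrete, here's a toy sketch (my own illustration, not from any comment here): enumerate every function from a three-element domain to {0, 1}, and measure how many queries each fixed search order needs, on average, to find an input that maps to 1. Every order comes out identical.

```python
from itertools import product

def queries_to_find_one(f, order):
    """Queries a fixed search order needs to find an input where f is 1
    (or the domain size if no such input exists)."""
    for i, x in enumerate(order, 1):
        if f[x] == 1:
            return i
    return len(order)

# Every function f: {0, 1, 2} -> {0, 1}, encoded as a tuple of outputs.
all_functions = list(product([0, 1], repeat=3))

for order in [(0, 1, 2), (2, 0, 1), (1, 2, 0)]:
    avg = sum(queries_to_find_one(f, order) for f in all_functions) / len(all_functions)
    print(order, avg)  # each order averages 1.75 queries
```

The moment you weight some functions more heavily than others (i.e., pick a realistic task distribution), the averages diverge and one order can genuinely beat another, which is the "free bites of lunch" point.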

Coming back to representation and maps: human understanding/worldview is a good example. Our worldview is itself a map of reality. It models certain facts of the world well and others poorly, and it is optimized for human cognition. But it's still broad enough to be useful for a wide variety of problems. If this map weren't useful, we probably wouldn't have evolved it.

The point is, I do think there's a philosopher's pebble, and I do think there's a few free bites of lunch. These can be found in the discrepancy between all theoretically possible tasks and the tasks that we actually want to do.


I don't know. Maps can vary in quality and expressiveness.

Language itself is a kind of map, and it has pretty universal reach.

"No Free Lunch (NFL) theorem" isn't quite mathematics, it is more in the domain of philosophy.


The NFL theorem (for optimization) has a mathematical proof, FYI. But I agree that there's a lot of room for interpretation.


I struggle to understand the connection between optimization/mathematical programming and the knowledge representation problem in classical AI.

I thought that the reference was to the general 'no free lunch' assumption: https://en.wikipedia.org/wiki/No_free_lunch_theorem


Yes. All too easily we forget that the maps are not the territories.

LLMs are amazing: we are creating better and better hyperdimensional maps of language. But until we have systems that are more than crystallized maps of the language they were trained on, we will never have something that can really think, let alone AGI or whatever new term we come up with.


But language itself is a kind of map, and it has pretty universal reach.



