
The problem lies in our understanding of the concept "understanding". It is still pretty unclear at a fundamental level what it means to understand, learn, or conceptualize things.

This quickly leads to thoughts about consciousness and other metaphysical issues that have not been resolved, and probably never will be.




This is a non sequitur. We know what understanding means well enough for basically any technological object other than neural nets.


But we don't have a clue what "understanding" truly means when it comes to animals, including humans, which is the more relevant problem for this particular thread[1]. There is an intuitive sense of "understanding" but we don't have a good formalism around it, and this intuitive sense is easily tricked. But since we don't know how to formalize understanding, we are not yet capable of building AI which truly understands concepts (abstract and concrete) in the same way that birds and mammals do.

As a specific example, I suspect we are several decades away from AI being able to safely perform the duties of guide dogs for the blind, even assuming the robotics challenges are solved. The fundamental issue is that dogs seem to intuitively understand that blind people cannot see, so they can react proactively to a wide variety of situations they have never been trained on, and they can insist on disagreeing with blind people (intelligent disobedience) rather than being gaslit into thinking a dangerous crosswalk is actually safe. The approach of "it works well most of the time, but you gotta check its work" might fly for an LLM, but humans need to trust seeing-eye dogs to make good decisions without human oversight.

In particular, being a seeing-eye AI seems much more difficult than fully autonomous driving, even considering that the time constraints are relaxed. Buildings are far more chaotic and unpredictable than streets.

[1] Note these concerns are not at all relevant for the research described in the article, where "learn" means "machine learning" and does not imply (or require) "understanding."


> we don't have a clue what "understanding" truly means when it comes to animals, including humans

Who is "we"? The philosophical literature has some very insightful things to say here. The narrow scientistic presumption that the answer must be written in the language of mechanism requires revisiting. Mechanism is intrinsically incapable of accounting for such things as intentionality.

Furthermore, I would not conflate human understanding with animal perception writ large. I claim that one feature distinguishing human understanding from whatever you wish to call its animal counterpart is the capacity for the abstract.

> we don't have a good formalism around it

This tacitly defines understanding as having a formalism for something. But why would it? What does that even mean here? Is "formalism" the correct term here? Formalism by definition ignores the content of what's formalized in order to render, say, the invariant structure conspicuous. And intentionality is, by definition, nothing but the meaning of the thing denoted.

> AI which truly understands concepts (abstract and concrete)

Concepts are by definition abstract. It isn't a concept if it is concrete. "Triangularity" is a concept, while triangles in the real world are concrete objects (the mental picture of a triangle is concrete, but this is an image, not a concept). When I grasp the concept "Triangularity", I can say that I understand what it means to be a triangle. I possess something with intentionality that I can predicate of concrete instances. I can analyze the concept to derive things like the property that the angles sum to 180 degrees. Animals, I claim, perceive only concrete instances, as they have no language in the full human sense of the word.

AI has nothing to do with understanding, only simulation. Even addition does not, strictly speaking, objectively occur within computers (see Kripke's "quaddition"/"quus" example). Computers themselves are not, objectively speaking, computers (see Searle's observer-relativity). So the whole question of whether computers "understand" is simply nonsensical, not intractable or difficult or vague or whatever. Computers do not "host" concepts. They can only manipulate what could be said, by analogy, to be like images, but even then, objectively speaking, there is no fact of the matter that these things are images, or images of what they are said to represent. There is nothing about the representation of the number 2 that makes it about the number 2 apart from the conventions human observers hold in their own heads.
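To make the quaddition point concrete, here is a rough Python sketch (just an illustration, using the threshold of 57 from Kripke's example): addition and "quaddition" agree on every input below that threshold, so no finite record of a machine's behavior on small inputs settles which of the two functions it is "really" computing.

    def plus(x, y):
        return x + y

    def quus(x, y):
        # Kripke's "quus": behaves exactly like addition below the
        # threshold of 57, and returns 5 otherwise.
        if x < 57 and y < 57:
            return x + y
        return 5

    # Indistinguishable on every pair below the threshold:
    assert all(plus(x, y) == quus(x, y)
               for x in range(57) for y in range(57))

    # They only come apart on inputs outside that range:
    print(plus(60, 7), quus(60, 7))  # 67 5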


You seem to attack scientism for being narrow, which I find valid. However, if I understand correctly, you then proceed to offer solutions by referring to other philosophical interpretations. I would say that those are also limited in their own way.

My original intention was to suggest that, since there are multiple possible interpretations and no good way to decide which is best, we simply do not get to fully understand how thinking works.

Science would typically shy away from the issue by declaring it an ill-defined problem. The Wittgenstein reference seems to do something similar.

Recent advancements in LLMs might give science a new opportunity to make sense of it all. Time will tell.


It was resolved just over 100 years ago by Wittgenstein. Either you fully define "understanding", in which case you've answered your question, or you don't clearly define it, in which case you can't have a meaningful discussion about it because you don't even have an agreement on exactly what the word means.


That sounds clever, but it is actually devoid of both insight and information. There is nothing to be learned from that wit, witty as it is. Contrast with the question it is cleverly dismissing, which could potentially help us move technology forward.


The Bertrand paradox in probability (https://en.wikipedia.org/wiki/Bertrand_paradox_(probability)) is a great counterexample showing how you can have a meaningful discussion about a poorly defined question. It's a bit of a meta example: the meaningful discussion is about how a seemingly well-defined question actually isn't, and how pinning the question down in different ways gives different answers. It also shows that there can be multiple correct answers for different definitions of the question; while they disagree, each is correct within its own definition.
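For the curious, here is a minimal Monte Carlo sketch (my own illustration) of the three classic conventions for picking a "random chord" of a unit circle. Each one answers the same question, "is the chord longer than the side of the inscribed equilateral triangle?", with a different probability, roughly 1/3, 1/2, and 1/4.

    import math, random

    N = 200_000
    R = 1.0
    side = math.sqrt(3) * R  # side of the inscribed equilateral triangle

    # Convention 1: two uniform random endpoints on the circle (-> ~1/3)
    hits = 0
    for _ in range(N):
        a = random.uniform(0, 2 * math.pi)
        b = random.uniform(0, 2 * math.pi)
        chord = 2 * R * abs(math.sin((a - b) / 2))
        hits += chord > side
    print("random endpoints:", hits / N)

    # Convention 2: uniform random point along a radius, chord drawn
    # perpendicular to it (-> ~1/2). The chord beats the triangle side
    # exactly when its distance from the centre is less than R/2.
    hits = sum(random.uniform(0, R) < R / 2 for _ in range(N))
    print("random radial point:", hits / N)

    # Convention 3: uniform random midpoint inside the disc (-> ~1/4)
    hits = trials = 0
    while trials < N:
        x, y = random.uniform(-R, R), random.uniform(-R, R)
        if x * x + y * y <= R * R:  # rejection-sample a point in the disc
            trials += 1
            hits += x * x + y * y < (R / 2) ** 2
    print("random midpoint:", hits / N)

Each of the three numbers is "correct" for its own definition of "random chord", which is exactly the point.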

Back to the topic of neural networks, just talking about why the question is hard to clearly define can be a meaningful discussion.


Didn't Wittgenstein shift a bit later in his career? All 'word' play is built on other words, and language turns into a self-referential house of cards? Didn't his grand plan to define everything fall apart, and didn't he give up on it?

(I might be mis-remembering later Wittgenstein)


That doesn't resolve the question. It just describes how to move on while ignoring the obvious elephant in the room.


And how is that different from resolving the question?


The problem still exists


The problem doesn't exist in a meaningful sense if one can't define it clearly. Might as well say the wellabigomboopita still exists.


I thought the problem was that you couldn't get anywhere because of the question, but you're saying the problem is that the question didn't get an answer that satisfies your taste?


The problem, as stated by the parent, is that it is "still pretty unclear at a fundamental level what it means to understand, learn, or conceptualize things."

Which Wittgenstein didn't resolve; he describes how to kick the can down the road. Which is fine, every science needs to make assumptions to move on, but in no way is that a "resolution" to the problem of "what it means to understand, learn, or conceptualize things."


A strict definition is almost never required outside maths. We have gotten this far while being unable to define "woman", it turns out.

The most naive meaning of understanding, such as "demonstrating the ability to apply a concept to a wide range of situations", is good enough for many cases, Gödel be damned.


Agreed. But computers are applied math, so it stands to reason that neural networks require a strict definition.


Your brain is a neural network. All education is therefore applied maths as well, right?


That's a pretty useless answer. Just because you cannot fully define something doesn't mean you cannot define parts of it or have different useful definitions of it.


I am not referring to a formal definition of the word "understanding", but to a potential theory about the processes involved.

Wittgenstein's "Whereof one cannot speak, thereof one must be silent" does not help with that.



