> we don't have a clue what "understanding" truly means when it comes to animals, including humans
Who is "we"? The philosophical literature has some very insightful things to say here. The narrow scientistic presumption that the answer must be written in the language of mechanism requires revisiting. Mechanism is intrinsically incapable of accounting for such things as intentionality.
Furthermore, I would not conflate human understanding with animal perception writ large. I claim that one feature distinguishing human understanding from whatever you wish to call its animal counterpart is the capacity for the abstract.
> we don't have a good formalism around it
This tacitly defines understanding as having a formalism for something. But why should it? What would that even mean? Is "formalism" even the correct term? A formalism by definition ignores the content of what is formalized in order to render, say, its invariant structure conspicuous. And intentionality is, by definition, nothing but the meaning of the thing denoted.
> AI which truly understands concepts (abstract and concrete)
Concepts are by definition abstract; it isn't a concept if it is concrete. "Triangularity" is a concept, while triangles in the real world are concrete objects (the mental picture of a triangle is concrete, but it is an image, not a concept). When I grasp the concept "Triangularity", I can say that I understand what it means to be a triangle. I possess something with intentionality, something I can predicate of concrete instances. I can analyze the concept to determine things like the property that the interior angles sum to 180 degrees. Animals, I claim, perceive concrete instances only, as they have no language in the full human sense of the word.
AI has nothing to do with understanding, only with simulation. Even addition does not, strictly speaking, objectively occur within computers (see Kripke's "quaddition"/"quus" example, sketched below). Computers themselves are not, objectively speaking, computers (see Searle's point about observer-relativity). So the whole question of whether computers "understand" is simply nonsensical, not intractable or difficult or vague. Computers do not "host" concepts. They can only manipulate what could be said, by analogy, to be like images, but even then there is, objectively speaking, no fact of the matter that these things are images, or images of what they are said to represent. There is nothing about the representation of the number 2 that makes it about the number 2 apart from the conventions human observers hold in their own heads.
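To make the quus point concrete, here is a minimal Python sketch (the names `plus` and `quus` are mine, chosen only for illustration, not Kripke's notation): over any finite range of cases a machine has actually been run on, addition and quaddition agree, so nothing in the machine's behavior alone settles which function it "really" computes.

```python
# Illustrative sketch of Kripke's "quus" (quaddition).
# The point: over any finite set of checked cases, plus and quus coincide,
# so observed behavior alone does not determine which function is meant.

def plus(x, y):
    return x + y

def quus(x, y):
    # Kripke's definition: agrees with addition when both arguments are
    # below 57, and returns 5 otherwise.
    if x < 57 and y < 57:
        return x + y
    return 5

# Every case checked here (arguments below 57) agrees:
assert all(plus(x, y) == quus(x, y)
           for x in range(57) for y in range(57))
```

Nothing in the bit patterns privileges `plus` over `quus`; that the machine is "adding" at all is a matter of how observers interpret its states, which is the observer-relativity point.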
You seem to attack scientism for being narrow, which I find valid. However, if I understand you correctly, you then proceed to offer solutions by appealing to other philosophical interpretations. I would say that those are also limited in their own way.
My original intention was to suggest that, since there are multiple possible interpretations and no good way to decide which is best, we simply do not get to fully understand how thinking works.
Science would typically shy away from the issue by declaring it an ill-defined problem. The Wittgenstein reference seems to do something similar.
Recent advancements in LLMs might give science a new opportunity to make sense of it all. Time will tell.
Who is "we"? The philosophical literature has some very insightful things to say here. The narrow scientistic presumption that the answer must be written in the language of mechanism requires revisiting. Mechanism is intrinsically incapable of accounting for such things as intentionality.
Furthermore, I would not conflate human understanding with animal perception writ large. I claim that one feature that distinguishes human understanding vs. whatever you wish to call the other is the capacity for the abstract.
> we don't have a good formalism around it
This tacitly defines understanding as having a formalism for something. But why would it? What does that even mean here? Is "formalism" the correct term here? Formalism by definition ignores the content of what's formalized in order to render, say, the invariant structure conspicuous. And intentionality is, by definition, nothing but the meaning of the thing denoted.
> AI which truly understands concepts (abstract and concrete)
Concepts are by definition abstract. It isn't a concept if it is concrete. "Triangularity" is a concept, while triangles in the real are concrete objects (the mental picture of a triangle is concrete, but this is an image, not a concept). When I grasp the concept "Triangularity", I can say that I understand what it means to be a triangle. I have a possession, there is intentionality, that I can predicate of concrete instances. I can analyze the concept to determine things like the 180 degree property. Animals, I claim, perceive concrete instances only, as they have no language in the full human sense of the word.
AI has nothing to do with understanding, but simulation. Even addition does not, strictly speaking, objectively occur within computers (see Kripke's "quaddition"/"quus" example). Computers themselves are not objectively speaking computers (see Searle's observer-relativity). So the whole question of whether computers "understand" is simply nonsensical, not intractable or difficult or vague or whatever. Computers do not "host" concepts. They can only manipulate what could be said, by analogy, to be like images, but even then, objectively speaking, there is not fact of the matter that these things are images, or images of what is said to be represented. There is nothing about the representation of the number 2 that makes it about the number 2 apart from the conventions human observers hold in their own heads.