>Am I really dealing in semantics? Or have I just learned the graph-like latent representation for (statistical or reliable) invariant relationships in a bunch of syntax?
This and the rest of the comment are philosophical skepticism, and Kant blew this apart back when Hume's "bundle of experience" model of human subjects was considered an open problem in epistemology.
>All our simple ideas in their first appearance are deriv’d from simple impressions, which are correspondent to them, and which they exactly represent.
>...he is so confident the correspondence holds that he challenges anyone who doubts it to produce an example of a simple impression without a corresponding simple idea, or a simple idea without a corresponding simple impression...
In other words, Hume thought that your ideas about things are a result, and only a result, of your impression of the thing. Knowledge must be, then, a posteriori. Indeed he reduces our "selves" into "bundles", which is to say nothing more than an accumulation of the various impressions we've received while living.
The problem is that this raises the question: how do we come up with novel thoughts that are not just reproductions of things we have observed? Knowledge that is informative about the world yet not derived from experience is what philosophers call synthetic a priori knowledge.
You can see at this point that this question is very similar to the one posed to AI right now. If it's nothing more than a bundle of information related to impressions it has received (by way of either text or image corpora), then can it really ever create anything novel that doesn't draw directly from an impression?
Kant delivered a decisive response to this in his Critique of Pure Reason and his Prolegomena to Any Future Metaphysics. He focused initially on Hume's most forceful skeptical argument, the one about causality. Hume claimed that when we expect an effect from a cause, it's not because we truly understand how the effect proceeds from the cause, but because we've observed the pairing often enough that we expect it out of habit. Kant addresses this and then extends his response to synthetic a priori judgments generally.
He does so by dispensing with the idea that we can truly know everything there is to know about things in themselves. We schematize an understanding of objects after observing them over time, and our faculty of reason generalizes that collection of impressions into concepts in the mind. From there, we can apply the categories of the understanding without any need for empirical evidence, and thereby produce synthetic a priori judgments, including the expectation of a specific effect from a specific cause. This is opposed to Hume, who said:
>It is far better, Hume concludes, to rely on “the ordinary wisdom of nature”, which ensures that we form beliefs “by some instinct or mechanical tendency”, rather than trusting it to “the fallacious deductions of our reason”
Hume's position was something of a dead end, and in many people's estimation Kant rescued philosophy (particularly epistemology and metaphysics) from it.
The big difference between us and LLMs is that (right now) LLMs don't have a thinking component that transcends their empirical data modeling. It is conceivable that someone might produce an "AI" system that uses the LLM as a sensory apparatus and combines it with some kind of pure logical reasoning system (ironically, the kind of thing that old-school AI focused on) to give it that reasoning power. Without something applying reasoning, all we have are statistical patterns that we hope will give the best answer, but that can't guarantee anything.
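For what it's worth, the "LLM as sensory apparatus plus a reasoning layer" idea can be caricatured in a few lines of code. This is only a toy sketch of the pattern, not anyone's actual system: every function name here is hypothetical, the "LLM" is a stub that emits plausible-looking candidates, and the "reasoner" is a trivial arithmetic checker standing in for a deductive component.

```python
# Toy sketch of the neuro-symbolic pattern: a statistical "perception"
# component proposes candidates, and a symbolic checker accepts only those
# it can verify deductively. All names are hypothetical illustrations.

def llm_propose(question):
    """Stand-in for an LLM: returns plausible-looking candidate answers.
    A real model would rank these by learned statistical patterns."""
    if question == "2 + 2":
        return ["4", "5", "22"]  # plausible tokens, not guaranteed correct
    return []

def symbolic_verify(question, candidate):
    """Stand-in for a pure reasoning component: checks a candidate
    deductively (here, by actually evaluating the arithmetic) rather
    than statistically."""
    left, _, right = question.partition(" + ")
    try:
        return int(candidate) == int(left) + int(right)
    except ValueError:
        return False

def answer(question):
    """Pipeline: perception proposes, reason disposes. Any answer that
    comes back has been verified, not merely predicted."""
    for candidate in llm_propose(question):
        if symbolic_verify(question, candidate):
            return candidate
    return None

print(answer("2 + 2"))
```

The point of the toy is the division of labor: the statistical component supplies hypotheses it cannot justify, and the symbolic component supplies justification it could never have generated on its own.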