I just asked ChatGPT who made the first cardboard box, and it too believes the first story on this list: "The first cardboard box was invented by Sir Malcolm Thornhill in England in 1817. He created a machine that could make sheets of paper and then fold them into boxes."
No, it doesn't. It doesn't believe anything, it's just generating a story for you that sounds credible enough for you to go "yes, this is what an answer would look like". That's its job. That's its only job. Literally everything it says is fabricated, and if it happens to be the truth, that's a coincidence.
It effectively "believes" some things, as it will consistently emit certain statements in response to certain types of queries. It considers that information part of a good response.
There is information stored in its model. That information might not be correct.
Using anthropomorphic terms for ChatGPT does more harm than good. There is no belief, there are no hallucinations, there is no intelligence, and pretending that people understand that those terms don't apply is just willfully ignoring the fact that they really don't. People really are that gullible when it comes to things they don't understand, even if you aren't.
If you have to use anthropomorphic terms, then there's only one that applies: it lies.
Nothing it tells you is in any way, shape, or form "true"; it's only ever plausible, and if it happens to be true, that's still just a coincidence, because it has no concept of validating its output against reality. It's just an autocompleter: algorithmically incredibly simple software written exclusively for the purpose of "finishing a story, given a prompt". The fact that the prompt can be phrased as a question makes zero difference to that, and that's the part ChatGPT leaned into hard.
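To make the "autocompleter" point concrete, here's a deliberately toy sketch in Python (the trigram table, its probabilities, and the word choices are all invented for illustration; a real LLM conditions on a much longer context using billions of learned weights, but the control flow is the same): pick a plausible next token, append it, repeat. Note that nowhere is there a step that checks the output against reality.

```python
import random

# Toy illustration: "finishing a story, given a prompt" is just repeated next-token picking.
# This probability table is made up for the example; a real LLM has learned billions of
# such conditional probabilities from training text, and nothing else.
next_token_probs = {
    ("the", "first", "cardboard"): {"box": 0.92, "factory": 0.05, "sheet": 0.03},
    ("first", "cardboard", "box"): {"was": 0.88, "appeared": 0.12},
    ("cardboard", "box", "was"): {"invented": 0.75, "made": 0.25},
}

def complete(prompt_tokens, max_new_tokens=3):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        context = tuple(tokens[-3:])                      # condition only on recent tokens
        dist = next_token_probs.get(context)
        if dist is None:                                  # nothing "plausible" left to emit
            break
        words, weights = zip(*dist.items())
        tokens.append(random.choices(words, weights)[0])  # pick a plausible next word
    return " ".join(tokens)

print(complete(["the", "first", "cardboard"]))
# likely output: "the first cardboard box was invented"
# plausible-sounding, produced without any check against reality
```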
Using anthropomorphic terms is a bit cringy at best, but in general, they actively interfere with both people's understanding of what these things are, and their ability to talk about them based on, ironically, a true understanding of them, rather than people's hallucinations about what this current generation of LLM autocompleters is.
> There is no belief, there are no hallucinations, there is no intelligence, and pretending that people understand that those terms don't apply is just willfully ignoring the fact that they really don't. People really are that gullible when it comes to things they don't understand, even if you aren't.
No, you're pretending something is settled as "incorrect" when it's not, trying to unilaterally force one viewpoint on the issue. "It's just an autocomplete and cannot believe anything" is not something agreed upon by all experts/philosophers of LLMs/consciousness. Some "behaviourist" philosopher might easily agree that ChatGPT does indeed believe it, for example.
Cite studies (real ones, not popular-science opinion pieces) or I call BS on this claim. The academic world is more aware than anyone of how little this is "AI" and how far from even a whiff of AGI this is. Even if the question of "can we argue that the outward appearance of conversational intelligence implies actual intelligence" is one that by definition should be discussed in a philosophy-of-science context, now more than ever before, and can get you "easy" funding/grant money to publish papers on.
Here's[1] John McCarthy[2], a noted computer scientist, ascribing beliefs to systems much simpler than ChatGPT. Searle, of Chinese Room fame, talks about it in his lecture/program here.[3] ChatGPT is much more capable still, and finds many more academic defenders.
> But quite often in the AI literature the distinction is blurred in ways that would in the long run prove disastrous to the claim that AI is a cognitive inquiry. McCarthy, for example, writes, "Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance" (McCarthy 1979).
Here's[4] an article talking about ChatGPT specifically, asserting that philosophers like Gilbert Ryle[5], who coined the phrase "ghost in the machine", would agree that "ChatGPT has beliefs":
> What would Ryle or Skinner make of a system like ChatGPT — and the claim that it is mere pattern recognition, with no true understanding?
> It is able not just to respond to questions but to respond in the way you’d expect if it did indeed understand what was being asked. And, to take the viewpoint of Ryle, it genuinely does understand — perhaps not as adroitly as a person, but with exactly the kind of true intelligence we attribute to one.
Having a degree in AI, I'm well familiar with those names, and also aware that any publications from before the pivotal Google transformer paper ("Attention Is All You Need") are not particularly relevant to the current generation of "what people now call AI".
As for the Chinese Room argument: that's literally the argument against programs having beliefs. It's Searle's argument for demonstrating that even if a black-box algorithmic system outwardly seems to be intelligent, it demonstrably has nothing to do with intelligence.
An MP3 player repeating something does not believe it.
There is no way, even at any level of abstraction and squinting just right, to use a term like that for what ChatGPT is or does. It's fancy auto-complete. Literally matching and mashing up patterns from other writings by probability. That's it. Auto-complete is neither understanding nor believing.
Basically, you reject using any normal words for intelligence when discussing an AI, even when it's a metaphor, or clearly labeled as not being literally true.
I'm not sure how that's productive, but feel free.
I reject that it's productive to use the wrong words for things, especially when a lot of people already have practically no grasp of them and are already primed to have the wrong understanding, and using the wrong words only facilitates that wrong understanding. I'm quite sure how that's counter-productive, and please do not feel free.
To add, put it this way: it's not pedantry. I really do not need my mother-in-law thinking that she is interacting with an entity when it turns up in some interface she uses. And she does, or is about 1mm from doing so.
That's the harm. "Quacks like a duck" is not good enough to just let people operate as though it's a duck, and they are, and it's not OK and it's not harmless and it's not their own fault; it's yours and mine.
In what sense do you use the word "fabricated"? In the sense that it invented a falsehood with an intent to deceive, or in that it says things based upon prior exposure?
Fabricated as in manufactured, in the same way that a hallucination fabricates/manufactures information. There's no intent to deceive because hallucinations have no capacity for intent.