Cite studies (real ones, not popular-science opinion pieces) or I call BS on this claim. The academic world is more aware than anyone how little of this is "AI" and how far it is from even a whiff of AGI. That said, the question of whether the outward appearance of conversational intelligence implies actual intelligence is one that by definition belongs in a philosophy-of-science context, now more than ever before, and can get you "easy" funding/grant money to publish papers on.
Here's[1] John McCarthy[2], a noted computer scientist, ascribing beliefs to systems much simpler than ChatGPT. Searle, of Chinese Room fame, discusses it in his lecture/program here.[3] ChatGPT is far more capable and finds many more academic defenders.
> But quite often in the AI literature the distinction is blurred in ways that would in the long run prove disastrous to the claim that AI is a cognitive inquiry. McCarthy, for example, writes, "Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance" (McCarthy 1979).
Here's[4] an article discussing ChatGPT specifically, asserting that philosophers like Gilbert Ryle[5], who coined the phrase "ghost in the machine," would agree that "ChatGPT has beliefs":
> What would Ryle or Skinner make of a system like ChatGPT — and the claim that it is mere pattern recognition, with no true understanding?
> It is able not just to respond to questions but to respond in the way you’d expect if it did indeed understand what was being asked. And, to take the viewpoint of Ryle, it genuinely does understand — perhaps not as adroitly as a person, but with exactly the kind of true intelligence we attribute to one.
Having a degree in AI, I'm well familiar with those names, and also aware that publications from before the pivotal Google paper are not particularly relevant to the current generation of what people now call "AI".
As for the Chinese Room argument: that's literally the argument against programs having beliefs. It's Searle's argument for demonstrating that even if a black-box algorithmic system outwardly seems intelligent, it demonstrably has nothing to do with intelligence.