The writer is speaking from the perspective of the traditional philosophical understanding of a thinking being.
No, LLMs are not thinking beings with internal state. Even these "reasoning" models just prompt the same LLM over and over again, which is not true logic the way you and I reason when we're presented with a new problem.
The key difference is that they have no actual logic; they rely on statistical calculations and heuristics to come up with the next set of words. That works surprisingly well when the model has seen most of the text ever written, but there will always be new scenarios and new ideas it has not encountered, and no, they are not better than a human at those tasks and likely never will be.
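To make the claim concrete, here is a minimal, purely illustrative sketch (not how any real LLM is implemented): next-word choice as a statistical lookup over observed text, and "reasoning" as an outer loop that feeds the same generator its own output. All names here (toy_corpus, next_token, generate, reason) are hypothetical.

```python
import random

# Toy stand-in for "learned statistics": count which word tends to follow which.
toy_corpus = "the cat sat on the mat the cat ate the fish".split()
follows = {}
for prev, nxt in zip(toy_corpus, toy_corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def next_token(tokens):
    """Pick the next word by sampling observed continuations -- frequency, not logic."""
    candidates = follows.get(tokens[-1], toy_corpus)
    return random.choice(candidates)

def generate(prompt, steps=5):
    """Autoregressive generation: append one statistically chosen word at a time."""
    tokens = prompt.split()
    for _ in range(steps):
        tokens.append(next_token(tokens))
    return " ".join(tokens)

def reason(question, rounds=3):
    """'Reasoning' here is just re-prompting the same generator with its own output."""
    context = question
    for _ in range(rounds):
        context = generate(context)
    return context

print(reason("the cat"))
```

The point of the sketch is that nothing in either loop inspects the problem; it only extends text in statistically plausible ways, which is the gap the comment is pointing at.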
What is happening, however, is that our understanding of intelligence is being expanded; our belief that we would be the only intelligent beings ever is under threat, and that makes us fundamentally anxious.