Let's say you want to make a flying car that can also double as a submarine.
Nobody has done this yet, so no information exists on how to do it. An LLM may give you some generic answers, drawn from its training data, about which engineering and analysis tasks to do, but it won't be able to give you a complete, detailed design for one.
A model that can actually solve problems would be able to design you one.
I literally just gave you an example of one it can't solve, despite its vast knowledge of mechanical and aeronautical subjects. All the examples it gives are obviously in its training set.
Here is a better example: none of these models can design a better ML accelerator, despite having a wide array of electrical and computer engineering knowledge. If they could, OpenAI would pretty much be printing its own chips the way Google does.
In your previous comment you claimed that LLMs can only solve problems that are in their training set (e.g. "all we are gonna get is better and better googles"). But that's not true, as I pointed out.
Now your argument seems to be that they can't solve all problems or, more charitably, that they can't solve highly complex problems. That's true, but by that standard, the vast majority of humans can't reason either.
Yes, the reasoning capacities of current LLMs are limited, but it's incorrect to pretend they can't reason at all.
If an LLM is trained on Python coding, and trained separately on plain-English descriptions of how to decode ciphers, it can statistically interpolate between the two. That is a form of problem solving, but it's not reasoning.
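To be concrete, here's a minimal sketch of the kind of interpolated output I mean: the Python syntax comes from one part of the training data, the Caesar-shift recipe from plain-English cipher descriptions in another. (The function name and test string are my own illustration, not anything a specific model produced.)

```python
# Sketch of "interpolated" output: a Caesar-cipher decoder combining
# Python knowledge with an English description of the decoding recipe.
def caesar_decode(ciphertext: str, shift: int) -> str:
    """Shift each letter back by `shift` positions, wrapping around the alphabet."""
    result = []
    for ch in ciphertext:
        if ch.isalpha():
            # Pick the right alphabet base so case is preserved.
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base - shift) % 26 + base))
        else:
            # Leave punctuation and spaces untouched.
            result.append(ch)
    return "".join(result)

print(caesar_decode("Khoor, zruog!", 3))  # Hello, world!
```

Stitching these two sources together is impressive pattern-matching, but it's still recombination of things the model has already seen.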
This is why, when you ask it a fairly complex problem like how to make a bicycle using a CNC machine with limited workspace, it will give you generic answers, because it's just statistically looking at a knowledge graph.
A human can reason because, when there is a gray area in the knowledge graph, they can effectively expand it. If I were given the same task, I would know that I have to learn things like CAD design, CNC code generation, parametric modeling, and structural analysis, and I could do all of that without being prompted to.
You will know AI models have started to reason when they begin asking questions without ever being explicitly told to ask questions, whether through prompting or training.