LLMs can be cajoled into producing algorithms.
In fact, this is essentially what the Chain-of-Thought technique exploits.
LLMs give better results when asked for a series of steps to produce a result than when just asked for the result.
Whether LLMs “think” is an open question, and answering it requires a definition of thinking :-)
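
For what it's worth, the whole trick is in how the prompt is phrased. Here's a rough sketch in Python; the `complete` function is just a placeholder for whatever LLM API you actually call, not any particular vendor's client:

```python
# Rough sketch of direct vs. chain-of-thought prompting.
# `complete` is a placeholder, not a real library call.

def complete(prompt: str) -> str:
    """Stand-in for a real LLM call; swap in your provider's client here."""
    print(f"--- prompt sent to model ---\n{prompt}\n")
    return "(model output would appear here)"

question = "A shop sells pens at 3 for $2. How much do 12 pens cost?"

# Just ask for the result.
direct = complete(question + "\nGive only the final amount.")

# Ask for the series of steps first, then the result.
stepwise = complete(
    question
    + "\nThink step by step: write out the intermediate steps, then give the final amount."
)
```

Same question both times; the second prompt tends to produce the intermediate working and, with it, a more reliable final answer.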