As best as I understand it, an LLM's output is directly related to the state of the network as a result of the context. Thinking is the way we use intermediate predictions to help steer the network toward what is expected to be a better result through learned patterns. Reasoning is a set of strategies for shaping that process to produce even more accurate output, generally having a cumulative effect on the accuracy of the predictions.
It doesn’t? Reasoning is not analysis; it is the application of learned patterns to a given set of parameters, which results in higher accuracy.
Permit my likely inaccurate illustration:
You’re pretty sure 2 + 2 is 4, but there are several questions you could ask: are any of the numbers negative, are they decimals, were any numbers left out? Most of those questions are things you’ve learned to ask automatically, without thinking about it, because you know they’re important. But because the answer matters, you check your work by writing out the equation. Then, maybe you verify it with more math: 4 ÷ 2 = 2. Now you’re more confident the answer is right.
An LLM doesn’t understand math per se. If you type “2 + 2 =”, the model isn’t doing math… it’s predicting that “4” is the next most likely token based on patterns in its training data.
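To make that concrete, here’s a toy sketch of what that prediction step amounts to. The scores below are made up, not from any real model: the point is only that the context produces scores over candidate next tokens, and “4” simply ends up with the highest probability.

```python
import math

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores a model might assign to the token following "2 + 2 =".
# No arithmetic happens here; " 4" just scores highest because patterns like
# this appear over and over in the training data.
logits = {" 4": 9.1, " four": 4.0, " 5": 3.2, " 22": 1.5}

probs = softmax(logits)
print(max(probs, key=probs.get))  # -> " 4"
```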
“Thinking” in an LLM is like the model shifting into a mode where it starts generating a list of question-and-answer pairs. These are, again, the next most likely tokens given the whole context so far. “Reasoning” sits above that: a controlling pattern that steers those question-and-answer sequences, injecting logic to help guide the model toward a hopefully more correct next token.
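If it helps, here’s a rough sketch of how those two layers could be wired around an ordinary completion call. `generate()` is just a stand-in for whatever model API you have, and the two-pass prompt structure is illustrative, not how any particular model actually implements its thinking mode.

```python
def generate(prompt: str) -> str:
    """Stand-in for a real LLM completion call. Returns canned text so the
    sketch runs end to end; swap in an actual model API here."""
    if "Verify" in prompt:
        return "Check: 4 / 2 = 2, which is consistent. Final answer: 4"
    return "Q: Are any of the numbers negative? A: No.\nQ: Are they decimals? A: No."

def answer_with_thinking(question: str) -> str:
    # "Thinking": nudge the model into a mode where it first writes out
    # intermediate question-and-answer pairs. Those tokens become part of the
    # context that conditions everything predicted afterwards.
    thinking_prompt = (
        f"Question: {question}\n"
        "Before answering, list the checks you would make as Q/A pairs:\n"
    )
    scratchpad = generate(thinking_prompt)

    # "Reasoning": a controlling pattern layered on top of that -- here, a
    # second pass that asks the model to verify the scratchpad before
    # committing to a final answer.
    final_prompt = (
        f"{thinking_prompt}{scratchpad}\n"
        "Verify the checks above, then give the final answer:\n"
    )
    return generate(final_prompt)

print(answer_with_thinking("What is 2 + 2?"))
```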