When albertgoeswoof reasons about a puzzle, he models the actual actions in his head. He uses logic and visualization to arrive at the solution, not language. He then uses language to output the solution, or to say he doesn't know if he fails.
When LLMs are presented with a problem, they search for a solution based on the language model. And when they can't find a solution, there's always a match for something that merely looks like one.
I'm reminded of the interview where a researcher asks a fireman how he makes decisions under pressure, and the fireman answers that he never makes any decisions.
Or, in other words, people can use implicit logic to solve puzzles. Similarly, LLMs can be implicitly steered into acting as logic models just by asking them to solve a puzzle, insofar as that logic model fits in their weights. Transformers are very flexible that way.