For true reasoning, you really need to give the circuit the ability to intentionally decide to do something different, not just make a random selection or hallucinate; otherwise we are just saying that state machines "reason" for the sake of using an anthropomorphic word.
This restriction makes it impossible to determine whether anything is reasoning. An LLM may well make intentional decisions; I have as much evidence for that as I have for anybody else doing so, i.e., zilch. I'm not even sure that I make intentional decisions; I can only say that it feels like I do. But free will isn't really compatible with my model of physical reality.