Purely symbolic AI has been tried and found wanting. Decades of research by hundreds of extremely bright people explored a large number of promising-looking approaches, to no avail. Intuition tells us thinking is symbolic; the failure of symbolic systems tells us that intuition is most likely wrong.
What is interesting about current LLM-based systems is that they follow exactly the model suggested by this paper, bolting together neural systems with symbol-manipulation systems - to quote the paper, "connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling."
They are clearly also kludges. As you say, they are built on shaky foundations. But the success - at least compared to anything that has gone before - of kludged-together neural/symbolic systems suggests that the approach is more fertile than any of its predecessors. They are also still far, far away from the AGI predicted by their most enthusiastic proponents.
My best guess is that future successful hard-problem-solving systems will combine neurosymbolic processing with formal theorem provers: the neurosymbolic layer constructs candidate proofs, which are then submitted to symbolic provers for verification.
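To make that propose-and-verify shape concrete, here is a minimal Python sketch. Everything in it is a stand-in: the `propose` callable represents whatever neural model generates candidate proofs, and the `prover-check` binary is a hypothetical placeholder for a real checker such as Lean or Coq, not any actual tool's interface.

```python
import subprocess
import tempfile
from typing import Callable, Iterable, Optional

def verify_with_prover(candidate: str) -> bool:
    """Ask a formal checker whether a candidate proof script is valid.

    Shells out to a hypothetical `prover-check` binary that exits 0 on
    success; a real system would invoke Lean, Coq, Isabelle, etc.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".proof", delete=False) as f:
        f.write(candidate)
        path = f.name
    result = subprocess.run(["prover-check", path], capture_output=True)
    return result.returncode == 0

def solve(goal: str,
          propose: Callable[[str, int], Iterable[str]],
          budget: int = 64) -> Optional[str]:
    """Propose-and-verify loop: the neural side generates candidates,
    but the symbolic prover is the sole judge of success."""
    for candidate in propose(goal, budget):
        if verify_with_prover(candidate):
            return candidate  # first formally checked proof wins
    return None  # budget exhausted without a verified proof
```

The division of labour is the point: the neural component can hallucinate freely, because nothing counts as a solution until the symbolic checker accepts it.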
I think there is a misunderstanding - the whole point of my comment was that LLMs lack the sensory input that could link neural activations to real-world objects and thus ground their computations.
I agree with you that purely symbolic AI systems had severe limitations (just think of the expert systems of the past), but progress must come not only from higher-level symbolic provers but also from lower-level integration of sensory data.