It depends on how those symbols are encoded. Techniques like attention, and systems like transformers built on top of them, often produce highly interpretable execution traces simply because their patterns of activity are very revealing of how they go about solving the task. It's harder to interrogate their learned weights in any free-standing way, of course.
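To make the "execution trace" point concrete, here's a minimal sketch in PyTorch (toy dimensions, not any particular model): the weight matrix an attention layer returns alongside its output is itself a readable record of which inputs were consulted at each step.

```python
import torch
import torch.nn as nn

embed_dim, num_heads, seq_len = 16, 4, 5
attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

x = torch.randn(1, seq_len, embed_dim)          # one toy input sequence
out, weights = attn(x, x, x, need_weights=True)  # self-attention

# weights has shape (batch, seq_len, seq_len): row i is a distribution
# over the positions that position i attended to -- an execution trace
# you can read off directly, independent of the learned weights.
print(weights[0].round(decimals=2))
```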
But the neuro-symbolic concept learning paper I gave already shows the potential translucency of these kinds of hybrid systems: the linguistic interface (it's a VQA task) lets one simply look up the feature vectors and programs associated with particular phrases or English nouns. Similarly, the scene is parsed in an explicitly interpretable way, with bounding boxes for the various objects on which reasoning will commence. This 'bridge' theme between natural language and the underlying task space is really powerful, and it probably makes sense to figure out how to include such bridges even in systems that have nothing to do with natural language.
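The lookup idea is simple enough to sketch. This is a hypothetical toy, not the paper's actual code: each English word maps to a learned concept vector, so asking "how red is object 2?" reduces to an inspectable similarity check against that object's features.

```python
import numpy as np

rng = np.random.default_rng(0)
concept_vectors = {              # stand-ins for learned concept embeddings
    "red": rng.normal(size=8),
    "cube": rng.normal(size=8),
}
object_features = rng.normal(size=(3, 8))  # 3 detected objects (e.g. from bounding boxes)

def concept_score(word, features):
    """Cosine similarity between a named concept and object feature vectors."""
    c = concept_vectors[word]
    return features @ c / (np.linalg.norm(features, axis=-1) * np.linalg.norm(c))

# Because concepts are indexed by words, every intermediate step of the
# reasoning program is something you can query in plain English terms.
print(concept_score("red", object_features))
```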
https://arxiv.org/abs/1901.11390 contains another great example of how interpretable such models can be, especially if they are generative. Take a look at those segmentations!
Lastly, https://arxiv.org/abs/1604.00289 lays out this vision in a lot more detail.