Language is just a surface form; what actually gets encoded inside the model can be very different. Encoding logical reasoning in the weights and activation functions is more than possible.
Models solving IMO-level problems, imo, proves it.
I also think you greatly overestimate human intelligence; the fact that we ended up with general intelligence is little more than a side effect of evolution.
Isn’t this what Tao is addressing in the link, that LLMs haven’t encoded reasoning? Success on the IMO is misleading because the problems are synthetic, with known solutions that are subject to contamination (answers to similar questions are available in textbooks and online).
He also discusses his view on the similarities and differences between mathematics and natural language. Tao says mathematics is driven entirely by efficiency, so presumably using natural language to do mathematics is a step backwards.