
> And Chomskian linguistics have more or less collapsed with the huge success of statistical methods.

People have been saying this for decades. But the hype around large language models is finally starting to wane, and I wouldn't be surprised if in another 10 years we hear, yet again, that generative linguistics has "finally" been disproved.

Also, how many R's are in "racecar"?



Counterpoint: What progress has generative linguistics made in the same amount of time that deep learning has been around? It sure doesn't seem to be working well.

Also, the racecar example trips LLMs up because of tokenization: they don't actually see the raw letters of the text they read. It would be like me asking you to read this sentence in your head and then tell me which syllable would have the lowest pitch when spoken aloud. Maybe you could do it, but it would take effort because it doesn't align with the way you're interpreting the input.
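
To make that concrete, here's a rough sketch (assuming the tiktoken library and its cl100k_base encoding; other tokenizers will split the word differently):

    # Illustration: the model receives integer token IDs, not individual characters.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")   # a GPT-4-style BPE encoding
    tokens = enc.encode("racecar")
    print(tokens)                                # a short list of integer IDs
    print([enc.decode([t]) for t in tokens])     # multi-character chunks, not letters

    # Counting letters is trivial once you operate on characters directly:
    print("racecar".count("r"))                  # 3

The point is that "how many R's" asks about a level of representation (characters) that the model never directly observes.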


>What progress has generative linguistics made in the same amount of time that deep learning has been around? It sure doesn't seem to be working well.

Working well for what? Generative linguistics has certainly made progress in the past couple of decades, but it's not trying to solve engineering problems. If you think that generative linguistics and deep learning models are somehow competitors, you've probably misunderstood the former.


Also, being able to count the number of letters in a word is not required for language capability, at least in the Chomskian sense.



