> And Chomskian linguistics have more or less collapsed with the huge success of statistical methods.
People have been saying this for decades. But the hype around large language models is finally starting to wane, and I wouldn't be surprised if in another 10 years we hear that we've "finally disproved generative linguistics" (again).
Counterpoint: What progress has generative linguistics made in the same amount of time that deep learning has been around? It sure doesn't seem to be working well.
Also, the racecar example is an artifact of tokenization in LLMs - they don't actually see the raw letters of the text they read. It would be like me asking you to read this sentence in your head and then tell me which syllable would have the lowest pitch when spoken aloud. Maybe you could do it, but it would take effort because it doesn't align with the way you're interpreting the input.
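To make that concrete, here's a minimal sketch using OpenAI's tiktoken library (assuming it's installed; the exact split depends on which tokenizer a given model uses):

    import tiktoken

    # cl100k_base is the BPE encoding used by several OpenAI models
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("racecar")

    # The model receives integer token IDs, not characters. Decoding each
    # token back to bytes shows the word typically arrives as multi-character
    # chunks, so "count the letters" isn't a natural operation over its input.
    print(tokens)
    print([enc.decode_single_token_bytes(t) for t in tokens])

Run it and you'll see the word come through as a couple of chunks rather than seven letters, which is why letter-counting questions trip these models up.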
> What progress has generative linguistics made in the same amount of time that deep learning has been around? It sure doesn't seem to be working well.
Working well for what? Generative linguistics has certainly made progress in the past couple of decades, but it's not trying to solve engineering problems. If you think that generative linguistics and deep learning models are somehow competitors, you've probably misunderstood the former.
Also, how many R's are in "racecar"?