People have been saying that about code for several decades now. I have on my shelf a book published in 1980 discussing this very theme.
Data compression on a massive scale, with NLP search on top of it, will not be the thing that finally does it. Code is logically constrained so that it can be load-bearing.
If NLP coding is ever solved, that might change. But LLMs did not solve NLP; they improved massively on the state of the art, yet they are still riddled with glaring issues, like devolving into nonsense, often and in unpredictable ways.
All LLM-as-AI hype hinges on some imaginary version of it that is just around the corner and solves the current limitations. It's not about what is there, but about what ought to be there in the minds of people who think it's the silver bullet.
What you're missing is that we now understand things about the functional nature of language itself that nobody had the faintest clue about in 1980.
I was big into writing text adventures in those days, where the central problem was how to get the computer to understand what the user was saying. It was common for simple imperative sentences to be misinterpreted in ways that made players want to scream in frustration. Even the best text parsers — written by the greatest minds in the business — could seem incredibly obtuse, because they were. Now the computer is writing the fucking game.
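To make the frustration concrete, here's a minimal sketch of the sort of two-word verb-noun parser the simpler games of that era used; even the fancier parsers hit the same basic wall. This is Python, and the word lists and canned responses are invented for illustration, not taken from any real game:

    VERBS = {"take", "drop", "open", "look"}
    NOUNS = {"lamp", "door", "key"}

    def parse(command: str) -> str:
        # Strip articles; the parser has no concept of them.
        words = [w for w in command.lower().split() if w not in {"the", "a", "an"}]
        if not words:
            return "I beg your pardon?"
        verb = words[0]
        if verb not in VERBS:
            return f"I don't know the word '{verb}'."
        if len(words) == 1:
            return f"{verb.capitalize()} what?"
        noun = words[1]
        if noun not in NOUNS:
            return f"I don't see any {noun} here."
        return f"You {verb} the {noun}."

    print(parse("pick up the brass lamp"))  # I don't know the word 'pick'.
    print(parse("take the brass lamp"))     # I don't see any brass here.
    print(parse("take lamp"))               # You take the lamp.

The first two commands fail for reasons that are obvious from the code and invisible to the player, and that mismatch is exactly what made those parsers feel so obtuse.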
You had to be there to understand what a big deal this is, I guess. If you went back in time and brought even our current primitive LLM tech with you, you'd be lucky not to be burned at the stake. NLP is indeed 'solved,' in that it is now more of a development problem than a research problem. The Turing Test has been passed: you can't say for sure if you're arguing with a bot right now. That's pretty cool.
But you're right. The NLP interface is cool. It's kinda like VR. It would be awesome if it worked the way we dream it could.
Maybe that's why we keep getting hung up on implementations that make it seem like they've got it figured out. Even when they clearly haven't, we still avert our eyes from the fraying edges and make believe.
Maybe that's why both fields are giant money pits.
... which is my point as well. You can no longer tell if your interlocutor is even human, and yet you're still thinking and talking about books written in the 1980s.
(NLP wasn't much of a money pit back in the 80s, I know that much. If it was, somebody else must've been getting all the money...)