I could see it maybe becoming important once GitHub Copilot is embedded in it? You tell it roughly what you want, then adapt the result by hand. But it is kind of funny seeing the parent make such claims so early.
> The problem with speech to code has always been that precise syntax is hard
The biggest problem is that talking sucks. You presumably handle voice input about as well as is possible, yet here we are typing to you anyway, and for good reason. Even if the natural language part is nailed, you may as well type in that natural language.
I imagine it will bring some quality of life improvements to those with certain disabilities, but I don't see why the typical developer would want to go in that direction.
I don't want to disparage their work, because it's really impressive, but "fill null values of column Fare with average column values" is closer to AppleScript than it is to natural language.
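For comparison, that command maps almost word-for-word onto a single library call. A minimal sketch, assuming the target is a pandas DataFrame with a `Fare` column (as in the Titanic dataset; the exact code the tool emits is not shown in the demo):

```python
import pandas as pd

# Hypothetical DataFrame with missing values in the Fare column
df = pd.DataFrame({"Fare": [8.0, None, 16.0, None, 24.0]})

# "fill null values of column Fare with average column values"
# corresponds directly to one pandas call:
df["Fare"] = df["Fare"].fillna(df["Fare"].mean())

print(df["Fare"].tolist())  # the two nulls become the column mean (16.0)
```

The phrasing has to name the column, the operation, and the fill strategy in a fixed order, which is why it reads more like a command language than free-form speech.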
It solves the issue of trying to speak obscure code syntax like “close parenthesis semicolon newline”.
That’s enough to lower the barrier to entry for many people; I don’t know how good it is practically but it’s disingenuous to suggest it’s not offering a novel solution to an old problem.
The example puts it quite well. You kind of know what you want to achieve, step by step, but are not so comfortable with your tools.
Usually this kind of exploratory work involves a lot of Googling and copy-pasting snippets from Stack Overflow without putting too much time into trying to understand things deeply. If you get what you want out of it - great; if not, back to Google.
Why?