That's currently the same issue it has for programming. It takes me longer to transcribe and fact-check the output than it would take to just write it from scratch. Writing the code stops being the bottleneck in software pretty quickly; instead, translating and merging business logic into existing software takes the most time.
For me, even if it never made a mistake, it would still be more annoying to ask it questions over and over to refine the output. Why would I use English (or whatever natural language), which is ambiguous, instead of just writing the code myself? I don't know; I'm not buying the hype anymore. It's a cool tool, but the idea that it will replace writing code outright seems crazy to me.
Because most engineers document, in English and in great detail, the designs of their systems. So the English already exists and will continue to. Taking that English and turning it into code will be the job of AI, not a programmer.
I do use ChatGPT very often for Python and SQL right now, but you really have to know what you are doing. The errors are subtle, and more often than not the code won't run as given.
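To make that concrete, here's a minimal hypothetical Python example of the kind of subtle error I mean (not actual ChatGPT output): it parses fine and looks plausible, but fails the moment you run it.

    # Hypothetical illustration of a subtle LLM-style error: the code is
    # syntactically valid and looks right, but the format string expects
    # seconds that the input string doesn't have, so it raises at runtime.
    from datetime import datetime

    raw = "2023-04-01 12:30"
    ts = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S")  # ValueError here
    print(ts)

You only catch that class of mistake if you already know the library well enough to spot the mismatch, which is the point.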
Yep, so I definitely wouldn't use it for code like that at all. I'd probably just tell it to give me more of an overview of the code.
But keep in mind that in the future, ChatGPT will likely be hooked up to a runtime system and compiler, so it can detect and fix its own errors. In fact, plugins already allow some of this.
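As a rough sketch of what that feedback loop might look like (ask_model here is a hypothetical stand-in for whatever API call produces code from a prompt, not a real ChatGPT or plugin interface):

    # Sketch of a generate-run-retry loop: execute the model's code,
    # and if it fails, feed the traceback back into the next prompt.
    import subprocess
    import sys
    import tempfile

    def ask_model(prompt: str) -> str:
        raise NotImplementedError("stand-in for an LLM API call")

    def generate_until_it_runs(task: str, max_attempts: int = 5) -> str:
        prompt = task
        for _ in range(max_attempts):
            code = ask_model(prompt)
            with tempfile.NamedTemporaryFile(
                    "w", suffix=".py", delete=False) as f:
                f.write(code)
                path = f.name
            result = subprocess.run([sys.executable, path],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return code  # ran cleanly; hand it back
            # Return the error to the model so it can fix its own output.
            prompt = (f"{task}\n\nYour last attempt failed with:\n"
                      f"{result.stderr}")
        raise RuntimeError("model never produced runnable code")

The key design point is that the traceback goes back into the prompt, so the model iterates the same way a human developer would.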