And just like webdev, each of those was done on a different platform and requires arcane incantations and 5 hours of doc perusing to make it work on your system.
Maybe it's because of how I use it, but the code ChatGPT gives me has always been super helpful and 99% correct. But we have a policy at work not to use it for work product, so I have to spend time changing enough of it that it's different, and I'm never copy/pasting anything. I make enough changes to the structure and variable names that it can't be considered pasting company data into GPT, ask my question(s), see what comes back, then refactor/type it manually into my IDE and test. I'd say one out of every 8-9 times I get something objectively wrong - a method that doesn't exist, something not compiling, etc. But it's faster than using Google/DDG, especially with some prompting so that it just spits back code and not 5th-grade-level explanatory paragraphs before and after. And well over half the time it does exactly what I need, or comes sufficiently close that my initial refactoring step gets me the rest of the way.
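For what it's worth, the sanitizing step is pretty mechanical. A minimal sketch of the kind of renaming I mean, in TypeScript (every name here is invented for illustration, not real company code):

    // Hypothetical in-house code I would never paste verbatim:
    interface AcmeEmployee { grossCents: number; stateCode: string; }
    function acmePayrollTax(e: AcmeEmployee): number {
      return e.stateCode === 'CA' ? e.grossCents * 0.0725 : e.grossCents * 0.05;
    }

    // The structurally identical version that actually goes into the prompt:
    // generic names, the business rule swapped for placeholder rates.
    interface LineItem { amount: number; region: string; }
    function computeFee(item: LineItem): number {
      return item.region === 'A' ? item.amount * 0.07 : item.amount * 0.05;
    }

Whatever comes back gets the same mapping applied in reverse as I retype it into the IDE.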
Would you say that this satisfies the spirit of the company policy? Or is it a bit of a hack to get around it?
I ask because we are about to introduce a similar policy at work. We can see the advantages of the tool, but equally, we can't have company data held in OpenAI's systems.
The policy is not to send any "sensitive company data" into ChatGPT, which I 100% agree with. How we implement a given Vue component or a particular API isn't sensitive or particularly novel, so if I strip the business logic out, I honestly believe I'm complying with the spirit of the policy.
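To make that concrete: by the time a question about one of our Vue components reaches ChatGPT, it looks something like this generic sketch (component and names invented, nothing domain-specific left):

    <!-- Generic list-filtering component: only the Vue mechanics go in
         the prompt; the real domain objects and filter rules stay in-house. -->
    <script setup lang="ts">
    import { ref, computed } from 'vue'

    const query = ref('')
    const items = ref<string[]>(['alpha', 'beta', 'gamma'])
    const filtered = computed(() =>
      items.value.filter(i => i.includes(query.value))
    )
    </script>

    <template>
      <input v-model="query" placeholder="filter..." />
      <ul>
        <li v-for="item in filtered" :key="item">{{ item }}</li>
      </ul>
    </template>

A question like "why doesn't this computed re-run when items changes?" is answerable from that alone, with zero business logic exposed.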
At some point someone will make a service where you can let the AI take over your computer directly. Easier that way! Curling straight to shell, taken to the next level.