I fully agree with this take, and I think a lot of people at this point are just being uncharitable to those using AI productively, and unwilling to admit their own faults when they fail to see it.
How can anybody who has managed or worked with inexperienced engineers, or StackOverflow-driven developers, not see how helpful AI is for delegating tasks with that particular flavor of content and scope? And how can anybody currently working with those kinds of developers not see how much it's helping them improve the quality of their work? (And yes, it's extremely frustrating to see AI used poorly, or to see people submit code for review that they did not review or even understand themselves. But the fact that that's even possible, and that it oftentimes still works, really tells you something... Given the right feedback, most offenders do eventually understand why they ought not to do this, I think.)
Even for more experienced engineers, AI can really lower the barrier to starting and completing the kind of "unimportant / low-priority, uninteresting" work that requires a lot of context and knowledge to get done but isn't a good use of their time. Say my codebase doesn't have any docstrings or unit tests: I can feed it into an LLM, immediately get mediocre versions of both, and just edit them into being good enough to merge. Or say I have an annoying Unicode parsing bug, a problem with my regex, or something similar that I can reproduce in tests or a dev environment: a lot of the time I can give the LLM the part of the code I suspect the bug resides in, describe the symptoms, ask it to fix it, and then validate the fix.
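To make that last example concrete, here's a minimal sketch (in Python, with hypothetical names, not from any real codebase) of the flavor of bug I mean, the kind an LLM will usually fix correctly if you hand it the failing function and a description of the symptom:

    # Hypothetical example: truncating a string to a byte budget for
    # some UTF-8 field. The naive version crashes when the cut lands
    # in the middle of a multibyte character.

    def truncate_utf8(text: str, max_bytes: int) -> str:
        """Truncate text so its UTF-8 encoding fits within max_bytes."""
        encoded = text.encode("utf-8")
        if len(encoded) <= max_bytes:
            return text
        # Buggy version: encoded[:max_bytes].decode("utf-8") raises
        # UnicodeDecodeError on inputs like "héllo" with max_bytes=2.
        # Typical LLM-suggested fix: drop the partial trailing character.
        return encoded[:max_bytes].decode("utf-8", errors="ignore")

    assert truncate_utf8("héllo", 2) == "h"   # "é" is 2 bytes; cut falls mid-char
    assert truncate_utf8("héllo", 3) == "hé"

Trivial once you see it, but exactly the kind of thing that eats an hour if you don't, and that an LLM plus a reproducing test can knock out in minutes.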
To be honest and charitable to those who do struggle to use AI this way: it's most likely a theory-of-mind issue (they don't have a good model of what the AI does and doesn't know, or of what context it needs to understand them and give them what they want), and that could very well be influenced by being somewhere on the autism spectrum or just having difficulty with social skills. Since AI is essentially a fresh wipe of the same stranger every time you start a conversation with it (unless you use "memory" features, which are designed for consumer chat rather than coding), it never gets to know you or learn your quirks the way people who interact with you regularly do. So I suppose, to a certain extent, it requires them to "mask", or to interact in a way they're unfamiliar with when dealing with computer tools.
A lot of people also seem, for whatever reason, to have become emotionally/personally invested in the "AI is stupid" position, to the point that they flat out refuse to believe there is value in being able to paste some little compiler error or stack trace into a textbox and, 80% of the time, get a custom fix in 10% of the time it would have taken to find the same thing via Google search + StackOverflow.