If we take out most of frontend work, and the easy backend/Ops tasks where writing the code/config is 99% of the work, i think my overall productivity with the latest gen (basically Opus 4.5) improve by 15-20%. I also am _very_ sure that with the previous generation (Sonnet 4, sonnet 4.5, Codex 5.1), my team overall velocity decreased, even taking into account the frontend and the "easy" tasks. The amount of production bug we had to deal with this year is crazy. To much code is generated, and me and the other senior on my team just can't carefully review everything, we have to trust sometime (especially data structures).
The worse part is reading a PR, and catching a reintroduced bug that was fixed a few commit ago. The first time i almost lost my cool at work and said a negative thing to a coworker.
This would be my advice to juniors (and i mean basically: devs who don't yet understand the underlying business/architecture): use the AI to explain how stuff work, generate basic functions maybe, but write code logic/algorithm yourself until you are sure you understand what you're doing and why. Work and reflect on the data structures by yourself, even if generated by the AI, and ask for alternatives. Always ask for alternatives, it helps understanding.
You might not see huge productivity gains from AI, but you will improve first, and then productivity will improve very fast, from your brain first, then from AI.
Just to add to your advice to juniors working with AI:
* Force the AI to write tests for everything. Ensure those tests function. Writing boring unit tests used to be arduous. Now the machine can do it for you. There's no excuse for a code regression making it's way into a PR because you actually ran the tests before you did the commit, right? Right? RIGHT?
* Force the AI to write documentation and properly comment code, then (this is the tricky part) you actually read what it said it was doing and ensure that this is what you wanted it to do before you commit.
Just doing these two things will vastly improve the quality and prevent most of the dumb regressions that are common with AI generated code. Even if you're too busy/lazy to read every line of code the AI outputs just ensuring that it passes the tests and that the comments/docs describe the behavior you asked for will get you 90% of the way there.
I had a colleague, senior software developer with masters degree in CS who said: why should I write tests if I can write a new feature to close sprint scope faster?
The irony is when company did lay off him due to covid the actual velocity of the team increased.
Sometimes the AI is all too good at writing tests.
I agree with the idea, I do it too, but you need to make sure the test don't just validate the incorrect behavior or that the code is not updated to pass the test in a way that actually "misses the point".
I've had this happen to me on one or two tests every time
Even more important, those tests need to be useful. Often unit tests are simply testing the code works as written which is generally doing more harm than good.
To give some further advice to juniors: if somebody is telling you writing unit tests is boring, they haven’t learned how to write good tests. There appears to be a large intersection between devs who think testing is a dull task and devs who see a self proclaimed speed up from AI. I don’t think this is a coincidence.
Writing useful tests is just as important as writing app code, and should be reviewed with equal scrutiny.
For some reason Gemini seems to be worse at it than Claude lately. Since mostly moving to 3 I've had it go back and change the tests rather than fixing the bug on what seems to be a regular basis. It's like it's gotten smart enough to "cheat" more. You really do still have to pay attention that the tests are valid.
Yep. It's incredibly annoying that obviously these AI companies are turning the "IQ knob" on these models up and down without warning or recourse. First OpenAI, then Anthropic and now Google. I'm guessing it's a cost optimization. OpenAI even said that part out loud.
Of course, for customers it is just one more reason you need to be looking at every AI outputs. Just because they did something perfect yesterday doesn't mean they won't totally screw up the exact same thing today. Or you could say it's one more advantage of local models: you control the knobs.
> The worse part is reading a PR, and catching a reintroduced bug that was fixed a few commit ago. The first time i almost lost my cool at work and said a negative thing to a coworker.
Losing your cool is never a good idea, but this is absolutely a time when you should give negative feedback to that coworker.
Feedback is what reviews are for; in this case, this aspect of the feedback should neither be positive nor neutral.
>> Kernighan's Law - Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.
Now question is..
is AI providing solutions smarter than the developer using it might have produced?
And perhaps more importantly, How much time it takes AI to write code and human to debug it, even if both are producing equally smart solutions.
The worse part is reading a PR, and catching a reintroduced bug that was fixed a few commit ago. The first time i almost lost my cool at work and said a negative thing to a coworker.
This would be my advice to juniors (and i mean basically: devs who don't yet understand the underlying business/architecture): use the AI to explain how stuff work, generate basic functions maybe, but write code logic/algorithm yourself until you are sure you understand what you're doing and why. Work and reflect on the data structures by yourself, even if generated by the AI, and ask for alternatives. Always ask for alternatives, it helps understanding. You might not see huge productivity gains from AI, but you will improve first, and then productivity will improve very fast, from your brain first, then from AI.