You are either a very fast producer or a very slow reader. Claude and Gemini are much faster at producing code than I am, and reviewing their code - twice over, even - still takes less time than writing it myself.
Just the time I spend going back and forth between the implementation and the test cases, verifying that the tests actually cover the possible failure modes, can easily exceed the time it would take to write the code myself, and that's assuming I don't pull the branch locally and start stepping through it in the debugger.
The idea that AI will make development faster because it eliminates the boring stuff seems quite bold: until we have AGI, someone still needs to verify the output, and careful code review tends to be even more tedious than writing boilerplate, unless you're just speed-reading through reviews.
This speaks to the low quality assurance bar that most of the software industry lives by.
If you're programming a plane's avionics, for example, the quality assurance bar is much, much higher, to the point where any time savings from using an LLM are most likely dwarfed by the time it takes to review and test the code.
It's easy to call LLMs a game-changer when there are no lives at stake, the cost of any error is therefore extremely low, and little to no QA happens before code is pushed to production.
But you definitely don't understand the code nearly as well as if you had written it yourself. And you're the one who has to take responsibility for adding it to your codebase.