If some bits of meaningful code (the parts containing the essential complexity) imply other bits that are mostly accidental complexity, writing those other bits by hand is a waste of time.
But isn't it also a waste to use data centers full of GPUs to process terabytes of text to accomplish the same thing better programming language design could?
We don't spend that much GPU compute just to generate random boilerplate code; we use it to create the real code.
If it weren't beneficial for writing code (which it is), we wouldn't use it.
But yes, if we could create better languages or systems, it would be a waste. We have tried many new programming languages, no-code platforms, etc.
It does look like LLMs are still better than all of those approaches, though.
With the tooling I choose, I have to write very little boilerplate code as it is, and a lot of that is generated by scripts using some input from me. I don't need cloud GPUs to write code at all.
It's about writing code faster and potentially better. Cloud GPUs can also generate unit tests, etc.
I primarily use it for languages I don't use often enough. Nonetheless, it's only a question of time until it no longer makes sense to write code yourself.
Like getters/setters in Java for all your attributes.
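To make the getter/setter example concrete, here is a minimal Java sketch (the Customer class and its name field are made up for illustration): the classic hand-written pattern, followed by the record form Java has offered since version 16, which is the kind of language-level fix the earlier comment about language design points at.

```java
// The hand-written pattern: one private field plus a getter/setter
// pair repeated for every attribute (class and field names are made up).
class Customer {
    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

// Since Java 16, a record declares the same data in one line; the compiler
// generates the accessor, equals, hashCode and toString for you.
record CustomerRecord(String name) {}
```

The record carries the same essential information as the field declaration alone; everything else in the first class is the accidental complexity being discussed.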
I would never consider imports boilerplate code.