It really depends on how you use it. I really like using AI for prototyping new ideas (it can run in the background while I work on the main project) and for getting the boring grunt work (such as creating CRUD endpoints on a RESTful API) out of the way, leaving me more time to focus on the code that is genuinely challenging and needs a deeper understanding of the business or the system as a whole.
The boring stuff like CRUD always needs design; otherwise you end up with a 2006-era PHP-style "this is a REST API" spaghetti monster. The fact that AI can't do this (and probably never will) is just another showstopper.
I tried AI, but the code it produces (at a higher level) is of really poor quality. Refactoring it is a HUGE PITA.
> how can I utilize AI without degenerating my own abilities?
Couldn't the same statement, to some extent, be applied to using a sorting lib instead of writing your own sorting algorithm? Or to using a garbage-collected language like Python instead of manually managing memory in C?
> What I want above all is AI that helps me become better at my job and continue to build skills and knowledge
So far, in my experience, the quality of what AI outputs is directly related to the quality of the input. I've seen AI projects made by junior devs with incredibly messy and confusing architecture, despite them using the same language and LLM that I use. The main difference? My AI work was based on the patterns and architecture that I designed thanks to my knowledge, which also happens to ensure that the AI produces less buggy software.
I think there is a huge difference between using a library and using Python instead of C/Rust etc. You use those languages because they are fundamentally more productive, at the expense of giving up fine-grained control over memory. Robust programming is a trade-off: the speed of development might be worth it, but it could also be so problematic that the project just never works. A sort library is an abstraction over sorting, an extension to your language: you now have the fundamental operator sort(A). Languages kind of transcend that operator difference.
I think the problem the OP is trying to get at is that if we only program at the level of libs, we lose the ability to build fundamentally cooler/better things. Not everyone does that, of course, but AI is not generating fundamentally new code; it's copy-pasting. Copy-pasting has its limits, especially for people in the long term. Copy-paste coders don't build game engines. They don't write operating systems. These are esoteric to some people, given how few people actually write those things! But there is a craftsmanship lost in converting more people to copy-paste, albeit with intelligence.
I personally lean to the side that this type of abstraction over thinking is problematic long term. There is a lot of damage being done to people, not necessarily in coding but in reading/writing, especially in grades 9-12 and college. When we ask people to write essays and read things, AI totally short-circuits the process, but the truth is no one gets any value from the finished product of an essay about "Why Columbus coming to the New World caused X, Y, or Z". The value comes from the process of thinking that used to be required to generate that essay. This is similar to the OP's worry.

You can say we can do both and think about it as we review AI outputs, but humans are lazy. We don't mull over the calculator wondering how some value is computed; we just take it and run. There is a lot more value/thinking in the application of the calculated results, so the calculator didn't destroy mathematical thinking, but the same is not necessarily true of how AI is being applied.

Your observation about junior devs' output supports my view: we are short-circuiting the thinking. If those juniors can still learn the patterns, then there is no issue, but that's not guaranteed. I think that uncertainty is the OP's worry, maybe restated in a better way.
no, it's more like asking a junior dev to write the sorting algorithm instead of writing it yourself. Using a library would be like using an already verified and proven algorithm; that's not what AI code provides.
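To make the library-vs-hand-rolled distinction concrete, here's a toy sketch in TypeScript (my own illustration, not anyone's actual code): with the library route you get one verified call; hand-rolling means you own correctness, edge cases, and maintenance, which is exactly what you inherit when a junior dev (or an AI) writes the algorithm for you.

```typescript
// Hand-rolled insertion sort: you own correctness, edge cases, and perf.
function insertionSort(xs: number[]): number[] {
  const out = [...xs]; // copy so the input is left untouched
  for (let i = 1; i < out.length; i++) {
    const key = out[i];
    let j = i - 1;
    // Shift larger elements right until key's slot is found.
    while (j >= 0 && out[j] > key) {
      out[j + 1] = out[j];
      j--;
    }
    out[j + 1] = key;
  }
  return out;
}

// The library route: a verified built-in, one line, zero maintenance.
const librarySorted = [3, 1, 5, 2].sort((a, b) => a - b);
```

Both produce [1, 2, 3, 5], but only one of them is code you now have to review, test, and maintain.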
AI is a tool. Like every other tool under the sun, it has strengths and weaknesses, and it's our job as software engineers to try it out and understand when/how to use it in our workflows, or whether it fits our use cases at all.
If you disagree with the above statement, try replacing "AI" with "Docker", "Kubernetes", "Microservices architecture", "NoSQL", or any other tool/language/paradigm that was widely adopted in the software development industry until people realized it's awesome for some scenarios but not a be-all and end-all solution.
> This must be the case for so many discarded appliances these days, especially underengineered ones with common issues.
While it's true that lots of those old appliances are easily fixable, depending on how old they are it can be better to replace them for other reasons.
I just recently replaced my 10-year-old washing machine instead of fixing it. I was absolutely surprised by the difference: the newer one uses less electricity and less water, washes and dries in half the time, and is absolutely silent.
Yeah, but how fast can you write compared to how fast you think?
How many times have you read a story card and by the time you finished reading it you thought "It's an easy task, should take me 1 hour of work to write the code and tests"?
In my experience, in most of those cases the AI can do the same amount of code writing in under 10 minutes, leaving me the other 50 minutes to review the code, make/ask for any necessary adjustments, and move on to another task.
I don't know anyone who can think faster than they can type (on average), they would have to have an IQ over 150 or something. For mere mortals like myself, reasoning through edge cases and failure conditions and error handling and state invariants takes time. Time that I spend looking at a blinking cursor while the gears spin, or reading code. I've never finished a day where I thought to myself "gosh darn, if only I could type faster this would be done already".
You could be fast if you were coding only the happy path, like a lot of juniors do. Instead of thinking about trivial things like malformed input, library semantics, framework gotchas and what not.
Completely agree with you. I was working on the front-end of an application and I prompted Claude the following: "The endpoint /foo/bar is returning the json below ##json goes here##, show this as cards inside the component FooBaz following the existing design system".
In less than 5 minutes Claude created code that:
- encapsulated the api call
- modeled the api response using Typescript
- created a reusable and responsive UI component for the card (including a loading state)
- included it in the right part of the page
Even if I typed at 200wpm I couldn't produce that much code from such a simple prompt.
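For illustration, here's a minimal sketch of the non-UI slice of that kind of output (the types and names like FooBarItem and toCardModel are my stand-ins, not the actual generated code): the API call gets encapsulated in one typed function, and the response is modeled and mapped into a shape the card component can consume.

```typescript
// Stand-in shape for the /foo/bar JSON payload.
interface FooBarItem {
  id: string;
  name: string;
  status: string;
}

// View model consumed by the (omitted here) card component.
interface CardModel {
  id: string;
  title: string;
  subtitle: string;
}

// Encapsulated API call: one typed function instead of raw fetch
// calls scattered through components.
async function fetchFooBarItems(): Promise<FooBarItem[]> {
  const res = await fetch("/foo/bar");
  if (!res.ok) throw new Error(`GET /foo/bar failed: ${res.status}`);
  return (await res.json()) as FooBarItem[];
}

// Pure mapping from API shape to UI shape; trivial to unit-test.
function toCardModel(item: FooBarItem): CardModel {
  return { id: item.id, title: item.name, subtitle: item.status };
}
```

Nothing here is hard, which is the point: it's exactly the mechanical scaffolding a junior could write in an hour and an LLM writes in minutes.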
I also had similar experiences/gains refactoring back-end code.
That said, there are cases in which writing the code yourself is faster than writing a detailed enough prompt, BUT those cases are becoming the exception with each new LLM iteration. I noticed that after the jump from Claude 3.7 to Claude 4, my prompts can be way less technical.
(GP) I wouldn't, but it would get me close enough that I can do the work that's more intellectually stimulating. Sometimes you need the people to do the concrete for a driveway, and sometimes you need to be signing off on the way the concrete was done, perhaps making some tweaks during the early stages.
Yeah, there were already alternatives before Pix, like PicPay/Mercado Pago, and Pix just "killed" them (people still use them, to be clear, but just as normal payment apps).
What changed my point of view regarding LLMs was when I realized how crucial context is in increasing output quality.
Treat the AI as a freelancer working on your project. How would you ask a freelancer to create a Kanban system for you? By simply asking "Create a Kanban system", or by providing them a 2-3 page document describing features, guidelines, restrictions, requirements, dependencies, design ethos, etc.?
Which approach will get you closer to your objective?
The same applies to LLMs (when it comes to code generation). When well instructed, they can quickly generate a lot of working code, and apply the necessary fixes/changes you request inside that same context window.
It still can't generate senior-level code, but it saves hours when doing grunt work or prototyping ideas.
"Oh, but the code isn't perfect".
Nor is the code of the average junior dev, but their code still makes it to production in thousands of companies around the world.
"How to ask for it" is the most important part. As soon as you realize that you have to provide the AI with CONTEXT and clear instructions (you know, like a top-notch story card on a scrum board), the quality and assertiveness of the results increase a LOT.
Yes, it WON'T produce senior-level code for complex tasks, but it's great at tackling junior- to mid-level code generation/refactoring with minor adjustments (just like a code review).
So, it's basically the same thing as having a freelancer jr dev at your disposal, but it can generate working code in 5 min instead of 5 hours.