Vibe Coding is a trigger word for devs who insist it's a pointless exercise because it doesn't do 100% of the job. Devs don't seem to realize that's not the point - the point is you can hire fewer devs if humans only have to worry about the remaining 20%.
Currently, AIs emulate a less skilled, junior developer. They can certainly get you up and running, but adding junior developers doesn't speed up a lot of projects. What we are seeing is people falling into the "mythical man month" trap, where they believe that adding another coding entity will reduce the amount of work humans have to do, but that isn't how most projects play out.
To put it simply, it doesn't matter if AI does 80% of the work if that last 20% takes 5x longer. If a feature would normally take 100 hours and the AI produces the first 80 hours' worth instantly, but the remaining 20 hours' worth takes 5x as long to finish, you've saved nothing. As long as you need a human in the loop who understands the code, that human is going to have to spend the normal amount of time understanding the problem.
Indeed. My roommate has just been put on a new project at his workplace. No AI involved anywhere, but he inherited a half-done project. The code is even 90% done. But he is spending so much time trying to understand all that existing code and noting down the issues he'll need to fix. It's not just completing the remaining 10%; it's understanding, fixing, and partially reworking the existing 90%. Which he has to do, since he'll be responsible for the thing once it's released. It's approaching the point where just building it from scratch on his own would have been more time-efficient.
It seems to me that LLM output creates a similar situation.
Yeah but AI coding does speed up some simple tasks. Sometimes by a lot.
But we have to endure these tedious self-congratulatory "mwa ha well it's still not as good as my code" posts.
No shit. Nobody is saying AI can write a web browser or a compiler or even many far simpler things.
But it can do some very simple things like making basic websites. And sure it gets a lot of stuff wrong and you have to correct it, or fix it yourself. But it's still usually faster than doing everything manually.
This post feels like complaining about cruise control because it isn't level 5 autonomy. Nobody should use it because it doesn't do everything perfectly!
> This post feels like complaining about cruise control because it isn't level 5 autonomy.
It's nothing like that, because cruise control works reliably. There is never a situation where cruise control randomly starts going 90mph or 10mph while I have it set to 60mph. LLMs on the other hand...
This is why I disagree with people who argue (as you did) "it really does speed up simple tasks". No it doesn't, because even for simple tasks I have to check its work every time. In less than the time it takes me to do that, I could've written the code myself. So these tools slow me down, they don't speed me up.
> In less than the time it takes me to do that, I could've written the code myself.
This hasn't been my experience at all. At worst you skim the code and think "nah that's total nonsense, I'll write it myself from scratch", but that only takes a few seconds. So at worst it wastes a few seconds.
Usually though it spits out a load of stuff which definitely requires fixing up and tweaking, but it's usually way faster than doing it all by hand.
Obviously it depends on the domain too. I wouldn't ask it to write a device driver or UVM code or whatever. But a website interface? Sure. "Spawn a process in C and capture its stdout"? Definitely. There's no way you are doing that faster by hand (rough sketch of what I mean below).
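For context, here's a minimal sketch of that task (assuming POSIX and using popen(); the "ls -l" command is just a placeholder, and real code would handle errors and output more carefully):

    /* Minimal sketch (assumes POSIX): spawn a command and capture its stdout. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* popen() forks a shell, runs the command, and hands back a read pipe. */
        FILE *pipe = popen("ls -l", "r");   /* placeholder command */
        if (pipe == NULL) {
            perror("popen");
            return EXIT_FAILURE;
        }

        char line[4096];
        while (fgets(line, sizeof line, pipe) != NULL) {
            /* Echo each captured line; real code would parse or store it. */
            printf("captured: %s", line);
        }

        /* pclose() waits for the child and returns its exit status. */
        if (pclose(pipe) == -1) {
            perror("pclose");
            return EXIT_FAILURE;
        }
        return EXIT_SUCCESS;
    }

It's exactly the kind of boilerplate you can verify at a glance, which is why checking the model's version is faster than typing it out yourself.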
Honestly, I'm not sure if there is any correspondence between an AI and a particular skill level of developer. A junior developer won't know most of the things an AI does; but unlike an AI, they can be held accountable for a particular assignment. I feel like AI is more like "a skilled consultant who doesn't know that much about your situation and refuses to learn more than the bare minimum, but will spend an arbitrary amount of time on self-contained questions or tasks, without checking the output too carefully." Which is exactly as useful yet infuriating as it sounds.
Remember that 80% of your time and resources are going to be spent finishing up the last 20% of the project. If the first 80% is borked by LLM code salad, you're going to need to spend time fixing that code and making it actually work. That might take just as much time as, if not more than, using AI only as an assistant (i.e. code completion) rather than as the main source of code.
I'm currently 2x to 10x as productive with Cursor. The larger the project, the lower my multiplier.
However, on small tasks and bug fixes, it often fixes the bug before I've even root-caused it. It's amazing when I can focus on feeding it information about the bug and let it think in the background while I continue researching. In a surprising number of simpler cases, it one-shots the fix and eliminates any need to root-cause at all (this is a bit easier when it's a feature you understand intimately).
Exactly. I see these threads over and over, and it's just senior devs insisting it's the tool's fault rather than admitting they haven't put in the time to learn the new tool.
The cycle of tool and framework re-skilling is constant in this industry, and those trying to fight the wave always lose. And this one is a tidal wave. UPDATE YOUR SKILLS, FOLKS!
Also this article is immensely distracting.