This illusion that senior engineers are too good to fall in all the traps laid by vibe coding really needs to stop.
Just yesterday I was reading the comment of a Principal Engineer saying "of course vibe coding carries a huge risk for more junior engineers that don't understand when the LLM does something that looks right but is actually wrong", as if just because they have more experience THEY CAN.
No, you can't either. Not a single soul on this planet can review the thousands of lines of vibe coded bullshit that LLMs spit out.
Here's my guide to Gen AI / LLM Vibecoding for Expert Programmers: don't.
So, why would you review thousands of lines at a time? That means you didn't break the problem down appropriately. PRs should be around 100-200 lines of code, including tests. Scope your tasks accordingly.
Reviewing thousands of lines at a time is always a failure state.
The number of lines you have to review at once doesn't matter.
Vibe coding is letting AI write code so you spend less time writing the same amount of code (ideally not more, in practice definitely more).
If more code is written, you have to review more code. It doesn't matter whether you break it into 10-line PRs or review a million lines at once; you still end up having to review all the code that was generated.
Is this a 1-person dev team? Because on teams, one person has to submit a PR and then someone else on the team has to code review it, right? So then you're doubling the review time of the code. If you wrote the code yourself, you would already know that it works, then you submit the PR, and then someone else reviews the code one time, not twice. And typically reading and understanding code you did not write takes longer than writing the code yourself. It does not seem like "AI" coding is really saving anyone any time, and is probably wasting more time than it saves.
It entirely matters. It matters because, as humans, we can only keep so much context in mind. If you are looking at 100-200 lines at a time, you can think about the architecture. You can modify the code, whether that's fixing or refactoring. It means the code can only get so far off base.
It also means that sometimes, you say "this is bad code," refine the prompt, and run it again.
Yes, it means that you, as a code reviewer, are the bottleneck. It limits the productivity gains that are possible. We are talking 10-15% productivity gains per person in mature codebases, not some magic replacement.
But if you're worried about reviewing code, maybe we shouldn't allow junior programmers to contribute to codebases either. After all, you might make a mistake reviewing their code too.
PRs are as long as they need to be and as short as they need to be. This idea that any problem can be decomposed into 100-200 line changes is ridiculous. That's not realistic in many cases (esp. for refactoring work, etc.)
There are certainly exceptions. And if a problem cannot be decomposed, then you have a "stop the world" formal code review with a team-wide multi-hour dive into the code.
In my world, though, we use refactoring tools such as feature flags and incremental improvements. We can use stacked pull requests. It requires training and discipline, but it's absolutely doable for 99% of tasks.
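As a minimal sketch of the feature-flag pattern mentioned above (the `FLAGS` object and the billing functions are hypothetical, purely for illustration): new code lands behind a flag in small, reviewable increments while the old path stays live.

```javascript
// Hypothetical flag store; real teams typically use a flag service or config system.
const FLAGS = { useNewBillingPath: false };

function calculateInvoice(order) {
  if (FLAGS.useNewBillingPath) {
    // New implementation ships "dark" and is reviewed in small PRs before the flag flips.
    return calculateInvoiceV2(order);
  }
  // Old path remains the live behavior until the rollout is complete.
  return calculateInvoiceV1(order);
}

function calculateInvoiceV1(order) {
  return order.items.reduce((sum, i) => sum + i.price * i.qty, 0);
}

function calculateInvoiceV2(order) {
  // Illustrative change: per-item tax handling added incrementally.
  return order.items.reduce(
    (sum, i) => sum + i.price * i.qty * (1 + (i.taxRate || 0)),
    0
  );
}
```

Each PR that touches only `calculateInvoiceV2` stays small, and reverting is a one-line flag change rather than a large rollback.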
I think that's going a bit too far. The big thing is to have the AI spit out small digestible modules, check those for correctness, and then glue them together. The same way a person normally writes code, you are just having the AI do the grunt work.
This does have the caveat that reading code is usually harder than writing it, so the total time savings is far less than what AI companies claim. You only get in real danger when you don't review the code and just YOLO it into production.
It takes way more time to explain, and then re-explain, and then re-re-re-explain to the LLM what I want the code to do. No, it isn't because I don't understand LLMs, it's because LLMs don't understand, period. Trying to coax a fancy word predictor to output the correct code can be extremely frustrating especially when I know how to write the code.
Usually if you have to re-re-re-explain, it means you didn't put those details in the first prompt. Writing the code yourself, you'd still hit this trap, because you discover details as you write, just as you discover them as the LLM writes.
Do you have access to GPT-10 or something like that? Because my experience is that you can give as much detail as you want and you WILL need to re-re-re-explain regardless.
I found that once I spend enough time to actually fully understand what LLM wrote, I’ve burned through my efficiency gains. If that’s the case, why bother?
It depends. I had an LLM whip up a JavaScript function, "theThursdayAfterNextSunday".
JavaScript isn't my primary language and date functions are always a pain. But I know enough to review and test the code quickly. It doesn't change a 1-week project into a 4-hour one, but it can change a 20-minute project into a 5-minute one.
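For reference, here is a minimal sketch of what such a function might look like. This is not the commenter's actual code; it assumes "next Sunday" means the first Sunday strictly after the given date, with the Thursday falling four days after that.

```javascript
// Returns the Thursday that follows the next Sunday after `from`.
// Assumption: "next Sunday" is strictly after `from`, even if `from` is a Sunday.
function theThursdayAfterNextSunday(from = new Date()) {
  const d = new Date(from);
  d.setHours(0, 0, 0, 0);                            // normalize to local midnight
  const daysUntilSunday = (7 - d.getDay()) % 7 || 7; // 1..7 days, strictly after
  d.setDate(d.getDate() + daysUntilSunday + 4);      // Sunday + 4 days = Thursday
  return d;
}
```

The point stands either way: the logic is small enough to review and test in minutes, which is exactly the kind of task where the 20-minutes-to-5-minutes saving shows up.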
Because I’m not just trying to get efficiency gains. I’m literally trying to build next gen products of good quality.
I suppose for your actual job (if you happen to be in the IDGAF mode), yeah, why work harder and not smarter? That’s a different story altogether, as many will be mailing it in.
Notice that the examples I show are very minimal changes. If you prompt it to solve the right problems, the ones that involve minimal code, then the diffs are easy to manage. There are lots of problems like this. Leave the hard ones for yourself. The only big diff it should ever produce is for a refactor.
You sound like a cynical junior engineer. As a tech lead I have to review thousands of lines of code from human engineers, and let me tell you, it's no different than reviewing LLM code.
As a senior engineer, I can say I probably wouldn’t like working with you.
For a dev worth paying for, the crucial details of the code are discussed prior to the PR, in a way that makes the PR review an afterthought. You can follow this process with an LLM, but the PR review is still brutal. It doesn't do what you say, and it doesn't learn (in the deterministic way a good human dev does).
High performing teams do not have many changes nor comments in a PR (on average, obviously).
As a tech lead I have reviewed code written by junior engineers and written by AI, and there is a very clear difference between the two.
You also seem to be missing the point that if vibe coding lets your engineers write 10x the amount of code they previously could in the same working hours, you now have to review 10x that amount.
It's easy to see how there is an instant bottleneck here...
Or maybe you're saying that the same amount of code is written when vibe-coding than when writing by hand, and if that's the case then obviously there's absolutely no reason to vibe-code.
If that is true, how come huge companies like Microsoft and Salesforce and Google keep boasting about the increasing percentage of their code that is written by LLMs?
To add to that you can't patent anything created by an AI either.
> What do you know that they don't?
Oh they are well aware of this.
At the enterprise level, everything normally needs a programmer's certificate of origin, as it were. So AI-generated code is tagged within the source.
A good example of this (pre-GPT) was SCO v. IBM, where SCO claimed IBM stole their code. IBM could prove the full history of the code they created because of these procedures.
So, if a person vibe codes a whole application and it gets stolen/copied, legally they have no way to protect it.
[edit] Reading further: for the UK, the law on computer-generated works is a bit vague when it comes to vibe coding. There are laws to protect digital works. [edit]
> While prompt engineering can require skill, the Copyright Office found that a user’s textual input alone does not constitute authorship unless it demonstrably shapes the final output in an original, creative, and expressive way.
I imagine the big tech companies' lawyers have decided that the way most professional software engineers use LLMs to help them write code counts as "demonstrably shapes the final output in an original, creative, and expressive way".