Hacker News

I wonder about codebase maintainability over time.

I hypothesize that it takes some period of time for vibe-coding to slowly "bit rot" a complex codebase with abstractions and subtle bugs, slowly making it less robust and more difficult to maintain, and more difficult to add new features/functionality.

So while companies may be seeing what appears to be increases in output _now_, they may be missing the increased drag on features and bugfixes _later_.



Up until now, large software systems have required thousands of hours of work and the efforts of bright engineers. We treat established code as something to be preserved because it embeds so much knowledge and took so long to develop. If it rots, it takes too long to repair or never gets repaired.

Imagine a future where the prompts become the precious artifact. We regularly `rm -rf *` the entire codebase and regenerate it from the original prompts, perhaps when a better model becomes available. We stop fretting about code structure or hygiene because the code won't be maintained by developers. Code is written for readability and auditability. So instead of finding the right abstractions that allow the problem to be elegantly implemented, the focus is on allowing people to read the code to audit that it does what it says it does. No DSLs, just plain readable code.
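A minimal sketch of that workflow, with the model call stubbed out (`generate_codebase` is a hypothetical placeholder, not a real API): the prompts are the durable artifact, and the code is just a cache keyed on (prompts, model version) — thrown away and rebuilt whenever either changes.

```python
import hashlib

def generate_codebase(prompts: list[str], model: str) -> str:
    # Stub for whatever model API you'd actually call.
    return f"# generated by {model} from {len(prompts)} prompts\n"

def cache_key(prompts: list[str], model: str) -> str:
    # The code is disposable; this key decides when to regenerate it.
    h = hashlib.sha256()
    for p in prompts:
        h.update(p.encode())
    h.update(model.encode())
    return h.hexdigest()

def regenerate_if_stale(prompts: list[str], model: str, cache: dict) -> str:
    key = cache_key(prompts, model)
    if key not in cache:   # a prompt was edited, or the model was upgraded
        cache.clear()      # the `rm -rf *` step: old code is never patched
        cache[key] = generate_codebase(prompts, model)
    return cache[key]

cache = {}
code_v1 = regenerate_if_stale(["Users can log in"], "model-v1", cache)
code_v2 = regenerate_if_stale(["Users can log in"], "model-v2", cache)
```

In this framing the only things worth versioning are the prompts and whatever test suite audits the regenerated output.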


I can imagine that, but... given your prompt(s?) will need to contain all your business rules, someone will have to write the prompt(s?) in a way that makes it possible for the AI to produce something that satisfies all the requirements.

Because if you let every stakeholder add their requirements to the prompts, without checking that they don't contradict the others, you'll end up with a disaster.

So you need someone able to gather all the requirements and translate them in a way that the machine (the AI) can interpret to produce the expected result (an ephemeral codebase).

Which means you now have to carefully maintain your prompts to be certain about the outcome.

But if you still need someone to fix the codebase later in the process, you need people with two sets of skills (prompting and coding), when, with the old model, you only needed coding skills.


I’m concerned that it might not be easy to vibecode a security fix for a complex codebase, especially when the flaw was introduced by vibecoding.


My new favourite genre of schadenfreude is solo-preneur SaaS vibe-coders.

They burn a pile of money. Maybe it’s their life savings, their parents’ money, or their friends’ or some unlucky investors’. But they go in thinking they’re going to join the privileged labourers without putting in any of the time to develop the skills and without paying for that labour. GenAI the whole thing. And they post about it on socials like they’re special or something.

Then boom. A month later. “Can everyone stop hacking me already, I can’t make this stop. Why is this happening?”

Mostly I feel sorry for the people who get duped into paying for this crap and have their data stolen.

There’s like almost zero liability for messing around like this.


I wonder whether we had the same talk when the C compiler first came out.

People may have worried that the "ASM" codebase would bit-rot and that no one could understand the compiler output or add new features to the ASM codebase.


My guess is that the discussion trended around performance rather than correctness, since compilers are pretty well understood. Why LLMs output what they do is not understood by anyone to the same degree.



