After discovering how to use git worktrees in Codex to run three conversations in parallel, I can build apps with a scope that simply was not realistic before.
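For anyone curious about the mechanics, the setup looks roughly like this (branch and directory names are made up, and the toy repo exists only so the snippet runs standalone): each worktree is an independent checkout sharing one object store, so each agent conversation can edit and commit on its own branch without clobbering the others.

```shell
set -e
# Toy repo so the example is self-contained; in practice you'd cd into your real project.
tmp=$(mktemp -d)
cd "$tmp"
git init -q myapp && cd myapp
git config user.email "agent@example.com"
git config user.name "agent"
git commit -q --allow-empty -m "init"

# One worktree per agent conversation (hypothetical branch names).
git worktree add -q ../myapp-a -b feature-a   # conversation 1
git worktree add -q ../myapp-b -b feature-b   # conversation 2
git worktree add -q ../myapp-c -b feature-c   # conversation 3

git worktree list   # main checkout plus the three worktrees
```

Point each Codex conversation at a different worktree directory, then merge the branches back as each finishes.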
You obviously are not reviewing the generated code in any detail before merging it. That is not sustainable: the project will grow far larger than it needs to be.
There was one feature/screen that Codex built in a single 5k LOC file.
It was still perfectly capable of developing the feature and it was working as expected.
I had it break the feature down into multiple files, but if I hadn't seen it during the MR review, I would never have noticed. The large file did not seem to degrade the agent's performance.
It would be interesting to discover how large of a project in KLOC an agent can continue to effectively maintain without messing things up due to the large size.
I am definitely not the person to shed any light on what is going on, but you've added to my feeling that these adapters are all incomprehensible, so I'll try and do the same for you.
I have a USB-C Ethernet adapter (a Belkin USB-C to Ethernet + Charge Adapter, which I recommend if you need one). I ran out of USB-C ports one day and plugged it in through a USB-C to USB-A adapter instead. I must have run a fast.com speed test to make sure it wasn't going to slow things down drastically, and found that the latency was lower! Not by a huge amount, and I think the max speed was quicker without the adapter. But still, lower latency through a $1.50 Essager USB-C to USB-A adapter, bought from Shein or Shopee or somewhere silly!
I tried tons of times, back and forth: with the adapter a few times, then without it a few times, even on multiple laptops. As much as I don't want to, I keep seeing lower latency through this cheap adapter.
Next step, I'll try USB C to USB A, then back through a USB A to USB C adapter. Who knows how fast my internet could be!
Perhaps a year ago “vibe coding” was indicative of a low quality product.
It seems many have not updated their understanding to match today’s capabilities.
I am vibe coding.
That does not mean I am incompetent or that the product will be bad. I have 10 years of experience.
Using agentic AI to implement, iterate, and debug issues is now the workflow most teams are targeting.
While last year chances were slim for the agent to debug tricky issues, I feel that now it can figure out a lot once you have it instrument the app and provide logs.
It sometimes feels like some commenters stick with last year’s mindset and feel entitled to yell about ‘AI slop’ at the first sign of an issue in a product and denigrate the author’s competence.
No, it is still indicative of a low quality product. And I say that as someone who has probably been agentic coding longer than you have.
Indicative in my dictionary doesn't mean definitive; it just makes it much more likely. You can make quality products while LLMs write >99% of the code. This has been possible for more than a year, so a failure to update beliefs is not the issue; I've done it myself. Rather, 90% of the products above are low quality, at a much higher rate than in, say, 2022, pre-GPT. As such, it's an indicator. That 10% exists, just like pearls can hide in a pile of shit.
As others have said, the reason is time investment. You can take 2 months to build something where the LLM codes 99% of it, or you can take 2 hours. HN, and everywhere else, is flooded by the latter. That's why it's mostly crap. I did the former, and luckily it led to a good result. Not a coincidence.
This applies far beyond coding. It applies to _everything_ done with LLMs. You can use them to write a book in 2 hours. You can use them to write a book in 2 years.
I've been neck deep in a personal project since January that heavily leverages LLMs for the coding.
Most of my time has been spent fitting abstractions together, trying to find meaningful relationships in a field that is still somewhat ill-defined. I suppose I could have thrown lots of cash at it and had it 'done' in a weekend, but I hate that idea.
As it stands, I know what works and what doesn't (to the degree I can; I'm still learning, and I'll acknowledge I'm not super knowledgeable in most things), but I'm trying to apply what I know to a domain I don't understand well.
That's probably more a personal preference than an objective measurement. A lot of people already spent most of their dev time in the terminal, so for someone like myself who uses neovim, Claude Code or Codex CLI is much easier than using the GUIs.
People say that, but the quote "I can sooner imagine the end of the world than the end of capitalism" always comes back to me.
Personally I think it won't be communism but communalism.
We are paying for tens of thousands of those machines, although everyone knows they are stupidly expensive and incredibly slow.