user34283's comments | Hacker News

We also pay $300/month for Azure Desktop VMs.

We are paying for tens of thousands of those machines, although everyone knows they are stupidly expensive and incredibly slow.


At this point it’s undeniable for my use cases.

Since discovering how to use git worktrees in Codex to work in three conversations in parallel, I have been able to build apps with a scope that simply was not realistic before.


Three? Across how many projects?

One, thus the git worktrees.

You might think that this would lead to a mess with merge conflicts, but the agent can resolve them automatically.

I added an instruction to AGENTS.md so that before handing off, the agent fetches and rebases, resolving conflicts if needed and rerunning the tests.
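
The exact wording isn't magic; what I added is roughly along these lines (illustrative, not the literal text, with origin/main as an example default branch):

    ## Before handing off
    - Run `git fetch origin` and rebase this branch onto origin/main.
    - If the rebase hits conflicts, resolve them yourself rather than stopping.
    - Rerun the full test suite and only hand off once it passes.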


You obviously are not reviewing the generated code in any detail before merging it. That is not sustainable: the project will grow far larger than it needs to be.

I will see if that becomes a blocker.

There was one feature/screen that Codex built in a single 5k LOC file.

Codex was still perfectly capable of developing the feature, and it worked as expected.

I had it break the code down into multiple files, but if I hadn't seen it during the MR review, I would not have noticed. The large file did not seem to degrade the agent's performance.


It would be interesting to find out how large a project (in KLOC) an agent can effectively maintain without messing things up due to sheer size.

I have an RTL8157 5 Gbps adapter from CableMatters.

Interestingly, it seems to get burning hot on the M1 Pro MacBook while it remains cool on the M5 Pro model.

Maybe the workload is different, but I would not rule out some sort of hardware or driver difference. I only use a 1G port on my router at the moment.


Huh! That's very interesting.

I am definitely not the person to shed any light on what is going on, but you've added to my feeling that these adapters are all incomprehensible, so I'll try and do the same for you.

I have a USB-C Ethernet adapter (a Belkin USB-C to Ethernet + Charge Adapter, which I recommend if you need one). I ran out of USB-C ports one day and plugged it in through a USB-C to USB-A adapter instead. I must have done a fast.com speed test to make sure it wasn't going to slow things down drastically, and found that the latency was lower! Not by a huge amount, and I think the max speed was quicker without the adapter. But still, lower latency through a $1.50 Essager USB-C to USB-A adapter, bought from Shein or Shopee or somewhere silly!

I tried tons of times, back and forth: with the adapter a few times, then without it a few times, even on multiple laptops. As much as I don't want to, I keep seeing lower latency through this cheap adapter.

Next step, I'll try USB-C to USB-A, then back through a USB-A to USB-C adapter. Who knows how fast my internet could be!
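
If anyone wants to repeat the comparison without relying on fast.com, a crude check like the Python sketch below works: run it once per setup and compare the medians. The host, port, and sample count are arbitrary picks, not what I actually used.

    # Crude latency comparison between adapter setups: report the
    # median TCP connect time to a fixed host. Host, port, and
    # sample count are arbitrary examples.
    import socket
    import statistics
    import time

    def tcp_connect_latency_ms(host="1.1.1.1", port=443, samples=20):
        times = []
        for _ in range(samples):
            start = time.perf_counter()
            # Time only the TCP handshake, then close the socket.
            with socket.create_connection((host, port), timeout=5):
                pass
            times.append((time.perf_counter() - start) * 1000)
            time.sleep(0.2)  # small gap between probes
        return statistics.median(times)

    print(f"median connect latency: {tcp_connect_latency_ms():.1f} ms")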


I used it last night for iOS app development and it felt like a noticeable improvement.

With the Pro plan, it was already available in both Codex and ChatGPT when I first checked, which was within an hour of the release.


Based on my experience with Claude Code on the $20 plan I would not think so.

Opus 4.7 would blow through the session limits in 2-4 prompts. It was a noticeable further decrease in usage quota, which was already tight before.

Based on Anthropic's description, 4.7 was trained to think longer.

With GPT 5.5 yesterday, I felt it completed tasks noticeably faster than 5.4. I kept the xhigh effort setting.


Perhaps a year ago “vibe coding” was indicative of a low quality product.

It seems many have not updated their understanding to match today’s capabilities.

I am vibe coding.

That does not mean I am incompetent or that the product will be bad. I have 10 years of experience.

Using agentic AI to implement, iterate, and debug issues is now the workflow most teams are targeting.

While last year the chances were slim that the agent could debug tricky issues, I feel that it can now figure out a lot once you have it instrument the app and provide logs.

It sometimes feels like some commenters stick with last year’s mindset and feel entitled to yell about ‘AI slop’ at the first sign of an issue in a product and denigrate the author’s competence.


No, it is still indicative of a low quality product. And I say that as someone who has probably been agentic coding longer than you have.

"Indicative", in my dictionary, doesn't mean definitive; it just makes it much more likely. You can make quality products while LLMs write >99% of the code. This has been possible for more than a year, so it's not a failure to update beliefs that is the issue; I've done so myself. Rather, 90% of the above products are low quality, at a much higher rate than in, say, 2022, pre-GPT. As such, it's an indicator. That 10% exists, just like pearls can hide in a pile of shit.

As others have said, the reason is time investment. You can take 2 months to build something where the LLM codes 99% of it, or you can take 2 hours. HN, and everywhere else, is flooded with the latter. That's why it's mostly crap. I did the former, and luckily it led to a good result. Not a coincidence.

This applies far beyond coding. It applies to _everything_ done with LLMs. You can use them to write a book in 2 hours. You can use them to write a book in 2 years.


I've been neck deep in a personal project since January that heavily leverages LLMs for the coding.

Most of my time has been spent fitting abstractions together, trying to find meaningful relationships in a field that is still somewhat ill-defined. I suppose I could have thrown lots of cash at it and had it 'done' in a weekend, but I hate that idea.

As it stands, I know what works and what doesn't (to the degree I can; I'm still learning, and I'll acknowledge I'm not super knowledgeable in most things), but I'm trying to apply what I know to a domain I don't yet understand well.


Are TUIs not yesterday’s hot thing?

The way I work now in the Codex desktop app is that I spin up 3-5 conversations, each working in its own dedicated git worktree.

So while one agent works and runs the test suite, I can come back to the other conversations to address blockers or do verification.

What's important is that I can see which conversation has an update and get desktop notifications.

Maybe I could set this up with tabs in the Terminal, but it does not sound like the best UX.
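
Setting the worktrees up is just a few git commands per conversation; here's a rough Python sketch of the idea (repo path and branch names are made-up examples, and Codex itself isn't involved in this step):

    # Create one git worktree per parallel conversation, each on its
    # own branch, so the agents never share a working copy.
    # Repo path and branch names are illustrative.
    import subprocess

    def add_worktrees(repo, branches):
        for branch in branches:
            # Each worktree is created as a sibling directory of the repo.
            path = f"../{repo}-{branch}"
            subprocess.run(
                ["git", "worktree", "add", "-b", branch, path],
                cwd=repo,
                check=True,
            )

    add_worktrees("myapp", ["feature-a", "feature-b", "feature-c"])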


That's probably more a personal preference than an objective measurement. A lot of people already spend most of their dev time in the terminal, so for someone like myself who uses neovim, Claude Code or Codex CLI are much easier than using the GUIs.

Yes, I think they have been for years. C2PA Content Credentials are supported in cameras and some phones already today.

I figure capitalism may soon become obsolete. But I don’t think this speculation is going to make for interesting discussion on here.

I find the technical discussion more interesting and could do without some of the moral grandstanding in the comments.


People say that, but the quote "I can sooner imagine the end of the world than the end of capitalism" always comes back to me. Personally, I think it won't be communism but communalism.

With Opus 4.6 on the $20 plan the limits were bad, but at least you could do a short session.

I find that with Opus 4.7 I can do two messages. Once I had a short session with 4-5 messages and it consumed $10 in extra usage.

This relegated Claude to being a backup option to Codex, which has the better desktop app anyway, and much better usage limits.

I’m even considering cancelling Claude entirely.

