
> Except it doesn't. It's much less to verify the tests.

This is only true when there is less information in those tests. You can argue that the extra information you see in the implementation doesn't matter as long as it does what the tests say, but the amount of uncertainty depends on the amount of information omitted from the tests. There's a threshold past which the effort of avoiding that uncertainty equals the effort of just writing the code yourself. Whether that matters depends on the problem you're working on and your tolerance for error and uncertainty, and there's no hard and fast rule for that.

But if you want to approach 100% correctness, you need to attempt to specify your intentions 100% precisely. The fact that humans make mistakes and miscommunicate their intentions doesn't change the basic fact that a human has to communicate an intention for a machine to fulfill it. The more precise the communication, the more work is involved, regardless of whether you're verifying that precision after something else generates it or generating it yourself.
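
To make that concrete, here's a toy sketch (hypothetical names, Python), showing a test that passes while leaving a lot of the implementation's behaviour unspecified:

    # Toy illustration: the test only pins down part of the behaviour.
    def dedupe(items):
        # Detail the test never mentions: this drops the input's ordering,
        # which a caller may silently depend on.
        return list(set(items))

    def test_dedupe_removes_duplicates():
        result = dedupe([3, 1, 3, 2])
        assert sorted(result) == [1, 2, 3]  # passes
        assert len(result) == 3             # passes

Both assertions hold, yet ordering, behaviour on unhashable items, and so on are all information the test omits; that residual uncertainty only goes away by reading the implementation or by writing (and verifying) more tests.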

> I can get the same level of uncertainty in far less time with an LLM. That's what makes it great.

I have a low tolerance for uncertainty in software, so I usually can't reach a level I find acceptable with an LLM. Fallible people who understand the intentions behind a codebase and how it currently functions have a capacity that a statistical amalgamation of tokens, trained on fallible people's output, simply does not have. People may not use that capacity to verify alignment between intention and execution well, but they have it.

Again, I'm not denying that there are plenty of problems where the level of uncertainty involved in AI-generated code is acceptable. I just think it's fundamentally true that extra precision requires extra work; there's simply no way around that.



> I have a low tolerance for uncertainty in software

I think that's what's leading you to the unusual position that "This is only true when there is less information in those tests."

I don't believe in perfection. It's rarely achieved despite one's best efforts; it's a mirage. What we can realistically look for is a statistical level of reliability that tests help achieve.

At the end of the day, it's about delivering value. If you can on average deliver 5x value with an LLM because of the speed, or 1.05x value because you verified every line of code three times and avoided a rare bug that neither you nor the LLM thought to test for (compared to the 1x value of a non-perfectionist developer), then I know which one I'm choosing.



