
I think that's going a bit too far. The big thing is to have the AI spit out small, digestible modules, check those for correctness, and then glue them together. It's the same way a person normally writes code; you're just having the AI do the grunt work.
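
To make that concrete, here's a minimal sketch of the small-module-plus-check loop in Python (the function and its spec are hypothetical, not from the thread): accept one small, self-contained piece from the AI, verify it with a few quick assertions, and only then glue it into the larger program.

    # Hypothetical AI-generated module: small enough to review line by line.
    def slugify(title: str) -> str:
        """Turn a title into a lowercase, hyphen-separated URL slug."""
        return "-".join(
            "".join(ch for ch in word if ch.isalnum())
            for word in title.lower().split()
        )

    # Quick correctness check before wiring the module into anything else.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Multiple   spaces  here ") == "multiple-spaces-here"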

This does have the caveat that reading code is usually harder than writing it, so the total time savings are far less than what AI companies claim. You're only in real danger when you don't review the code and just YOLO it into production.



If you have to describe the code to the AI and then read through each line of it anyway, why not just write the code yourself?


It takes way more time to explain, then re-explain, and then re-re-re-explain to the LLM what I want the code to do. No, it isn't because I don't understand LLMs; it's because LLMs don't understand, period. Trying to coax a fancy word predictor into outputting the correct code can be extremely frustrating, especially when I already know how to write the code.


Usually, if you have to re-re-re-explain, it means you didn't put those details in the first prompt. Writing the code yourself, you'd fall into the same trap, because you discover the details as you write, just like you discover them as the LLM writes.


Do you have access to GPT-10 or something like that? Because my experience is that you can give as much detail as you want, and you WILL need to re-re-re-explain regardless.


Because it's significantly faster than you are at both typing and looking up small details.


But I can write code faster than I can read it.



