
My biggest concern with tools like this is reproducibility and maintainability. How deterministically can we go from the 'source' (natural language prompts) to the 'target' (source code)? Assuming we can't reasonably rebuild from source alone, how can we maintain a link between the source and target so that refactoring can occur without breaking our public interface?


This is a valid concern, and we are still experimenting with how to do this right. A combination of preserving the reasoning history, keeping the generated code, and using tests to enforce the public interface (and fix it if anything breaks) looks promising; there's a rough sketch of the test idea below.

I think the crucial part is indeed not deterministically regenerating code from natural language, but taking an existing state of the codebase plus the spec and "continuing the work" from there.
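
Roughly, the test-enforcement side could look like this. This is just a minimal pytest sketch; the payments/charge/Receipt names are made up for illustration and are not our actual API:

    # Minimal sketch of pinning a generated module's public interface with tests.
    # "charge"/"Receipt" stand in for whatever the tool generates from the spec.
    import inspect
    from dataclasses import dataclass


    # --- stand-in for the generated code ---
    @dataclass
    class Receipt:
        amount_cents: int
        currency: str


    def charge(amount_cents: int, currency: str) -> Receipt:
        return Receipt(amount_cents, currency)


    # --- hand-written, interface-pinning tests that survive regeneration ---
    def test_public_signature_is_stable():
        # Fails if a regeneration renames or reorders the public parameters.
        sig = inspect.signature(charge)
        assert list(sig.parameters) == ["amount_cents", "currency"]


    def test_charge_happy_path():
        # Fails if behavior visible through the public interface changes.
        receipt = charge(1099, "USD")
        assert receipt.amount_cents == 1099
        assert receipt.currency == "USD"

The point is that the tests are written against the spec rather than the generated code, so a regeneration that drifts from the public interface fails loudly instead of breaking callers silently.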


Pretty simple: it’s just like any other abstraction. This kind of AI won’t work for problems nobody has already solved, because LLMs are trained on existing code. When you build an abstraction on top of that, you’d better hope the code underneath is good.

So my question would be, what is the use case?

I guess it’s more like planning software rather than implementing it.

You can plan your software pretty well with ChatGPT, but that only helps you plan; it doesn’t really do the job for you.



