
Natural language to command line is the kind of functionality people dream about, but it seems incredibly dangerous. How can I trust it to do what I intend every time? ChatGPT certainly isn't good enough, as I learned trying it on files I thankfully had a backup of.


You can't trust it.

You need the same robust controls in place (and probably more) as you would for a more junior developer.

A human is influenced by the potential consequences of any proposed solution; for example, they might worry about losing their job if they get it wrong. An LLM will just spit out whatever answer is "most likely correct".
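
One concrete control, as a minimal sketch: never pipe the model's output straight into a shell. The Python snippet below assumes the LLM's suggestion arrives as a plain string (how you get it is up to you) and forces a human to read and approve each command before anything runs:

    import shlex
    import subprocess

    def run_with_confirmation(command: str) -> None:
        # Show the LLM-suggested command and require explicit approval.
        print("Suggested command:")
        print("  " + command)
        if input("Run it? [y/N] ").strip().lower() != "y":
            print("Aborted.")
            return
        # shlex.split plus the default shell=False keeps metacharacters
        # like ';' and '&&' from being interpreted, so exactly one
        # program runs with exactly these arguments.
        subprocess.run(shlex.split(command), check=True)

This trades convenience for safety: pipes and redirections won't work, which is rather the point when you don't fully trust the command's author.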



