
Why would the LLM walk you through and not just do the nuanced task on its own?



I assume the human maintains some of the necessary context in their meat memory.


IMO, for many real business use cases, hallucinations are still a big deal. Once we have more reliable models, I think it makes sense to go down that path: the AI becomes the interface to the software.

But until we're there, a system that just provides guidance that the user can validate is a good stepping stone - and one I suspect is immediately feasible!
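
A minimal sketch of that "guidance the user validates" pattern - the model only proposes a step, and nothing runs until a human approves it. The llm.complete() call and the propose_action/apply_action names are assumptions for illustration, not any particular library's API:

    def propose_action(llm, task):
        # Ask the model for a suggested next step; nothing is executed here.
        return llm.complete(f"Suggest one concrete step to accomplish: {task}")

    def run_with_human_in_the_loop(llm, task, apply_action):
        suggestion = propose_action(llm, task)
        print(f"Model suggests:\n{suggestion}")
        # The human is the final check against hallucinated or unsafe steps.
        if input("Apply this step? [y/N] ").strip().lower() == "y":
            apply_action(suggestion)
        else:
            print("Skipped; nothing was changed.")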



