Not sure if this is where your head is, but I think there's a lot of value in integrating LLMs directly into complex software. Jira, Salesforce, maybe K8s - should all have an integrated LLMs that can walk you through how to perform a nuanced task in the software.


Imagine good error messages, with hints for mitigation and maybe smart retry w/ mitigations applied.
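A minimal sketch of that "smart retry" idea: catch the failure, ask a model for a suggested mitigation, apply it, and try again. The LLM is stubbed out here, and every name (`suggest_mitigation`, `run_task`, the `timeout_s` key) is illustrative rather than any real API.

```python
def suggest_mitigation(error_msg: str) -> dict:
    """Stand-in for an LLM call mapping an error message to a config tweak."""
    if "timeout" in error_msg:
        return {"timeout_s": 30}  # i.e. "try raising the timeout"
    return {}

def run_task(config: dict) -> str:
    """Toy operation that fails unless the timeout has been raised."""
    if config.get("timeout_s", 5) < 30:
        raise RuntimeError("timeout after 5s")
    return "ok"

def smart_retry(config: dict, attempts: int = 2) -> str:
    for _ in range(attempts):
        try:
            return run_task(config)
        except RuntimeError as exc:
            # This is where the "good error message" would surface to the
            # user, alongside the proposed fix, before retrying.
            fix = suggest_mitigation(str(exc))
            if not fix:
                raise
            config.update(fix)
    return run_task(config)

print(smart_retry({"timeout_s": 5}))  # prints: ok
```

The interesting product question is in the except branch: show the error plus the model's proposed mitigation, then either retry automatically or ask the user to confirm.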


Why would the LLM walk you through and not just do the nuanced task on its own?


I assume the human maintains some of the necessary context in their meat memory.


IMO, for many real business use cases, the hallucinations are still a big deal. Once we have models that are more reliable, I think it makes sense to go down that path - the AI is the interface to the software.

But until we're there, a system that just provides guidance that the user can validate is a good stepping stone - and one I suspect is immediately feasible!


A walkthrough is generally performed once, or at least infrequently. It would be a bad investment if that were the only use case.


A beginner tutorial is also not used frequently by users, but that doesn't make it a bad investment. If an LLM can help a lot with getting familiar with the tool, it could be pretty valuable, especially after a UI rework etc.



