
I'm sorry, but I feel like I have to amend your scenarios to reflect the accuracy of LLMs:

> Quick [inconsequential] fact checks, quick [inconsequential] complicated searches, quick [inconsequential] calculations and comparisons. Quick [inconsequential] research on an obscure thing.

That amendment is vital because LLMs are, in fact, not reliably factual. As such, you cannot base consequential decisions on their potential misstatements.



These are simply implementation failures, not fundamental ones. You should be using LLMs to gather information and references that you then verify. There are even hallucination detectors that automate some of that checking.
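
To make that concrete, here is a minimal sketch of the "verify the references" step, using only the Python standard library. It assumes the LLM's output includes source URLs for its claims; it only confirms that each URL resolves, not that the content supports the claim, and the function name is illustrative rather than taken from any particular tool.

    # Assumption: the LLM output includes source URLs for its claims.
    # This only checks that each cited URL resolves; confirming that the
    # content actually supports the claim still requires a human or a
    # dedicated fact-checking step.
    import urllib.error
    import urllib.request

    def check_references(urls, timeout=5):
        """Map each URL to True if it resolves (HTTP status < 400), else False."""
        results = {}
        for url in urls:
            req = urllib.request.Request(
                url, method="HEAD",
                headers={"User-Agent": "reference-checker"})
            try:
                with urllib.request.urlopen(req, timeout=timeout) as resp:
                    results[url] = resp.status < 400
            except (urllib.error.URLError, ValueError):
                results[url] = False
        return results

    if __name__ == "__main__":
        cited = ["https://example.com", "https://example.com/no-such-paper"]
        for url, ok in check_references(cited).items():
            print("OK  " if ok else "FAIL", url)

A dead or fabricated link is the cheapest hallucination to catch; anything that passes this check still needs to be read.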

If you are treating LLMs like all-knowing crystal balls, you are using them wrong.




