Hacker News

There are systems built on top of LLMs that can reach out to a vector database or do a keyword search as a plug-in. There are already companies selling these things, backed by databases of real cases. Those work as advertised.
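The pattern those products use is retrieval-augmented generation: search a database of real documents first, then hand the hits to the model as context instead of letting it recall cases from memory. A minimal sketch, with a toy in-memory corpus and a naive keyword scorer standing in for a real vector database or legal search product:

```python
# Sketch of the retrieval-augmented pattern: retrieve real documents,
# then build a prompt that grounds the model in those sources.
# The corpus, scoring, and prompt wording are illustrative assumptions,
# not any particular vendor's implementation.

def keyword_search(corpus, query, top_k=2):
    """Rank documents by how many query terms appear in their text."""
    terms = set(query.lower().split())
    scored = []
    for doc_id, text in corpus.items():
        score = sum(1 for t in terms if t in text.lower())
        if score:
            scored.append((score, doc_id))
    scored.sort(reverse=True)  # highest-scoring documents first
    return [doc_id for _, doc_id in scored[:top_k]]

def build_prompt(corpus, query):
    """Assemble a prompt that restricts the model to retrieved sources."""
    hits = keyword_search(corpus, query)
    context = "\n".join(f"[{d}] {corpus[d]}" for d in hits)
    return (
        "Answer using ONLY the sources below; cite them by id.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

# Hypothetical case database for illustration.
cases = {
    "case-001": "Smith v. Jones (1999): airline liability for lost baggage.",
    "case-002": "Doe v. Acme (2005): product liability and duty to warn.",
}
print(build_prompt(cases, "airline baggage liability"))
```

The key difference from asking ChatGPT directly is that the model is told to cite retrieved sources, so every case it names can be traced back to a real record rather than invented on the spot.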

If you go to ChatGPT and just ask it, you’ll get the equivalent of asking Reddit: a decent chance of someone writing you some fan-fiction, or providing plausible bullshit for the lulz.

The real story here isn’t ChatGPT, but that a lawyer did the equivalent of asking online for help and then didn’t bother to cross-check the answer before submitting it to a judge.

…and did so while ignoring the disclaimer, shown every time, warning users that answers may be hallucinations. A lawyer. Ignoring a four-line disclaimer. A lawyer!



> If you go to ChatGPT and just ask it, you’ll get the equivalent of asking Reddit: a decent chance of someone writing you some fan-fiction, or providing plausible bullshit for the lulz.

I disagree. A layman can’t troll someone from the industry, let alone a subject-matter expert, but ChatGPT can. It knows all the right shibboleths, appears to have the domain knowledge, then gets you in your weak spot: individually plausible facts that just aren’t true. Reddit trolls generally troll “noobs” asking entry-level questions, or other readers. It’s the same reason trolls like that exist on Reddit but not Stack Overflow. And it’s why SO has a hard ban on AI-generated answers: the existing controls against that kind of trash answer rely on sniff tests that ChatGPT passes handily, right up until it’s put to actual scrutiny.



