
> He had, he told Judge Castel, even asked the program to verify that the cases were real. It had said yes.

It turns out, asking an unreliable narrator if it's being reliable is not a sound strategy.



Remember that professor who fed student essays to ChatGPT and asked if it wrote them? Mostly ChatGPT would reply yes, and the professor proceeded to fail the students.


https://news.ycombinator.com/item?id=35963163

("Texas professor fails entire class from graduating- claiming they used ChatGTP [sic]", 277 comments)

https://news.ycombinator.com/item?id=35980121

("Texas professor failed half of class after ChatGPT claimed it wrote their papers", 22 comments)


My main takeaway is that failing the second half of the class and misspelling ChatGPT leads to > 10x engagement.


Err, out of an abundance of caution: the misspelling of "ChatGPT" which I [sic]'d is original to the Texas A&M professor, who repeated the misspelling multiple times in his email/rant. The HN poster quoted the professor literally, and I am thus transitively [sic]'ing the professor – not the HN poster. I am not mocking an HN poster's typo.


That's interesting. Unlike on Reddit, preserving an article's actual title isn't a priority on this site, and the moderators are only too happy to change titles at their whim. I'm surprised the spelling wasn't corrected by the mods out of pedantry.


It still leaves the burning question of whether it's half or the whole pie. :V


My main takeaway is that the guy who registers chatgtp.com is going to make a lot of money by providing bogus answers to frivolous questions :-)


There will be lots of data-stealing ChatGPT plugins.


Funnily enough, ChatGPT had no more idea about that than about these legal cases; it lives in a state of perpetual hallucination, and making stuff up is its only mode of operation.


It hallucinates a sequence of tokens, and we hallucinate meaning.
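
To make that concrete, here is a toy sketch (purely illustrative; not any real model's internals or API) of what "a sequence of tokens" means. The generation loop only ever samples a plausible-looking next token from a probability distribution; there is no step anywhere in it where truth enters the picture.

    import random

    # Stand-in for the real model: made-up probabilities over a tiny vocabulary.
    def next_token_distribution(context):
        return {"the": 0.4, "case": 0.3, "was": 0.2, "fabricated": 0.1}

    def generate(prompt, n_tokens):
        tokens = list(prompt)
        for _ in range(n_tokens):
            dist = next_token_distribution(tokens)
            choices, weights = zip(*dist.items())
            # Sample whatever looks likely next: fluent, not necessarily true.
            tokens.append(random.choices(choices, weights=weights)[0])
        return tokens

    print(" ".join(generate(["According", "to"], 6)))

Whether the output happens to be true is an accident of the training data, not something the loop checks.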


Those of us with computer science knowledge understand this, but not the masses who are buying into the hype.


I wonder if this is a tactic to get the court to deem this lawyer incompetent rather than impose the (presumably much harsher) penalty for deliberately lying to the court?


I don't think the insanity plea works out well for lawyers. I'm not sure if "I'm too stupid to be a lawyer" is that much better than "I lied to the courts".


This explanation is part of why the scope of the show-cause order was expanded to additional bases for sanctions against the lawyer, and extended to the other lawyer involved and their firm, so if it was a strategic narrative, it has already backfired spectacularly.


Why assume malice? Asking ChatGPT to verify is exactly what someone who trusts ChatGPT might do.

I'm not surprised this lawyer trusted ChatGPT too much. People trust their lives to self driving cars, trust their businesses to AI risk models, trust criminal prosecution to facial recognition. People outside the AI field seem to be either far too trusting or far too suspicious of AI.


Quoted directly from my last session with ChatGPT mere seconds ago:

> Limitations

May occasionally generate incorrect information

May occasionally produce harmful instructions or biased content

Limited knowledge of world and events after 2021

---

A lawyer who isn't prepared to read and heed the very obvious warnings at the start of every ChatGPT chat isn't worth a briefcase of empty promises.

WARNING: witty ending of previous sentence written with help from ChatGPT.


I agree the lawyer shouldn't have trusted ChatGPT, but I'm not comfortable with the idea that the lawyer bears all the responsibility for using ChatGPT and Microsoft/OpenAI bear no responsibility for creating it.

"May occasionally generate incorrect information" is not a sufficient warning. Even Lexis-Nexis has a similar warning: "The accuracy, completeness, adequacy or currency of the Content is not warranted or guaranteed."

And in any case, it seems like you agree with me that the lawyer was incompetent rather than malicious.


Maybe it's a long-run tactic to prevent future clients from switching to ChatGPT-based solutions.


I mean...he doesn't have to say it: he is clearly incompetent!


The trick is, you need two LLMs, one which always lies, and one which always tells the truth. Then you ask either LLM whether the other LLM would say it's reliable.
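
For anyone who hasn't run into the underlying knights-and-knaves puzzle, a tiny truth-table sketch (hypothetical code, just to spell out the logic) shows why the nested question works: the answer always passes through exactly one liar, so the truth is always the negation of whatever you're told.

    def answers(model_lies, proposition):
        # What a model reports about a proposition: the liar negates it,
        # the truth-teller repeats it.
        return (not proposition) if model_lies else proposition

    truth = True  # suppose "it's reliable" is actually true
    for asked_lies in (False, True):
        other_lies = not asked_lies
        other_would_say = answers(other_lies, truth)
        reported = answers(asked_lies, other_would_say)
        who = "liar" if asked_lies else "truth-teller"
        print(f"asked the {who}: reported={reported}, actual={truth}")
    # Either way, reported == (not truth): just negate the answer you get.

The catch, of course, is that neither model describes an LLM, which is neither a consistent liar nor a consistent truth-teller.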


It is quite ironic that a lawyer made this mistake.

That's like asking the accused if he did it.



