Remember that professor who fed student essays to ChatGPT and asked if it wrote them? ChatGPT would mostly reply yes, and the professor proceeded to fail the students.
Err, out of an abundance of caution: the misspelling of "ChatGPT" that I [sic]'d is original to the Texas A&M professor, who repeated it multiple times in his email/rant. The HN poster quoted the professor literally, so I am transitively [sic]'ing the professor, not the HN poster. I am not mocking an HN poster's typo.
That's interesting. Unlike on Reddit, preserving an article's actual title isn't a priority on this site, and the moderators are only too happy to change titles on a whim. I'm surprised the spelling wasn't corrected by mods out of pedantry.
Funnily enough, ChatGPT had no more idea about that than about these legal cases; it lives in a state of perpetual hallucination, and making stuff up is its only mode of operation.
I wonder if this is a tactic so that the court deems this lawyer incompetent rather than imposing the (presumably much harsher) penalty for deliberately lying to the court?
I don't think the insanity plea works out well for lawyers. I'm not sure if "I'm too stupid to be a lawyer" is that much better than "I lied to the courts".
This explanation caused the show cause order to be expanded in scope to additional bases for sanctions against the lawyer, and extended to the other lawyer involved and their firm, so if it was a strategic narrative, it has already backfired spectacularly.
Why assume malice? Asking ChatGPT to verify is exactly what someone who trusts ChatGPT might do.
I'm not surprised this lawyer trusted ChatGPT too much. People trust their lives to self-driving cars, trust their businesses to AI risk models, trust criminal prosecutions to facial recognition. People outside the AI field seem to be either far too trusting or far too suspicious of AI.
I agree the lawyer shouldn't have trusted ChatGPT, but I'm not comfortable with the idea that the lawyer bears all the responsibility for using ChatGPT and Microsoft/OpenAI bear no responsibility for creating it.
"May occasionally generate incorrect information" is not a sufficient warning. Even Lexis-Nexis has a similar warning: "The accuracy, completeness, adequacy or currency of the Content is not warranted or guaranteed."
And in any case, it seems like you agree with me that the lawyer was incompetent rather than malicious.
The trick is, you need two LLMs: one that always lies and one that always tells the truth. Then you ask either LLM whether the other LLM would say it's reliable.
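(For anyone who hasn't met the underlying knights-and-knaves puzzle, here's a toy sketch of why the one-lie inversion works. The always-lying and always-truthful models are hypothetical stand-ins; no real LLM is anywhere near this well-behaved.)

    # Toy version of the knights-and-knaves trick above. "fact" is the ground-truth
    # answer to some yes/no question (e.g. "is this cited case real?"). Both model
    # names are hypothetical stand-ins, not real APIs.
    def truthful_llm(fact: bool) -> bool:
        return fact            # always reports the fact as-is

    def lying_llm(fact: bool) -> bool:
        return not fact        # always reports the opposite

    def ask_about_the_other(asked, other, fact: bool) -> bool:
        # Ask `asked`: "would the other model answer yes?", then invert the reply.
        reply = asked(other(fact))
        return not reply       # exactly one lie in the chain, so one inversion undoes it

    for fact in (True, False):
        assert ask_about_the_other(truthful_llm, lying_llm, fact) == fact
        assert ask_about_the_other(lying_llm, truthful_llm, fact) == fact

Either way the reply comes back inverted, so flipping it recovers the truth. With a real LLM, of course, there's no guarantee it is consistently anything.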
It turns out, asking an unreliable narrator if it's being reliable is not a sound strategy.