Hacker News

Thanks for the great context. The lawyer should be disbarred. He doubled down when he was caught, and then blamed ChatGPT. What do you bet he was trying to settle really quickly to make this all go away?

Here is the direct link to the ChatGPT hallucination the lawyer filed in response to the judge's order to produce the actual text of the case: https://www.courtlistener.com/docket/63107798/29/1/mata-v-av...



Did he "double down" or did he genuinely not understand that ChatGPT was making stuff up the whole time?


There are plenty of people on the internet (including here) who think ChatGPT is a "smart expert" and who don't understand that ChatGPT can easily make up stuff that looks very convincing at first glance.

And if you challenge them, they also double down and say "ChatGPT is the future" etc.

So the lawyer is not alone...


So you're telling me GPT lied when it disproved Church-Turing using only 3 lines of Coq?

What use is a machine that cannot be trusted? If I wanted that I'd use the cloud...


And I'm putting 90% of the blame for that onto the mainstream media, reporting irresponsibly and perhaps maliciously in order to milk a fear cycle out of AI. And some of it on OpenAI/Google execs, now cynically drumming up nonexistent existential threats to facilitate regulatory capture.


Is it an interesting and relevant story that many people would be keen on reading? Nah, must be a "msm" conspiracy. Can't wait for the Weinstein take on this.


Stuff like it is certainly the future, though. The primary flaw of OpenAI's GPTs is that they're just dumb text transformers, predicting "what's the next best word to follow". It just so happens that they can answer a variety of questions factually, simply because they were trained on real (and imaginary) facts.

It needs to be backed up by a repository of hard knowledge, using the transformer part only to generate sentences based on that knowledge.

At the same time, as much as it currently hallucinates it's still nothing compared to the misinformation that humans perpetuate on a daily basis.


The original bogus citations may be excusable as a genuine misunderstanding of ChatGPT, i.e. he falsely thought he had a research assistant feeding him accurate quotes.

But there is simply no good-faith excuse for filing the transcripts of the cases without so much as skimming them, once doubts had been raised. I’m not a lawyer, but even a cursory look at the Varghese case transcript shows that it’s gibberish: the name of the plaintiff changing inexplicably, the plaintiff filing bankruptcy (of two different kinds) as a tactical move, etc. Another transcript purports to be about a passenger suing an airline over being denied an exit row seat. As soon as you start reading the “transcripts”, you see that something is seriously off about them, compared to the two real (but irrelevant) cases cited.


Of course I do not know, but he should have come clean: "Hey, I can't find this case in Westlaw, but ChatGPT found it and produced it." Instead he just submitted it as-is, right out of ChatGPT. Alarm bells had to be going off in his mind that a federal court decision in a lawsuit was less than 5 pages long.


Forward thinking for him to try out ChatGPT for his work. Nothing wrong with experimenting with a potentially helpful tool.

But just as I review and correct code snippets it produces, he should have verified the results, because nothing indicated to him that they were any good (besides the fact that they were well written).

I'm pretty sure plenty of other lawyers are experimenting with ways to use ChatGPT without being quite as naive.

This is 100% on this guy's uncritical thinking.


Doesn’t matter. He is responsible for what he files.


It matters in terms of remediation: incompetence implies that lawyers require better technical education on LLMs, while malice implies that the lawyer has violated an already established rule or law.

Lawyers undergo continuing legal education throughout their careers; in many (most?) jurisdictions, it’s mandatory. “LLMs are not legal search engines” as a CLE topic in the next decade would not surprise me remotely.


Either way it's inexcusable, they should be disbarred if they are this incompetent.


Understand: some lawyers finished last in their class. Cramming for a bar exam != intelligence.

Don't let one dumbass be an example of how all lawyers are.


I don’t think I implied that all lawyers are like this?


Intended or not, the plural here, when discussing a single lawyer, left me with that impression:

"incompetence implies that lawyers require better technical education on LLMs"

Others may have a different take?


That should be read as "the presence of incompetence implies that those incompetent lawyers [...]." Sorry if you found the phrasing ambiguous.

(The only really important part of the original comment is the part about CLEs: we have an entire professional educational system that ought to be able to accommodate subjects like this.)


It should.


"Don’t test in production" is the remediation. And if you don’t know that already, you need to go back to law school.


Yes but be merciful to an unfortunate fool who believed in technology! ChatGPT proved, like the Ouija board, to be the very voice of Satan himself for this lawyer. Bwahahahaaaaaah!8-)


I think the big question is... what was this guy doing 2 years ago? Was his stuff real work, or was he finding a less sophisticated way of phoning it in?

It seems improbable that someone who did all the hard work and knew how to do it would suddenly stop doing that. Such a work ethic tends to be habit-forming, or so I had thought.


The real icing on the cake was that he initially had some doubts, so he asked ChatGPT if the cases it had cited were genuine.

ChatGPT said yes. And that was good enough for him.


The above link wasn't the only hallucination(!)

The lawyer kept digging the hole deeper and deeper, and (as a non-expert) I agree that it seems that the lawyer is at serious risk of being disbarred.

Interesting documents are from #24 onwards:

- #24 (https://storage.courtlistener.com/recap/gov.uscourts.nysd.57...): "unable to locate most of the case law cited in Plaintiff’s Affirmation in Opposition, and the few cases which the undersigned has been able to locate do not stand for the propositions for which they are cited"

- #25 (https://storage.courtlistener.com/recap/gov.uscourts.nysd.57...) & #27 (https://storage.courtlistener.com/recap/gov.uscourts.nysd.57...): order to affix copies of cited cases

- #29: attached the cases - later revealed to be a mixture of fabricated (bogus) cases and irrelevant ones

- #30 (https://storage.courtlistener.com/recap/gov.uscourts.nysd.57...): "the authenticity of many of these cases is questionable" - polite legal speak for bogus. And "these cases do exist but submits that they address issues entirely unrelated to the principles for which Plaintiff cited them" - irrelevant. And a cutting aside that "(The Ehrlich and In re Air Crash Disaster cases are the only ones submitted in a conventional format.)" - drawing attention to the smoking gun for the bogus cases

- #31 (https://storage.courtlistener.com/recap/gov.uscourts.nysd.57...): an unhappy federal judge: "The Court is presented with an unprecedented circumstance. A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases. ... Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations" ---- this PDF is worth reading in full, it is only 3 pages & excoriating

- #32 affidavits, including the ChatGPT screenshot

- #33 (https://storage.courtlistener.com/recap/gov.uscourts.nysd.57...): an even more unhappy judge: invitation for the lawyer & law firm to explain why they "ought not be sanctioned"


A dry quote from the defendant in #24 above:

"Putting aside that there is no page 598 in Kaiser Steel..."



