
Here’s something scary I recently learned: Cedars-Sinai in LA (a major hospital) used to have 15 lawyers on contracts. Now they have 1, plus an AI app reviewing contracts.

Those are 14 lawyers gone. That’s more than a 3% bump in “productivity”; it’s 14 people who lost their jobs. And that’s with the current state of things.




Lawyer here - there are fields, and law is definitely one of them, where labor is the major cost.

That labor is not often used sanely.

It is common to use lawyers costing hundreds per hour to do fairly basic document review and summarization. That is, to produce a fairly simple artifact.

Not legal research, not opinionated briefing.

But literal: Read these documents, produce a summary of what they say.

While I can't say this is the same as what you are talking about ("contract review" means many things to many people), I'm not even the slightest bit surprised that AI is starting to replace remarkably inefficient uses of labor in law.

I will add: lots of funding is being thrown at AI legal startups building products that do document review and summarization, but that's not the big fish, and it will be a commodity very quickly.

So I expect there will be an ebb and flow of these sorts of products as the startups either move on to things that enable them to capture a meaningful market (document review ain't it), or die and leave their customers hanging :)


But how do you know if AI is able to pick out the salient bits for summarization? Like some nasty poison pill hidden in there? Wouldn’t you want an expert for such things?


Your last statement hints at a big consideration: accountability. One lawyer on a formerly 15-lawyer staff is accountable for 15 lawyers’ worth of potential mistakes, and we know that “but the AI did it!” doesn’t hold water in law.


There's a bunch of assumptions here.

The main one I think is probably wrong is that there were 15 lawyers' worth of work being done before (when measured by some average-lawyer standard).

For example, it's possible there was only really 1 lawyer's worth of work being split 15 ways, so each lawyer was really only responsible for 1/15th of an average lawyer's amount of work :)

In that scenario, they'd only be responsible for 1 average lawyer's worth of mistakes now.

Is that realistic? Who knows. I've definitely seen that level of "waste" (for lack of a better term) before in law firms :)

Even in the scenario you are positing, it's not obvious it matters as much as you seem to think it does.

If the per-lawyer mistake rate was low enough, it may be that 15x that rate simply does not matter.

These kinds of contracts are fairly standardized, and so they are mostly looking at the differences from last time. Those differences are often not legal so much as factual, i.e., the table of costs changed, not the legal responsibility.

So the main thing mistakes get you is maybe cost (if mistakes matter at all).

This isn't like they are constantly seeing brand-new, from-scratch contracts that require brand-new analysis.

Even if they were, like I said, the main issue with a mistake is cost.

For all we know, the AI company also agreed to indemnify them for a certain rate of mistakes or something (which wouldn't be hard to get insurance for).

I'm not actually a fan of AI taking necessary jobs, but I think the view here that this is sort of life or death is strange.

I'd be much more worried about AI handling criminal defense in some semi-autonomous fashion than this.


> There's a bunch of assumptions here.

Undoubtedly. Happy to be disabused of my misgivings.

> The main one I think is probably wrong is that there were 15 lawyers' worth of work being done before (when measured by some average-lawyer standard). For example, it's possible there was only really 1 lawyer's worth of work being split 15 ways, so each lawyer was really only responsible for 1/15th of an average lawyer's amount of work :)

> In that scenario, they'd only be responsible for 1 average lawyer's worth of mistakes now. Is that realistic? Who knows. I've definitely seen that level of "waste" (for lack of a better term) before in law firms :) Even in the scenario you are positing, it's not obvious it matters as much as you seem to think it does.

> If the per-lawyer mistake rate was low enough, it may be that 15x that rate simply does not matter.

Well, having done quite a bit of work with attorneys no longer practicing law, I’m definitely familiar with the gripes about inefficiency and running up hours, especially during litigation at the larger firms. But even granting they’re not as efficient as they could be, assuming 1400% inefficiency or whatever seems much less reasonable than assuming 0% inefficiency. It’s obviously neither of those extremes, but I have a hard time imagining it’s even close to the former.

> These kinds of contracts are fairly standardized, and so they are mostly looking at the differences from last time. Those differences are often not legal so much as factual, i.e., the table of costs changed, not the legal responsibility. So the main thing mistakes get you is maybe cost (if mistakes matter at all).

> This isn't like they are constantly seeing brand-new, from-scratch contracts that require brand-new analysis. Even if they were, like I said, the main issue with a mistake is cost.

I don’t actually know what kind of contracts they were working on, so I’ll have to take your word on that.

> For all we know, the AI company also agreed to indemnify them for a certain rate of mistakes or something (which wouldn't be hard to get insurance for).

I was involved with the AI legal tool scene indirectly for about a decade, but haven’t been for a couple of years, and am only getting info secondhand from people I know who still are. (Clicking through the top results on Google, I’m actually on a first-name basis with the first founder there was a picture of. I didn’t know he started a new company though, so I guess we’re not THAT close!) My knowledge could be out of date, but I’ve not seen one of these services offer indemnity for mistakes, and ostensibly for good reason: the latest data I’ve seen shows that attorney-targeted legal tools make more mistakes than people hoped. I also know nothing about legal insurance, but I don’t think it would be smart to insure an organization that just canned 94% of its counsel in favor of tools known not to be particularly reliable, when its workload probably has not changed. Whether they did it because they care more about payroll than reliability, or because they had the poor judgment to maintain 1500% staffing levels until then, it still seems like a pretty poor bet.

> I'm not actually a fan of AI taking necessary jobs, but I think the view here that this is sort of life or death is strange.

I certainly don’t think it’s life or death, and of all the places in our society that could use a little more efficiency, legal services is right up there. That said, the fact that it’s not life or death doesn’t mean it’s totally fine either.

> I'd be much more worried about AI handling criminal defense in some semi-autonomous fashion than this.

Haha, frankly, I don’t give a damn if the hospital signs a terrible contract that costs them a bazillion dollars, as long as they don’t pull a Steward and stop purchasing basic medical supplies.

SURELY public defenders are an attractive target for the outright person-replacing sort of efficiencies, but I have a hard time imagining that would pass muster. I can definitely see some supposedly adversarial plea-agreement system being implemented by more authoritarian jurisdictions as an incremental expansion of the NN sentence-recommendation type of tools. My gut says the bigger semi-automation risk there is overworked public defenders being lulled into false confidence in legal and general office LLM-type tools (message summaries, auto-scheduled appointments, etc.) without having the time to give them the scrutiny they need. I’d be shocked if that wasn’t already happening, though. Hey, maybe with a bunch of attorneys having newfound time on their hands, they can bone up on criminal law and provide some relief for the public defender staffing crisis.


I mean, it's usually not that adversarial, but lawyers miss this stuff too sometimes.

Like anything else, it's a question of performance: if the AI misses it at the same or a lower rate than the lawyers, ...

If not, it's a question of whether the higher rate is acceptable. For these kinds of contracts, that's mostly about cost.


Probably the bigger factor is the consequence of what’s missed, not just the rate.
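
To make the rate-vs-consequence point concrete, here’s a toy expected-cost sketch; every number in it is invented for illustration:

    # Toy sketch: expected loss per contract = miss rate x average consequence.
    # All figures are made-up placeholders, not real data.
    lawyer_miss_rate = 0.01      # assumed: lawyers miss 1% of bad clauses
    ai_miss_rate = 0.03          # assumed: the AI misses 3%
    avg_consequence = 50_000     # assumed: average dollar cost of a missed clause

    lawyer_expected_loss = lawyer_miss_rate * avg_consequence   # $500 per contract
    ai_expected_loss = ai_miss_rate * avg_consequence           # $1,500 per contract

A 3x worse miss rate on routine clauses can still come out far ahead of 14 salaries; one rare catastrophic clause flips that math entirely, which is exactly the consequence part.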

But zoom out and we see job loss that gets reported as productivity. What do those lawyers and law degrees do now that’s more productive for society? They’re already near the top of an information economy; we’d need to invent an entire next phase of it. That takes time and pain management, and yet those 14 jobs are gone now.

I used to think all of this was much further away and we had time. But now I’m seeing that we don’t actually need hallucinations fixed, actual AGI, or major quality boosts before displacement begins.


Going to be interesting to see the MTTL (mean time to lawsuit) on this. Sounds grossly negligent. I feel kinda sorry for the lonely lawyer.


Highly doubtful it is either negligence or something they will get sued over. I don't even quite understand what you think they would get sued for.

Most of the policing here would be done by courts and bar associations.


As someone who spends a lot of time battling misuse/overuse of PII, I am adopting MTTL as a term of art. :D


MTTL really applies more to a startup portfolio: how long does the average startup from an incubator operate before it receives its first lawsuit?

For your situation, you really want to measure how many PII records you handle per lawsuit. That way you can accurately measure the lawsuit cost per record and compare it to revenue per record to see if you're profitable.
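
A back-of-the-envelope sketch of that math, with made-up numbers:

    # Half-joking metric: compare revenue per PII record to lawsuit cost per record.
    # Every input is a made-up placeholder.
    records_handled = 50_000_000     # assumed: PII records handled in the period
    lawsuits = 2                     # assumed: lawsuits received in the same period
    avg_lawsuit_cost = 2_000_000     # assumed: average all-in cost per lawsuit
    revenue_per_record = 0.12        # assumed: revenue attributable to each record

    lawsuit_cost_per_record = lawsuits * avg_lawsuit_cost / records_handled  # $0.08
    print(revenue_per_record > lawsuit_cost_per_record)  # True -> "profitable"

In reality lawsuit costs are dominated by the tail, not the average, which is part of the joke.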


That's great. That means potentially slightly cheaper healthcare. A company shouldn't need that many lawyers unless it's a law firm.


Do you have an article that has more information about this? I'd really like to learn more about what happened.


Can you share a source/article?



