
> No, it did not “double-check”—that’s not something it can do! And stating that the cases “can be found on legal research databases” is a flat out lie. What’s harder is explaining why ChatGPT would lie in this way. What possible reason could LLM companies have for shipping a model that does this?

At what point does OpenAI (or any other company) become legally responsible for this kind of behavior from their LLMs? I'm not excusing the lawyer for their reckless and irresponsible use of a tool they didn't understand, but it's becoming increasingly clear that people are trusting LLMs far more than they should.

In my opinion, it's dangerous to keep experimenting on the general population without holding the experimenters accountable for the harm that occurs.



OpenAI (or any other company) becomes liable when it markets a product to be used in place of lawyers (or doctors, engineers, or whatever other profession).

as long as we're hiring professionals to do these jobs, part of that real, actual human's job is to accept liability for their work. if a person wants to use a tool to make their job easier, it's also their job to make sure that the tool is working properly. if the human isn't capable of doing that, then the human doesn't need to be involved in this process at all - we can just turn the legal system over to the LLMs. but for me, i'd prefer the humans stay responsible.

in this case, "the experimenter" was the lawyer who chose to use ChatGPT for his work, not OpenAI for making the tool available. and yes, i agree: the experimenter should be held accountable.


> At what point does OpenAI (or any other company) become legally responsible for this kind of behavior from their LLMs?

When they sell their bots into areas where lying is illegal, i.e., when a company pretends to practice law.

OpenAI doesn't pretend ChatGPT is a lawyer, and for good reason. The lawyer who decided to outsource his work is an idiot and can't shift the blame to the tool he decided to abuse.


> At what point does OpenAI (or any other company) become legally responsible for this kind of behavior from their LLMs?

When AutoCAD is responsible for an architect's shitty design.


Never, I'd say.

Unless they advertise it as having the capability, it's got nothing to do with them.

If I hit someone with a hammer, that shit's on me, not the manufacturer.



