
In the other discussion of this topic, a lot of people said the lawyer should be disbarred, but personally I think people should be able to trust the tools marketed by trillion dollar tech companies, and a lot of the blame should be placed on Microsoft/OpenAI for overhyping ChatGPT and understating how likely it is to mislead.

And every response from ChatGPT should be preceded by a warning that it cannot be trusted.



> And every response from ChatGPT should be preceded by a warning that it cannot be trusted.

It kind of is - the ChatGPT site has this as a permanent fixture in the footer:

> ChatGPT may produce inaccurate information about people, places, or facts.

That's arguably ineffective though - even lawyers evidently don't read the small print in the footer!


Monty Python nicely addressed this, over 50 years ago.

> Mr. Hilton: Oh, we use only the finest juicy chunks of fresh Cornish ram's bladder, emptied, steamed, flavoured with sesame seeds, whipped into a fondue, and garnished with lark's vomit.

> Inspector: LARK'S VOMIT?!?!?

> Mr. Hilton: Correct.

> Inspector: It doesn't say anything here about lark's vomit!

> Mr. Hilton: Ah, it does, on the bottom of the box, after 'monosodium glutamate'.

> Inspector: I hardly think that's good enough! I think it'd be more appropriate if the box bore a great red label: 'WARNING: LARK'S VOMIT!!!'

> Mr. Hilton: Our sales would plummet!

https://youtu.be/3zZQQijocRI

Really, it should open every conversation with “by the way, I am a compulsive liar, and nothing I say can be trusted”. That _might_ get through to _some_ users.


Humor aside, I disagree. There are basically three types of people: those who learn by reading, those who learn by observation, and the rest, who just have to pee on the electric fence for themselves.


Worse, it's buried in the middle of other fine print:

> Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts. ChatGPT May 24 Version

And it really understates the problem. It should say: Warning! ChatGPT is very likely to make shit up.


Especially lawyers.

Half the job of lawyers is making people add useless warnings to everything, which everybody then ignores.

May contain sesame. Your mileage may vary. All the characters are fictional.


It's right there on the home page under "Limitations"

"May occasionally generate incorrect information"

Everyone knows gasoline is flammable, but there are still people who smoke while filling their gas tank.


There is a warning each time you create a new thread, and always at the bottom of the page.

I think people should check whether a tool advertises itself as unreliable — the warning is on the same page as the tool itself.



