
It doesn't lie like a duck. It unintentionally says falsehoods. Lying is intentional.


That's irrelevant to whether it lies like a duck or not.

The expression "if it Xs like a duck" means precisely that we should judge a thing to be a duck or not based on it having the external appearance and outward activity of a duck, ignoring any further subtleties, intent, internal processes, qualia, and so on.

In other words, "it lies like a duck" means: if it produces things that look like lies, it is lying, and we don't care how it got to produce them.

So, ChatGPT absolutely does "lie like a duck".


I know what the expression means and tend to agree with the duck test. I just disagree that ChatGPT passes the "lying duck" test. A "lying duck" would be more systematic and consistent in its output of false information. ChatGPT occasionally outputs incorrect information, but there's no discernible motive or pattern; it just seems random and unintentional.

If it looked like ChatGPT was intentionally being deceptive, it would be a groundbreaking discovery, potentially even prompting a temporary shutdown of ChatGPT servers for a safety assessment.


Abductive reasoning aside, people are already anthropomorphizing GPT enough without bringing in a loaded word like "lying" which implies intent.

Hallucinates is a far more accurate word.


What bothers me about "hallucinates" is the removal of agency. When a human is hallucinating, something is happening to them that is essentially out of their control and they are suffering the effects of it, unable to tell truth from fiction, a dysfunction that they will recover from.

But that's not really what happens with ChatGPT. The model doesn't know truth from fiction in the first place, but the whole point of a useful LLM is that there is some level of control and consistency around the output.

I've been using "bullshitting", because I think that's really what ChatGPT is demonstrating -- not a disconnection from reality, but not letting truth get in the way of a good story.


> we should judge a thing to be a duck or not based on it having the external appearance and outward activity of a duck, ignoring any further subtleties, intent, internal processes, qualia, and so on.

and the point here is we should not ignore further subtleties, intent, internal process, qualia, etc because they are extremely relevant to the issue at hand.

Treating GPT like a malevolent actor that tells intentional lies is no more correct than treating it like a friendly god that wants to help you.

GPT is incapable of wanting or intending anything, and it's a mistake to treat it like it does. We do care how it got to produce incorrect information.

If you have a robot duck that walks like a duck and quacks like a duck and you dust off your hands and say "whelp that settles it, it's definitely a duck" then you're going to have a bad time waiting for it to lay an egg.

Sometimes the issues beyond the superficial appearance actually are important.


>and the point here is we should not ignore further subtleties, intent, internal process, qualia, etc because they are extremely relevant to the issue at hand.

But the point is those are only relevant when trying to understand GPT's internal motivations (or lack thereof).

If we care about the practical effects of what it spits out (they function the same as if GPT had lied to us), then calling them "hallucinations" is as good as calling them "lying".

>We do care how it got to produce incorrect information.

Well, not when trying to assess whether it's true or false, and whether we should just blindly trust it.

From that practical aspect, which most people care about more than whether it has "intentions", we can ignore any of its internal mechanics.

Thus treating it as "beware, it tends to lie" will have the same utility for most laymen (and be a much easier shortcut) than any more subtle formulation.


It doesn't really matter.

This is what always bugs me about how people judge politicians and other public figures: not by what they've actually done, but by some ideal of what is in their "heart of hearts" and their intentions, arguing that they've just been constrained by the system they were in or whatever.

Or when judging the actions of nations, people often give all kinds of excuses based on intentions gone wrong (apparently forgetting that whole "road to hell is paved with good intentions" bit).

Intentions don't really matter. Our interface to everyone else is their external actions, that's what you've got to judge them on.

Just say that GPT/LLMs will lie, gaslight and bullshit. It doesn't matter that they don't have an intention to do that, it is just what they do. Worrying about intentions just clouds your judgement.


Hear hear!

Too much attention on intentions is generally just a means of self-justification and avoiding consequences and, when it comes right down to it, trying to make ourselves feel better for profiting from systems/products/institutions that are doing things that have some objectively bad outcomes.



