Hacker News

I think the "bullshit" argument is pretty solid. ChatGPT is really good at bullshitting you.

Talking to ChatGPT about a topic is like talking to someone with a superficial understanding of it who thinks they know more than they actually do, and who has read enough pop science articles to sound somewhat convincing. It all seems sensible at first, but the responses just don't hold up to scrutiny if you drill down.

When I ask ChatGPT about books, it confidently answers with summaries, but if you drill down you realise it has no detailed knowledge of the book, like a high school student who doesn't want to admit they didn't do the reading assignment.

When I ask programming questions, it will respond with confident answers, but the provided examples are almost always wrong in a way that shows that ChatGPT doesn't really understand what it is talking about.

It's really good at answering beginner questions that it has seen often on the internet, but it completely fails when asked about more subtle details. When asked follow-up questions, it often contradicts its previous answers.



> ...like talking to someone who has a superficial understanding of the topic who thinks they know more than they actually do, and who has read enough pop science articles to sound somewhat convincing.

...trained on HN too, I see.


With a slightly drier, more sneering tone, ChatGPT could outright replace HN for all things beyond the realm of software.


I agree wholeheartedly. As a BJJ brown belt, I asked it how to escape from an armbar. The response I received was a long paragraph that essentially amounted to "Don't panic, don't allow your elbow to get hyper-extended, try to make some space, and escape." While none of that is wrong, it won't get you out of an armbar.


You just inspired me to try this little experiment: "How do I perform an Ikkyo from a kneeling position?"

Answer:

--------

Ikkyo is a technique in aikido, a Japanese martial art, that is used to control an opponent's arm and take them down to the ground. There are several ways to perform ikkyo from a kneeling position, but the following is a basic step-by-step guide:

1 Start in a kneeling position, with your right knee bent and your left leg extended behind you.

2 Face your opponent and grasp their right wrist with your left hand.

3 Pull your opponent's arm towards your left shoulder while extending your right arm out to the side.

4 Rotate your body to the left, pulling your opponent's arm across your body and down towards your left hip.

5 As you rotate, bring your right arm up and over your opponent's arm, and then down towards your right hip.

6 Continue rotating your body until you are facing your opponent's back.

7 Use your left hand to push your opponent's elbow towards the ground, while simultaneously pulling their wrist towards your left hip.

8 Use your right hand to control your opponent's head and guide them down to the ground.

It's important to remember that aikido techniques should only be practiced under the supervision of a trained instructor, and should never be used outside of a controlled training environment.

--------

2 and 3 are just wrong (you grab the right wrist with your right hand), and the rest is... open to interpretation, if we want to be diplomatic.

Martial arts moves are definitely difficult to describe in a purely verbal way, but I think this is also a very good example of the real problem behind ChatGPT "working": what it produced looks like a glorified Markov chain built by scanning 200 aikido books (or webpages). There are different techniques where you really do grab the opponent's right wrist with your right hand, and maybe those were just statistically predominant in the dataset.
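To make the "glorified Markov chain" intuition concrete, here's a minimal bigram model in Python (the toy corpus and function names are mine, purely for illustration). It only tracks which word tends to follow which, so it can splice together fragments of different techniques while sounding locally fluent:

```python
import random

# Minimal bigram "Markov chain" text model: it only knows which word
# tends to follow which, with no understanding of what the words mean.
def train_bigrams(corpus):
    table = {}
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Toy "aikido manual" corpus mixing two techniques: the model can blend
# them mid-sentence because it only tracks local word statistics.
corpus = ("grasp their right wrist with your right hand then rotate "
          "grasp their right wrist with your left hand then pull")
table = train_bigrams(corpus)
print(generate(table, "grasp", 8))
```

The output reads like plausible instructions, but whether you end up with "your right hand" or "your left hand" is a coin flip over the training data, not a fact about the technique.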


Really interesting to hear your experiences. I guess it makes sense; why would ChatGPT be able to give detailed answers about the contents of books if it's only read summaries posted on the internet?

It sounds like an important next step is to work out how to get it to estimate how much it knows about a given topic, and to be clear about that in its answers. I think that's a key difference between someone who's well read on various topics and someone who's trying to bullshit you.


> I guess it makes sense; why would ChatGPT be able to give detailed answers about the contents of books if it's only read summaries etc. posted on the internet

The more interesting case is when it has read loads of summaries on the internet but they aren't particularly consistent. ChatGPT has access to plenty of summaries of Gravity's Rainbow, but it's a complex book with an ensemble cast, and there's relatively little overlap between what human reviewers picked out as the standout moments and key themes. So ChatGPT tells me the book has seven chapters, each named after a colour of the rainbow. This isn't remotely close to being true (arguably Gravity's Rainbow contains so many themes and interpretations that producing a summary which is unambiguously false is an accomplishment!). But if you're a word model, "colours of the rainbow" and "sections... each named after" match the theme "Rainbow", which appears to be a book title, and the result is syntactically valid; none of the individual plot elements mentioned in reviews are necessarily a better match.

tbf, ChatGPT is much better than most bots at recognising prompts it shouldn't be able to respond to and giving "I cannot offer specific advice on..." boilerplate responses; someone's invested a lot of effort training it to do that (and possibly also discouraged it from mentioning any of the sex and drug elements of Gravity's Rainbow). But it bullshits by default.



Do I now fall into the category of people who are bullshitting because they didn't do their reading properly?


Here's an example I tried which is either funny or insulting:

My question: Why did Buddha bring the dharma from the West?

Answer:

> The story of the Buddha bringing the dharma (or teachings) from the West is a metaphor that is often used in Buddhist literature to describe the Buddha's enlightenment. In this metaphor, the "West" represents a state of ignorance and suffering, while the "East" represents a state of enlightenment and understanding. According to the story, the Buddha was born and lived in the "West" before he became enlightened and brought the teachings of the dharma to the people of the world.

> This metaphor is meant to illustrate the idea that the Buddha's teachings are universal and applicable to all people, regardless of where they come from. It also suggests that the path to enlightenment is available to everyone, even those who are lost in ignorance and suffering.

> I hope this helps to clarify the meaning of this metaphor. Let me know if you have any other questions!

I get that I misquoted the koan (Bodhidharma, not Buddha) which was just my failing memory, not a subtle test, but it didn't correct me. More to the point, it completely misses the point of the koan and goes off on a tangent which I could just imagine coming out of some yahoo who's trying to sell Buddhism as a self-help book. It's not only wrong, it's wrong in precisely that kind of chirpy, condescending fashion which will, eventually, lead us to inventing a face for ChatGPT so we can bash it in with a few pounds of flax.

Anyway, here's actual Zen Buddhists talking about the koan:

https://www.treetopzencenter.org/bodhidharma-comes-from-the-...

Maybe this helps:

> In order to make a first stumbling approach to this koan you need to drop all concepts of meaning, zen, life, reality, and so forth. Once they are gone, then you can slowly move towards the koan. Then maybe you can make a stab at it. Knowing however that whatever you do—whatever stab you make will miss. So why stab? Why study? Why bother?

> I don’t know.


This pattern of typos and wrong idioms consistently trips up ChatGPT. You can ask it to explain an idiom that you completely made up, and instead of saying "that's not an idiom people use", it will just make some shit up and sound confident about it.

It reminds me of being in 2nd grade or something. I definitely did this. That's still pretty remarkable from a research angle. It makes the tool questionably useful (I don't really want to ask a 2nd grader for advice at work), but it feels like we aren't very many steps away from a college freshman, and that'd be interesting.


tbf, one of ChatGPT's biggest strengths is that it's not nearly as easily tricked as other chatbots

"I'm sorry, but I am not familiar with.... It does not appear to be a commonly used phrase or idiom" is one of its stock phrases, and I was pretty impressed with "It was not possible for Ferdinand Magellan to have circumnavigated your living room, as he lived and died more than four centuries ago". Some of the things you can trick it into acknowledging, like "the parable of the loaves and the fishes" or treating Allen Klein as the "sixth Beatle", would fool most humans too.

(Sometimes the denials go too far, like insisting that not only is there no "eighth Beatle" but also "no record of any additional members being referred to as...any other numbered beatle")


Well, a substantial part of that is that it doesn't have all the information (for humans, often lived experiences) that you have, so of course it can't come to a similar conclusion.


I think a key difference with humans is that ChatGPT doesn’t know that it doesn’t know.


ChatGPT doesn't think, even if it appears to do so.


Usually it does have all the information; it just isn't able to put it all together. If, for instance, you ask it to name the female tennis players who have won the Wimbledon tennis tournament twice, but not more than twice, it will give you a plausible list of names, but at least half of the names on the list will be wrong.

If you then ask it to tell you more about one of the players that has been wrongly named, it will actually be able to tell you the correct number of Wimbledon wins for that player. It does know the facts. It just doesn't know how to put the facts together into new useful knowledge.
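What's striking about this failure is that, once the facts exist as data, the "exactly twice" query is a trivial filter. A sketch in Python, with placeholder names and counts rather than real Wimbledon statistics:

```python
# Fact table: player -> number of titles.
# Names and counts are placeholders, not real Wimbledon statistics.
wimbledon_wins = {
    "Player A": 2,
    "Player B": 7,
    "Player C": 1,
    "Player D": 2,
}

# "Won twice, but not more than twice" is just a filter over the facts.
exactly_twice = sorted(p for p, n in wimbledon_wins.items() if n == 2)
print(exactly_twice)  # ['Player A', 'Player D']
```

A language model has no such explicit table to query; it only has the statistics of sentences mentioning wins, which is why it can recite each player's count on request yet fail the aggregate question.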


> but if you drill down you'll realise ChatGPT has no detailed knowledge about the book

I mean, why would you expect any other outcome? I genuinely don't understand the hype. This reminds me of how people lost their shit over the whole (king - man + woman = queen) thing a few years ago. All of this seems impressive on the surface, but there is nothing more to it.
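For anyone who missed that earlier hype: the (king - man + woman = queen) trick is nearest-neighbour search over word vectors. A toy sketch in Python, with hand-made three-dimensional vectors chosen so the analogy works by construction (real systems learn hundreds of dimensions from data):

```python
import numpy as np

# Toy embeddings; the dimensions loosely stand for (royalty, male, female).
vecs = {
    "king":   np.array([1.0, 1.0, 0.0]),
    "queen":  np.array([1.0, 0.0, 1.0]),
    "man":    np.array([0.0, 1.0, 0.0]),
    "woman":  np.array([0.0, 0.0, 1.0]),
    "person": np.array([0.0, 0.5, 0.5]),
}

# The famous analogy as plain vector arithmetic.
target = vecs["king"] - vecs["man"] + vecs["woman"]

def nearest(target, vecs, exclude):
    # Cosine similarity against every word except the query words.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vecs if w not in exclude),
               key=lambda w: cos(target, vecs[w]))

print(nearest(target, vecs, {"king", "man", "woman"}))  # queen
```

The arithmetic only "works" because the training data placed those words in a geometry where it happens to; there's no reasoning step anywhere in it, which is the parent's point.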


This is an empty dismissal.



