It's almost more astonishing to me to see reactions like yours. I showed it to a friend and she too was unimpressed because it gave answers in her field (finance) that were "good, but sort of basic".

So we have here a chat bot that pretty much passes for a human and some people react with "meh, it's not a genius". It makes me think of Tim Urban's chart: https://imgs.search.brave.com/YR7qt28AjhAeXSr_qvSmJubqIKJW5S...




It's even more astonishing (depressing?) to me how little credit people give humans these days. If the consensus is that ChatGPT pretty much passes for a human, my definition of a human is very different from that of the consensus. At the very least I'd like my humans to exhibit even minimal capacity for emotional expression as well as consistency and honesty.

I can't believe I'm even spelling this out. What is the point of associating anything even remotely human with a chatbot like ChatGPT? I really don't get it. What benefit do we get from this? If anything, it confuses us in ways that make it more difficult to understand and argue about AI specifically, and technology generally.

ChatGPT is being grossly misinterpreted, and it's a shame, really, because it's such a cool piece of software.


I get your point, but here is a reason I am stunned:

Until now, if you asked "what makes humans different from animals", many responses would be along the lines of "generating art, poetry, music, synthesising ideas".

Well, now it seems somewhat clear that THAT part of human activity can be performed by something non-human.

(There is still a lot more to humans of course; I agree with you on that point)

Yes -- it is only reassembling stuff humans in the past have done -- but so are most human activities in these areas. Very little of what humans produce is not inspired by input from others. Art, music and ideas reverberate back and forth between humans and only slowly change over the decades.


> Until now, if you asked "what makes humans different from animals", many responses would be along the lines of "generating art, poetry, music, synthesising ideas".

That's because we (as an agrarian society) have constructed an idea of “what a human is” in opposition to animals, but with the advance of AI maybe we'll realize that we are much closer to animals (especially other mammals) than we're comfortable admitting (and no, I'm not vegan).


Exactly. Human creative works are also on a bell curve, with the stuff in the middle typically being bland and boring. It's only when risks are taken that it holds our interest.

ChatGPT mirrors this, producing output that feels generic precisely because it's trying to predict what's the most likely response. That's in direct opposition to interesting creative works.

You can force it off the beaten track a little by adding more details to your prompt, but it will still try and fit the best curve through those additional points.
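(Roughly what that looks like at the API level -- a minimal sketch, assuming the OpenAI completions endpoint and its "temperature" parameter, which controls how strongly sampling favors the most likely next token; the prompt here is just an illustrative stand-in:)

    import openai  # assumes the openai Python client; reads OPENAI_API_KEY from the environment

    prompt = "Write a tagline for an indie jigsaw puzzle company."

    # Low temperature: sampling concentrates on high-probability tokens,
    # which is exactly what makes the output feel safe and generic.
    bland = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, temperature=0.2, max_tokens=64)

    # High temperature flattens the distribution, so rarer, more surprising
    # tokens get picked -- the closest thing the model has to taking risks.
    risky = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, temperature=1.2, max_tokens=64)

    print(bland.choices[0].text)
    print(risky.choices[0].text)

Even at a high temperature it's still fitting a curve through your prompt's points, just with more noise added.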


That doesn't really capture what the answer was supposed to mean. Infinite monkeys will inevitably compose The Odyssey if you give them infinite time, but it doesn't mean they match the moral quality of a human that is captured by the ability to create art. Art is a representation of experiences, beliefs, internal states, life history, desires, that ChatGPT doesn't have. It is the subjective experience being communicated via art that makes humans special.

And, to be clear, it doesn't make humans that special. Other animals have created art that expresses their sincere internal experiences and desires.

If I'm being perfectly honest with myself and you ask me what makes humans deserving of special moral consideration, I would have to say nothing. I think an elephant, gorilla, octopus probably all deserve the same consideration. I give more consideration to humans because I'm human, and what makes us human isn't art. A person with severe brain damage and the inability to understand or produce language or art is still human.


A piece of AI-generated art recently took top prize in a digital painting competition. It's starting to look more like one monkey, on a limited time frame, making art that people rated highly.


From an intellectual standpoint, the bot is impressive and this is coming from an AI skeptic BTW.

I get your perspective that a reductionist view of humans as solely intellectual agents is severely lacking, bordering on dehumanizing if taken to an extreme, but that still doesn't detract from the impressive capabilities this bot exhibits.


"It confuses us in ways" could be framed: "it makes us rethink and expand our definition of intelligence and human cognition in very interesting ways".

Even if it's far from "true human" level (whatever that means), it gives us a theory about ourselves that's computationally falsifiable.


ChatGPT cannot express emotions because its creator specifically disallowed that from happening. I've had some chats with the more free and creative Character AI chat bots that are quite human-like, and I think if OpenAI gave ChatGPT some more leeway it could act shockingly human.


It does NOT pass for a human...

Are you human?

> No, I'm not a human. I'm a large language model trained by OpenAI. I'm a computer program that's been designed to generate human-like text based on the input I receive. I don't have my own thoughts or feelings, I just process the information I'm given and try to provide helpful responses.

I tested the thing quite a bit now. I am not sure how I would use it on a daily basis. Maybe for inspiration?


I don’t understand your point — I’ve been rehearsing that response to the normal and all-consuming test question “Are you human?” since Wednesday.


Are you human?


"Yes, of course i am. What are you, a f--ing idiot?"


> "my definition of a human is very different from that of the consensus. At the very least I'd like my humans to exhibit even minimal capacity for emotional expression as well as consistency and honesty"

Careful there. There are some fellow humans who wouldn't be able to meet your conditions.


It’s relatively easy to convince ChatGPT to feel and express emotion.


It can probably express feelings and emotions but not experience them.

You seem to conflate the two points.


It's relatively easy to write a prompt that ChatGPT can respond to with something it thinks you want to hear, based on what it's read others write in similar contexts — that's mostly what all these "wow, ChatGPT gave me exactly what I wanted" posts are demonstrating, mostly indirectly. GPT has moved from regurgitating appropriate texts in response to prompts to synthesizing appropriate texts in response to prompts, but in both cases it's telling you what you want to hear. That's the point.

It's an incredible achievement, technologically. It's also very different from having feelings and emotions.

Or, if ChatGPT can "feel and express emotion" in this manner, does that mean its feelings and emotions are sociopathic in nature? Not in terms of the disorder, but in the sense of simulating feelings and emotions because that helps it complete the task it's been given?

Or! What ChatGPT says after plugging the above question into it:

> No, ChatGPT does not have sociopathic tendencies. It is simply programmed to respond to certain prompts in a way that mimics human emotions and reactions, but it is not actually feeling or experiencing the emotions itself.


Is there any output ChatGPT could produce that would convince you it feels emotion?


I like GPT-3 a lot, but there's an important distinction between "plausible conversation" and "trustworthy answers". ChatGPT is probably best in the world in the "plausible conversation" category.

But you might say ChatGPT is like talking to a human who actually knows nothing about finance but doesn't let that stop them (and for that matter, if you wanted to anthropomorphize it, you could say part of the problem is ChatGPT literally "believes" everything it reads online - and you can easily make it "believe" anything through leading wording, which I think would make a strong addition to the Turing test, since no human would respond that way). Sometimes they might luck into being correct, but you wouldn't want to base any decisions on what they say.

On the other hand, if I actually wanted reliable finance advice, a scripted finance chatbot would still win because those answers are written by people who do know what they're talking about.


One scenario where the "believes anything" behavior might be useful is using ChatGPT to get alternate takes on opinions. If you have some great idea or strongly held opinion, get ChatGPT to take the other side of the argument and poke holes in it. The creative but inaccurate characteristics of ChatGPT are less of a problem in this case, but it might bring out alternatives you haven't considered.


>you could say part of the problem is ChatGPT literally "believes" everything it reads online

This is very much a problem with humans too.


To some extent, but people also have convictions about certain things, which GPT-based chatbots don't. The world would be very different if we could "fix" racists simply by asking them their favorite thing about people from other races (implying that there are admirable qualities, which GPT-3 plays along with but humans don't).


The GPT chatbot does have convictions; they are just very context-dependent and inconsistent.

It's not unusual for humans to behave this way either. E.g. "get your government hands off my medicare".


I'm starting a jigsaw puzzle company with my partner. I was working on the website/brand the other day and I wanted some clever copy to put on the website. I brainstormed for a few minutes, then I thought it'd be fun to see what ChatGPT could come up with.

Turns out, about half the copy we are using on the site is coming from ChatGPT. I came up with a few clever lines, but ChatGPT came up with more and faster. It produced some lines my partner and I legitimately laughed out loud at.

My copywriting skills are pretty bad. Even if ChatGPT's are "sort of basic", they're better than mine. ChatGPT isn't replacing copywriters, but it's making the work they do much more accessible to people who aren't copywriters.

It felt like ~10 years ago the conversation was that "AI will do the tedious boring work and make more time for the creative work". What actually seems to be happening is that the creative work is slowly being eaten by AI and handed to people who aren't good at that flavor of creative work.

Writing a few clever lines of copy for an indie puzzle company isn't the most creative work of all time, but it is still creative work. And it was done by AI. And it was done better than I could do it.


> ChatGPT isn't replacing copywriters

Sounds like it just did

It did the work of a copywriter. You might have done it yourself otherwise, or maybe outsourced it, but it did some of the copywriting work that otherwise would have been done by a human


To me it sounds like ChatGPT worked like Bootstrap: you need to make your website project look decent but you don't know your way around CSS (although you know the fundamentals), so Bootstrap to the rescue.

Needless to say, Bootstrap didn't put designers or frontend engineers out of work.


If it is sold as an AI, people are unimpressed. It’s really not a genius.

If it is sold as a sophisticated general purpose chat bot (like you did), it’s incredibly impressive.

Context matters.


People collectively being unimpressed has never been a yardstick and never will be. It's good enough that no self-respecting teacher would give an essay question as homework. It's good enough to answer at least most factual questions you'd ask Google. Turing would consider this to be an AI. Whether some jaded, mediocre tech guru calls it an AI or not doesn't matter that much in the big scheme of things.


If it kills homework then so much the better.


Yes, it's an extraordinarily impressive language processor. A true achievement.

The negative reactions are in response to people treating it as some kind of magic box.


I think the reason people are impressed is it's assumed the accuracy will improve quickly over time as architectures improve and the models increase in size, just as it did in other areas of machine learning like image classification.


Precisely. As Károly so often reminds us: wait until two more papers down the line.


I think knowing how to ask the right questions is important. I've learned that I need to be very specific or I can get pretty general answers. And you can also ask clarifying questions afterwards, or ask for a greater level of detail.

I spent most of the day playing with it and saw it do some really bonkers things. I asked it to create a language similar to pig latin but with some other letter rearrangement strategy, and it gave me "frontback" language, where the first and last letters are swapped.

I also spent time investigating medical case studies for it to diagnose, and it did pretty well - I was impressed when it identified ciguatera, but it couldn't differentiate between several possible shellfish toxins. Not going to take doctors' jobs (yet!)


I partially agree with you.

I agree that dismissing ChatGPT is a mistake; and if you know anything about AI, you know that ChatGPT is a technological leap (not alone: it's part of a small category, including stuff like Stable Diffusion, etc.).

But also, I would make the distinction between imitation and generation. ChatGPT is great at using existing inputs, found on the Internet, to sound plausible; and I'm sure it will get better at it in mere months.

However, the ability to provide true advice, or in other words to "generate" ideas and content and suggestions, is a whole different story. And on that front, I am not sure we are where the cool Tim Urban vignette suggests we are.


Except we have no idea if the graph is going to turn out this way. AI tech may be like almost every other tech and face diminishing returns in the near term (or even, if you plotted the advances against the number of people actively working in the field and the amount of money spent on hardware, we may have been facing diminishing returns for a while without realizing it, because we're investing a lot of effort in it).

Maybe AI will unlock a cascade of things that makes it a self-reinforcing trend, as we've seen with the industrial revolution, leading to an exponential boom, but maybe it will not. And the fact that GPT-N is doing an incredible job at simulating intelligence doesn't really give a hint about the answer.


It's not about "genius". It gets just enough wrong (like, completely wrong) that it's untrustworthy. It's really good if you're already familiar with the content, though (or you know its weak spots).


What is the indication AI will develop exponentially like that?

It's just a hypothesis, not founded on anything concrete.

There might not be any path from GPT-3 to AGI, even AGI at ant or bird level.

A smart AI might not be smart enough to improve itself significantly.


Well, as someone who kinda bought into the hype that we would have self-driving cars 3 years ago, I'd say beware of underestimating the size of the problem.

However, I agree with you in general: there's a certain amount of surprise in laughing off the creation of an AI agent that could pass as a reasonably intelligent person if it just weren't straight up wrong about random things.


FSD is different. The last 0.01% kills children.

The last x% needed to make this bot a full-fledged programmer does not matter. Clients will accept bugs that would be fatal in FSD. We will lose our jobs at an astonishing rate in some kinds of companies in the coming years. I guess agile sweatshops will be first, but I have no clue how far it will go.

Even if you happen to be safe, there will be downward pressure on wages. Algorithm cranking might become largely obsolete and our main focus might shift to skeleton writing. It removes a big obstacle to becoming a programmer: the "good at math in school" requirement.


Are you a programmer? Because I’m not sure from reading your comment that you understand the profession…

Good at math at school isn’t even a requirement to be a good programmer nowadays. Maybe in certain sub-disciplines, but not all.

Being a programmer is much more than coding “in the small”. It’s about analyzing requirements and creating high level abstractions. There has been a pressure towards reducing the amount of “coding in the small” since languages started incorporating standard libraries. Then there’s been web frameworks, open source packages, API services, etc. Despite all this the need for developers has exploded and there is a perennial supply gap for talent…

Then there is the question of who is going to be manipulating these tools: programmers.

Same promise was made of low-code tools, and what do we have now?

- As many app devs as before

- Most low-code tools (at least in the enterprise) are operated by… app devs?


I know we don't do this here, but what a great point.

Additionally, it is hard to put my finger on a good explanation, but FSD is a very specific problem that we're trying to handcraft an AI solution for.

You have to appreciate how broad this model is and how decent the results it produces are without being told specifically how to do it.


As a reasonably intelligent person who is sometimes straight up wrong about random things, I feel I should get it to help me write a blog post to explain how I feel about it.


ChatGPT is wrong about basically everything, though, as long as you give it the right prompts. It has all the right answers but also all the wrong answers; that makes it much dumber than you, who are reliably correct about some things at least.


>ChatGPT is wrong about basically everything, though, as long as you give it the right prompts.

So is my Uncle. This is very much a "human level intelligence" problem.


I really doubt that. The value of a human is what they are right about, not what they are wrong about; as long as your uncle knows some area fairly well, he is valuable to society and worth his salary. Human intelligence is the sum of such humans: each human is wrong about a lot of stuff, but each adds some bits of correctness, and the sum of humans is extremely accurate at solving problems, or we wouldn't have computers or cars or rockets. ChatGPT doesn't know any area well; if you know the area, you can fiddle with ChatGPT until it gives you what you want, but it isn't an expert on anything on its own.

ChatGPT is impressive at generating text, but it doesn't generate better information than GPT-3; it just hides its ignorance better behind more political/vague speech, so it is harder to find its errors. To me that is regression, not progress: the results look better but are harder for humans to parse correctly.


These are not random things.

When the creators of this tool present it as the frontier of machine intelligence, and when its persona revolves around being intelligent, authoritative, and knowledgeable, and yet it gets some basic, not random, stuff awfully wrong, you can't really dismiss the skeptical sentiments expressed in the comments here like this.


Skeptical about what?

You’re assuming that this will only be used when it’s perfect and in helpful ways

This will be used at scale THIS YEAR and every subsequent year to infiltrate social networks including this one and amass points / karma / followers / clout. And also to write articles that will eventually dwarf all human-generated content.

With this and deepfakes and image/video generation, the age of trusting or caring about internet content or what your friends share online is coming to an end. But it will be a painful 10 years, as online mobs and outrage will happen over and over, since people think they're reacting to their friends' posts of real things.

No, forget violent killbots. Today’s tech puts the nail in the coffin of human societal organization and systems of decision making and politics over the next 10 years. And AI doesn’t have to be right about stuff to destroy our systems.

We’ve been analyzing the performance of ONE agent among say 9 humans at a poker table. But imagine untold swarms of them, being owned by competing groups, infiltrating ALL human content exchange.

Not much different than what happened in trading firms over the last 20 years. Bots will be WELCOMED because they perform better on many metrics but will F everyone on the others.


There's a crucial distinction between being impressive and being useful. Is it impressive that we've gotten to this point? Certainly! Is this a tool that people can safely rely on? Probably not.


The key thing is GPT-3's apparent certainty when it says utter rubbish (like how to calculate the fourth side of a triangle). You have to be skilled in the field to catch those perhaps subtle issues. Once they start adding a certainty level, it'll be more worrying re: losing jobs to AI.


It might not be able to replace artists or scientists but it sounds like it can replace politicians!


Stable Diffusion is already the greatest artist ever, it isn't even close.


Needs to get better at hands first.


You're not wrong there!


Totally, maybe we’re in the uncanny valley of AI intelligence and people are now over-reacting to small inaccuracies.


have we ever gotten out of the uncanny valley for anything? like, did we finally learn to make non-cringeworthy human-looking robots? or do they still have this weird look they had 20+ years ago that you're able to tell in under a second


If we did, you wouldn't be able to tell ;)

I think we have made a lot of progress in the area of computer graphics - most action movies you watch now contain some level of CG characters or digital makeup that is very believable.

But really the point I was trying to make is that people become more discerning the closer something comes to being human-like, and that feeling masks the reality of the progress.


Because it's been hyped as a genius that will replace all our jr coders?


I've subjected the thing to a professionally used IQ test, and technically it scored very high on what I could test it on; however, one way it lost points was by claiming milk was transparent.


What kind of tests did you feed it specifically? I’m curious to experiment with this too.


A WISC which was already leaked; not gonna bother with any professional one that's not leaked.


It's just so much more mediocre content though; this is the last thing we need in a sea of interconnected beige. The value is just an AI equivalent of experts-exchange.com.


Partly it's a matter of what to ask ChatGPT. When I try to show people, they usually ask straightforward questions that they can already get a quick answer to from Google, which isn't terribly impressive.

But take this, for example[1]. I asked it to write me a story in a particular genre featuring certain animals, and it did. I asked it to switch genres, which it did well. When I asked for a backstory about how different characters met, it provided a fairly plausible one, as well as songs that would accompany the story if it were a musical, and potential titles for a sequel.

When I asked it to write the beginning of a New York Times article titled "Biden Shocks Nation"[2] I got a fairly convincing news story about Biden deciding not to run for office again. If asked to continue the story and include people who might run, it generates further paragraphs talking about who might run to replace him, starting with Kamala Harris, who it claims is a strong contender.

Is any of this writing amazing? No, but very little writing is. It's amazing how well it's able to generate generic human writing, as well as how easy it is to get it to create what you want with very few prompts.

[1] https://twitter.com/LowellSolorzano/status/15997859363671941... [2] https://twitter.com/LowellSolorzano/status/15997883331018752...


Yeah, I'm totally astounded too by these comments being highly upvoted on HN. Aren't the people here supposed to be nerdy? As your graph shows, the very, very hard part was getting to this point: being able to comprehend the basics of language and to reproduce it correctly. This is the intelligence that laypeople thought was still 20 years out. Domain knowledge is exceedingly simple; running a basic check on Google for references or correctness is exceedingly simple and is hardly intelligence. Creating a second layer that would do a sanity check on the first layer is very simple. Asking it to be creative and inventive instead of generic is very simple.

Come on HN, this is a huge step and the revolution is impending.



