
> If just predicting the next token can produce similar or better results than the almighty human intelligence on some tasks

But it's not better than almighty human intelligence, it _is_ human intelligence, because it was trained on a mass of some of the best human intelligence in all recorded history. (I say this because the good stuff like Aristotle got preserved while the garbage disappeared; that was true until the recent internet age, in which garbage survives as well as the gold.)

> then maybe there's a bit of hubris in how smart we think we actually are

I feel like you could say this if ChatGPT or whatever obtained its knowledge some other way than direct guidance from humans, but since we hand-fed it the answers, it falls a little flat for me.

I'm open to persuasion.




ChatGPT doesn't just feed us back answers we already taught it. It learned relationships and semantics so it can apply that knowledge to do something novel. For instance, I took the basics of a dream and told it to turn it into a short story. The short story wasn't bad. I said make it more exciting, and it updated the story such that one of the cars exploded. I guess ChatGPT learned excitement from Michael Bay.


(I'm going to be brusque for the sake of the argument; I very much could be wrong, and I don't even know how much I believe the argument I'm making.)

> ChatGPT doesn't just feed us back answers we already taught it

True; it also statistically mimics the structure of the answers we already taught it, not just their content.
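
To make the "statistical mimicry" point concrete, here's a deliberately toy sketch (a bigram model in Python; the corpus is made up, and real LLMs are transformers, so this only shares the flavor of next-token prediction):

    import random
    from collections import defaultdict

    # Tiny corpus standing in for "everything humans ever wrote".
    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # Learn next-token statistics: how often does each token follow each token?
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_token(prev):
        # Sample in proportion to observed frequency: pure mimicry of learned structure.
        followers = counts[prev]
        return random.choices(list(followers), weights=list(followers.values()))[0]

    token, output = "the", ["the"]
    for _ in range(8):
        if not counts[token]:
            break  # dead end: this token was never seen with a successor
        token = next_token(token)
        output.append(token)
    print(" ".join(output))  # e.g. "the dog sat on the mat" - recombined, not verbatim

The output is never in the corpus word for word, yet every transition in it is. Scale the corpus up to the internet and the model up by many orders of magnitude, and "recombined, not verbatim" starts to look a lot like novelty.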

> It learned relationships and semantics so it can apply that knowledge to do something novel

Can you provide an example of this novelty? I think we underestimate the depth and variety of things that humans have written about and put on the internet. So while anything you ask ChatGPT to do might be outside of your own experience, it's highly likely that it has already been thought of and uploaded to the internet, and that ChatGPT is just parroting back something very similar to what it has already seen.

This effect, where ChatGPT has so much more experience/training data than any single human being that it can convince any single human it is being original, is an interesting one.

This is why I think, for example, that image generation will result in (a period of) "artistic inbreeding." Because there is so much that other humans have done that is outside of any individual's experience, we will accept e.g. Midjourney's output as something moving and original, when in reality it's just a slight variation on something that someone else has done before that we haven't seen.

(Again, apologies for any rudeness; I respect your opinion and experiences and am enjoying the conversation.)


I took the basics of a dream I had and asked it to turn it into a short story. The result was pretty good. Is it using stuff it has already seen to seed its responses? Sure, but that's what we do too. Nothing you do or say wasn't taught to you. But these are not simply parroted responses.

I said this to ChatGPT: I had a dream that me and my friend were in a car accident, and we had a choice in deciding how to use 1 hour. we could use it to live out our last final moments or spend that hour on trying to think of a solution to survive the wreck. turn this into a short story.

As they sat trapped in their overturned car, Maria and her best friend Rachel had a dilemma. They only had one hour left to live and had to choose how to spend it. Rachel suggested they spend their final moments together, talking about all their favorite memories. But Maria had a different idea. She proposed they use the hour to think of a solution to survive the wreck. Rachel was hesitant at first, but ultimately agreed. For the next hour, they brainstormed every possible escape plan they could think of. They tried pushing against the doors, breaking the windows, and even attempting to signal for help. Just as they were about to give up hope, Maria remembered a tool in the trunk of the car. She crawled over to retrieve it and used it to pry open the door. Rachel followed her lead, and they finally escaped the car just in time. As they stumbled away from the wreck, both girls were in shock that they had made it out alive. They hugged each other tightly, grateful for the hour they spent trying to find a solution instead of giving up. From that moment on, they made a promise to never take a single moment for granted.


> Nothing you do or say wasn't taught to you.

If nothing we do or say wasn't taught to us, then where did all human knowledge come from in the first place? This doesn't hold up.

(Again, being direct for the sake of argument, please forgive any unkindness.)


From our environment, genetics, and other people. We simply are able to take in more inputs (i.e. not just text) than LLMs.


I would agree that much more than we're usually ready to admit to ourselves is second-hand, but saying everything is going too far. Inventions and discoveries are happening all the time, at all scales.


Inventions and discoveries are basically a very advanced form of pattern recognition - seeing "interesting" regularities in some particular input where no one has spotted them before.

And LLMs are capable of making discoveries in this sense, if you feed them data and ask them to spot the regularities. They're not particularly good at it, but that's a different question from whether they're able to do it in principle.
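
To make "spotting a regularity" concrete (my own toy example, nothing LLM-specific), here's a least-squares fit recovering a hidden law from noisy observations:

    # Noisy observations, secretly generated from y = 2x + 1.
    xs = [0, 1, 2, 3, 4, 5]
    ys = [1.1, 2.9, 5.2, 6.8, 9.1, 10.9]

    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n

    # Least squares: slope = cov(x, y) / var(x).
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x

    print(f"regularity: y ~ {slope:.2f}x + {intercept:.2f}")  # y ~ 1.98x + 1.06

Paste those six pairs into an LLM and ask "what's the pattern?" and, in my experience, it will often name the law too - just less reliably than the dozen lines above, which is exactly the "not particularly good at it, but able in principle" distinction.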


Yes, insofar as LLMs can be said to make inventions and discoveries, this is clearly how they do it. And yes, these types of processes definitely play a big part in our human creative capacity. But to say this is all there is to it is going too far, in my opinion. We just don't know. There's still so much we don't understand about ourselves. We haven't designed ourselves, after all; we just happened to "come to" one bright primeval day, and since then we've been exploring and discovering ourselves. And again and again, we've found ourselves in that seductive spot where we thought we "got it" and there's nothing essential about ourselves we're still missing, only again and again to be proven wrong. Dogmas crumble, new dogmas coagulate, only to dissolve again. Or we could use the more dignified word "paradigm".


I mean, to me at least, that is the definition of discovery. The exact process used to spot the pattern is an implementation detail.

And yes, I agree that we really just don't know too many things. But my impression is that we're overestimating just how complicated our behavior really is.


The rabbit hole goes very deep with these questions. For example, you left out above the other half of the equation: invention, our creative ability. Is that just more pattern recognition? And can discovery and invention always be cleanly teased apart? Also, what humans might have access to is something simpler than we imagine. Mystics and philosophers have tried to point towards it. One book that discusses these things in the context of Western science and philosophy is Nature Loves to Hide: https://www.amazon.com/Nature-Loves-Hide-Quantum-Perspective...


I would argue that invention is the same thing, yes - identifying patterns in the environment that can be exploited for productive purposes.

FWIW I think it's not a coincidence that LSD - which kicks the pattern-matching capabilities of the brain into high gear, so to speak - is commonly used as a drug to boost both "creativity" and "inventiveness".


Yes, but that dream? It could never have had it. Sure, it can at times produce very convincing descriptions of supposed dreams, but it can't actually have the experience of dreaming. Because of that, there will always be ways it will eventually misstep when trying to mimic human narratives.


Sorry about this, but I couldn't resist:

GPT-4, rewrite the above message to be less brusque

I hope you don't mind me sharing a different perspective for the sake of discussion. I might be mistaken, and I'm not entirely sure how much I believe in the argument I'm presenting.

It's true that ChatGPT doesn't only provide answers based on what it has been taught, but it also uses the structure of those answers to create statistically similar responses. However, when it comes to demonstrating novelty, I think we might be underestimating the vast amount of information and variety that humans have written about and shared online. While anything we ask ChatGPT to do might be new to us personally, it's highly likely that it has already been thought of and documented online, and ChatGPT is simply providing a similar response based on its prior knowledge.

This phenomenon, where ChatGPT has significantly more training data and experience than any single human, leading to the illusion of originality, is quite intriguing. For instance, when it comes to image generation, we might experience a period of "artistic inbreeding" because we, as individuals, are not aware of everything others have done. We may perceive something like Midjourney's output as moving and original, when in reality, it could just be a slight variation of someone else's work that we haven't seen before.

Please don't take this as me being confrontational; I genuinely respect your opinion and experiences, and I'm enjoying our conversation.


> But it's not better than almighty human intelligence, it _is_ human intelligence, because it was trained on a mass of some of the best human intelligence in all recorded history

Sure, I was saying "better" in the sense that, for a given task X, it can do better than Y% of humans.

> since we hand-fed it the answers, it falls a little flat for me

We didn't really hand-feed it any answers though, did we? If you put a human in a white box all their life, with access to the entire dataset on a screen but no social interaction, nothing to see aside from the text, nothing to hear, nothing to feel, nothing to taste, etc., I'd be very impressed if they were then able to create answers that seem to display such thoughtful and complex understanding of the world.


I think the human would make a lot of the same fundamental errors LLMs make, for similar reasons. The level to which LLMs seem to understand the world is highly superficial, because that understanding is entirely linguistic. Also, human-written texts about the world and human affairs leave out huge swathes of contextual information that we safely assume actual humans have. LLMs don’t have any of that, which is why they fall flat on their faces in so many ways.


Absolutely. What’s fascinating is that they’re getting such a good understanding of so many things through text alone. Multimodal models that can process text, images, sounds, video, etc. are gonna be very interesting for that very reason.



