
One definition of intelligence would be how many examples are needed to get a pattern.

AFAIK, all the major AI systems, not just LLMs but also game-playing agents, self-driving cars, and anthropomorphic kinematic control systems for games [0], need the equivalent of multiple human lifetimes of experience to do anything interesting.

That they can end up skilled in so many fields that it would take humans many lifetimes to master them is notable, but it's still kinda odd we can't get to the level of a 5-year-old with just the experiences we would expect a 5-year-old to have.

[0] Stuff like this: https://youtu.be/nAMSfmHuMOQ



It's apples to oranges.

Modern artificial neural networks are nowhere near the scale of the brain. The closest biological equivalent to an artificial neuron is a synapse, and we have a whole lot more of those.
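
A rough back-of-envelope in Python, using commonly cited estimates (every figure below is an assumption, not a measurement):

    # Back-of-envelope only; estimates vary by an order of magnitude either way.
    human_synapses = 1e14   # assume ~100 trillion synapses in an adult human brain
    gpt3_params = 175e9     # GPT-3's published parameter count
    print(f"synapses per GPT-3 parameter: ~{human_synapses / gpt3_params:.0f}")
    # ~570x with the 1e14 estimate; a few thousand times with higher estimates (~1e15).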

Humans do not start "learning" from zero. Millions of years of evolution play a crucial role in our general abilities. It's much closer to fine-tuning than training from scratch.

There's also a whole lot of data coming in from multiple senses, which dwarfs anything modern models are currently trained on.

LLMs need a lot less data to speak coherently when you aren't trying to get them to learn the total sum of human knowledge.

https://arxiv.org/abs/2305.07759

>but it's still kinda odd we can't get to the level of a 5-year-old with just the experiences we would expect a 5-year-old to have

Well, we're not building humans.

"It's still kind of odd we can't a plane or drone to fly with the energy consumption or efficiency proportions of a bird".

I mean, sure, I guess it's an interesting discussion, but the plane is still flying.


All perfectly reasonable arguments, IMO.

But it's still a definition that humans pass and the AIs don't.

(I'm in favour of the "do submarines swim" analogy for intelligence, which says that this difference isn't actually important).


I don't think saying "humans pass and AI doesn't" makes sense here, because for all the reasons outlined above the two are not even taking the same exam.

Evolution alone means humans are "cheating" in this exam, making any comparisons fairly meaningless.


If all you care about is the results, or even specifically just the visible part of the costs, then there's no such thing as cheating.

That's why I'm fine with the AI "cheating" by having transistors that are faster than my synapses by about the same magnitude that my legs are faster than continental drift (no, really, I checked), and it's also why I'm fine with humans "cheating" with evolutionary history and a much more complex brain (around a few thousand times the size of GPT-3, which… is kinda wild, given what it implies about the potential of even rodent brains, given enough experience and the right, potentially evolved, structures).
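
Rough numbers behind that comparison (Python; every constant is an assumed round estimate, not a measurement):

    # Back-of-envelope only; all constants are assumed round numbers.
    synapse_rate_hz = 100             # assume ~100 Hz typical neuron firing rate
    transistor_rate_hz = 3e9          # assume ~3 GHz switching
    walking_speed_mps = 1.4           # assume a typical human walking pace
    drift_speed_mps = 0.025 / 3.15e7  # assume ~2.5 cm/year of continental drift
    print(f"transistors vs synapses:   ~{transistor_rate_hz / synapse_rate_hz:.0e}x")
    print(f"legs vs continental drift: ~{walking_speed_mps / drift_speed_mps:.0e}x")
    # Both ratios land in the 1e7-1e9 range, depending on which estimates you pick.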

When the topic is qualia — either in the context "can the AI suffer?" or the context "are mind uploads a continuation of experience?" — then I care about the inner workings; but for economic transformation and alignment risks, I care if the magic pile of linear algebra is cost-efficient at solving problems (including the problem "how do I draw a photorealistic werewolf in a tuxedo riding a motorbike past the pyramids"), nothing else.



