
There isn't, and never was, any movement of the goalposts. They have been exactly the same for 70 years. We want creative systems (in the Deutschian sense) that can create new explanatory theories, which lead to actual new knowledge. When an AI is capable of creating new explanatory theories that are GOOD (not word salad), we will have human-like AGI. GPT is no closer to this goal than ELIZA (though it is much more useful).



Bro what???!!?? GPT-4 is already being used as a personalized tutor on Khan Academy. It's personally helped me understand difficult Algorithms and CV applications in my undergrad classes. GPT-4 is about to revolutionize the world.


It's about to revolutionize the world, yes. What you described is what this sort of approach is good at: acting as a repository and reformatter for already existing human knowledge. But that doesn't mean it's an AGI, because as the person you're responding to said, to be sure we have one of those, we'd need something that can create knowledge beyond current human knowledge (or, at least, beyond the logic contained in its training set).


What it kind of boils down to is: is it a tool, or an entity? One could argue that IDEs and compilers each revolutionized the world.


Your average person has no idea what an IDE or compiler is. Many more people already know what ChatGPT is right now than will probably ever know what either of those two words mean.


That's because people haven't been imaginative enough to use them that way (they're too busy jailbreaking it to say racist things or proselytizing on social media). Even in the past 24 hours, some people have already found uses for it in drug discovery, exploiting its ability to synthesize and relate different types of knowledge. One of the main ways new knowledge arises is through connecting knowledge from disparate areas and finding relationships among them, and LLMs (especially GPT-4) have been demonstrated to be quite good in this area.


Seems like you're responding to a comment completely unrelated to mine...not sure what happened here. I never said otherwise.


You’re confusing AGI with useful AI. AI doesn’t have to become an AGI to change the world. I also haven’t seen anybody claiming the recent breakthroughs are AGI.


> I also haven’t seen anybody claiming the recent breakthroughs are AGI.

If you time traveled back 50 years and told people that in the future a computer could ace almost any exam given to a high school student, most of them would consider that a form of AGI.

Now, the goalpost has shifted to “It’s only AGI if it’s more intelligent than the totality of humans”.

If you haven't heard anyone claim that we've made advances in AGI, you're hearing it here first: I think GPT3+ is a significant advancement in humanity's attempts to create AGI.


> If you time traveled back 50 years and told people that in the future a computer could ace almost any exam given to a high school student, most of them would consider that a form of AGI.

The problem is that these sorts of things were thought to require some sort of understanding of general intelligence, when in practice you can solve them pretty well with algorithms that clearly aren't intelligent and aren't built on an understanding of intelligence. Like, if you time traveled back 100 years and told people that in the future a computer could beat any grandmaster at chess, they might consider that a form of AGI too. But we know with hindsight that it isn't true, that playing chess doesn't require intelligence, just chess prowess. That's not to say that GPT4 or whatever isn't a step towards intelligence, but it's ludicrous to say that it's a significant advancement towards that goal.


That's another way to state the same thing actually.

One can adopt a static definition of "general intelligence" from a point in history and use it consistently. In this case, GPT3+ is a leap in humanity's quest for AGI.

One can also adopt a dynamic definition of "general intelligence" as you described. In this case the equivalent statement is that in hindsight GPT3+ shows that language ability is not "AGI", but rather, "merely" transformer models fed with lots of data. (And then humanity's goal would be to discover that nothing is "AGI" at all, since we'd have figured it all out!)

The fact that we see things differently in hindsight is already strong evidence that things have progressed significantly. It proves that we learned something we didn't know or expect before. I know this "feels" like any other day you've experienced, but let's look at the big picture more rationally here.



