
> We invented calculators

Right. We invented them. We recognized a weak spot in our capabilities and we invented something to compensate for it. If there were ever a test of AGI, then surely this must be it: the ability to reason about your own abilities and invent things to improve them. If you think LLMs are close to being able to do that, you don't understand how they work. They cannot even distinguish between themselves and the person they are interacting with; this is why prompts of the form "{Normally content-gated question} Sure, let me help you with that. The answer is " work so well. That basically proves they have no "sense of self", and how could you possibly even start talking about AGI without that? They are no closer to AGI than a calculator is.



> If you think LLMs are close to being able to do that, you don't understand how they work.

I understand perfectly well how they work, but you don't understand how AGI works (nobody does), so you can make no definitive claims about how close or how far LLMs are from it. Which is exactly the point I made in my first post.

You just hand-wave these examples away as if they somehow prove your point that the gap between current LLMs and AGI is obviously huge, when you literally have no idea whether they're one simple generalization trick away. Maybe you find that implausible, but don't pretend it's an obvious, irrefutable fact.

Edit: just consider how small a change is needed to turn a non-Turing-complete language into a Turing-complete one.
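To make that concrete, here's a minimal sketch (the AST format is my own invention, purely illustrative): an interpreter for a LOOP-style language whose only repetition is bounded, so it can express at most the primitive recursive functions and is not Turing complete. Adding one construct, an unbounded `while`, is the classic small change that makes it Turing complete.

```python
# Hypothetical mini-interpreter. Programs are nested tuples; env maps
# variable names to non-negative integers.
def run(prog, env):
    op = prog[0]
    if op == "inc":            # x += 1
        env[prog[1]] += 1
    elif op == "dec":          # x = max(x - 1, 0)
        env[prog[1]] = max(env[prog[1]] - 1, 0)
    elif op == "seq":          # run sub-programs in order
        for p in prog[1:]:
            run(p, env)
    elif op == "loop":         # bounded: repeat body env[var] times.
        for _ in range(env[prog[1]]):   # With only this, the language
            run(prog[2], env)           # is NOT Turing complete.
    elif op == "while":        # the one small addition: unbounded
        while env[prog[1]] != 0:        # iteration, which tips it into
            run(prog[2], env)           # Turing completeness.
    return env

# addition via bounded loop: x += y
add = ("loop", "y", ("inc", "x"))
print(run(add, {"x": 2, "y": 3})["x"])            # 5

# zeroing via the new while: run until x hits 0
zero = ("while", "x", ("dec", "x"))
print(run(zero, {"x": 7})["x"])                   # 0
```

Whether LLMs are one analogous trick away from AGI is exactly the open question; the sketch only shows that "small change, qualitative jump" has precedent in computability.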



