
"Sparks of AGI" is not AGI. It's also possible that we're not testing LLMs fairly, or that relatively slight tweaks to the architecture or methods would address the issues. I think this comment elaborates nicely:

https://news.ycombinator.com/item?id=38332420

I do think there might be something missing, but I also suspect that it's not as far off as most think.




So, in other words, perhaps what we have is a necessary component, but not sufficient on its own?


That is my take on it.

I think embodiment, and the encoding of natural laws (gravity, force, etc.) that comes with it, will be another huge step toward grounding AI. People tend to jump to humanoid robots when that's mentioned (and from there to Terminators), but honestly I'd expect something closer to sensor networks spanning thousands or millions of bodies, like a hivemind (why stick to a single human-scale body if you don't have to?). Interaction with the world is a means of determining truth... the ability to do science.

And as hard as embodiment is, I think it will be the easy part. Continuous learning without losing the plot is going to be quite the challenge. If an LLM has something wrong, how does it update that bit of information without burning huge amounts of power? How do you get the system to learn 'important' things without it filling up with the junk and spam it's exposed to? How do you keep it aligned with a goal that isn't destructive to itself or others?


But embodiment being a bottleneck could indicate that it's a data/training issue rather than an architectural one. Multimodal training data already improves GPT-4, but that's still very little data compared to what a human takes in growing up to adulthood. There are still many things to try.


That has always been my impression, despite the myriad ways that LLMs impress.

So much potential is lost just in the request/response limitation. While I'm waiting for a response from GPT-4, I'm continuing to think; imagine if the reverse were also true. AGI needs to be able to mull things over for spans of time.
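
As a rough illustration of what I mean (a sketch, not a claim about how any real product works): a minimal Python loop that keeps refining a draft answer in the background, so the model is "mulling" even while nobody is waiting on a response. The llm_complete helper is hypothetical, a stand-in for whatever model API you'd actually call.

    import threading
    import time

    def llm_complete(prompt: str) -> str:
        # Hypothetical stand-in for a real LLM API call.
        return f"(model output for: {prompt[:40]}...)"

    class BackgroundThinker:
        # Keeps improving a working answer between user turns instead of
        # only computing while a request is outstanding.
        def __init__(self, question: str):
            self.question = question
            self.current_answer = ""
            self._stop = threading.Event()
            self._thread = threading.Thread(target=self._mull, daemon=True)

        def _mull(self):
            # Each pass asks the model to critique and improve its own draft.
            while not self._stop.is_set():
                prompt = (
                    f"Question: {self.question}\n"
                    f"Current draft: {self.current_answer or '(none yet)'}\n"
                    "Critique the draft and produce an improved answer."
                )
                self.current_answer = llm_complete(prompt)
                time.sleep(1)  # throttle; a real system would budget tokens/cost here

        def start(self):
            self._thread.start()

        def ask(self) -> str:
            # The user can read the best-so-far answer at any moment.
            return self.current_answer

        def shutdown(self):
            self._stop.set()
            self._thread.join()

    thinker = BackgroundThinker("What is missing between LLMs and AGI?")
    thinker.start()
    time.sleep(3)          # the user goes off and thinks too
    print(thinker.ask())   # best answer so far, no blocking request/response
    thinker.shutdown()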


At least for any company trying to sell a product, this is going to be an issue of operating costs.

This also gets into something like the halting problem: how many resources do you expend on finding an answer? For a human, something usually interrupts: we have to go pee, or eat, or something outside our body demands attention. For an AI, how much time should it spend? Do we want to wake up one day to find our data centers running at full tilt?

That said, there have been some attempts at agent-based systems that reach out for answers from multiple places, pool the data, and then run things like chain-of-thought over that pool.
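
A bare-bones sketch of that pattern in Python; the retriever functions and llm_complete are hypothetical stand-ins for real search backends and a real model call:

    import json
    from concurrent.futures import ThreadPoolExecutor

    def llm_complete(prompt: str) -> str:
        # Hypothetical stand-in for a real LLM API call.
        return f"(model output for: {prompt[:40]}...)"

    # Hypothetical retrievers; in practice these would hit a search API,
    # a vector store, internal docs, etc.
    def search_web(q):      return [f"web result about {q}"]
    def search_papers(q):   return [f"paper abstract about {q}"]
    def search_internal(q): return [f"internal note about {q}"]

    def pooled_chain_of_thought(question: str) -> str:
        # 1. Reach out to multiple places in parallel and pool what comes back.
        retrievers = (search_web, search_papers, search_internal)
        with ThreadPoolExecutor() as pool:
            batches = pool.map(lambda fn: fn(question), retrievers)
            pooled = [snippet for batch in batches for snippet in batch]

        # 2. Run a chain-of-thought pass over the pooled evidence.
        prompt = (
            "Use the evidence below to answer the question. "
            "Think step by step before giving a final answer.\n\n"
            f"Evidence:\n{json.dumps(pooled, indent=2)}\n\n"
            f"Question: {question}"
        )
        return llm_complete(prompt)

    print(pooled_chain_of_thought("Is embodiment necessary for AGI?"))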


Perhaps not even necessary.





