
I feel this way about AI. Oh wait, AI actually is an existential risk to humanity, too.


Generative AI is hardly an existential risk. There are huge fears around AGI, but that’s not what people are building.


Look at (the comments on) the Genie announcement on the front page today or yesterday, and earlier generative "world models". People are itching to use those kinds of models as the internal world representation of autonomous robots. More generally, the fact that a model is "generative" does not mean it cannot become an effective component of, or pathway to, AGI.


> There are huge fears around AGI, but that’s not what people are building.

Everyone is trying to build this.


“Trying” is an overly generous interpretation of what’s going on.

Training an LLM is not actually working on AGI, just as building ever-taller skyscrapers isn’t working toward reaching the moon. It’s an inherent limitation of the approach.


Training LLMs is not the only thing people are trying. LLMs dominate public attention right now, but there are people everywhere trying all kinds of approaches. Here's one from IBM: https://research.ibm.com/topics/neuro-symbolic-ai

First sentence: "We see Neuro-symbolic AI as a pathway to achieve artificial general intelligence"


I agree some people are doing novel work, but that’s a long way from “Everyone”.


Everyone is trying to get to AGI, and yes mostly through LLMs for now.

You said you don't believe LLMs are capable of ever getting there, so I offered a link showing people are trying other things as well. My point was never "Everyone is doing novel, non-LLM work towards AGI".

But everyone is in fact trying to get to AGI:

Google: https://www.fastcompany.com/91233846/noam-shazeer-back-at-go... https://deepmind.google/research/publications/66938/

Microsoft: https://www.microsoft.com/en-us/bing/do-more-with-ai/artific...

Meta: https://www.theverge.com/2024/1/18/24042354/mark-zuckerberg-...

Salesforce: https://www.forbes.com/sites/johnkoetsier/2023/09/12/salesfo...

Not to mention the obvious suspects (OpenAI, Anthropic, etc.). Just because you think it won't work doesn't mean they're not trying. Everyone is trying to get to AGI.


OpenAI has specifically said LLMs aren’t a path to AGI, though they think LLMs have utility in understanding how society can and should interact with a potential AGI, especially from a policy perspective.

Your other examples are giant companies with many areas of focus that can trivially pay lip service to fundamental research without spending any particular effort. Take your link:

“Benioff outlined four waves of enterprise AI, the first two of which are currently real, available, and shipping:

  Predictive
  Generative
  Autonomous and agents
  Artificial general intelligence”
That’s a long-term mission statement, not an actual effort into AGI. So if you’re walking “actual work” back to “trying to get to AGI” so that it includes such aspirational statements, then sure, I’m also working on AGI and immortality.


Please, before we discuss this further (and I would like to), provide some idea of what would qualify as an "actual effort into AGI" for you.


I exclude things like increasing processing power/infrastructure, as slow AGI is still AGI even if it’s not useful. Yes, AGI needs energy; no, building energy infrastructure doesn’t qualify as actually working on AGI. You’re also going to need money, but making money isn’t inherently progress.

IMO, AGI at minimum requires a system that operates continuously, improves as it operates, and can set goals for itself. If you know the work you’re doing isn’t going to result in that, then working toward AGI implies abandoning that approach and trying something new.
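
To make that concrete, here’s a toy sketch of a loop with those three properties. Everything in it is made up for illustration (the "environment" is just a random number, and "improvement" is a single weight update); the point is only the shape of the loop, not a path to AGI:

  import random

  def sense():                    # toy "environment": a random signal
      return random.uniform(-1.0, 1.0)

  def propose_goal(observation):  # "sets goals for itself" (toy version)
      return observation + 0.1

  weight, goals = 0.0, []
  for step in range(10_000):      # "operates continuously" (bounded here)
      observation = sense()
      if not goals:
          goals.append(propose_goal(observation))
      action = weight * observation
      error = goals[0] - action
      weight += 0.01 * error * observation  # "improves in operation"
      if abs(error) < 0.05:
          goals.pop(0)            # goal met; it sets a new one next pass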

Basically, researching new algorithms or types of computation could qualify, but iterative improvement on well-studied methods doesn’t. So some research into biological neurons/brains qualifies, but optimizing A* doesn’t, even if it’s useful for what you’re working on. There are a huge number of spin-offs from AI research that are really useful and worth developing, but also inherently limited.

I’m somewhat torn as to the minimum threshold for progress. Tossing a billion dollars’ worth of computational power at genetic algorithms wouldn’t produce AGI, but there are theoretical levels of processing power where such an approach could actually work, even if we’re nowhere close to building such systems. It’s the kind of moonshot that 99.99…% wouldn’t work, but maybe…

So, it may seem like moving the goalposts, but I think the initial work on LLMs could qualify, while subsequent refinement doesn’t.

Edited with some minor clarification.


> It’s an inherent limitation of the approach.

What's your evidence for this?


AGI needs to be able to generalize to real-world tasks like self-driving without task-specific help from its creators.

But the current LLM process separates learning from interacting, and the learning is based on huge volumes of text. It’s possible to bolt on specific capabilities, say a chess engine, but then you’re building something different, not an LLM.
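
For what it’s worth, here’s a toy version of that split, with a bigram table standing in for an LLM (all of it illustrative): the "weights" are built once from a fixed corpus, and the interactive phase never touches them.

  import random
  from collections import defaultdict

  # Phase 1: learning, done once, offline, from text alone.
  corpus = "the cat sat on the mat and the dog sat on the rug".split()
  counts = defaultdict(list)      # the toy model's "weights"
  for a, b in zip(corpus, corpus[1:]):
      counts[a].append(b)

  # Phase 2: interacting; note counts is never updated here.
  def generate(seed, length=8):
      word, out = seed, [seed]
      for _ in range(length):
          options = counts.get(word)
          if not options:
              break
          word = random.choice(options)
          out.append(word)
      return " ".join(out)

  print(generate("the"))          # e.g. "the dog sat on the mat and ..."

Bolting a chess engine onto the second phase wouldn’t change where the learning happened.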


I assure you, people are very much trying to build the titular Torment Zone from the hit sci-fi novel "Don't Build the Torment Zone".


Either way, it’s headed off the rails: sloppification of everything, followed by eventual machine takeover.



