
Ok, I understand your point now, and I agree with it for the most part. AGI really is a singularity: we can't see past it, and nobody knows what a society with AGI would look like, or even whether there is room for humans in it. I think not. AGI will likely be our last invention as a species, and the next step in the natural evolution of life will be intelligent design (initiated by us!). Oh, the irony.

So you're right that running full speed towards AGI is incredibly dangerous, and while it might still mean progress for life, it might not be progress for humanity. AGI may be one of the few technologies that do not constitute progress. I'd argue nuclear fission so far has not been progress either, but that story has not fully played out yet. You could also think of other technologies, hypothetical and current, where the risks far outweigh the rewards. Imagine we discover an energy source so vast that a small group could unleash it and superheat the entire atmosphere of the planet, killing nearly all life. No law of physics says that's impossible, and once discovered, there's no way to defend against some suicidal nutjobs doing exactly that. That's one proposed solution to the Fermi Paradox.

But AGI may also be our destiny; there may be no way to avoid it. Even if we in the US could agree to stop advancing AI, other countries will not, so development continues anyway and the US just loses control over it. You can replace the US with any country and get the same game-theoretic outcome: competition between groups may be an unstable system that ends in self-destruction. That's another proposed solution to the Fermi Paradox.
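To make the game-theoretic point concrete, here's a toy prisoner's-dilemma sketch of a two-country AI race. The payoff numbers are illustrative assumptions, not from any source; only their ordering matters. Whatever the other side does, "race" pays better than "pause", so both sides race even though mutual pausing is the safer outcome:

    # Toy prisoner's dilemma for a two-country AI race.
    # Payoffs are made-up illustrative values; only their ordering matters.
    PAYOFFS = {
        # (my_move, their_move): my payoff
        ("pause", "pause"): 3,  # both hold back: safest shared outcome
        ("pause", "race"):  0,  # I pause, they race: I lose all control
        ("race",  "pause"): 5,  # I race alone: short-term advantage
        ("race",  "race"):  1,  # both race: risky for everyone
    }

    def best_response(their_move: str) -> str:
        """Return the move that maximizes my payoff, given theirs."""
        return max(("pause", "race"), key=lambda my: PAYOFFS[(my, their_move)])

    # "race" dominates: it's the best response to either move, so the
    # equilibrium is (race, race) even though (pause, pause) pays more.
    assert best_response("pause") == "race"
    assert best_response("race") == "race"

That dominant-strategy structure is why unilateral restraint doesn't stick.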

Then there's the detail that current LLMs like ChatGPT are not AGI, and probably don't lead there. They're a fancy parlour trick, not real intelligence. So progress on LLMs may or may not bring us closer to AGI; nobody really knows. Stopping work on them now would halt progress, but only for those groups foolish enough to do so.

I don't yet see a path to being cautious and intentional about the development and use of AI. The genie is out of the bottle and can't be put back in, the same way nuclear fission can't be undone, although that's a weak analogy since fission is much easier to control. Maybe we figure out a way to do that in the future, but AI development is just the creation and spread of information, and that's impossible to control.

What I think we can do, as you mentioned, is modify our societal and economic systems to be fairer and to not leave behind so many people whose skills have been made obsolete.



Also, I don't think you would need a human-like level of consciousness for a general problem-solving device, i.e. general intelligence. We could end up with an Ex Machina-style situation where the general problem-solving device appears to be human, even exhibiting heartstring-pulling capabilities (not to mention cock-string-pulling ones), while its end goal is something absurd like going to a specific crosswalk on a Manhattan street and standing there like a useless toaster.

That isn't the singularity, but it sure as hell is a general problem-solving device, i.e. AGI.


I think it's generally useful AI, but not AGI as people use the term. AGI would need to be sentient and self-aware; it would need to be alive and intelligent by any definition of those terms. ChatGPT is generally useful, but still very far from alive.

Anything short of that could mean large disruptions and societal changes, sure, but not a threat to humanity. Just technological progress as we know and love it.


I don't understand why it's called the singularity. Shouldn't it be called the event horizon?


I guess AGI is the singularity that causes the event horizon beyond which we can't see. But now we really are getting pedantic ;)



