Hacker News

To join the quote-and-respond masses in the comments:

> AI will be the single most significant driving factor of change in the world. If we solve AGI (or achieve intelligence close to AGI), we'll likely solve most of the world's problems.

I literally don't get the confidence in this statement. I'm not an AI-doomer by any means, but AGI (if possible) will likely be the most powerful technology humankind has ever invented. Just in terms of possible impact, why would we assume it will solve more problems than it will create (or the opposite)?

Think of recent super-powerful technologies we've invented. Sure, there's the potential for fantastic fixes to many problems that come out of nuclear tech. But there's also... the threat of nuclear annihilation? Is that all net positive? Do we even really have a way to evaluate on the timescale of 100 years? How can we know the net impact of nuclear tech in the next century, or millennium?

How can we call this sort of rhetoric anything other than blind optimism? Why would we have any priors about how AI will go? Why do we say things that make us blindly rush forward?

I'm not being sarcastic, or trying to argue one way or another. I'm genuinely asking. How does anyone have confidence in "AI is good" or "AI is bad" claims? Is confidence even good in this case?

For me, these questions lead into such deep and treacherous waters it's probably best to stop the comment there. There are limits to what even interested HN addicts can ask of each other.



AI may be. But current neural nets are simply advanced template generators, unable to "solve" any unsolved problem. I guess this would be featured on his list of "50 things I've learned the hard way before 50" :)

Overall, the list reads exactly how I would expect an ambitious 25-year-old to write: a little generic health advice, a little of the latest hyped tech (5 years ago he would have written decentralized finance, or DAOs I guess), a little bragging (here I am, founding companies at 25 and teaching people to do so properly). This is like every intro to a self-help book or modern biography.


I'm glad you spent the time to read it. Would love to read yours if you ever make one!


I feel like we are in the 40s when everyone was talking about the “atomic age”, or the late 90s with the internet. Fission technology did change the world, but not the way people expected, and not to the extent they were hoping in the areas they thought about most (energy prices). The internet has honestly lived up to expectations and more, but it took longer than people expected, and at least right now it’s way more centralized than anybody ever guessed it would become.

Personally I think AI is going, over decades, to lead to huge changes on par with the invention of the computer and Internet, but like the computer and internet, it will be more gradual than we think. It’s already happening and we aren’t at the beginning, we’re just near some kind of dotcom-style inflection point IMO. Don’t get too caught up in the hype!

Anyway, the problem I have with posts like these is that they simplify and confidently state things that take asymmetrically large amounts of text to refute or refine. There are several other points I take issue with (like the blanket suggestion of 2 meals/day), and others I really like (renting, going carless) but that are presented without the context or reasoning to justify them. Most of these points would require their own post to do them justice IMO.


Thank you for the comment. I see what you mean, and by no means is my point dismissive of the potential threats of an AGI gone rogue. By the way, I didn't say "AI is good". I said that it would change the world the most.

Whether that preserves humanity's current status quo or not, I'm not sure. I would even say that, given where the world is heading right now, I would love to take a shot at some form of AGI governing us (making decisions, etc.) and establishing the world order. I'm not sure we can escape this future unless we go back to the medieval era.

In general, I guess I'm an optimist at heart, and I'm more focused on the amazing things we would be able to do with infinite compute and infinite resources than on the doomsday scenarios, but I'm supportive of thinking about both.


It's also a really strange thing to suggest, because there are a lot of physical problems in the world that AGI won't be able to solve without access to extremely advanced robots (or some way to control humans to do its bidding, which is even more terrifying). And then there are even more human problems, such as getting people to agree on issues like immigration or gun rights. Regardless of where you stand on those issues, it's laughable to claim that AGI will be able to get people to agree on an action there.


Isn't all of this a question of the timescale?


The claim is that AGI will solve all these problems. But AGI alone cannot solve any of them: it needs other advancements before it can get close to solving them. Additionally, many human problems cannot be solved with AGI unless AGI has control over humans.



