Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

That list isn’t just existential threats, though it is a big factor. Another factor is the marginal impacts.

Climate change is a huge oncoming disaster. It will cause millions of deaths and untold suffering. It’s also not much of an existential threat. It may kill 1% of people or even 10% of people (again, horrible) but there is not a solid argument about how it will cause the last human to take their last breath. The climate can get really really bad and the Earth will still be habitable for some humans.

It also seems to be pretty on-rails at this point. It’s already happening and will keep happening (short of a magic bullet). Both sides (pro and anti-humanity) have their heels dug in. It’s not a place where an indivisa can hop in and is likely to have much leverage.

AI has a very realistic path to destruction of humanity, although many will disagree on that. I think it should at least be obvious that it’s in the category of super pathogen and nuclear winter vs climate change. It’s also a problem that’s way more open, way less established. There’s more opportunity to move this for individuals, depending on the person of course.




The problem I have with this line of thinking is that there's no attempt to even engage in any serious discussion about whether rogue AI is actually likely at all. Is the chance that AI will wipe out humanity in the next 100 years 1%, or 0.1%, or 0.0000001%? What about in the next 1000 years? Nobody can claim any sort of confidence in those sort of estimates right now. If you're grouping rogue AI, super pathogens, and nuclear winter separately from climate change because of potential impact, you might as well throw in alien invasion, zombie apocalypse, and the rapture as well, because those all could have the same impact, and the claim that rogue AI is a serious threat is much closer in level of rigor to them than your other examples.


> The problem I have with this line of thinking is that there's no attempt to even engage in any serious discussion about whether rogue AI is actually likely at all.

By "no attempt" are you criticizing that my comment doesn't quantify the likelihood or that no one is quantifying the likelihood? If it's the latter, have you looked? There is definitely serious discussion happening.

The difference between AI and your other examples is trend. The ability of computers is growing superlinearly. Nothing related to aliens is changing much at all (some extra noise in the news about UFOs?), rapture has nothing going on. Maybe zombie apocalypse gets a tiny bump for there having been a global pandemic, but it's still approximately nothing. All of those are very different from what's happening with AI.


Even through that lens though, and even assuming AGI superintelligences are a reasonable thing to be worried about, is the AGI community helping? I kind of feel like, if a movement has a set of concerns around "this thing could end humanity" and OpenAI's response to that is essentially, "heck yeah, we got to get that on the posters and get some press articles about that" -- that to me is a sign that the movement isn't very effective.

I honestly think that OpenAI is at least partially using AGI concern for advertising. If I'm right and if that's the case, that is the kind of thing that should give that community pause. It should prompt the question, is that community actually doing anything to help avoid an existential outcome, or are they inadvertently accelerating it by basically giving fuel to the companies who are trying to create that world?

Ignore the fact that stuff like prompt injection seems like it should be pretty high priority for people worried about AGI anyway, ignore that there are lots of ways for buggy software wired up to critical systems to kill people without being an AGI -- even just taking the existential concerns at their face value, OpenAI has turned:

- "Wiring an intelligence up to more resources could allow it to break containment, so be very careful about that", into

- "Our AI is so good that these people are worried that it will break containment, check out the cool things it could do when we told it to break containment, doesn't that seem sci-fi? Anyway, we're launching in a couple of weeks."

And then OpenAI launched a product has clientside validation and uses essentially normal prompts for instructions. These are not people who know how to secure small things, let alone big things like a superintelligence. This is a system that would be terrible to use for an AGI. So again, even if I take it at face value that rogue AI should be the highest priority, it doesn't seem like the effective altruism community is being very... effective... at stopping the emergence of rogue AGI.

There's a criticism of the rogue AI fears as being unrealistic and out-of-touch with real security concerns that impact people today. Separately (and in addition), there's the criticism that the movement to stop rogue AI seems to be mostly larping its security measures and doesn't seem to be doing anything particularly useful to actually stop rogue AI. That movement should be even more concerned than I am about wiring AIs to arbitrary network APIs. They should not be OK with introducing that extra level of access just to make calendar appointments and SQL queries easier to execute. They shouldn't be OK with that level of risk being turned into a commercial product, not if they actually think this is a humanity-level existential concern.


As far as I know, there is no AGI community. You’re framing a risk/situation/problem as a group/cause. It is not.

This isn’t like walking past a bunch of charity booths and thinking “who seems to have their act together?”. That mindset works great for deciding between donating to a charity that gives wheelchairs to the poor and another that gives glasses. It is not the right framing to evaluate the issue of something destroying humanity.

The massive difference is it is completely wrong to think “well this would be a big deal, but they’re really blowing the execution”. No, that makes it a bigger deal. That’s the fundamental difference between a threat and an opportunity. Either it’s not real and it doesn’t matter, or it is real and it matters a lot. It’s not conditional on if someone can pull it off.

The security concerns that impact people today are just not on the same scale of importance. Having your chat history leak or people getting scammed by voice imitations of family members is not in the same category as a super intelligent AGI whose interest doesn’t align with ours. It’s like saying a group that saw the development of the atom bomb coming and chose to focus on preventing all-out nuclear war should have done more about the radioactive water runoff from the testing sites. And, that they didn’t, means that nuclear war isn’t that important and they shouldn’t be taken seriously.


> Either it’s not real and it doesn’t matter, or it is real and it matters a lot. It’s not conditional on if someone can pull it off.

If it is real and it does matter, and them LARPing research papers and giving OpenAI more advertising material makes it more likely to happen, that is conditional on their reaction. If the risk is real and they're making the problem worse (basically accelerating the timeline), then it would be better for them to stop talking about it and focus on basic security practices instead.

> The security concerns that impact people today are just not on the same scale of importance.

The security concerns that impact people today are tied into the risks of AGI. A company that can't secure its products against basic XSS attacks and prompt injection is fundamentally incapable of securing a rogue AI.

You'd have a point here if these were actually different categories, but they're not. 3rd-party prompt injection by random actors is a very feasible way for an AGI to turn rogue. That should be a big priority to fix if the concerns about AGI are real. And if it's unfixable, these people should be screaming from the rooftops that we should not be wiring up AIs to any real-world systems at all until we find a better mitigation technique. I mean, you're talking about something existential; obviously if that concern is real it's more important to mitigate those problems then for OpenAI to displace Google search and get a competitive advantage on the market. And those people should terrified that OpenAI is both rushing to the market and proving that they have bad security practices.

But for the most part, that reaction really isn't happening; so it makes me wonder how much people actually believe that AGI is an existential threat.

The really wild thing is that most of the current-world problems that exist with AI have massive implications for AGI. Systemic bias, corporations controlling the training functions, anthropomorphism from the general public, the ability to produce deceptive material or bypass human security checks on a mass scale, prompt injection and instruction bypassing -- all of those are extremely relevant to keeping a rogue AI contained or preventing it from going rogue in the first place.

As far as I'm concerned, anyone who was seriously concerned about AGI would be focusing on that stuff anyway: those categories represent some of the most immediately tangible steps you could take to prevent an AI from going rogue or from breaking containment if it went rogue.




Consider applying for YC's Fall 2025 batch! Applications are open till Aug 4

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: