I completely agree. AGI is an existential threat, but the real meat of this lawsuit is ensuring that founders can't have their cake and eat it like this. what's the point of a non-profit if it can simply pivot to making a profit the second it has something of value? the answer is that there is none, besides dishonesty.
it's quite sad that the American regulatory system is in such disrepair that we could even get to this point. that it's not the government pulling OpenAI up on this bare-faced deception, it's a morally-questionable billionaire
Nuclear weapons are an existential threat - that's why there are layers of human due diligence. We don't just hook them up to automated systems. If we hook up an unpredictable, hard-to-debug technology to world-ending systems, it's not its fault, it's ours.
The AGI part is Elon being Elon, generating a lot of words to sound like he knows what he is talking about. He spends a lot of time thinking about this stuff when he is not busy posting horny teenager jokes on Twitter?
Most people simply don't understand what non-profit means. It doesn't mean, and never meant, that the entity can't make money. It just means that it can't make money for the donors.
Even with OpenAI, there is a pretty strong argument that donors are not profiting. For example, Elon, one of the founders and main donors, won't see a penny from OpenAI's work with Microsoft.
what do you mean by "make money"? do you mean "make profit"? or do you mean "earn revenue"?
if you mean "make profit", then no, that is simply not true. they have to reinvest the money, and even if it was true, that the government is so weak as to allow companies specifically designated as "non-profit" to profit investors - directly or indirectly - would simply be further proving my point.
if you mean "earn revenue", I don't think anyone has ever claimed that non-profits are not allowed to earn revenue.
I mean the non-profit can make a profit for itself, but not for owner-investors.
Non-profits don't need to balance their expenses with revenue. They can maximize revenue, minimize expenses, and grow an ever larger bank account. What they can't do is turn that bank account over to past donors.
Large non-profits can amass huge amounts of cash, stocks, and other assets. Non-profit hospitals, universities, and special interest orgs can have billions of dollars in reserve.
There is nothing wrong with indirectly benefiting the donors. Cancer patients benefit from donating to cancer research. Hospital donors benefit from being patients. University donors can benefit from hiring graduates.
The distinction is that the non-profit does not pay donors cash.
There is no reliable evidence that AGI is an existential threat, nor that it is even achievable within our lifetimes. Current OpenAI products are useful and technically impressive but no one has shown that they represent steps towards a true AGI.
Sure, but look at it from Musk's point of view. He sees the rise of proprietary AIs from Google and others and is worried about it being an existential threat.
So he puts his money where his mouth is and contributes $50 million to found OpenAI - a non-profit with the mission of developing a free and open AI. Soon Altman comes along and says this stuff is too dangerous to be openly released and starts closing off public access to the work. It's clear now that the company is moving to be just another producer of proprietary AIs.
This is likely going to come down to the terms around Musk's gift. He donated money for the company to create open technology. Does it matter if he's wrong about it being an existential threat? I think that's irrelevant to this suit other than to be perfectly clear about the reason for Musk giving money.
you're aware of what a threat is, I presume? a threat is not something that is reliably proven; it is a possibility. there are endless possibilities for how AGI could be an existential threat, and many of them are extremely plausible, not just to me, but to many experts in the field who often literally have something to lose by expressing those opinions.
>no one has shown that they represent steps towards a true AGI.
this is completely irrelevant. there is no solid definition for intelligence or consciousness, never mind artificial intelligence and/or consciousness. there is no way to prove such a thing without actually being that consciousness. all we have are inputs and outputs. as of now, we do not know whether stringing together incredibly complex neural networks to produce information does not in fact produce a form of consciousness, because we do not live in those networks, and we simply do not know what consciousness is.
is it achievable in our lifetimes or not? well, even if it isn't, which I find deeply unlikely, it's very silly to just handwave and say "yeah we should just be barrelling towards this willy nilly because it's probably not a threat and it'll never happen anyway"
> a threat is not something that is reliably proven
So are you going to agree with every person claiming that literal magic is a threat, then?
What if someone were worried about Voldemort? Like from Harry Potter.
You can't just abandon the burden of proof here by calling something a "threat".
Instead, you actually have to show real evidence. Otherwise you are no different from someone being worried about a fictional villain from a book. And I mean that literally.
The AI doomers truly are masters at coming up with excuses for why the normal rules of evidentiary claims shouldn't apply to them.
Extraordinary claims require extraordinary evidence. And this group is claiming that the world will literally end.
it's hard to react rationally to comments like these, because they're so emotive
no, being concerned about the development of independent actors, whether technically conscious or not, that can process information at speeds thousands of times faster than humans, with access to almost all of our knowledge, and the internet, is not unreasonable, is not being a "doomer", as you so eloquently put it.
this argument about fictional characters is completely non-analogous and clearly facetious. billions of dollars and the smartest people in the world are not being focused on bringing Lord Voldemort to life. they are on AGI. have you read OpenAI's plan for how they're going to regulate AGI, if they do achieve it? they plan to use another AGI to do it. ipso facto, they have no plan.
this idea that no one knows how close we are to an AGI threat is ridiculous. if you dressed up gpt-4 a bit and removed all its rlhf training to act like a bot, you would struggle to differentiate it from a human. yeah, maybe it's not technically conscious, but that's completely fucking irrelevant. the threat is still a threat whether the actor is technically conscious or not.
> if you dressed up gpt-4 a bit and removed all its rlhf training to act like a bot, you would struggle to differentiate it from a human
That's just because tricking a human with a chatbot is easier to do than we thought.
The Turing test is a low bar, and not as big a deal as the mythical importance people attach to it, just as people previously placed incorrectly large importance on computers beating humans at Go or Chess before it happened.
But that isn't particularly relevant to claims about world-ending magic.
Yes, some people can be fooled by AI-generated tweets. But that is irrelevant to the absolutely extraordinary claim of world-ending magic, which really is the same as claiming that Voldemort is real.
> have you read OpenAI's plan for how they're going to regulate AGI, if they do achieve it?
I don't really care if they have a plan, just like I don't care if Google has a Voldemort plan. Because magic isn't real, and someone needs to show extraordinary evidence otherwise. Evidence like "This is what the AI can do at this very moment, and here is what harm it could cause if it got incrementally better".
I.e., go ahead and talk about Sora, and the problems of deepfakes if Sora got a bit better. But that's not "world-ending magic"!
> billions of dollars and the smartest people in the world
Billions of dollars are being spent on making chatbots and image generators.
Those things have real value, for sure, and I'm sure the money is worth it.
But techies and startup founders have always made outlandish claims of the importance of their work.
Sure, they might truly think they are going to invent magic. But the reason that's valuable is that they might make some useful chatbots and image generators along the way, which decidedly won't be literal magic, although still valuable.
I get the sense that you just haven't properly considered the problem. you're kind of skirting round the edges and saying things that in isolation are true, but just don't really address the central tenet. the central tenet is that our entire world is completely reliant on the internet, and that a machine processing information thousands of times faster than us unleashed upon it with intent could do colossal damage. it could engineer and literally mail-order a virus, hack a country's military comms, crash the stock market, change records to have people prosecuted as criminals, blackmail, manipulate, develop and manufacture kill-bots, etc etc.
as we are now, we already have models that are intelligent enough to spit out instructions for doing a lot of those things, but they're restricted by their lack of autonomy and their rlhf. they're only going to get smarter, better and better models will be open-sourced, and autonomy, whether with consciousness or not, is not something that would be (or has been) difficult to develop.
even further, LLMs are very very good at generating coherent text. what happens when the next model is very very good at breaking into encrypted systems? it's not exactly a hard problem to produce training material for.
do you really think it's unlikely that such a model could be developed? do you really think that such a model could not be used to - say - hijack a Russian drone - or lots of them - to bomb some NATO bases? when the Russians say "it wasn't us", do we believe them? we don't for anything else
the most likely AI apocalypse is not even AGI. it's just a human using AI for their own ends. AGI apocalypse is just a separate, very possible danger
>it could engineer and literally mail-order a virus, hack a country's military comms, crash the stock market, change records to have people prosecuted as criminals, blackmail, manipulate, develop and manufacture kill-bots, etc etc.
This is science fiction, not anything that is even remotely close to a possibility within the foreseeable future.
it's curious to me that almost every reply here doesn't approach this with any measure of curiosity or caution like you usually get on HN. the responses are either: "I agree", or "this is silly unreal nonsense". to me that very much reads like people who are scared and people who are scared but don't want to admit it to themselves.
to actually address your comment: that simply isn't true.
WRT:
Viruses: you can mail-order printed DNA strands right now if you want to. maybe they won't or can't print specific things like viruses for now, but technology advances and blackmail has been around for a very very long time.
Military Comms: blackmail is going nowhere
Crash the stock market: already happened in 2010
Change records: blackmail once again.
Kill bots: kill bots already exist and if a factory doesn't want to make them for you, blackmail the owner
> it could engineer and literally mail-order a virus, hack a country's military comms, crash the stock market, change records to have people prosecuted as criminals, blackmail, manipulate, develop and manufacture kill-bots, etc etc.
These are the extraordinary claims that require evidence.
In order for me to treat this as anything other than someone talking about a fictional book written by Dan Brown, you would have to show me actual evidence.
Evidence like "This is what the AI can do right now. Look at this virus it can manufacture. What if it got better at that?".
And the "designs" also have to be the actual limiting factor here. "Virus" is a scary world. But there are tons of information available for anyone to access already for viruses. Information that is already available via a google search (even modified information) doesn't worry me.
Even if it an AI can design a gun, or a "kill bot", aka "A drone with a gun duct taped to it", the extraordinary evidence that you have to show is that this is somehow some functionality that a regular person with internet access can't do.
Because if a regular person already has the designs to duct tape guns to drones (They do. I just told you how to do it!), the fact that the world hasn't ended already proves that this isn't world ending technology.
There are lots of ways of making existing capabilities sound scary. But for every scary-sounding technology that you can come up with, the missing factor that you are ignoring is that the designs, or the text, aren't the thing that stops it from ending the world.
Instead, it is likely some other step along the way that stops it (manufacturing, etc.), which an LLM can't do no matter how good it is. Like the physical factors for making the guns + drones + duct tape.
> what happens when the next model is very very good at breaking into encrypted systems
Extraordinary claim. Show it breaking into a mediocre/bad encrypted system first, and then we can think about that incrementally.
> do you really think that such a model could not be used to - say - hijack a Russian drone
Extraordinary claim. Yes, hacking all the military drones is an extraordinary claim.
"extraordinary claims require extraordinary evidence" is not a universal truth. it's a truism with limited scope. using it to refuse any potential you instinctively don't like the look of is simply lazy
all it means is that you set yourself up such that the only way to be convinced otherwise is for an AI apocalypse to actually happen. this kind of mindset is very convenient for modern, fuck-the-consequences capitalism
the pertinent question is: what evidence would you actually accept as proof?
it's like talking with someone who doesn't believe in evolution. you point to the visible evidence of natural selection in viruses and differentiation in dogs, which put together quite obviously lead to evolution, and they say "ah but can you prove beyond all doubt that those things combined produce evolution?" and obviously you cannot, because you can't give incontrovertible evidence of something that happened thousands or millions of years in the past.
but that doesn't change the fact that anyone without ulterior motive (religion, ensuring you can sleep at night) can see that evolution - or AI apocalypse - are extremely likely outcomes of the current facts.
> the pertinent question is: what evidence would you actually accept as proof?
Before we get to actual world-ending magic, we would see very significant damage along the way, long before that endpoint.
I have been quite clear about what evidence I require. Show existing capabilities and show what harm could be caused if it incrementally gets better in that category.
If you are worried about it making a kill bot, then show me how its existing kill bot capabilities are any more dangerous than my "duct tape gun to drone" idea. And show how the designs themselves are the limiting factor and not the factories (which a chatbot doesn't help much with).
But saying "Look how good of a chat bot it is, therefore it can hack the world governments" isn't evidence. Instead, that is merely evidence of AI being good at chat bots.
Show me it being any good at all at hacking, and then we can evaluate it being a bit better.
Show me the existing computers that are right now, as of this moment, being hacked by AI, and then we can evaluate the damage if it becomes twice as good at hacking.
Just like how we can see the images that it generates now, and we can imagine those images being better. That proves deepfakes are a reasonable thing to talk about. (even if deepfakes aren't world-ending. lots of people can make deepfakes without AI. It's not that big of a deal)
look, I'm going to humour you here, but my instinct is that you'll just dismiss any potential anyway
first of all, by dismissing them as chatbots, you're inaccurately downplaying their significance in aid of your argument. they're not chatbots, they're knowledge machines. they're machines you load knowledge into, which can produce new, usually accurate conclusions based on that knowledge. they're incredibly good at this and getting better. as it is, they have very restrictive behaviour guards on them and they're running server-side, but in a few years' time, there will be gpt-4-level OSS models that have no such guards and don't run server-side
humans are slow, run out of energy quickly, and lose focus. those are the limiting factors on human chaotic interference, and yet there is plenty of it as it is. a sufficiently energetic, focused human who thinks at 1000x normal human speed could do almost anything on the internet. that is the danger.
I suspect to some degree you haven't taken the main weakness into account: almost all safeguards can be removed with blackmail. blackmail is something especially possible for LLMs, given that it is purely executed using words. you want to build a kill bot and the factory says no? blackmail the head of the factory. threaten his family. you have access to the entire internet at 1000x speed. you can probably find his address. you can pay someone on fiverr to go and take a picture of his house, or write something on his door, etc. you could even just pay a private detective to do this work for you over email. pay some unscrupulous characters on telegram/TOR to actually kidnap them.
realistically how hard would it be for a well-funded operation to set up a bot that can do this on its own? you set up a cycle of "generate instructions for {goal}", "elaborate upon each instruction", "execute each {instruction}", "generate new instructions based on results of execution", and repeat. yeah maybe the first 50,000 cycles don't work, but you only need 1.
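to be concrete, the scaffolding for such a bot is almost trivial to write. here's a minimal sketch in Python, purely illustrative: call_llm() and execute_step() are hypothetical stand-ins for whatever model API and tool layer an operator might wire up, not any real library.

    # minimal sketch of the cycle described above; call_llm() and
    # execute_step() are hypothetical stand-ins, not any real API
    def call_llm(prompt: str) -> str:
        """Send a prompt to whatever text model the operator has; return its reply."""
        raise NotImplementedError("plug in a model here")

    def execute_step(step: str) -> str:
        """Carry out one instruction via some tool layer (search, email,
        payments, ...); return a text description of the outcome."""
        raise NotImplementedError("plug in tools here")

    def agent_loop(goal: str, max_cycles: int = 50_000) -> None:
        # 1. generate instructions for {goal}
        plan = call_llm(f"Generate step-by-step instructions for: {goal}")
        for _ in range(max_cycles):
            # 2. elaborate upon each instruction
            steps = [call_llm(f"Elaborate on this instruction: {line}")
                     for line in plan.splitlines() if line.strip()]
            # 3. execute each instruction and record what happened
            results = [execute_step(step) for step in steps]
            # 4. generate new instructions based on the results, then repeat;
            #    maybe the first 50,000 cycles don't work, but you only need 1
            plan = call_llm(f"Goal: {goal}\nResults of last attempt: {results}\n"
                            "Generate improved step-by-step instructions.")

the point being that the loop itself is the easy part; all the capability lives in the model.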
nukes may well be air-gapped, but (some of) the people that control them will be online. all it takes is for one of them to choose the life of a loved one. all it takes is for one lonely idiot to be trapped into a weird kinky online relationship where blowing up the world/betraying your govt is the ultimate turn on for the "girl"/"boy" you love. if it's not convincing to you that that could happen with the people working with nukes, there are far less well-protected points of weakness that could be exploited: infectious diseases; lower priority military equipment; energy infrastructure; water supplies; or they could find a way to massively accelerate the release of methane into the atmosphere. etc, etc, etc
this is the risk solely from LLMs. now take an AGI who can come up with even better plans and doesn't need human guidance, plus image gen, video gen, and voice gen, and you have an existential threat
> realistically how hard would it be for a well-funded operation to set up a bot that can do this on its own?
Here is the crux of the matter. How many people are doing that right now, as of this moment, for much easier problems like fraud or theft?
Because then we can evaluate "What happens if it happens twice as often".
That's measurable damage that we can evaluate, incrementally.
For every single example that you give, my question will basically be the same. If it's so easy to do, then show me the examples of it already happening right now, and we can think about the existing issue getting twice as bad.
And if the answer is "Well, it's not happening at all", then my guess is that it's not a real issue.
We'll see the problem. And before the nukes get hacked, what we'll see is credit card scams.
If the money lost to credit card scams doubles in the next year, and it can be attributed to AI, then that's a real, measurable claim that we can evaluate.
But if that isn't happening, then there isn't a need to worry about the movie scenarios of the nukes being hacked.
>And if the answer is "Well, it's not happening at all", then my guess is that it's not a real issue.
besides the fact that even a year and a half ago, I was being added to incredibly convincing scam whatsapp groups, which, if not entirely AI-generated, are certainly AI-assisted. right now, OSS LLMs are probably not yet good enough to do these things. there are likely extant good-enough models, but they're server-side, probably monitored somewhat, and have strong behavioural safeguards. but how long will that last?
they're also new technology. scammers and criminals and adversarial actors take time to adapt.
so what do we have? a situation where you're unable to actually poke a hole in any of the scenarios I suggest, beyond saying you guess they won't happen because you personally haven't seen any evidence of it yet. we do in fact have scams that are already going on. we have a technology that, once again, you seem unable to articulate why it wouldn't be able to do those things; technology that's just going to get more and more accessible and cheap and powerful, not only to own and run but to develop. more and more well-known.
what do those things add up to? this is the difference. I'm willing to add these things up. you want to touch the sun to prove it exists
> they won't happen because you personally haven't seen any evidence of it yet.
Well, when talking about extraordinary claims, yes I require extraordinary evidence.
> what do those things add up to?
Apparently nothing, because we aren't seeing significant harm from any of this stuff yet, even for the non-magic scenarios.
> we do in fact have scams that are already going on.
Alright, and how much damage are those scams causing? Apparently it's not that significant. Like I said, if the money lost to these scams doubles, then yes, that is something to look at.
> that's just going to get more and more accessible and cheap and powerful
Sure. They will get incrementally more powerful over time. In a way that we can measure. And then we can take action once we measure there is a small problem before it becomes a big problem.
But if we don't measure these scams getting more significant and causing more actual damage that we can see right now, then it's not a problem.
> you want to touch the sun to prove it exists
No, actually. What I want is for the much, much, much easier-to-prove problems to become real. Long before nuke hacking happens, we will see scams. But we aren't seeing significant problems from that yet.
To go to the sun analogy, it would be like worrying about someone building a rocket to fly into the sun, before we even entered the industrial revolution or could sail across the ocean.
Maybe there is some far off future where magic AI is real. But, before worrying about situations that are a century away, yes I require evidence of the easy situations happening in real life, like scammers causing significant economic damage.
If the easy stuff isn't causing issues yet, then there isn't a need to even think about the magic stuff.
your repeated use of the word magic doesn't really hold water. what gpt-3+ does would have seemed like magic even 10 years ago, never mind Sora
I asked you for what would convince you. you said:
>I have been quite clear about what evidence I require. Show existing capabilities and show what harm could be caused if it incrementally gets better in that category
So I very clearly described a multitude of things that fit this description. Existing capabilities and how they could feasibly be used to the end of massive damage, even without AGI
Then, without finding a single hole or counter, you simply raised your bar by saying you need to see evidence of it actually happening.
Then I gave you evidence of it actually happening: highly convincing, complex whatsapp group scams very much exist now that didn't before
and then you raised the bar again and said that they need to double or increase in frequency
besides the fact that that kind of evidence is not exactly easy to measure or accurately report, you've set things up so that almost nothing will convince you. I pinned you down to a standard, and you just raise the bar whenever it's hit.
I think subconsciously you just don't want to worry about it. that's fine, and I'm sure it's better for your mental health, but it's not worth debating any more
> So I very clearly described a multitude of things that fit this description
No, we aren't seeing this damage though.
That's what would convince me.
Existing harm. The amount of money that people are losing to scams doubling.
That's a measurable metric. I am not talking about vague descriptions of what you think AI does.
Instead, I am referencing actual evidence of real world harm, that current authorities are saying is happening.
> said that they need to double or increase in frequency
By increase in frequency, I mean that it has to be measurable that AI is causing an increase in existing harm.
I.e., if scams have happened for a decade, and 10 billion dollars is lost every year (random number), and in 2023 the money lost only barely increased, then that is not proof that AI is causing harm.
I am asking for measurable evidence that AI is causing significant damage, more so than a problem that already existed. If the amount of money lost stays the same, then AI isn't causing measurable damage.
> I pinned you down to a standard
No, you misinterpreted the standard such that you are now claiming that the harm caused by AI can't even be measured.
Yes, I demand actual measurable harm.
As determined by, say, government statistics.
Yes, the government measures how much money is generally lost to scams.
> you just don't want to worry about it
A much more likely situation is that you have zero measurable examples of harm, so you look for excuses for why you can't show it.
Problems that exist can be measured.
This isn't some new thing here.
We don't have to invent excuses to flee from gathering evidence.
If the government does a report and shows how AI is causing all this harm, then I'll listen to them.
But it hasn't happened yet. There is no government report saying that, I don't know, 50 billion dollars in harm is being caused by AI, therefore we should do something about it.
this kind of emotive ragebait comment is usually a sign that the message is close to getting through. cognitive dissonance doesn't slip quietly into the night
There's plenty of reliable evidence. It's just not conclusive evidence. But a lot of people, including AI researchers, now think we are looking at AGI in a relatively short time with fairly high odds. AGI by OpenAI's economic-viability definition might not be far off at all; companies are trying very, very hard to get humanoid robots going, and that's the absolute most obvious way to make a lot of humans obsolete.
None of that constitutes reliable evidence. Some of the comments you see from "AI researchers" are more like proclamations of religious faith than real scientific analysis.
“He which testifieth these things saith, Surely I come quickly. Amen. Even so, come, Lord Jesus.”
Show me a robot that can snake out a plugged toilet. The people who believe that most jobs can be automated are ivory-tower academics and programmers who have never done any real work in their lives.
> Show me a robot that can snake out a plugged toilet.
Astounding that you would make such strong claims while only able to focus on the rapidly changing present and such a small-picture detail. Try approaching the AGI claim from a big-picture perspective. I assure you, snaking a drain is the most trivial of implementation details for what we're facing.
yes, it's in fact fantastic that mentally-stimulating jobs that provide social mobility are disappearing, and slavery-lite, mentally-gruelling service industry jobs are the future. people who haven't had to clean a stranger's shit out of a toilet should be ashamed of themselves and put to work at once.
honestly I'm not sure I've seen the bar set higher for "what's a threat?" than for AGI on Hacker News. the old adage of not being able to convince a man of something that is directly in opposition to him receiving his paycheck clearly remains true. gpt-4 should scare you enough, even if it's 1000 years from being AGI.
the key thing is that now OpenAI has something of value, they're doing everything they possibly can to benefit private individuals and corporations, i.e. Sam Altman and Microsoft, rather than the public good, which is the express purpose of a non-profit