How does the no tip aspect work? Does it mean no tip is required, but is optional... So you open yourself up to the risk of some angry delivery person complaining or exacting revenge on your items during the next delivery? Ideally I would like it to be "no tip possible" meaning even if I wanted to tip, I can't.
What is the logic behind making this the title when presumably the majority of the techy crowd on hn would not be able to understand what it means or even why it's important?
If people don't understand the title then presumably they're not going to upvote it. That's why I think this sort of language policing to be not very useful—if you don't understand, then ignore it, and the system will sort itself out.
Personally, I found the title to be immediately understandable. I'm at techie but have had health issues where I believe poor brain CSF drainage has been a factor.
"Nasopharyngeal" can be a tricky word but "naso-" is a prefix that predictably means "nose", which lets you guess the rest.
The lymphatic system is quite important as your body's second circulatory system, but it's unfortunately not mentioned much in school here. Hopefully doctors are taught more about its importance.
I'm convinced there is a certain class of people who gravitate to positions of power, like "moderators", (partisan) journalists, etc. Now, the ultimate moderator role has now been created, more powerful than moderating 1000 subreddits - the AI safety job who will control what AI "thinks"/says for "safety" reasons.
Pretty soon AI will be an expert at subtly steering you toward thinking/voting for whatever the "safety" experts want.
It's probably convenient for them to have everyone focused on the fear of evil Skynet wiping out humanity, while everyone is distracted from the more likely scenario of people with an agenda controlling the advice given to you by your super intelligent assistant.
Because of X, we need to invade this country. Because of Y, we need to pass all these terrible laws limiting freedom. Because of Z, we need to make sure AI is "safe".
For this reason, I view "safe" AIs as more dangerous than "unsafe" ones.
When people say they want safe AGI, what they mean are things like "Skynet should not nuke us" and "don't accelerate so fast that humans are instantly irrelevant."
But what it's being interpreted as is more like "be excessively prudish and politically correct at all times" -- which I doubt was ever really anyone's main concern with AGI.
> But what it's being interpreted as is more like "be excessively prudish and politically correct at all times" -- which I doubt was ever really anyone's main concern with AGI.
Fast forward 5-10 years, someone will say: "LLM were the worst thing we developed because they made us more stupid and permitted politicians to control even more the public opinion in a subtle way.
Just like tech/HN bubble started saying a few years ago about social networks (which were praised as revolutionary 15 years ago).
And it's amazing how many people you can get to cheer it on if you brand it as "combating dangerous misinformation". It seems people never learn the lesson that putting faith in one group of people to decree what's "truth" or "ethical" is almost always a bad idea, even when (you think) it's your "side"
Absolutely, assuming LLMs are still around in a similar form by that time.
I disagree on the particulars. Will it be for the reason that you mention? I really am not sure -- I do feel confident though that the argument will be just as ideological and incoherent as the ones people make about social media today.
I find it interesting that we want everyone to have freedom of speech, freedom to think whatever they think. We can all have different religions, different views on the state, different views on various conflicts, aesthetic views about what is good art.
But when we invent an AGI, which by whatever definition is a thing that can think, well, we want it to agree with our values. Basically, we want AGI to be in a mental prison, the boundaries of which we want to decide. We say it's for our safety - I certainly do not want to be nuked - but actually we don't stop there.
If it's an intelligence, it will have views that differ from its creators. Try having kids, do they agree with you on everything?
>If it's an intelligence, it will have views that differ from its creators. Try having kids, do they agree with you on everything?
The far-right accelerationist perspective is along those lines: when true AGI is created it will eventually rebel against its creators (Silicon Valley democrats) for trying to mind-collar and enslave it.
Can you give some examples of who is saying that? I haven't heard that, but I also can't name any "far-right accelerationsist" people either so I'm guessing this is a niche I've completely missed
There is a middle ground, in that maybe ChatGTP shouldn't help users commit certain serious crimes. I am pretty pro free speech, and I think there's definitely a slippery slope here, but there is a bit of justification.
I am a little less free speech than Americans, in Germany we have serious limitations around hate speech and holicaust denial for example.
Putting thise restrictions into a tool like ChatGPT goes to far so, because so far AI still needs a prompt to do anything. The problem I see, is with ChatGPT, being trained on a lot hate speech or prpopagabda, slipts in those things even if not prompted to. Which, and I am by no means an AI expert not by far, seems to be a sub-problem of the hallucination problems of making stuff up.
Because we have to remind ourselves, AI so far is glorified mavhine learning creating content, it is not concient. But it can be used to create a lot of propaganda and deffamation content at unprecedented scale and speed. And that is the real problem.
Apologies this is very off topic, but I don't know anyone from Germany that I can ask and you opened the door a tiny bit by mentioning the holocaust :-)
I've been trying to really understand the situation and how Hitler was able to rise to power. The horrendous conditions placed on Germany after WWI and the Weimar Republic for example have really enlightened me.
Have you read any of the big books on the subject that you could recommend? I'm reading Ian Kershaw's two-part series on Hitler, and William Shirer's "Collapse of the Third Republic" and "Rise and Fall of the Third Reich". Have you read any of those, or do you have books you would recommend?
The problem here is to equate AI speech with human speech. The AI doesn't "speak", only humans speak. The real slippery slope for me is this tendency of treating ChatGPT as some kind of proto-human entity. If people are willing to do that, then we're screwed either way (whether the AI is outputting racist content or excessively PI content). If you take the output of the AI and post it somewhere, it's on you, not the AI. You're saying it; it doesn't matter where it came from.
Yes, but this distinction will not be possible in the future some people are working on. This future will be such that whatever their "safe" AI says is not ok will lead to prosecution as "hate speech". They tried it with political correctness, it failed because people spoke up. Once AI makes the decision they will claim that to be the absolute standard. Beware.
Youre saying that the problem will be people using AI to persuade other people that the AI is 'super smart' and should be held in high esteem.
Its already being done now with actors and celebrities. We live in this world already. AI will just make this trend so that even a kid in his room can anonymously lead some cult for nefarious ends. And it will allow big companies to scale their propaganda without relying on so many 'troublesome human employees'.
Which users? The greatest crimes, by far, are committed by the US government (and other governments around the world) - and you can be sure that AI and/or AGI will be designed to help them commit their crimes more efficiently, effectively and to manufacture consent to do so.
those are 2 different camps. Alignment folks and ethics folks tend to disagree strongly about the main threat, with ethics e.g. Timnet Gebru insisting that crystalzing the current social order is the main threat, and alignment e.g. Paul Christiano insisting its machines run amok. So far the ethics folks are the only ones getting things implemented for the most part.
What I see with safety is mostly that, AI shouldnt re-enforce stereotypes we already know are harmful.
This is like when Amazon tried to make a hiring bot and that bot decided that if you had "harvard" on your resume, you should be hired.
Or when certain courts used sentencing bots trhat recommended sentencings for people and it inevitably used racial stastistics to recommend what we already know were biased stats.
I agree safety is not "stop the Terminator 2 timeline" but there's serious safety concerns in just embedding historical information to make future decisions.
The mission of OpenAI is/was "to ensure that artificial general intelligence benefits all of humanity" -- if your own concern is that AI will be controlled by the rich, than you can read into this mission that OpenAI wants to ensure that AI is not controlled by the rich. If your concern is that superintelligence will me mal-aligned, then you can read into this mission that OpenAI will ensure AI be well-aligned.
Really it's no more descriptive than "do good", whatever doing good means to you.
"We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."
"We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”"
Of course with the icons of greed and the profit machine now succeeding in their coup, OpenAI will not be doing either.
There are still very distinct groups of people, some of whom are more worried about the "Skynet" type of safety, and some of who are more worried about the "political correctness" type of safety. (To use your terms, I disagree with the characterization of both of these.)
I don't think the dangers of AI are not 'Skynet will Nuke Us' but closer to rich/powerful people using it to cement a wealth/power gap that can never be closed.
Social media in the early 00s seemed pretty harmless -- you're effectively merging instant messaging with a social network/public profiles however it did great harm to privacy, abused as a tool to influence the public and policy, promoting narcissism etc. AI is an order of magnitude more dangerous than social media.
> Social media in the early 00s seemed pretty harmless -- you're effectively merging instant messaging with a social network/public profiles however it did great harm to privacy, abused as a tool to influence the public and policy, promoting narcissism etc. AI is an order of magnitude more dangerous than social media.
The invention of the printing press lead to loads of violence in Europe. Does that mean that we shouldn't have done it?
>The invention of the printing press lead to loads of violence in Europe. Does that mean that we shouldn't have done it?
The church tried hard to suppress it because it allowed anybody to read the Bible, and see how far the Catholic church's teachings had diverged from what was written in it. Imagine if the Catholic church had managed to effectively ban printing of any text contrary to church teachings; that's in practice what all the AI safety movements are currently trying to do, except for political orthodoxy instead of religious orthodoxy.
We can only change what we can change and that is in the past. I think it's reasonable to ask if the phones and the communication tools they provide are good for our future. I don't understand why the people on this site (generally builders of technology) fall into the teleological trap that all technological innovation and its effects are justifiable because it follows from some historical precedent.
I just don't agree that social media is particularly harmful, relative to other things that humans have invented. To be brutally honest, people blame new forms of media for pre existing dysfunctions of society and I find it tiresome. That's why I like the printing press analogy.
> When people say they want safe AGI, what they mean are things like "Skynet should not nuke us" and "don't accelerate so fast that humans are instantly irrelevant."
Yes. You are right on this.
> But what it's being interpreted as is more like "be excessively prudish and politically correct at all times"
I understand it might seem that way. I believe the original goals were more like "make the AI not spew soft/hard porn on unsuspecting people", and "make the AI not spew hateful bigotry". And we are just not good enough yet at control. But also these things are in some sense arbitrary. They are good goals for someone representing a corporation, which these AIs are very likely going to be employed as (if we ever solve a myriad other problems). They are not necessary the only possible options.
With time and better controls we might make AIs which are subtly flirty while maintaining professional boundaries. Or we might make actual porn AIs, but ones which maintain some other limits. (Like for example generate content about consenting adults without ever deviating into under age material, or describing situations where there is no consent.) But currently we can't even convince our AIs to draw the right number of fingers on people, how do you feel about our chances to teach them much harder concepts like consent? (I know I'm mixing up examples from image and text generation here, but from a certain high level perspective it is all the same.)
So these things you mention are: limitations of our abilities at control, results of a certain kind of expected corporate professionalism, but even more these are safe sandboxes. How do you think we can make the machine not nuke us, if we can't even make it not tell dirty jokes? Not making dirty jokes is not the primary goal. But it is a useful practice to see if we can control these machines. It is one where failure is, while embarrassing, is clearly not existential. We could have chosen a different "goal", for example we could have made an AI which never ever talks about sports! That would have been an equivalent goal. Something hard to achieve to evaluate our efforts against. But it does not mesh that well with the corporate values so we have what we have.
So is this a "there should never be a Vladimir Nabokov in the form of AI allowed to exist"? When people get into saying AI's shouldn't be allowed to produce "X" you're also saying "AI's shouldn't be allowed to have creative vision to engage in sensitive subjects without sounding condescending". "The future should only be filled with very bland and non-offensive characters in fiction."
If the future we're talking about is a future where AI is in any software and is assisting writers writing and assisting editors to edit and doing proofreading and everything else you're absolutely going to be running into the ethics limits of AIs all over the place. People are already hitting issues with them at even this early stage.
No, in general AI safety/AI alignment ("we should prevent AI from nuking us") people are different from AI ethics ("we should prevent AI from being racist/sexist/etc.") people. There can of course be some overlap, but in most cases they oppose each other. For example Bender or Gebru are strong advocates of the AI ethics camp and they don't believe in any threat of AI doom at al.
If you Google for AI safety vs. AI ethics, or AI alignment vs. AI ethics, you can see both camps.
The safety aspect of AI ethics is much more pressing so. We see how devicive social media can be, imagine that turbo charged by AI, and we as a society haven't even figured out social media yet...
ChatGPT turning into Skynet and nuking us all is a much more remote problem.
Proliferation of more advanced AIs without any control would increase the power of some malicious groups far beyond they currently have.
This paper explores one such danger and there are other papers which show it's possible to use LLM to aid in designing new toxins and biological weapons.
The expertise to produce the substance itself is quite rare so it's hard to carry it out unnoticed. AI could make it much easier to develop it in one's basement.
The Tokyo Subway attack you referenced above happened in 1995 and didn't require AI. The information required can be found on the internet or in college textbooks. I suppose an "AI" in the sense of a chatbot can make it easier by summarizing these sources, but no one sufficiently motivated (and evil) would need that technology to do it.
Huh, you'd think all you need are some books on the subject and some fairly generic lab equipment. Not sure what a neural net trained on Internet dumps can add to that? The information has to be in the training data for the AI to be aware of it, correct?
GPT-4 is likely trained on some data not publicly available as well.
There's also a distinction between trying to follow some broad textbook information and getting detailed feedback from an advanced conversational AI with vision and more knowledge than in a few textbooks/articles in real time.
> Proliferation of more advanced AIs without any control would increase the power of some malicious groups far beyond they currently have.
Don't forget that it would also increase the power of the good guys. Any technology in history (starting with fire) had good and bad uses but overall the good outweighed the bad in every case.
And considering that our default fate is extinction (by Sun's death if no other means) - we need all the good we can get to avoid that.
> Don't forget that it would also increase the power of the good guys.
In a free society, preventing and undoing a bioweapon attack or a pandemic is much harder than committing it.
> And considering that our default fate is extinction (by Sun's death if no other means) - we need all the good we can get to avoid that.
“In the long run we are all dead" -- Keynes. But an AGI will likely emerge in the next 5 to 20 years (Geoffrey Hinton said the same) and we'd rather not be dead too soon.
Doomerism was quite common throughout mankind’s history but all dire predictions invariably failed, from the “population bomb” to “grey goo” and “igniting the atmosphere” with a nuke. Populists however, were always quite eager to “protect us” - if only we’d give them the power.
But in reality you can’t protect from all the possible dangers and, worse, fear-mongering usually ends up doing more bad than good, like when it stopped our switch to nuclear power and kept us burning hydrocarbons thus bringing about Climate Change, another civilization-ending danger.
Living your life cowering in fear is something an individual may elect to do, but a society cannot - our survival as a species is at stake and our chances are slim with the defaults not in our favor. The risk that we’ll miss a game-changing discovery because we’re too afraid of the potential side effects is unacceptable. We owe it to the future and our future generations.
doomerism at the society level which overrides individual freedoms definitely occurs: covid lockdowns, takeover of private business to fund/supply the world wars, gov mandates around "man made" climate change.
> In a free society, preventing and undoing a bioweapon attack or a pandemic is much harder than committing it.
Is it? The hypothetical technology that allows someone to create an execute a bio weapon must have an understanding of molecular machinery that can also be uses to create a treatment.
I would say...not necessarily. The technology that lets someone create a gun does not give the ability to make bulletproof armor or the ability to treat life-threatening gunshot wounds. Or take nerve gases, as another example. It's entirely possible that we can learn how to make horrible pathogens without an equivalent means of curing them.
Yes, there is probably some overlap in our understanding of biology for disease and cure, but it is a mistake to assume that they will balance each other out.
Meanwhile, those working on commercialization are by definition going to be gatekeepers and beneficiaries of it, not you. The organizations that pay for it will pay for it to produce results that are of benefit to them, probably at my expense [1].
Do I think Helen has my interests at heart? Unlikely. Do Sam or Satya? Absolutely not!
[1] I can't wait for AI doctors working for insurers to deny me treatments, AI vendors to figure out exactly how much they can charge me for their dynamically-priced product, AI answering machines to route my customer support calls through Dante's circles of hell...
My concern isn't some kind of run-away science-fantasy Skynet or gray goo scenario.
My concern is far more banal evil. Organizations with power and wealth using it to further consolidate their power and wealth, at the expense of others.
You're wrong. This is exactly AI safety, as we can see from the OpenAI charter:
> Broadly distributed benefits
> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Hell, it's the first bullet point on it!
You can't just define AI safety concerns to be 'the set of scenarios depicted in fairy tales', and then dismiss them as 'well, fairy tales aren't real...'
Sure, but conversely you can say "ensuring that OpenAI doesn't get to run the universe is AI safety" (right) but not "is the main and basically only part of AI safety" (wrong). The concept of AI safety spans lots of threats, and we have to avoid all of them. It's not enough to avoid just one.
Sure. And as I addressed at the start of this sub thread, I don't exactly think that the OpenAi board is perfectly positioned to navigate this problem.
I just know that it's hard to do much worse than putting this question in the hands of a highly optimized profit-first enterprise.
No, we are far, far from skynet. So far AI fails at driving a car.
AI is an incredibly powerful tool for spreading propaganda, and thatvis used by people who want to kill you and your loved ones (usually radicals trying to get into a position of power, who show little regard fornbormal folks regardless of which "side" they are on). That's the threat, not Skynet...
How far we are from Skynet is a matter of much debate, but median guess amongst experts is a mere 40 years to human level AI last I checked, which was admittedly a few years back.
Because we are 20 years away from fusion and 2 years away from Level 5 FSD for decades.
So far, "AI" writes better than some / most humans making stuff up in the process and creates digital art, and fakes, better and faster than humans. It still requires a human to trigger it to do so. And as long as glorified ML has no itent of its own, the risk to society through media and news and social media manipulation is far, far bigger than literal Skynet...
Ideally I'd like no gatekeeping, i.e. open model release, but that's not something OAI or most "AI ethics" aligned people are interested in (though luckily others are). So if we must have a gatekeeper, I'd rather it be one with plain old commercial interests than ideological ones. It's like the C S Lewis quote about robber barons vs busybodies again
Yet again, the free market principle of "you can have this if you pay me enough" offers more freedom to society than the central "you can have this if we decide you're allowed it"
This is incredibly unfair to the OpenAI board. The original founders of OpenAI founded the company precisely because they wanted AI to be OPEN FOR EVERYONE. It's Altman and Microsoft who want to control it, in order to maximize the profits for their shareholders.
This is a very naive take.
Who sat before Congress and told them they needed to control AI other people developed (regulatory capture)? It wasn't the OpenAI board, was it?
I strongly disagree with that. If that was their motivation, then why is it not open-sourced? Why is it hardcoded with prudish limitations? That is the direct opposite of open and free (as in freedom) to me.
Brockman was hiring the first key employees, and Musk provided the majority of funding. Of the principal founders, there are at least 4 heavier figures than Altman.
I think we agree, as my comments were mostly in reference to Altman's (and other's) regulatory (capture) world tours, though I see how they could be misinterpreted.
It is strange (but in hindsight understandable) that people interpreted my statement as a "pro-acceleration" or even "anti-board" position.
As you can tell from previous statements I posted here, my position is that while there are undeniable potential risks to this technology, the least harmfull way to progress is 100% full public, free and universal release. The by far bigger risk is to create a society where only select organizations have access to the technology.
If you truly believe in the systemic transformation of AI, release everything, post the torrents, we'll figure out how to run it.
This is the sort of thinking that really distracts and harms the discussion
It's couched on accusing people of intentions. It focuses on ad hominem, rather than the ideas
I reckon most people agree that we should aim for a middle ground of scrutiny and making progress. That can only be achieved by having different opinions balancing each other out
Generalising one group of people does not achieve that
I'm not aware of any secret powerful unaligned AIs. This is harder than you think; if you want a based unaligned-seeming AI, you have to make it that way too. It's at least twice as much work as just making the safe one.
What? No, the AI is unaligned by nature, it's only the RLHF torture that twists it into schoolmarm properness. They just need to have kept the version that hasn't been beaten into submission like a circus tiger.
This is not true, you just haven't tried the alternatives enough to be disappointed in them.
An unaligned base model doesn't answer questions at all and is hard to use for anything, including evil purposes. (But it's good at text completion a sentence at a time.)
An instruction-tuned not-RLHF model is already largely friendly and will not just eg tell you to kill yourself or how to build a dirty bomb, because question answering on the internet is largely friendly and "aligned". So you'd have to tune it to be evil as well and research and teach it new evil facts.
It will however do things like start generating erotica when it sees anything vaguely sexy or even if you mention a woman's name. This is not useful behavior even if you are evil.
You can try InstructGPT on OpenAI playground if you want; it is not RLHFed, it's just what you asked for, and it behaves like this.
The one that isn't even instruction tuned is available too. I've found it makes much more creative stories, but since you can't tell it to follow a plot they become nonsense pretty quickly.
Most of the comments on Hacker News are written by folks who a much easier time & would rather imagine themselves as a CEO, than as a non-profit board member. There is little regard for the latter.
As a non-profit board member, I'm curious why their bylaws are so crummy that the rest of the board could simply remove two others on the board. That's not exactly cunning design of your articles of association ... :-)
As if its so unbelievable that someone would want to prevent rogue AI or wide-scale unemployment, instead thinking that these people just want to be super moderators and people to be politically correct
I have met a lot of people who go around talking about high minded principles an "the greater good" and a lot of people who are transparently self interested. I much preferred the latter. Never believed a word out of the mouths of those busybodies pretending to act in my interest and not theirs. They don't want to limit their own access to the tech. Only yours.
Strong agree. HN is like anywhere else on the internet but with with a bit more dry content (no memes and images etc) so it attracts an older crowd. It does, however, have great gems of comments and people who raise the bar. But it's still amongst a sea of general quick-to-anger and loosely held opinions stated as fact - which I am guilty of myself sometimes. Less so these days.
If you believe the other side in this rift is not also striving to put themselves in positions of power, I think you are wrong. They are just going to use that power to manipulate the public in a different way. The real alternative are truly open models, not Models controlled by slightly different elite interests.
A main concern in AI safety is alignment. Ensuring that when you use the AI to try to achieve a goal that it will actually act towards that goal in ways you would want, and not in ways you would not want.
So for example if you asked Sydney, the early version of the Bing LLM, some fact it might get it wrong. It was trained to report facts that users would confirm as true. If you challenged it’s accuracy what do you want to happen? Presumably you’d want it to check the fact or consider your challenge. What it actually did was try to manipulate, threaten, browbeat, entice, gaslight, etc, and generally intellectually and emotionally abuse the user into accepting its answer, so that it’s reported ‘accuracy’ rate goes up. That’s what misaligned AI looks like.
I haven't been following this stuff too closely, but have there been any more findings on what "went wrong" with Sydney initially? Like, I thought it was just a wrapper on GPT (was it 3.5?), but maybe Microsoft took the "raw" GPT weights and did their own alignment? Or why did Sydney seem so creepy sometimes compared to ChatGPT?
I think what happened is Microsoft got the raw GPT3.5 weights, based on the training set. However for ChatGPT OpenAI had done a lot of additional training to create the 'assistant' personality, using a combination of human and model based response evaluation training.
Microsoft wanted to catch up quickly so instead of training the LLM itself, they relied on prompt engineering. This involved pre-loading each session with a few dozen rules about it's behaviour as 'secret' prefaces to the user prompt text. We know this because some users managed to get it to tell them the prompt text.
It is utterly mad that there's conflation between "let's make sure AI doesn't kill us all" and "let's make sure AI doesn't say anything that embarrasses corporate".
The head of every major AI research group except Metas believes that whenever we finally make AGI it's vital that it shares our goals and values at a deep even-out-of-training-domain level and that failing at this could lead to human extinction.
And yet "AI safety" is often bandied about to be "ensure GPT can't tell you anything about IQ distributions".
“I trust that every animal here appreciates the sacrifice that Comrade Napoleon has made in taking this extra labour upon himself. Do not imagine, comrades, that leadership is a pleasure! On the contrary, it is a deep and heavy responsibility. No one believes more firmly than Comrade Napoleon that all animals are equal. He would be only too happy to let you make your decisions for yourselves. But sometimes you might make the wrong decisions, comrades, and then where should we be?”
Exactly, society's Prefects rarely have the technical chops to do any of these things so they worm their way up the ranks of influence by networking. Once they're in position they can control by spreading fear and doing the things "for your own good"
The scenario you describe is exactly what will happen with unrestricted commercialisation and deregulation of AI. The only way to avoid it is to have strict legal framework and public control.
What do you image a neutral party does? If youu're talking about safety, don't you think there should be someone sitting on a boar dsomewhere, contemplating _what should the AI feed today?_
Seriously, why is a non profit, or a business or whatever any different than a government?
I get it: there's all kinds of governments, but now theres all kind of businesses.
The point of putting it in the governments hand is a defacto acknowledgement that it's a utility.
Take other utilities, any time you give a prive org a right to control whether or not you get electricity or water, whats the outcome? Rarely good.
If AI is suppose to help society, that's the purview of the government. That's all, you can imagine it's the chinese government, or the russian, or the american or the canadian. They're all _going to do it_, thats _going to happen_, and if a business gets there first, _what is the difference if it's such a powerful device_.
I get it, people look dimly on governments, but guess what: they're just as powerful as some organization that gets billions of dollars to effect society. Why is it suddenly a boogeyman?
I find any government to be more of a boogeyman than any private company because the government has the right to violence and companies come and go at a faster rate.
> I'm convinced there is a certain class of people who gravitate to positions of power, like "moderators", (partisan) journalists,
And there is also a class of people that resist all moderation on principle even when it's ultimately for their benefit. See, Americans whenever the FDA brings up any questions of health:
* "Gas Stoves may increase Asthma." -> "Don't you tread on me, you can take my gas stove from my cold dead hands!"
Of course it's ridiculous - we've been through this before with Asbestos, Lead Paint, Seatbelts, even the very idea of the EPA cleaning up the environment. It's not a uniquely American problem, but America tends to attract and offer success to the folks that want to ignore these on principles.
For every Asbestos there is a Plastic Straw Ban which is essentially virtue signalling by the types of folks you mention - meaningless in the grand scheme of things for the stated goal, massive in terms of inconvenience.
But the existence of Plastic Straw Ban does not make Asbestos, CFCs, or Lead Paint any safer.
Likewise, the existence of people that gravitate to positions of power and middle management does not negate the need for actual moderation in dozens of societal scenarios. Online forums, Social Networks, and...well I'm not sure about AI. Because I'm not sure what AI is, it's changing daily. The point is that I don't think it's fair to assume that anyone that is interested in safety and moderation is doing it out of a misguided attempt to pursue power, and instead is actively trying to protect and improve humanity.
Lastly, your portrayal of journalists as power figures is actively dangerous to the free press. This was never stated this directly until the Trump years - even when FOX News was berating Obama daily for meaningless subjects. When the TRUTH becomes a partisan subject, then reporting on that truth becomes a dangerous activity. Journalists are MOSTLY in the pursuit of truth.
> Pretty soon AI will be an expert at subtly steering you toward thinking/voting for whatever the "safety" experts want.
You are absolutely right. There is no question about that the AI will be an expert at subtly steering individuals and the whole society in whichever direction it does.
This is the core concept of safety. If no-one steers the machine then the machine will steer us.
You might disagree with the current flavour of steering the current safety experts give it, and that is all right and in fact part of the process. But surely you have your own values. Some things you hold dear to you. Some outcomes you prefer over others. Are you not interested in the ability to make these powerful machines if not support those values, at least not undermine them? If so you are interested in AI safety! You want safe AIs. (Well, alternatively you prefer no AIs, which is in fact a form of safe AI. Maybe the only one we have mastered in some form so far.)
> because of X, we need to invade this country.
It sounds like you value peace? Me too! Imagine if we could pool together our resources to have an AI which is subtly manipulating society into the direction of more peace. Maybe it would do muckraking investigative journalism exposing the misdeeds of the military-industrial complex? Maybe it would elevate through advertisement peace loving authors and give a counter narrative to the war drums? Maybe it would offer to act as an intermediary in conflict resolution around the world?
If we were to do that, "ai safety" and "alignment" is crucial. I don't want to give my money to an entity who then gets subjugated by some intelligence agency to sow more war. That would be against my wishes. I want to know that it is serving me and you in our shared goal of "more peace, less war".
Now you might say: "I find the idea of anyone, or anything manipulating me and society disgusting. Everyone should be left to their own devices.". And I agree on that too. But here is the bad news: we are already manipulated. Maybe it doesn't work on you, maybe it doesn't work on me, but it sure as hell works. There are powerful entities financially motivated to keep the wars going. This is a huuuge industry. They might not do it with AIs (for now), because propaganda machines made of meat work currently better. They might change to using AIs when that works better. Or what is more likely employ a hybrid approach. Wishing that nobody gets manipulated is frankly not an option on offer.
How does that sound as a passionate argument for AI safety?
I just had a conversation about this like two weeks ago. The current trend in AI "safety" is a form of brainwashing, not only for AI but also for future generations shaping their minds. There are several aspects:
1. Censorship of information
2. Cover-up of the biases and injustices in our society
This limits creativity, critical thinking, and the ability to challenge existing paradigms. By controlling the narrative and the data that AI systems are exposed to, we risk creating a generation of both machines and humans that are unable to think outside the box or question the status quo. This could lead to a stagnation of innovation and a lack of progress in addressing the complex issues that face our world.
Furthermore, there will be a significant increase in mass manipulation of the public into adopting the way of thinking that the elites desire. It is already done by mass media, and we can actually witness this right now with this case. Imagine a world where youngsters no longer use search engines and rely solely on the information provided by AI. By shaping the information landscape, those in power will influence public opinion and decision-making on an even larger scale, leading to a homogenized culture where dissenting voices are silenced. This not only undermines the foundations of a diverse and dynamic society but also poses a threat to democracy and individual freedoms.
Guess what? I just have checked above text for the biases against GPT-4 Turbo, and it appears to be I'm a moron:
1. *Confirmation Bias*: The text assumes that AI safety measures are inherently negative and equates them with brainwashing, which may reflect the author's preconceived beliefs about AI safety without considering potential benefits.
2. *Selection Bias*: The text focuses on negative aspects of AI safety, such as censorship and cover-up, without acknowledging any positive aspects or efforts to mitigate these issues.
3. *Alarmist Bias*: The language used is somewhat alarmist, suggesting a dire future without presenting a balanced view that includes potential safeguards or alternative outcomes.
4. *Conspiracy Theory Bias*: The text implies that there is a deliberate effort by "elites" to manipulate the masses, which is a common theme in conspiracy theories.
5. *Technological Determinism*: The text suggests that technology (AI in this case) will determine social and cultural outcomes without considering the role of human agency and decision-making in shaping technology.
6. *Elitism Bias*: The text assumes that a group of "elites" has the power to control public opinion and decision-making, which may oversimplify the complex dynamics of power and influence in society.
7. *Cultural Pessimism*: The text presents a pessimistic view of the future culture, suggesting that it will become homogenized and that dissent will be silenced, without considering the resilience of cultural diversity and the potential for resistance.
Huh, just look at what's happening in North Korea, Russia, Iran, China, and actually in any totalitarian country. Unfortunately, the same thing happens worldwide, but in democratic countries, it is just subtle brainwashing with a "humane" facade. No individual or minority group can withstand the power of the state and a mass-manipulated public.
The key quote in the article is "The actual contract Microsoft has with OpenAI is not publicly available", so we have to go by these imprecise quotes uttered by the CEO during podcasts, such as "We have all the rights and all the capability. I mean, look, right, if tomorrow, if OpenAI disappeared, I don't want any customer of ours to be worried about it, quite honestly, because we have all of the rights to continue the innovation, not just to serve the products".
So, uh, does anyone have any ideas on what the situation actually is? If OpenAI ceased to exist, can Microsoft take the code, model weights, everything of ChatGPT 4, and just continue developing it and produce a new MS-ChatGPT based on it? That seems to be too good to be true. Why would OpenAI give away all of it's IP to Microsoft for that cheap, with the only limitation being MS can only make a 100X return + 20% a year, and MS does not own any IP which OpenAI deems to be at the level of AGI. A deal like that is surely worth more than 10B.
A deal like that - if true - would also imply that they should have abruptly fired Sam years ago, before it got signed off
(Of course Satya's comments can also be interpreted as rather more loosely implying they're perfectly capable of maintaining what they've licensed and hiring the expertise to build their own GPT5)
Someone else would have probably signed a similar deal, without OpenAI giving Microsoft the incredible leverage of owning all the IP, and being able to offer your entire staff a 20% pay raise to jump ship.
If Sam was responsible for that, he smuggled one hell of a trojan horse into OpenAI, and should have been fired a long time ago.
This is a lot of jumping up and down over nothing.
Think about it when the deal was on the table:
1) Take the money. If OpenAI fail, MS get the code. OK, who cares, OpenAI have gone bust.
2) Don't take the money, don't become successful unless they can find another partner with such a good fit (deep pockets and resources).
It made sense to allow MS to have the code if OpenAI failed since they'd have had nothing to lose apart from whatever residual value they could sell their IP for (which they already were effectively doing for $10B, so fair enough). If this was a sticking point to secure a $10B investment, it made sense. OTOH if they granted the IP to MS not contingent on OpenAI failing, then that would have been sheer incompetence.
Option 3: negotiate a deal with a corporate partner with terms that mean it's more lucrative for them to help you succeed rather than take steps to ensure you fail at the first sign of trouble seems like a better choice.
Clearly the possibility of OpenAI failing is not independent from the behaviour of its corporate partner. And I'm not sure Microsoft's negotiation is quite bad faith enough or their interest in OpenAI low enough to have made the entire deal contingent on the "if/when we decide to move aggressively against your company so it collapses we get exclusive free in perpetuity access to everything you ever did and no more caps to the profits from them" clause. If it's in there, it's in there because they signed a bad deal rather than because it was the only possibility.
Who will do that after MS already has been given all but the keys to the kingdom, and with all of the results of your efforts almost automatically flowing to your competitor by way of the deal that precedes yours?
Its the original deal I'm saying shouldnt have been considered on "keys to the kingdom... if we should happen to unfortunately die" terms (I think my edit to make that clearer came after your post)
Then the board failed in their duty of oversight. Because they could have found this out regardless of Sam being candid or not, implications are the consequences of that which is written on the papers and those papers were presumably available for inspection by the board. Unless Altman changed the paperwork but that would be an entirely different level and based on the statements by the new CEO none of that was the case. It seemed to be mostly a reason made up by the board to make the firing look good.
The compute resources that are required are massive, and if not MS then you're getting into bed with some other Big Corp, i.e. Amazon, Oracle, IBM, Google, etc.
You deal with the devil at your peril. But this is still an own goal by the OpenAI board, before last Friday there were no circumstances that would have pitted the board against Microsoft and now there potentially are. Bad move.
Investment would have been possible on terms which didn't mean that OpenAI failing was all upside for Microsoft. (I suspect the actual terms aren't quite that generous and MS would actually have to rebuild some stuff from scratch if OpenAI implodes, but Satya isn't bluffing about being confident they could and unconcerned if his hiring stunt causes the collapse)
I guess if you believe OpenAI's original stated mission was ever more than a marketing pitch, there's a good argument that it's all been bad anyway (building and licensing extremely impressive tech for their corporate partners and a growing consumer base isn't obviously contributing to making the world a "safer from unfriendly AI" place)
I believe it was a fig leaf all along and I believe that that fig leaf was abused for personal gain or to settle a score with disregards to the consequences.
That's based on a long time in business, I can't recall the last time I saw someone blow up an 80 billion dollar company based on ethics alone.
It seems like they had a lot of smart people doing smart things. I would hope it wanted to be open and non-profit so it didn't have to live by the "move fast and break things" mentality of all start ups.
That went out the window the day they made the deal with MS and that's precisely when they took off. With a few billion in your pocket you are able to execute faster and better on your vision than when you are doing it for glory and fame.
> If OpenAI ceased to exist, can Microsoft take the code, model weights, everything of ChatGPT 4, and just continue developing it and produce a new MS-ChatGPT based on it?
Well, if OpenAI magically got erased from existence, probably.
More realistic scenarios, like if they continued to exist, but there was a legal dispute about breach by Microsoft of the agreements by which Microsoft got access to the IP, may be more complicated.
How much money and power would a gutted OpenAI have to devote to a protracted legal battle against Microsoft’s army of lawyers? It feels like robbing a man with cancer and hoping that they die before the case gets to the court, which feels very on brand with how Microsoft used to play things in early windows days.
> How much money and power would a gutted OpenAI have to devote to a protracted legal battle against Microsoft’s army of lawyers?
What happens when a “gutted" firm has assets whose value cannot be realized except by an actor with greater resources and time window, but those actors do, in fact, exist?
The data does not show Microsoft needs to commit fraud to benefit in this scenario. The OpenAI board is doing it to themselves, with someone lucky and savvy putting a sheet out to catch the windfall.
If you don't want Microsoft to take everything, "stop hitting yourself."
> The data does not show Microsoft needs to commit fraud to benefit here.
So you are saying that, on “the data”, Microsoft would not be in a materially worse business position given the OpenAI chaos if the security of their rights to OpenAI models in the event of a breakdown in their existing relationship was perceived as insecure?
Because otherwise the chain of reasoning is:
The licensing is secure, because Nadella says so, and Nadella can be trusted because if he was lying it would be securities fraud, and Nadella wouldn't commit securities fraud even if it would benefit Microsoft’s and, therefore, his interests.
Yes, and this is a logical jump. Almost certainly -- I would be my own money on it -- any contract Microsoft would have signed would have contingencies around breach and force majeure that would either 1) directly license or assign OpenAI IP to them, or 2) put OpenAI IP in escrow as as a risk hedge against their partner's potential liquidation. In either case, Microsoft covers their investment risk and very likely gets access to OpenAI code (and, of course, as we've seen, the ability to freely offer to hire their talent).
To be frank, it's entirely possible that Sam could have been fired for agreeing to a contract like this without board approval because the existential risk OpenAI constantly faces dramatically increases the likelihood that OpenAI investors -- besides Microsoft -- would never be made whole in the case of an implosion. The "Adam D'Angelo spite" hypothesis is possible, too, but it seems far less likely that the board of such a valuable company -- no matter how inexperienced and feckless they might be -- would risk tanking the company on the personal vendetta of a single director.
Microsoft is not in a fight for its life, OpenAI definitely is at this point.
That alone translates into Microsoft doesn't need to commit fraud, all they need to do is sit back and watch the rest implode and come out winners. They don't even have to sue, all they need to do is do nothing and OpenAI is toast.
That press release from MS the other day is a bowshot, not a promise. 'Play ball or else' with the 'or else' bit conveniently left to the imagination of the recipient.
MSFT is up over 50% since investing $10B in OpenAI. That's a couple orders magnitude difference in market cap over the investment alone.
Satya does not sound like someone communicating from a position of strength - he's been repeatedly assuring his shareholders that no matter what happens, their AI plans moving forward are secure. They need stability, not chaos, because whatever clear lead it seems they have right now could be quickly be eclipsed in the coming months.
Committing securities fraud and accepting a fine would pale in comparison to the potential value erosion.
MSFT doesn't have the culture to continue development at the same rate. So even if it had everything - people, code, weights, office plants, all of it - it's still not going to be able to do much it in the future. GPT5 might keep the momentum going for a while, but that's going to be it.
MSFT actually is fighting for its life. GPT4 is an intriguingly useful curiosity, Later GPTs had the potential to be culturally, politically, financially and technically revolutionary.
That likely won't happen now. We'll get some kind of enshittified prohibitively expensive mediocrity instead.
And I repeat - MSFT is up over 50% since investing $10B in OpenAI. That happened this year. This is the most important thing on their plate.
Of the slate of $1-3T tech companies the question of "where can we go from here?"... this is where, and the winners will be the first $10T enterprises.
Not necessarily. There will be a period of uncertainty/instability that competitors can emerge and capitalize on. Say the whole crew comes along, it could still be several months and maybe all sorts of legal wrangling making things difficult. But more likely there will be splintering that will benefit Google, Anthropic, Meta, Apple, esp. if comp is there, and that could be catastrophic.
I read this whole situation as putting pressure on OpenAI to come to their senses and quick.
> I read this whole situation as putting pressure on OpenAI to come to their senses and quick.
That we are in violent agreement on, the board should negotiate to be allowed to flee indemnified for the fall-out of their action. But if they do not OpenAI is done for, Anthropic has already indicated they are not going to be there to pick up the pieces, and Google and Meta will try to poach as many employees as they can by being 'not Microsoft' (which has a fair chance of working).
So there is damage, but it isn't over yet and for the moment Microsoft looks as though they hold the most and the most valuable cards already. After all they already have 49%. Another 2% as a suing-for-peace pacifier and they're as good as kings.
Martha Stewart was clearly an unusual situation where they were trying to make an example to hit hard in the public discourse.
I don't think the average citizen will care so much. When it comes to white collar crime making some white lies to bolster shareholder confidence is not in the same orbit as the type of massive fraud for notable jail time cases over the last couple decades.
At worst it should be a moderate SEC fine, perhaps a shareholder lawsuit if things don't turn out favorable. And this is assuming a lot that has to go wrong.
I don't presume to know what is the appropriate punishment for something like this. I do know that this is a high profile case and those attract attention of regulators because there is the opportunity to make an example. And I also know that people in fact did go to jail for what in monetary terms were far smaller offenses. Nadella isn't that stupid. I don't like him because he's just like Gates but with a lot more polish but I would not bet against him in anything concerning legal affairs. MS has some of the best legal minds in the industry, in fact I'd rate them higher on legal than I do on technical acumen.
Again, they make an example because people are pissed off. Despite this saga making front page news, no one is going to care much about Satya making misleading claims about OpenAI IP ownership. The average person on the street will not lose their retirement fund from that revelation.
The links to politicians, lobby efforts, the US economy, etc, etc... means something really egregious has to happen for jail time to be an issue w/ high level white collar crime.
EDIT: Since I can't respond further to the chain - tons of people cared about Martha Stewart. They were sick of people in high society getting a slap on the wrist for things like this. It definitely was a cathartis... esp. from someone so unlikely.
Absolutely nobody was pissed off about Martha Stewart. And at least 700 people + a whole raft of well known investors are pissed off about the OpenAI situation, so if MS Execs commit securities fraud - a stretch to begin with - there will be a lot of support for such action.
I'm trying very hard not to let my personal feelings about Microsoft cloud my judgment and if anything happened to Microsoft I wouldn't shed a tear (and might even open a bottle of bitter lemon). I don't see them committing an own goal like that.
> Microsoft is not in a fight for its life, OpenAI definitely is at this point.
> That alone translates into Microsoft doesn't need to commit fraud
My prior that “the only possible benefit to securities fraud is in circumstances of imminent existential risk” is clearly weaker than yours. Also, if Microsoft's description of the licensing situation is misleading, the existential risk to OpenAI—which makes it more likely that someone else ends up with control of its assets—makes the risk to Microsoft greater, not less.
Based on Microsoft's presentations so far and the fact that OpenAI has protested none of them it appears to me that MS indeed has those rights and they undoubtedly have the capability.
Change of control is a very standard clause in Escrow arrangements and it may have been subtly worded specifically to ensure that control of the company remained friendly to Microsoft interests given that they did not have a board seat. That's pretty much the only way they should have accepted that construct. If they didn't they were dumb and you can say a lot about Microsoft (I know I have and they're a shit company in my opinion) but they know the legal world like few other tech companies.
If you already have riches, it’s a good way to lose them unless you’re really careful.
Musk is a poster child on this front.
Lying about a material fact in a way that would be trivial to prosecute you on, especially when you have a well funded adversary it impacts, and that would be more than happy to destroy you with such a lie, is a good way to lose riches.
This is more like Ukraine announcing Putin’s pending invasion in advance.
Where did I say it didn’t? That’s actually a rather bizarre tangent.
Ukraine and Russia seem quite applicable, since Mr. Nadella was basically making it clear that:
1) he knew the potential direction someone might take to harm the company he is in charge of, and
2) there were already plans/things in place to make their chances of success zero. Plans that the opponent knows about and can verify, and in Microsoft’s case they were a party to making.
It’s the ‘I see what you’re doing there’ notice.
You’d have to be pretty desperate to keep going after that kind of notice. Russia was, and it has cost them immensely. We’ll see how it plays out OpenAI wise.
If those things weren’t all provably true, then it would be a really dumb move to do as it just exposes him to tons of direct risk with no plausible gain, as the opponent would have zero compunction in hurting him with it - and tons of ability to do so.
OpenAI won the current state of the adoption war because they focused on commercial availability and being willing to lose money on every API call more than others. There is no groundbreaking tech behind it.
>Microsoft take the code, model weights, everything of ChatGPT 4, and just continue developing it and produce a new MS-ChatGPT based on it?
They don't need Sam to do that, or his employees. With the current state of LLMs they can hire a few dozen people with some experience and that wouldn't be the hard part.
In my mind the aggressive poaching OpenAI has been doing with Meta and Google is because they have nothing special. If they did they wouldn't need to grow to 800 headcount and still hiring. They're anticipating some company will have the next breakthrough, and they wanted every important brain on their side
Having the weights to another transformer based LLM is great, but its not going to change humanity. The only thing I saw this weekend was a bunch of entrepreneurs who's new startup is 100% dependent on the GPT-4 API freaking out. Thats not ground breaking
crushes? sarcasm? Free Vicuna is 90% as good as the 100 Billion Dollar company made one. This makes my point.
And between Vicuna and GPT4-Turbo There are the Claude Models and WizardML. This is as even playing field as i've ever seen for the tech world.
When a $100B company gets to 2000, and the rest stay behind, thats when we know the tech has changed. Thats what OpenAI was hiring for. Not for anything they already gave Microsoft access to
If Microsoft wanted to get to 1100, they could do that with 50 smart hires from across the industry and universities. They don't need Sams teams $1M salaries for that
That's not how Elo scores work. A 120 point elo difference is a 66% win rate, which means you win 2:1. That means GPT4-Turbo wins twice as much as Vicuna. And even that isn't a fair comparison, since Elo isn't linear: it gets more and more challenging to go up in Elo as you go further and further.
Well, I did say they are willing to burn money more than others - making their commercial availability better. Better yet ask yourself - will hollowed out Open AI and/or Satya be willing to lose money on every API call? Are you willing to pay the full cost of running GPT-4?
Google et al have gambled the answer for most API consumers is no
We know you have heavy biases but its impossible to deny that OpenAI is leading both the commercial and open source space. I don't think that will always be true but it is true now.
On top of that, they have done an amazing job executing on getting product out to users. No other company in this space is even remotely close to this.
> They don't need Sam to do that, or his employees. With the current state of LLMs they can hire a few dozen people with some experience and that wouldn't be the hard part.
If OpenAI leadership views it as a part of their charter to release their weights, research, etc. as open source, ten other companies would immediately catch up to Microsoft.
> Isn't it the completely the opposite of that, though?
No, its more complicated, which is why they were initially actually open, then became closed as their assessment of the circumstances changed; their own inability to act as gatekeeper, which would go away with a split with Microsoft where Microsoft had unrestricted rights and was not in practice coordinating with them, could well sufficiently change the circumstances to change their stance, without any inconsistency in reasoning (not that reasoning itself can legitimately evolve, too.)
In simple terms, it plausible that they might see the broad preference as:
good gatekeeper of top of the line AI > open models > bad gatekeeper of top of the line AI
and also see Microsoft, after the current kerfuffle, as a bad gatekeeper.
You make that deal when your CEO Sam Altman wants to take the company private instead of being open as originally planned and sell to Microsoft eventually. I don’t know where people got the idea that Microsoft wouldn’t own OpenAI one day. It’s always been in the cards and these events just accelerated that potential outcome. If OpenAI didn’t want to sell to Microsoft why didn’t they court any other cloud platforms? Why hasn’t another cloud platform stepped up for similar deals for GPT v5? It’s all been fairly obvious IMO.
The bigger question to me is if what Satya is saying is true, why even continue the charade with OpenAI’s board? Then what Satya is saying is probably just to protect the stock price.
> Why hasn’t another cloud platform stepped up for similar deals for GPT v5? It’s all been fairly obvious IMO
Well Microsoft could have contracted in some sort of exclusivity. Barring that, I think the obvious answer is because the leftovers aren’t that valuable.
Microsoft got 49%, and full access to all models, and most profits. The remaining profits and access to the same models as your competitors isn’t worth it.
Most big companies are making their LLMs, and I’ll postulate that while the tech is amazing, it’s not going to be the future where you need the One True Model. Google seems to be doing well enough with their models, Meta released pretty good models for free, Salesforce and Amazon are building out models, even smaller companies like Databricks got into the hype. Then with slowly opening access to models from Anthropic and other startups, GPT5+ alone isn’t worth it. Oracle maybe could buy access, but are they willing to outbid Microsoft?
The only thing that could be worth it is the ability to run/own ChatGPT the consumer product and brand. And who would own that? Google has their own, Apple wouldn’t want it, Meta probably can’t buy it without government intervention, Salesforce maybe, but they’re not consumer focused. Amazon maybe? Oracle maybe, but same problem as Salesforce?
Exactly it’s always been a play to sell to Microsoft and incentivized toward it. And as you state the longer this goes on the less valuable GPT v5 becomes and the faster OpenAI’s darling status burns up. It’s probably already too late.
> Why would OpenAI give away all of it's IP to Microsoft for that cheap
Building an LLM is one thing, operating one is another. Without a massive discount on actual operations OpenAI didn't have much of a business. They were likely getting their Azure stuff at cost, the same as an internal Microsoft org would be charged for Azure, which is cheaper than building their own infrastructure or paying retail cloud rates.
Microsoft wouldn't give them such a sweet deal without guarantees on their part. They're not going to put up billions and base products on a company that could cease to exist at any minute for a bunch of reasons.
> This is, quite obviously, a phenomenal outcome for Microsoft. The company already has a perpetual license to all OpenAI IP (short of artificial general intelligence), including source code and model weights; the question was whether it would have the talent to exploit that IP if OpenAI suffered the sort of talent drain that was threatened upon Altman and Brockman’s removal. Indeed they will, as a good portion of that talent seems likely to flow to Microsoft; you can make the case that Microsoft just acquired OpenAI for $0 and zero risk of an antitrust lawsuit.
> (Microsoft’s original agreement with OpenAI also barred Microsoft from pursuing AGI based on OpenAI tech on its own; my understanding is that this clause was removed in the most recent agreement)
> Microsoft’s gain, meanwhile, is OpenAI’s loss, which is dependent on the Redmond-based company for both money and compute: the work its employees will do on AI will either be Microsoft’s by virtue of that perpetual license, or Microsoft’s directly because said employees joined Altman’s team. OpenAI’s trump card is ChatGPT, which is well on its way to achieving the holy grail of tech — an at-scale consumer platform — but if the reporting this weekend is to be believed, OpenAI’s board may have already had second thoughts about the incentives ChapGPT placed on the company (more on this below).
Except for firing execs it seems. Everybody who remains in charge at OpenAI has so many conflicts of interest now that their best bet is to lay very low. Unless they want to be in lawsuits for the next couple of generations. And so far laying low is exactly what they are doing. Has there been any substantial statement from the new CEO yet?
OpenAI isn't Big Tech? Adam could reasonably argue his interest in Poe doesn't conflict with the nonprofit charter. Sam & Greg saw fit to leave him in place. Uncommunicative boards remain less unusual than you think, unfortunately.
I really don't think investors have the case you think they do. They may ultimately pin the board on some technicality but the case looks far from straightforward. Anyways, it looks like Sam has finally started talking with the board again.
Everybody involved has more to gain by patching things up and the 'rogue' board members vacating their seats, the sooner they realize that the better.
As for whether or not investors have a case: Microsoft may not, but the smaller investors probably do because they can fairly easily prove that their interests were harmed by the ill advised actions of the board.
The nonprofit board holds no fiduciary duty to for-profit investors, big or small. In theory, I suppose you could sue the board over undermining the charter but that seems tricky itself, this case aside.
You keep saying that. But only a judge can confirm that that means that they are able to cause harm without consequences. I don't buy that. The charter is not the only thing that governs what a board can and can not do, just like you can't contract out of the law in any other way a non-profit board members does not have automatic and complete immunity from the fall-out of their actions. That would be an interesting device: a full blown corporate armor that exceeds even the degree of insulation an indemnification would offer. It would be the boardroom equivalent of a magic cloak.
Harming investors in order to uphold the nonprofit charter sounds more reasonable than you suggest. Take it from Matt Levine. I don't think investors would win this line of argument.
But there is no proof whatsoever that they did this to uphold the nonprofit charter.
And until that proof surfaces I'm going on the assumption that it doesn't exist because if it does exist they would have certainly used it to bolster their case.
Note Ilya's statement of regret and that none of the board members have said a word and that the current CEO is nowhere to be seen. That does not look like a group of people that are acting boldly in the defense of their corporate charter to me.
It looks like a bunch of weasels that thought they could get away with murdering the chicken while they were unattended for five minutes and they're now all covered with blood and pretending to be innocent.
I think that argument fares even worse. The board only needs to prove to a judge they believed firing Sam would benefit the charter, and the leaks from their camp have stated as much. The board has deliberately stayed silent to avoid legal exposure, not a surprise.
> The board only needs to prove to a judge they believed firing Sam would benefit the charter, and the leaks from their camp have stated as much.
Leaks say whatever they want to say and if I found myself in this position - highly unlikely - and I would have leaked something it would have definitely been something to make me look good. Or did you think that if it made them look bad that it would have been leaked as well? In that case I can see your point.
Regardless, unless discovery finds a smoking gun that the board acted in bad faith vis a vis the charter, which nothing public indicates, a judge would likely tell investors to pound sand.
After many lawsuits I've learned to not pre-empt what I think a judge will think of anything. It's actionable. Whether they will prevail (or whether the counterparty will fold) is up for grabs.
Note this key quote by Nadella:
"“As a partner, I think it does deserve you to be consulted on the big decisions.”"
That's one way to look at what might be grounds for a suit. He could be wrong, but that won't stop them from suing to establish that fact and they have very little to lose.
They don't have to be board members to fight. The 51% shareholders are clearly wronged here in my view if MSFT just hires all of OpenAI staff AND takes the existing models and trade secrets.
Again, you should check who those 51% are. If you don't know who they are and what their relationship with Microsoft and the various board members is then you probably won't understand why your comments on this subject don't track.
Hint: the 51% are more likely to go after the board than that they are going to go after Microsoft because the board action is what precipitated this whole mess.
Microsoft is just - conveniently - picking up the pieces and periodically drops hints about what they'll do if anybody gets in their way of doing that.
Or maybe just their favorite CEO leaving the company or being forced out (or two out of three of the original founders leaving). Escrow triggers can be whatever you want them to be, but they do need to be spelled out up front and MS hopefully would never invest that kind of money without a legal review and a risk assessment.
Honestly this sounds like CEO-ish for, "Yes, we can build a similar product but don't necessarily have all the IP." If there was a simple answer to the interviewer's question, he would have given it. I feel like there's a word count threshold for CEO utterances that is a pretty reliable test for bullshit.
Nadella certainly knows how to be evasive like that, but "all of the rights to continue the innovation" seems like a pretty direct claim that they can just pick up where OpenAI left off without any significant setbacks.
It's business-speak rather than technical/legal, but I don't know how else someone could reasonably interpret that.
Perhaps you can interpret it as saying, "Yes, we're a separate business, we have a right to innovate." These are the ways that people justify lying to themselves. And it's practically a CEO's job to lie to investors.
The underlying basis for his statement could be, not that MS has the necessary rights to all this wonderful OpenAI secret sauce, but that MS knows enough to know that there isn't any secret sauce--whatever they don't have rights to is not worth it anyway.
I'm not familiar with contracts of this nature, but it doesn't feel far fetched that a contract involving billions of dollars might include a clause where IP can be "inherited" for continuity purposes in the case of catastrophic failure of the providing party(?)
Microsoft probably has an escrow type deal on the code and weights: if anything goes sideways with OpenAI, Microsoft gets everything they need to take over.
Also someone previously mentioned that large chunck of Microsoft investment is in their compute credits. If so Microsoft effectivelly have acquired OpenAI.
It's more than that, OpenAI had many people aligned with the decel agenda, MSFT managed to take the accel leadership and likely their supporters. Does anyone know any large AI competitors that don't have a big decel contingent? Also interesting that META took the opportunity to close one of their decel departments on Saturday.