> Indeed, as the November 2023 drama was unfolding, Microsoft’s CEO boasted that it would not matter “[i]f OpenAI disappeared tomorrow.” He explained that “[w]e have all the IP rights and all the capability.” “We have the people, we have the compute, we have the data, we have everything.” “We are below them, above them, around them.”
Yikes.
This technology definitely needs to be open source, especially if we get to the point of AGI. Otherwise Microsoft and OpenAI are going to exploit it for as long as they can get away with it for profit, while open source lags behind.
Reminds me of the moral principles that guided Zimmermann when he made PGP free for everyone: A powerful technology is a danger to society if only a few people possess it. By giving it to everyone, you even the playing field.
The work's already been done for the most part. Mixtral is to GPT what Linux was to Windows. Mistral AI has been doing such a good job democratizing Microsoft's advantage that Microsoft is beginning to invest in them.
There's a "Download" button for their open models literally two clicks away from the homepage.
Click "Learn more" under the big "Committing to open models" heading on the homepage. Then, because their deeplinking is bad, click "Open" in the toggle at the top. There's your download link.
See “no longer” in my original comment. They just announced their new slate of models, none of which are open weights. The models linked to download are the “before Microsoft $$$, Azure deal, and free supercomputers” ones.
Sorry, they’ve just scrubbed most of the references and otherwise edited their site to downplay any commitment to open source, post-Microsoft investment.
I guess if you want a nuclear apocalypse, then giving the tech to people who would rather see the world end than be "ruled by the apostates" sounds like a great plan.
Is that really the case? Nukes are supposed to be deterrents. If only groups aligned with each other have nukes that sounds more dangerous than enemies having nukes and knowing they can't use them.
> I don't trust OpenAI or Microsoft, but I don't have much faith in democratization either. We wouldn't do that with nukes, after all.
Dangerous things are controlled by the government (in a democracy, a form of democratization). It's bizarre and shows the US government's self-inflicted helplessness that they haven't taken over a project that its founders and developers see as a potential danger to civilization.
If we get to the point of AGI then it doesn’t matter much; the singularity will inevitably occur and the moment that AGI exists, corporations (and the concept of IP) are obsolete and irrelevant. It doesn’t matter if the gap between AGI existing and the singularity is ten hours, ten weeks, ten months, or ten years.
And yet, still safer than everyone having nukes...
It's unfortunate that the AGI debate still hasn't made its way very far into these parts. People are still going, "well this would be bad too." Yes! That is the existential problem a lot of people are grappling with. There is currently, and likely will be, no good way out of this. Too much "Don't Look Up" going on.
Nuclear weapons are a ridiculous comparison and only further the gaslighting of society. At the barest of bare minimums, AI might, possibly, theoretically, perhaps pose a threat to established power structures (like any disruptive technology does). However, a nuclear weapon definitely destroys physical objects within its effective range. Relating the two is ridiculous.
We do prosecute humans who misuse weapons. The problem with AI is that the potential for damage is hard to even gauge; potentially an extinction event, so we have to take more precautions than just prosecuting after the fact. And if the AI has agency, one might argue that it is responsible... what then?
Elon Musk: "There’s a strong probability that it [AGI] will make life much better and that we’ll have an age of abundance. And there’s some chance that it goes wrong and destroys humanity."