
> Indeed, as the November 2023 drama was unfolding, Microsoft’s CEO boasted that it would not matter “[i]f OpenAI disappeared tomorrow.” He explained that “[w]e have all the IP rights and all the capability.” “We have the people, we have the compute, we have the data, we have everything.” “We are below them, above them, around them.”

Yikes.

This technology definitely needs to be open source, especially if we get to the point of AGI. Otherwise Microsoft and OpenAI are going to exploit it for profit for as long as they can get away with it, while open source lags behind.

Reminds me of the moral principles that guided Zimmermann when he made PGP free for everyone: A powerful technology is a danger to society if only a few people possess it. By giving it to everyone, you even the playing field.



Just going to note that it is widely suspected that Hal Finney did much of the programming on PGP with Zimmermann taking the heat for him.


I can confirm that second-hand, from his former boss at the time. Other biographies, profiles, and interviews also support it.


The work's already been done, for the most part. Mixtral is to GPT what Linux was to Windows. Mistral AI has been doing such a good job of democratizing Microsoft's advantage that Microsoft is beginning to invest in them.


Microsoft just bought off Mistral into no longer releasing open weights and scrubbing all references to them from their site…?


There's a "Download" button for their open models literally two clicks away from the homepage.

Click "Learn more" under the big "Committing to open models" heading on the homepage. Then, because their deeplinking is bad, click "Open" in the toggle at the top. There's your download link.


See “no longer” in my original comment. They just announced their new slate of models, none of which are open weights. The models linked to download are the “before Microsoft $$$, Azure deal, and free supercomputers” ones.


This is Linux all over again: Microsoft is going to use every trick and dollar they have to fight open source.

/I'm too old to fight that battle again...


Sure, but they clearly haven't "scrubbed all references" of their open weights from their site.


Sorry, they’ve just scrubbed most of the references and otherwise edited their site to downplay any commitment to open source, post-Microsoft investment.


Which would be Mistral 7B and Mixtral 8x7B. Mistral Large belongs to their closed-source "optimized" models.


> A powerful technology is a danger to society if only a few people possess it. By giving it to everyone, you even the playing field.

Except nukes. Only allies can have nukes.


I guess if you want a nuclear apocalypse, then giving the tech to people who would rather see the world end than be "ruled by the apostates" sounds like a great plan.


Russia has nukes, China has nukes, Pakistan has nukes....

And yet, no countries with nukes have ever gone to war with each other.


India and Pakistan fight skirmishes along their contested border all the time. (And re your first line, Pakistan is a US ally, at least in theory.)


1. In 2024, India is a closer ally than Pakistan. (Major Defense Partner)

2. Yep. Skirmishes, not wars.


Is that really the case? Nukes are supposed to be deterrents. If only groups aligned with each other have nukes, that sounds more dangerous than enemies having nukes and knowing they can't use them.


Until people who believe in martyrdom get them


I don't trust OpenAI or Microsoft, but I don't have much faith in democratization either. We wouldn't do that with nukes, after all.


> I don't trust OpenAI or Microsoft, but I don't have much faith in democratization either. We wouldn't do that with nukes, after all.

Dangerous things are controlled by the government (in a democracy, a form of democratization). It's bizarre and shows the US government's self-inflicted helplessness that they haven't taken over a project that its founders and developers see as a potential danger to civilization.


Nukes blow up cities.


Not just for profit. It's also about power.


The technologies that power LLMs are open source.


If we get to the point of AGI, then it doesn't matter much; the singularity will inevitably follow, and the moment AGI exists, corporations (and the concept of IP) become obsolete and irrelevant. It doesn't matter whether the gap between AGI existing and the singularity is ten hours, ten weeks, ten months, or ten years.


> A powerful technology is a danger to society if only a few people possess it. By giving it to everyone, you even the playing field.

That's why we all have personal nukes, of course. Very safe.


I shudder at the thought of a world where only corporations had nukes.


And yet, still safer than everyone having nukes...

It's unfortunate that the AGI debate still hasn't made its way very far into these parts. We still have people going, "well, this would be bad too." Yes! That is the existential problem a lot of people are grappling with. There is currently, and likely will be, no good way out of this. Too much "Don't Look Up" going on.


Nuclear weapons are a ridiculous comparison and only further the gaslighting of society. At the barest of bare minimums, AI might, possibly, theoretically, perhaps pose a threat to established power structures (like any disruptive technology does). However, a nuclear weapon definitely destroys physical objects within its effective range. Relating the two is ridiculous.


A disembodied intelligent agent could still trigger a weapon, or manipulate a person into triggering one.


So can a human, yet we don't ban those. I don't think AI is going to get better at manipulating people than a sufficiently skilled human.

What might be scary is using AI for a mass influence operation: propaganda to convince people that, for example, using a weapon is necessary.


We do prosecute humans who misuse weapons. The problem with AI is that the potential for damage is hard to even gauge; potentially an extinction event, so we have to take more precautions than just prosecuting after the fact. And if the AI has agency, one might argue that it is responsible... what then?


It's not a ridiculous comparison. This thread involves Sam Altman and Elon Musk, right?

Sam Altman:"Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity."

In the essay "Why You Should Fear Machine Intelligence" https://blog.samaltman.com/machine-intelligence-part-1

So, more than nukes then...

Elon Musk: "There’s a strong probability that it [AGI] will make life much better and that we’ll have an age of abundance. And there’s some chance that it goes wrong and destroys humanity."



