Any competent lawyer is going to get Musk on the stand reiterating his opinions about the danger of AI. If the tech really is dangerous then being more closed arguably is in the public's best interest, and this is certainly the reason OpenAI have previously given.
Not saying I agree that being closed source is for the public good, although one could certainly argue that accelerating the efforts of bad actors to catch up would not be a positive.
> If the tech really is dangerous then being more closed arguably is in the public's best interest, and this is certainly the reason OpenAI have previously given.
Not really. It only slows things down, like security through obscurity. It needs to be open so that we know the real risks and have the best information to combat them. Otherwise, someone who does the same work in a closed manner has a better chance of gaining an advantage when misusing it.
When I try to port your logic over into nuclear capacity, it doesn't hold up very well.
Nuclear capacity is constrained, and those constraining it attempt to do so for reasons of public good (energy, warfare, peace). You could argue about effectiveness, but our failure to self-annihilate seems a positive testament to the strategy.
Transparency does not serve us when mitigating certain forms of danger. I'm trying to remain humble with this, but it's not clear to me what balance of benefit and danger current AI represents. (Not even considering the possibility of AGI, which is beyond the scope of my comment.)
The lack of nukes isn't because of restriction of information. That lasted about as long as it took to leak the info to the Soviets. It's far more complicated than that.
The US (and other nations) is not too friendly toward countries developing nukes. There are significant threats against them.
Also perspective is an interesting thing. Non-nuclear countries like Iran and (in the past) North Korea that get pushed around by western governments probably wouldn't agree that restriction is for the best. They would probably explain how nukes and the threat of destruction/MAD make people a lot more understanding, respectful, and restrained. Consider how Russia has been handled the past few years, compared to say Iraq.
(To be clear I'm not saying we should YOLO with nukes and other weapon information/technology, I'm just saying I think it's a lot more complicated an issue than it at first seems, and in the end it kind of comes down to who has the power, and who does not have the power, and the people without the power probably won't like it).
Every single member of the UNSC has facilitated nuclear proliferation at some point. Literally every single one, without exception. It's not really a core objective.
This is a poor analogy; a better one would be nuclear physics. An expert in nuclear physics can develop positively impactful energy generation methods or very damaging nuclear weapons.
It's not because of arcane secrets that so few nations have nuclear weapons; all you need is a budget, time, and brilliant physicists and engineers. The reason we don't have more is largely down to surveillance, economics, the challenge of reliable payload delivery, security assurances, agreements, and various logistical challenges.
Most countries are open and transparent about their nuclear efforts due to the diplomatic advantages. There are also methods to trace and detect secret nuclear tests, and critical supply chains can be monitored. Countries that violate these norms can face anything from heavy economic sanctions and isolation to sabotage of research efforts. On the technical side, having safe and reliable launch capacity is arguably as much of a challenge as the bomb itself, if not more. Logistical issues include mass manufacture (merely having capacity only paints a target on your back with no real gains) and safe storage. There are a great many reasons why it is simply not worth going forward with nuclear weapons. This calculus changes, however, if a country has cause to fear for its continued existence, as is presently the case for some Eastern European countries.
The difference between nuclear capability and AI capability is that you can't just rent out nuclear enrichment facilities on a per-hour basis, nor can you buy the components to build such facilities at a local store. But you can train AI models by renting AWS servers or building your own.
If one could just walk into a store and buy plutonium, then society would probably take a much different approach to nuclear security.
AI isn't like nuclear weapons. AI is like bioweapons. The easier it is for anyone to play with highly potent pathogens, the more likely it is someone will accidentally end the world. With nukes, you need people on opposite sides to escalate from first detection to full-blown nuclear exchange; there's always a chance someone decides to not follow through with MAD. With bioweapons, it only takes one, and then there's no way to stop it.
I would argue that AI isn't like bioweapons either.
Bioweapons do not have the same dual-use beneficial purpose that AI does. As a result, AI development will continue regardless; it can give a competitive advantage in any field.
Bioweapons are not exactly secret either. Most of the methods to develop such things are open science. The restricting factor is that you potentially kill your own people as well, and the use case is really just a weapon for some madman, with no other benefits.
Edit: To add, the science behind "bioweapons" (or the genetic modification of viruses/bacteria) is public precisely so that we can prevent the next pandemic.
I elaborated on this in a reply to the comment parallel to yours, but: by "bioweapons" I really meant "science behind bioweapons", which happens to be just biotech. Biotech is, like any applied field, inherently dual-use. But unlike nuclear weapons, the techniques and tools scale down and, over time, become accessible to individuals.
The most risky parts of biotech, the ones directly related to bioweapons, are not made publicly accessible - but it's hard, as unlike with nukes, biotech is dual-use to the very end, so we have to balance prevention and defense with ease of creating deadly pathogens.
it's the weirdest thing to compare nuclear weapons and biological catastrophe to tools that people around the world right now are using towards personal/professional/capitalistic benefit.
Bioweapons are the thing; AI is a tool to make things. That's exactly the most powerful distinction here. Bioweapon research didn't also serendipitously make available powerful tools for the generation of images/sounds/text/ideas/plans -- so there isn't much reason to compare the benefits of the two.
These arguments aren't the same as "Let's ban the personal creation of terrifying weaponry"; they're the same as "Let's ban wrenches and hack-saws because they could be used years down the line to facilitate the creation of terrifying weaponry" -- the problem with this argument being that it ignores the boons such tools will allow for humanity.
Wrenches and hammers would have been banned too had they been framed as weapons of bludgeoning and torture by those who first encountered them. Thankfully, people saw the benefits they offered instead.
> it's the weirdest thing to compare nuclear weapons and biological catastrophe to tools that people around the world right now are using towards personal/professional/capitalistic benefit.
You're literally painting a perfect analogy for biotech/nuclear/AI. Catastrophe and culture-shifting benefits go hand in hand with all of them. It's about figuring out where the lines are. But claiming there is minimal or negligible risk ("so let's just run with it" as some say, maybe not you) feels very cavalier to me.
But you're not alone if you feel that way. I feel like I'm taking crazy pills with how the software dev field talks about sharing AI openly.
And I've literally been an open culture advocate for over a decade, and have helped hundreds of people start open community projects. If there's anyone who'd be excited for open collaboration, it's me! :)
Okay, I made the mistake of using a shorthand. I won't do that in the future. The shorthand was saying "nuclear weapons" and "bioweapons" when I meant "technology making it easy to create WMDs".
Consider nuclear nonproliferation. It doesn't only affect weapons - it also affects nuclear power generation, nuclear physics research and even medicine. There are various degrees of secrecy around research and technologies that affect "tools that people around the world right now are using towards personal/professional/capitalistic benefit". Why? Because the same knowledge makes military and terrorist applications easier, reducing the barrier to entry.
Consider, then, biotech, particularly synthetic biology and genetic engineering. All that knowledge is dual-use, and unlike with nuclear weapons, biotech seems to scale down well. As a result, we have both a growing industry and research field, and kids playing with those same techniques at school and at home. Biohackerspaces were already a thing over a decade ago (I would know, I tried to start one in my city circa 2013). There's a reason all those developments have been accompanied by a certain unease and fear. Today, an unlucky biohacker may give themselves diarrhea or cancer; in ten years, they may accidentally end the world. Unlike with nuclear weapons, there's no natural barrier to scaling this capability down to the individual level.
And of course, between the diarrhea and the humanity-ending "hold my beer and watch this" gain-of-function research, there's a whole range of smaller things, like getting a community sick or destroying a local ecosystem. And I'm only talking about accidents with peaceful/civilian work here, ignoring deliberate weaponization.
To get a taste of what I'm talking about: if you buy into the lab leak hypothesis for COVID-19, then this is what a random fuckup at a random BSL-4 lab looks like, when we are lucky and get off easy. That is why biotech is another item on the x-risks list.
Back to the point: the AI x-risk is fundamentally more similar to biotech x-risk than nuclear x-risk, because the kind of world-ending AI we're worried about could be created and/or released by accident by a single group or individual, could self-replicate on the Internet, and would be unstoppable once released. The threat dynamics are similar to a highly-virulent pathogen, and not to a nuclear exchange between nation states - hence the comparison I've made in the original comment.
> the kind of world-ending AI we're worried about could be created and/or released by accident by a single group or individual, could self-replicate on the Internet, and would be unstoppable once released.
I also worry every time I drop a hammer from my waist that it could bounce and kill everyone I love. Really anyone on the planet could drop a hammer which bounces and kills everyone I love. That is why hammers are an 'x-risk'
Self-annihilation fails due to nuclear proliferation, i.e. MAD. So your conclusion is backward.
But that's irrelevant anyway, because nukes are a terrible analogy. If you insist on sci-fi speculation, use an analogy that's at least remotely similar -- perhaps compare the development of AI vs. traditional medicine. They're both very general technologies with incredible benefits and important dangers (e.g. superbugs).
So in other words, one day we will see a state actor make something akin to Stuxnet again but this time instead of targeting the SCADA systems of a specific power plant in Iran, they will make one that targets the GPU farm of some country they suspect of secretly working on AGI.
Well then, isn’t the whole case about just denying the inevitable?
If OpenAI can do it, I would not say it's very unlikely for someone else to do the same, open or not. The best chance is still that we prepare with the best available information.
Yep, it absolutely is about denying the inevitable, or rather, "playing for time." The longer we manage to delay, the more likely somebody comes up with some clever approach for actually controlling the things. Also humanity stays alive in the meantime, which is no small thing in itself.
... Eh? You as an end-user can't contribute to this anyways. If you really want to work on safety, either use a smaller network or join the safety team at a big org.
> The best information we have now is if we create AGI/ASI at this time, we all die.
We can still unplug or turn the things off. We are still very far away from a situation where an AI controls factories and a full supply chain and can take physical control of the world.
Meanwhile, every giant AI company: "yeah we're looking at robotics, obviously if we could embody these things and give them agency in the physical world that would be a great achievement"
Our rush into AI and embodiment reminds me of the lily pad exponential growth parable.
>Imagine a large pond that is completely empty except for 1 lily pad. The lily pad will grow exponentially and cover the entire pond in 3 years. In other words, after 1 month there will be 2 lily pads, after 2 months there will be 4, etc. The pond is covered in 36 months
We're all going to be sitting around at 34 months saying "Look, it's been years and AI hasn't taken over that much of the market."
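To make the parable concrete, here's a minimal Python sketch of my own (assuming one pad at month 0, doubling every month, and full coverage at month 36):

    # Lily pad parable: one pad at month 0, doubling every month,
    # full coverage at month 36 (so the pond holds 2**36 pads).
    FULL_MONTH = 36

    def coverage(month: int) -> float:
        """Fraction of the pond covered after `month` months of doubling."""
        return 2 ** (month - FULL_MONTH)

    for m in (30, 33, 34, 35, 36):
        print(f"month {m:2d}: {coverage(m):8.2%} covered")

At month 34 only a quarter of the pond is covered; a month later it's half, and a month after that it's all of it. That's the point: exponential growth looks negligible almost until the very end.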
I don't see how opening it makes it safer. It's very different from security, where some "white hat" can find a security hole, and it can then be fixed so instances don't get hacked. Sure, a bad person could run the software without fixing the bug, but that isn't going to harm anyone but themselves.
That isn't the case here. If some well-meaning person discovers a way that you can create a pandemic-causing superbug, they can't just "fix" the AI to make that impossible. Not if it is open source. Very different thing.
The whole “security through obscurity doesn’t work” line is absolute nonsense. It absolutely works and there are countless real world examples. What doesn’t work is relying on that as your ONLY security.
I'm not sure if nuclear weapons are a good example. In the 1940s most of the non-weapons-related nuclear research was public (and that did make certain agencies nervous). That's just how scientists tend to do things.
While the US briefly had unique knowledge about the manufacture of nuclear weapons, the basics could be easily worked out from first principles, especially once schoolchildren could pick up an up-to-date book on atomic physics. The engineering and testing part is difficult, of course, but for a large nation-state stealing the plans is only a shortcut. The on-paper part of the engineering is doable by any team with the right skills. So the main blocker with nuclear weapons isn't the knowledge, it's acquiring the raw fissile material and establishing the industrial base required to refine it.
This makes nuclear weapons a poor analogy for AI, because all you need to develop an LLM is a big pile of commodity GPUs, the publicly available training data, some decent software engineers, and time.
So in both cases all security-through-obscurity will buy you is a delay, and when it comes to AI probably not a very long one (except maybe if you can restrict the supply of GPUs, but the effectiveness of that strategy against China et al remains to be seen).
>This makes nuclear weapons a poor analogy for AI, because all you need to develop an LLM is a big pile of commodity GPUs, the publicly available training data, some decent software engineers, and time.
Except the GPUs are on export control, and keeping up with the arms race requires a bunch of data you don't have access to (NVidia's IP) - or direct access to the source.
Just like building a nuclear weapon requires access to either already-refined fissile material, or the IP and skills to build your own refining facilities (IP most countries don't have). Literally everyone has access to uranium - being able to do something useful with it is another story.
After the export ban, China demonstrated a process node advancement that shocked the world. So the GPU story doesn't support your position particularly well.
Every wealthy nation & individual on Earth has abundant access to AI's "ingredients" -- compute, data, and algorithms from the '80s. The resource controls aren't really comparable to nuclear weapons. Moreover, banning nukes won't also potentially delay cures for disease, unlock fusion, throw material science innovation into overdrive, and other incredible developments. That's because you're comparing a general tool to one exclusively proliferated for mass slaughter. It's just...not a remotely appropriate comparison.
>After the export ban, China demonstrated a process node advancement that shocked the world. So the GPU story doesn't support your position particularly well.
I'm not sure why you're conflating process technology with GPUs, but if you want to go there, sure. If anyone was surprised by China announcing they had the understanding of how to do 7nm, they haven't been paying attention. China has been openly and actively poaching TSMC engineers for nearly a decade now.
Announcing you can create a 7nm chip is a VERY, VERY different thing than producing those chips at scale. The most ambitious estimates put it at a 50% yield, and the reality is with China's disinformation engine, it's probably closer to 20%. They will not be catching up in process technology anytime soon.
>Every wealthy nation & individual on Earth has abundant access to AI's "ingredients" -- compute, data, and algorithms from the '80s. The resource controls aren't really comparable to nuclear weapons. Moreover, banning nukes won't also potentially delay cures for disease, unlock fusion, throw material science innovation into overdrive, and other incredible developments. That's because you're comparing a general tool to one exclusively proliferated for mass slaughter. It's just...not a remotely appropriate comparison.
Except they don't? Every nation on earth doesn't have access to the technology to scale compute to the levels needed to make meaningful advances in AI. To say otherwise shows an ignorance of the market. There are a handful of nations capable, at best. Just like there are a handful of nations that have any hope of producing a nuclear weapon.
Nuclear weapons can definitely be replicated. The U.S. and allies aggressively control the hard to get materials and actively sabotage programs that work on it.
And the countries that want nukes have some anyway, even if they are not as good.
This is a broken comparison IMO because you can’t instantly and freely duplicate nuclear weapons across the planet and then offer them up to everyone for low marginal cost and effort.
The tech exists, and will rapidly become easy to access. There is approximately zero chance of it remaining behind lock and key.
Security through obscurity isn't what is at play with nuclear weapons. It's a fabrication and chemistry nightmare at every single level; the effort and materials are what prevent these kinds of things from happening -- the knowledge and research needed has been essentially available since the 50s-60s, like others have said.
It's more like 'security through scarcity and trade control.'
The knowledge of how to build the tool chain for a nuclear weapon is something that every physics undergraduate can work out from first principles.
You don't even need to call him to the stand, it's not some gotcha; he writes it all over the complaint itself. "AGI poses a grave threat to humanity — perhaps the greatest existential threat we face today." I highly doubt a court is going to opine about open vs closed being safer, though. The founding agreement is pretty clear that the intention was to make it open for the purpose of safety. Courts rule on whether a contract was breached, not whether breaching it was a philosophically good thing.
You're perhaps forgetting that the plaintiff here is Elon Musk, the man who was forced to buy Twitter due to not realizing that signing a legally binding contract was legally binding.
> If the tech really is dangerous then being more closed arguably is in the public's best interest
If that was true, then they shouldn't have started off like that to begin with. You can't have it both ways. Either you are pursuing your goal to be open (as the name implies) or the way you set yourself up was ill-suited all along.
Their position evolved. Many people at the time disagreed that having open source AGI - putting it in the hands of many people - was the best way to mitigate the potential danger. Note that this original stance of OpenAI was before they started playing with transformers and having anything that was beginning to look like AI/AGI. Around the time of GPT-3 was when they said "this might be dangerous, we're going to hold it back".
There's nothing wrong with changing your opinion based on fresh information.
> There's nothing wrong with changing your opinion based on fresh information.
I don't really get that twist. What "fresh" information arrived here suddenly? The structure they gave themselves was chosen explicitly with the risks of future developments in mind. In fact, that was why they chose that specific structure as outlined in the complaint. How can it now be called new information that there are actually risks involved? That was the whole premise of creating that organization in the form it was done to begin with!
I’d agree. And the fact that it evolved in a way that made individuals massive profits suggests that maybe their minds weren’t changed, and profit was the actual intention.
The fresh information was seeing who built an AGI, and what it looks like.
When OpenAI was founded it was expected that AGI would likely come out of Google, with OpenAI doing the world a favor by replicating this wonderful technology and giving it to the masses. One might have imagined AGI would be some Spock-like stone cold super intelligence.
As it turns out, OpenAI themselves were the first to create something AGI-like, so the role they envisaged for themselves was totally flipped. Not only this, but this AGI wasn't an engineered intelligence but rather a stochastic parrot, trained on the internet, and incredibly toxic; as much of a liability as a powerful tool.
OpenAI's founding mission of AI democracy has turned into one of protecting us from this bullshitting psychopath that they themselves created, while at the same time raising the billions of dollars it takes to iterate on something so dumb it needs to be retrained from scratch every time you want to update it.
They were founded on the premise that some large player (specifically Google) would develop AGI, keep it closed, and maybe not develop it in the best interests (safety) of the public. The founding charter was essentially to try to ensure that AI was developed safely, which at the time they believed would be best done by making it open source and available to everyone (this was contentious from day 1 anyway - a bit like saying the best defense against bio-hackers is to open source the DNA for Ebola).
What goes unsaid, perhaps, is that back then (before the transformer had even been invented, before AlphaGo), what people might have imagined AGI to look like (some kind of sterile super-intelligence) was very different from the LLM-based "AGI" that eventually emerged.
So, what changed, what was the fresh information that warranted a change of opinion that open source was not the safest approach?
I'd say a few things.
1) As it turned out, OpenAI themselves were the first to develop a fledgling AGI, so they were not in the role they envisaged of open sourcing something to counteract an evil closed source competitor.
2) The LLM-based form of AGI that OpenAI developed was really not what anyone imagined it would be. The danger of what OpenAI developed, so far, isn't some doomsday "AI takes over the world" scenario, but rather that it's inherently a super-toxic chatbot (did you see OpenAI's examples of how it was before RLHF?!) that is potentially disruptive and negative to society because of what it is rather than because of its intelligence. The danger (and remedy) is not, so far, what OpenAI originally thought it would be.
3) OpenAI have been quite open about this in the past: Musk leaving, having been their major source of funds, forced OpenAI to change how they were funded. At the same time as this was happening (around GPT 2.0), it was becoming evident how extraordinarily expensive this unanticipated path to AGI was going to be to continue developing (Altman has indicated a cost of $100M+ to train GPT-3 - maybe including hardware). They were no longer looking for a benefactor like Musk willing/able to donate a few tens of millions of dollars, but needed a partner able to put billions into the effort, which necessitated an investor expecting a return on investment, and hence the corporate structure change to accommodate that.
…unless you believe that the world can change and people’s opinions and decisions should change based on changing contexts and evolving understandings.
When I was young I proudly insisted that all I ever wanted to eat was pizza. I am very glad that 1) I was allowed to evolve out of that desire, and 2) I am not constantly harangued as a hypocrite when I enjoy a nice salad.
Sure, but the OpenAI situation feels a bit more like "when I started this charity all I wanted to do was save the world. Then I decided the best thing to do was use the donor funds to strengthen my friend Satya's products, earn 100x returns for investors and spin off profit making ventures to bill the world"
It's not like they've gone closed source as a company or threatened to run off to Microsoft as individuals or talked up the need for $7 trillion investment in semiconductors because they've evolved the understanding that the technology is too dangerous to turn into a mass market product they just happen to monopolise, is it?
> …unless you believe that the world can change and people’s opinions and decisions should change based on changing contexts and evolving understandings.
What I believe doesn't matter. As an adult, if you set up contracts and structures based on principles which you bind yourself to, that's your decision. If you then convince people to join or support you based on those principles, you shouldn't be surprised if you get into trouble once you "change your opinion" and no longer fulfill your obligations.
> When I was young I proudly insisted that all I ever wanted to eat was pizza.
What a good thing that you can't set up a contract as a child, isn't it?
> The document says they will open source “when applicable”. If open sourcing wouldn’t benefit the public, then they aren’t obligated to do it.
From their charter: “resulting technology will benefit the public and the corporation will seek to open source technology for the public benefit when applicable. The corporation is not organized for the private gain of any person"
I just thought it might be important to provide more context. See the other comments for a discussion on "when applicable". I think this misses the point here.
> Care to explain your point or link to a relevant comment?
Explanation: Reducing the discussion to the two words "when applicable" (especially when ripped out of context) might be relevant in the legal sense, but totally misses the bigger picture of the discussion here. I don't like being dragged on those tangents when they can be expected to only distract from the actual point being discussed - or result in a degraded discussion about the meaning of words. I could, for instance, argue that it says "when" and not "if" which wouldn't get us anywhere and hence is a depressing and fruitless endeavor. It isn't as easy as that and the matter needs to be looked at broadly, considering all relevant aspects and not just two words.
For reference, see the top comment, which clearly mentions the "when applicable" in context and then outlines that, in general, OpenAI doesn't seem to do what they have promised.
And here's a sub thread that goes into detail on the two words:
> If the tech really is dangerous then being more closed arguably is in the public's best interest, and this is certainly the reason OpenAI have previously given.
Then what should we do about all the open models that are closing in on OpenAI's capabilities?
Personally, I don't have a trust problem with AI per se, I have a problem with the technology being locked behind closed doors.
My point is, if whatever they're doing is dangerous, I don't see what is actually special about Altman and Brockman having control of dangerous things. They seem completely motivated by money.
I'd trust scientists and AI experts who aren't in a for-profit company, with some government oversight, over Aman and Bman.
Other groups are going to discover the same problems. Some will act responsibly. Some will try to, but the profit motive will undermine their best intentions.
This is exactly the problem having an open non-profit leader was designed to solve.
Six-month moratoriums, to vet and mitigate dangers with the help of outside experts, would probably be a good idea.
But people need to know what they are up against. What can AI do? How do we adapt?
We don't need more secretive data gathering, psychology hacking, manipulative corporations, billionaires (or trillionaires), harnessing unknown compounding AI capabilities to endlessly mine society for 40% year on year gains. Social networks, largely engaged in winning zero/negative sum games, are already causing great harm.
That would compound all the dangers many times over.
>If the tech really is dangerous then being more closed arguably is in the public's best interest, and this is certainly the reason OpenAI have previously given.
Tell me about a technology you think is anything but dangerous, and I'll give you fifty ways to kill someone with it.
Plastic bags, for example, are not only potentially dangerous; they make a significant contribution to the current mass extinction of biodiversity.
I am really not a fan of plastic trash, neither in the oceans, nor the forest, nor anywhere else. But in your links I did not find hints of "a significant contribution to the current mass extinction of biodiversity."
This one was the most concrete, so yes, some contribution (no news to me), but not a significant one, in the way pesticides contribute, for example.
"When turtles eat plastic, it can block their intestinal system (their guts). Therefore, they can no longer eat properly, which can kill them. The plastics in their tummy may also leak chemicals into the turtle. We don’t know whether this causes long term problems for the turtle, but it’s probably not good for them."
Now, this was really an incidental point, not the nub of the comment, and since this is really not the topic here, I don't mean to develop it in depth.
> If the tech really is dangerous then being more closed arguably is in the public's best interest, and this is certainly the reason OpenAI have previously given.
I contend that a threat must be understood before it can be neutralized. It will either take a herculean feat of reverse-engineering, or an act of benevolence on OpenAI's behalf. Or a lawsuit, I guess.
Perhaps, but who knew? Nobody at that time knew how to build AGI, and what it therefore might look like. I'm sure people would have laughed at you if you said "predict next word" was the path to AGI. The transformer paper that kicked off the LLM revolution would not be written for another couple of years. DeepMind was still focusing on games, with AlphaGo also still a couple of years away.
OpenAI's founding charter was basically we'll protect you from an all-powerful Google, and give you the world's most valuable technology for free.
Are you a lawyer or have some sort of credentials to be able to make that statement? I’m not sure if Elon Musk being hypocrite about AI safety would be relevant to the disputed terms of a contract.
I don't think it's about him being a hypocrite - just him undermining his own argument. It's a tough sell saying AI is unsafe but it's still in the public's best interest to open source it (and hence OpenAI is reneging on its charter).
Not really. The fact that "we keep our technology secret for safety reasons" is the reasoning given by many for-profit corporations does not make it a good argument, just a very profitable lie to tell, and it has been shown false at every opportunity to test it. But the secrecy has also never stopped being profitable, which is why the likes of Apple and Microsoft make these claims so frequently.
This is, in many ways, the substance of the lawsuit. This logic of "we must guard this secret carefully... for safety!" doesn't inevitably arise from most lines of enabling research in academia, for example, but it does reliably come up once someone can enclose the findings in order to profit from exploiting the resulting information asymmetry.
Secrecy for profit isn't a particularly benevolent thing to do, but it's generally speaking fine. We have whole areas of law about how to balance the public benefit of wide availability of information against the private benefit to discoverers of some technique, technology, or even facts about the world. It is well understood by most people that trade secrets aren't public knowledge. We see this plea to "safety" come up only in cases where companies want to justify having control over things that have become pervasive, and often mandatory to use in many contexts, in a way that allows those companies to in turn exert further control over the thing's users - which is to say, in tech monopolies. The use of that reasoning almost one-to-one predicts a business model that relies on DMCA 1201 (or its international equivalents) to function, a legal edifice designed by Microsoft lawyers which has become pervasive worldwide essentially at their behest.
That said, I don't think it's particularly hard to make the case that writing a whole-ass non-profit charter explicitly outlining the intent to do research in the open, and then suddenly switching to the very familiar corporate reality distortion field stance of a convicted monopolist you happen to have formed a partnership with in order to justify effectively abandoning that charter, is a good basis for a lawsuit.