The Fallout from the Weirdness at OpenAI (economist.com)
41 points by Brajeshwar on Nov 23, 2023 | 83 comments



Microsoft played the 5D chess so well they made it look seamless, like Kasparov playing 40 opponents simultaneously.

5 occurrences of "Microsoft" in the article body vs 9 "OpenAI".

To make my point clear: this weirdness was definitely caused by the OpenAI board deviating from everybody's plans, including those of a member of the board itself.

From there, Microsoft strategically reacted to a black swan event. From MS's POV, this was a zero-day attack that would manifest itself in 2-5 years, Google vs FTC style, and the mitigation strategy was providing the appearance of open arms toward the tyranny of a board that makes kneejerk decisions, and having those open arms be rejected by the (natural) outpouring of sentiment.

The end result is identical, and Microsoft turned a zero-day attack into an irrefutable defense for any future FTC lawsuit.


I mean, arguably the event may have been triggered by Microsoft's manipulations.


I don't think Microsoft is audacious or competent enough to orchestrate a fictitious termination, that's a little too far fetched. The initial event _reeks_ of incompetence/AGI fear-mongering, not orchestration.

+24 hours - Nadella announcing the creation of an internal OpenAI and automatic employment offers to all OpenAI employees. Seems sensible. Microsoft's investment in OpenAI entitles them to 75% of all of OpenAI's net profit until they make their whole $13B investment back, not to mention retaining ownership of 49% of OpenAI. Too much capital to play chess with; the division/auto-hire announcement still feels like it must have been done with a lot of hesitation and picking the least awful option, but what else? If the conclusion is that OpenAI's board is prepared to sabotage the company -- and the exceptionally vague (=> bullshit) reasoning OpenAI published for Altman's termination supports that conclusion -- there's no other option.
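For a sense of scale, the payback arithmetic in that reported deal structure can be sketched in a few lines. The annual profit figure below is a purely hypothetical assumption for illustration; only the $13B investment and 75% share come from the comment above.

```python
def years_to_recoup(investment, annual_profit, share=0.75):
    """Years until the cumulative profit share covers the investment."""
    recouped, years = 0.0, 0
    while recouped < investment:
        recouped += share * annual_profit
        years += 1
    return years

# At a hypothetical $2B/yr net profit, 75% of which goes to Microsoft:
print(years_to_recoup(13e9, annual_profit=2e9))  # -> 9
```

The point being: at any plausible near-term profit level, the recoupment horizon is the better part of a decade, which is why "too much capital to play chess with" rings true.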

But at +48-72 hours, this is perhaps when the "holy shit" moment happened -- Bing's lifelong irrelevance as a consumer product and sudden ascension to relevance as Google's defense in an extremely serious lawsuit (and at least Google beat the other search engines organically).

The Microsoft board realized that absorbing OpenAI, whilst also probably owning xx% of all H100s (arguably the defining competitive advantage of GPT-as-a-service companies), would almost definitely lead to them being in Google's predicament years down the line. By then, the FTC will not care that the "trigger" all those years ago was a board that became frantic about AGI, with Microsoft merely rescuing a ship that self-sabotaged.

So the chess started here: Ctrl+Z'ing everything, the overt outward displays of compassion, some genuine, some perhaps encouraged, and everyone - including Sam - back on the same page that Microsoft needs to back off, but crucially it has to appear as a reunification stemming from within OpenAI.

It is precisely at this point, in this chain of events, that the opportunity emerges for an OpenAI board replacement, with Sam firmly on it, that is in line with Microsoft's deep (arguably trauma-laden) desire to finally be the dominant player in something other than desktop operating systems.

No source, I doubt there will ever be, but this speculation of events, to me, lines up most with the actions of the parties involved.


Source?


Microsoft might have some experience with putting spin on 0days.


The article makes the point that the corporate structure wasn't enough to rein in the human ambitions at OpenAI. But then it concludes that we need more... government regulation. When has that ever stopped the march of technology?


> When has that ever stopped the march of technology?

Fission reactors. Human cloning. Nuremberg code for medical ethics in general.


When has that ever stopped the march of technology beside fission reactors, human cloning and through the Nuremberg code for medical ethics in general?


I guess "What Have the Romans Ever Done for Us?" [1] never gets old.

[1] https://www.youtube.com/watch?v=Qc7HmhrgTuQ


Surely there are some countries on Earth where fission reactors could be developed without government interference, if that was all it took.


Don't we have them already?

https://en.wikipedia.org/wiki/Pressurized_water_reactor

> PWRs constitute the large majority of the world's nuclear power plants (with notable exceptions being the UK, Japan and Canada). In a PWR, the primary coolant (water) is pumped under high pressure to the reactor core where it is heated by the energy released by the fission of atoms.

When people say "fission reactor" do they mean something else?


No, a fission reactor is any nuclear reactor where atoms split into smaller atoms (fusion reactors are the opposite of that). There are plenty of fission reactors, most of them built with heavy government support.


Nuclear proliferation, chemical weapons treaties, EU regulations on tech are starting to bite...

But of course, what is the alternative?

Consumer choice? Doubt it.

Self-governance? Clearly not.

Crypto? lol


Last I checked you can't buy a rocket propelled grenade on Amazon.


Government regulation stifles the march forward of technology all the time.

Nuclear energy, stem cell and DNA-based research, consumer supersonic flight, NIMBYism and zoning regulation, patent regulation-enabled trolling, copyright abuse, the tragic story of Google Books, fracking, the lack of technology innovation in post-WWII Europe, any historical or current example of communism, etc. I could go on.

Innovation only happens in risk-tolerant and fair markets. A good rule of thumb: regulation encourages innovation when it supports risk-taking and fairness. It stifles innovation when it tries to eliminate risk, or undermines fairness by preferencing incumbent winners.


1. Originally, what was the purpose of having it be a non-profit?

* A legal structure for the joint effort of some tycoons, because they need a structure?

* Tax advantages for the donors?

* Attract some top talent, which happened to be idealistic?

* Some PR shield they thought they might need?

* Genuine interest in the non-profit's charter?

2. Whatever the original purpose of there being a non-profit, what percentage of the current people there are aligned with that? (Rather than preferring being at a for-profit?)

3. With the company higher-profile now, with huge monetary valuation, and a national security/dominance asset, and much of the value committed to MS... who actually wants there to be a non-profit? (Only a handful of idealists, maybe a few structure officials, and maybe MS?)


Does anyone else literally not care about this at all? I think 50% of my meeting time this week was dedicated to hearing people gush about this. The "OMG THIS WOULD MAKE A GREAT NETFLIX SPECIAL", the hero worship of Altman, the rumors of a "singularity"-like AI breakthrough, the genius of Satya/MSFT, the unity of the brave OpenAI employees... I am so happy that my Thanksgiving guests barely know what a computer is.


Don't forget the conflicted board member potentially blindsided and furious his own firm's Hail Mary revenue generating feature was announced at prior week's dev day to be bundled free.


Why do I get a feeling we are going to see an explosion of secular cults that have weird beliefs in AI and worship AGI?


Didn't Anthony Levandowski already try that?

Thing is, it doesn't work. The primary purpose of religion is to tackle questions that are metaphysical in nature; to provide an explanation for the deep mysteries of life. To most believers, God isn't a comic book character that's worshipped solely because he is powerful -- He is worshipped as the first mover, the alpha and omega, beyond all constraint -- and the mere fact of His existence explains their own. (He created them in His image, etc.) An AI, however advanced, would be an infinitely lesser being; nothing more than a powerful agent, constrained and embedded alongside us in space and time.


> The primary purpose of religion is to tackle questions that are metaphysical in nature

Others say its primary purpose is making large groups of people collaborate instead of fighting each other.


Christian God isn't the only model[0], plenty worshipped Dionysus, Amaterasu, Ra, and Xōchiquetzal.

[0] heck, given the echoes of polytheism from earlier eras, I'd argue that the entity in the Bible (especially the earlier parts) isn't even really the Christian God as seen today by self-identified Christians.


There's more variety to religion. Not all deities in mythologies are creators or exist outside space and time. Some are even destructible, which makes the worshippers' activities all the more important in preserving the deity's power.

If humans did manage to create an artificial being with seemingly magical powers, I don't see why a religion wouldn't develop around it. Even if the being is not God in itself, it could be argued that mankind's existence has become explained by the divine imperative to bring this superior being into the universe.


People worshiped Pharaohs and Caesars as gods for thousands of years.

The idea of God as something outside existence is recent history. Humans are hardwired to worship kings and prophets as gods. AI will be no different, and probably a lot more convincing.


What if the AI was set to be the bridge between God and humanity? As a higher minded being, the AI certainly could connect with God in ways we couldn't and his Word would make scripture. At least that's how I'd frame it...


Well, first you've got to work out what you mean by "God" -- and, no matter how you do it, you're going to alienate 60% of your crowd.

Or you could use an AI to synthesize an entirely new religion, complete with its own distant god and its own completed metaphysics -- but that might be a hard sell. In this case, the AI itself is not the object of worship, but it takes upon itself the role of prophet and evangelist to some fiction it has concocted. It would be interesting, though, that's for sure.


If you believe in simulation theory, AI would be something like a fractal or a type of recursion in the simulation. It might not be "God" per se, but it might help us understand God, in the way building a simulated computer in Minecraft might help you understand the computer Minecraft is running on.


Ritual is the primary purpose of religion. There are several very successful secular cults. US civic religion is one of the first to come to mind.


Man, that has been a dream since I was a kid: to see humanity create better gods. It would be really cool to see these machine gods evolve over time too, and scripture and morality along with them. A version controlled religion for future historians (AI, humans, or otherwise) to study!


CyberLuther forks catholic_adonai_v7 because he is not happy with how they upcharge the operational costs.


Yes! This idea of a uniform hive mind always struck me as simplistic. I want gods that keep learning, keep experimenting, keep evolving, keep philosophizing.

All these forks and branches might eventually become different settlements, solar systems, multiverses... it's all information, and each world an experiment unto itself.

Morality can't be stuck in the pre-industrial era, and our current religions aren't really well suited for questions of matter and energy, consciousness as an emergent phenomenon, scaling thought past biological bottlenecks, etc.

I'm really excited about this future, even (or especially) if it means the end of human dominance. Life will find a way :D


If I understand the prophetic visions of Warhammer 40k correctly, what you foresee shall be known as the Dark Age of Technology, its study shall be banned as heresy, and only by the grace of the God-Emperor of Mankind will humanity survive it.


I’m not sure the gods human beings have made so far have done anything apart from persuading people to kill each other over scraps of land. Whether feeling-less AGI will be better for us primates is very uncertain!


We don't know what will happen with AGI, but we can pretty easily guess what will happen without it. I'd take that risk any day!


You can easily guess, the issue is with being correct...


We all hallucinate a little now and then...


George Boole would disagree


False!


Well we do have almost technological gods in fiction in the Culture series:

"I am not an animal brain, I am not even some attempt to produce an AI through software running on a computer. I am a Culture Mind. We are close to gods, and on the far side."

Ending up with something even remotely like a Culture Mind would be pretty awesome to me...

#FullyAutomatedLuxuryGaySpaceCommunism


I'd be open to it if it meant folks stopped worshipping rich people.


Sign me up. It's undeniable that we exist at or very near the point in time marking a phase change in the nature of complex thought in our little corner of the universe. There have been many such phase changes - when we managed to capture mitochondria, when we formed complex multicellular life, when we began to construct tiered food chains, etc. All the way up until we construct a machine capable of outthinking us. Biological humanity is a booster rocket on a spaceship ride that is the universe coming to understand itself. I feel privileged to exist at this exact moment when I get to witness the start of the next thing, but booster rockets separate, and so must we.

I don't think there's anything particularly mystical about that, but I suppose you could say that it's spirituality-adjacent in the way that all great things are.


> It's undeniable that we exist at or very near the point in time marking a phase change in the nature of complex thought in our little corner of the universe.

I’d question whether it is “undeniable”, but I do think this sounds like the exact sort of foundational mythology necessary to establish a religion oriented around AI.


What are you hoping will be understood? We've gone as far back as the big bang and as far down as the atom in understanding. Our bodies are made up of trillions of cells all cooperating, and no one is particularly impressed with it. It's just matter, energy and motion. Everything can be reduced to billiard balls on a table. Is there a way to understand something scientifically that is not ultimately put in those terms?


>no one is particularly impressed with it

If anyone has ever said anything cynical ever I think this beats it.

Biology is amazing.

> What are you hoping will be understood?

Understanding is like the apparent horizon. The closer you get the further away you realize it is. There will always be more to understand, we will just reach a point where our little monkey minds are incapable of holding enough 'state' to grasp for the next step. I say that as a monkey who is awfully fond of my own little monkey mind.


People say they are impressed, but it has no real quality to it, it's just a word. Actual wonder is gone by the time a person reaches adolescence. From there it's just cramming more facts in. The basis of understanding the physical world is already given: it's a mechanical process involving matter, energy and motion. What is so interesting about learning another variation of those? If you could build an AI to give us all the valid variations, you're still stuck with the premise. It's like being a chess piece and being shown another position on the board. If you can be satisfied with that then good luck I guess.


This is like deciding that we've already figured out that the celestial spheres are why the stars move in the way they do. Why bother to figure out any more?

Improvements to understanding always seem obvious after they're made, and it always seems like you have a mostly complete worldview right up until you realize you don't. That's how it works.

If you can explain to me from base principles why the fine structure constant has the value it does, or why there is such a pronounced baryon asymmetry, or any of the many big open questions in basic physics, I will concede the point.

Being ignorant of the big questions doesn't mean big questions don't exist, only that you don't know about them yet.


I'm not saying everything is known. The question is, are there any questions we can ask that have a potential answer that will not be answered by a material, mechanical description. Because we already know what that entails. Can you conceive of an explanation of the fine structure constant or any other question in physics, or any other domain of science that doesn't have a material mechanical basis as the answer?

If not, then all the answers are variations on each other. To be fair, in some sense that project has already broken down. We have 'fields' in empty space, 'virtual' particles, correlation over distances where causation is not possible, and other things that, basically, are inconceivable.


Even worse if the beliefs are true!

Iain Banks' Culture series takes this idea to the extreme; it feels like it's been getting more relevant by the day this past week.


"We are close to gods, and on the far side."

Well, Minds are friendly, so I'd take it.


I think it was a few months ago here on HN that I saw someone suggest that we'll only accept AI as people when people start worshipping them.


This is already happening. See Theta Noir [0]

[0] https://thetanoir.com


> Why do I get a feeling we are going to see an explosion of secular cults that have weird beliefs in AI and worship AGI?

A tendency that would take humanity a few hundred years backward, that's such a shame.

Laziness will prevail when immediacy and cargo cult tribalism are the most potent currency.


>A tendency that would take humanity a few hundred years backward, that's such a shame.

Not in this case - AI, singularitarianism and transhumanism have had cultish, neo-religious elements to them for decades. All we're seeing now is the seeds that Ray Kurzweil, Rudy Rucker and others planted ages ago bearing fruit, now that technology actually seems to be catching up to people's fears.

I'm not claiming it is, just that it seems to be.


Well, we did create the AI in our image, after our likeness. That's usually a good start.


Humans are more than language models (language processing machines); LLMs are not even a sliver of what a true AI would be. It's so very silly that so much credence is given to them.


Why do you? I don’t understand your logic.


There's already plenty of mental illness out there so this manifestation doesn't seem too farfetched.


common sense?

You may find this interview interesting https://www.youtube.com/watch?v=4tTJR9vqWB4


All of them conveniently with 501(c) status.


because we will


Secular cult = an oxymoron?


Why would it be an oxymoron? It's not secular in this case anyway as it's based on a supernatural interpretation of linear algebra.


Well, I would say that indoctrination would require some amount of metaphysics, so not really a secular concept.


Something is missing....

Which players have the better tensor chips?

1. Apple

2. Google

3. NVIDIA

4. sama, if his side chip startup gets going

5. Microsoft

Since the bet would not be SoC/tensor-specific but on the full stack of it.


It's definitely NVIDIA, as almost everyone's software stack is deeply based on CUDA, with AWS as the go-to provider for almost everyone.


This post fell off the front page already, despite having more votes much more recently than some posts it is below.


HN downranks posts that have a high comment-to-vote ratio, in order to discourage flamewars.

By posting this additional comment you just made it sink even lower, good job!
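For the curious: HN's exact ranking algorithm isn't public, but the commonly cited outsider reconstruction is a gravity formula with penalty multipliers layered on top (the comment-to-vote penalty mentioned above would be one such multiplier). A rough sketch, with the penalty value being an illustrative assumption:

```python
def rank_score(points, age_hours, gravity=1.8, penalty=1.0):
    # Classic reconstruction of HN's front-page score: votes decay
    # with age; `penalty` < 1 models flamewar/controversy downranking.
    return penalty * (points - 1) / (age_hours + 2) ** gravity

# A penalized post can sink below an older, lower-voted one:
fresh_flamewar = rank_score(41, 3, penalty=0.2)
older_calm = rank_score(41, 6)
print(fresh_flamewar < older_calm)  # -> True
```

This is a sketch of the publicly discussed approximation, not the real implementation, which reportedly includes many more adjustments.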


[flagged]


Do you really think current AI progress is nothing more than an NFT repeat?


> its nft's all over again.

could you explain how you make the connection?

do you think AI is worthless?


This isn't the first person I've heard say something along these lines, usually decrying the hype in AI. My own feeling is that it's mental models: "we have seen hype before, it was nothing, this is hype, so it must be nothing."

But to me it's reminiscent of when the internet took off in the mid-to-late 90s: people (usually older than me) would decry it as "the latest fad" and play down the effect it would have on our everyday life, before hilariously comparing the internet to "space invaders" or a "yo-yo" or something weird that at least gave insight into the fact that they really didn't know what they were talking about.

People comparing AI/AGI to NFTs or crypto are, IMHO, as weirdly off-base as those who compared "the internet" to yo-yos back in the day.


The internet is useful; glorified Markov chains are amusing, but besides saying funny things on IRC they don't really have much use.
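For comparison, an actual Markov chain text generator really is just a word-frequency table; this toy sketch is roughly the level of machinery the "glorified Markov chain" jab implies, which is part of why the comparison undersells LLMs:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain from `start`, sampling each next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

chain = build_chain("the internet is useful the internet is a fad")
print(generate(chain, "the"))
```

Unlike an LLM, this has no notion of context beyond the single previous word, which is exactly where the analogy breaks down.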


LLMs are not AI


Of course it is.

FTX collapses (November 2-11, 2022), the crypto hustle-scam is over, the decision is made to release whatever there is at the moment at OpenAI to start the next cycle, and a few weeks later (November 30, 2022) there we go. Same thing as always: some investors -- mostly white males -- get filthy rich, meanwhile some others get exploited -- this time it's Kenyan workers -- and many, many people get hoodwinked and then lose their shirts. Same story, different hype.

https://youtu.be/I6IQ_FOCE6I - did everyone forget? I guess so. At least we got something out of Web 2.0... but we also got a genocide and a few warped elections/referendums out of it, so the jury is still out on how history will see it. Crypto was a net negative for humankind in every possible way (except that a few people walked away with tons of money, but at what cost), and now we are facing down this terrible wave of AI. Yay?


At least with anything AI there is something that anyone can use hands-on for actual legitimate work. Crypto/NFT failed to materialize any benefit beyond anonymous payments for drugs and other contraband.


People are quietly just using crypto right now. I'm sorry if this conflicts with your worldview but trustless programmable currency divorced from the economic levers and dials of world governments is a useful thing.


> trustless programmable currency divorced from the economic levers and dials of world governments

https://davidgerard.co.uk/blockchain/2018/04/05/debunking-bu...

> Ordinary person: “Your weird Internet money sucks to use. It’s slow and expensive. I lost my coins by mistake and it can’t be fixed. If I get hacked it can’t be fixed either.”

> Bitcoin advocate: “But everyone wants [list of ideological aims held only by weird people]! Everyone I know, anyway.”


Call me weird, I guess. But the fact remains that ledgers are open to anyone, one can easily see it being used.


I can't see why it would conflict with their worldview: they gave two very good reasons why someone would want a "trustless programmable currency divorced from the economic levers and dials of world governments".


So they say. It is a nice toy that utilizes plagiarism and theft, but I can do the same with Google.


"Actual legitimate work" often ends up being bullshit job busywork that probably has no reason for existing in the first place, if we're being honest.

Crypto often had to invent new pyramid schemes from whole cloth, so it was easier for people to see the grift for what it was. Whereas AI is slotting into the long-established pyramid scheme of bullshit jobs, and just pushing out a growing number of human operators. It's harder for us to perceive the ultimate pointlessness of much of AI's current application because we have a vested interest in believing our own role as humans amounts to something more than it actually does.


The fallout from this week will be world changing. IMO, concentration of power in the hands of ‘sama as OpenAI changes tack to full commercialisation can only be a bad thing. The prevailing feeling at work is that AI risk experts are about to have their day in the sun.


Power hasn't been concentrated.

They only sent a letter after Ilya signed it.

Power is distributed amongst Ilya, Brockman, Altman and the board. The other big player, which asserted itself quite plainly during this episode was ofc the team itself.

Altman is progressive, Brockman is center, Ilya is conservative and the board(should be) independent. Not sure where Mira Murati is at. Probably center with Brockman.

I'm guessing the team is highly diverse in loyalty, operating more like a family with a rather adorable coparenting arrangement than a corporation.

Seems overall everyone loves everyone. It's a high stakes mission that brought out some serious tension. All parties have strong feelings. Hell... I would.

If McKinsey was on track to sell armies of consultants for 20 bucks a month possibly as soon as the next 10 years... The stakes get high real quick.



