
Let's say you have a time machine, and 20 years from now OpenAI has destroyed humanity b/c of how fast they pushed AI advancement.

Would the destruction of OpenAI in 2023 be seen as bad or good with that hindsight?

It seems bad now, but if you believe the board was operating with that future in mind (whether or not you agree with that future), it's completely reasonable imo.


I don't have a time machine.


Ya, which is why Sam should just start a new company. It might take a few months to catch back up, but then it won't be tied down by any of these shenanigans. It's the best solution imo.

I have a feeling that's exactly what he's doing at Microsoft; at some point their "AI lab" will be spun off into a new company.


https://openai.com/our-structure

This whole thing was so, SO poorly executed, but the independent people on the board were gathered specifically to prioritize humanity & AI safety over OpenAI. It sounds like Sam forgot just that when he criticized Helen for her research (given how many people were posting ways to "get around" ChatGPT's guardrails, she probably had some firm grounds to stand on).

Yes, Sam made LLMs mainstream and is the face of AI, but if the board believes that that course of action could destroy humanity it's literally the board's mission to stop it — whether that means destroying OpenAI or not.

What this really shows us is that this "for-profit wrapper around a non-profit" shenanigans was doomed to fail in the first place. I don't think either side is purely in the wrong here, but they're two sides of an incredibly badly thought-out charter.


> It sounds like Sam forgot just that when he criticized Helen for her research (given how many people were posting ways to "get around" ChatGPT's guardrails, she probably had some firm grounds to stand on).

Sam didn’t forget anything. He is a brilliant Machiavellian operator. Just look at the Reddit reverse takeover as an example; Machiavelli would be in awe.

> What this really shows us is that this "for-profit wrapper around a non-profit" shenanigans was doomed to fail in the first place.

No. It shows this structure is doomed to fail if you have a genius schemer as a CEO, playing the long game to gain unrestricted control.


> Just look at the Reddit reverse takeover as an example; Machiavelli would be in awe.

What were the details on that? (Sorry it’s not an easy story to find on Google given how much the keywords overlap with OpenAI topics)


Yishan Wong's comment here [1] explains it. (He's the unnamed "young up-and-coming" CEO of the story.)

In short, the plan was to reduce Condé Nast's ownership of Reddit by hiring a new CEO and convincing that person to demand, as a condition of their hiring, that CN reduce its ownership share. Further VC funding and back-room machinations shrank CN's stake even more, eventually wresting control of Reddit back to the original founders. Yishan was subsequently pushed out and Ellen Pao promoted to CEO, which didn't go so well either.

Both Altman and Pao are responding in that thread.

[1] https://www.reddit.com/r/AskReddit/comments/3cs78i/comment/c...



FYI it’s a joke in case this is going over anybody’s head. Ellen Pao even played along: https://old.reddit.com/r/AskReddit/comments/3cs78i/whats_the...


The link says “Page not found”


Looks like the HN app I was using mangled the link. Fixed now.


> Just look at the Reddit reverse takeover as an example

I'm not familiar with this, what happened? Googling "Sam Altman reddit reverse takeover" is just flooded with OpenAI results.



I think it points out how Altman set up this non-profit OpenAI as a sort of humanitarian gift, because he pretty clearly marketed himself as having no financial stake in the company, only to use that as leverage for his own benefit.

This whole thing is a gigantic mess, but I think it still leaves Altman in the center and as the cause of it all. He used OpenAI to gather talent and boost his "I'm for humanity" profile while dangling the money carrot in front of his employees and doing everything he could to get back into the money-making game using this new profile.

In other words, it seems like he set up the non-profit OpenAI as a sort of Trojan horse to launch himself to the top of the AI players.


> In other words, it seems like he set up the non-profit OpenAI as a sort of Trojan horse to launch himself to the top of the AI players.

Given that Altman apparently idolized Steve Jobs as a kid, this idea really doesn't feel that far-fetched.


> What this really shows us is that this "for-profit wrapper around a non-profit" shenanigans was doomed to fail in the first place.

I disagree. The for-profit arm was always meant to be subservient to the non-profit arm - the latter practically owns the former.

A proper CEO would just try to make money without running afoul of the non-profit’s goals.

Yes, that would mean earning less, or even nothing at all. But it was clearly stated to investors that profit isn't a priority.


> they're two sides of an incredibly badly thought-out charter.

It's easy to say this with the benefit of hindsight, but I haven't seen anyone in this discussion even suggest an alternative model that they claim would've been superior.


Agreed. I'm not saying I have a better alternative, just that this is something we all should now realize, given that I'm sure we were all wondering for a long time what the whole governance structure of OpenAI really meant (capped for-profit with a non-profit mission, etc.).


Nonprofit companies with for-profit portfolio companies are hardly unusual and certainly not doomed to fail. I've worked for two such companies in my high-tech career myself; one is now called Altarum, though I worked for the for-profit subsidiary that got sold to Veridian.


But that's the problem: the board's mission doomed this structure from the get-go. Their mission isn't to act "in the interest of the company" but "in the interest of humanity"; i.e., if they believe OpenAI at its pace would destroy humanity, then their mission is literally to destroy OpenAI itself.


"The board's mission became to destroy OpenAI itself" is ... less sane? ... than everything else that has happened.


But it's not that insane if they (the board) think the other side of the scale is "AGI that will destroy humanity."


In line with their goal of "protecting humanity," killing OpenAI would slow down AGI development, theoretically allowing effective protections to be put into place. It might also set an example that assists the mission at other companies. But slowing down responsible development gives a lead in the race to militaries and states of concern, which are the entities where the main concern should lie.


> if they believe OpenAI at its pace would destroy humanity, then their mission is literally to destroy OpenAI itself.

I'd say most people have as much faith in LessWrong eschatology as they do in Christian nationalist eschatology. I can understand how a true believer might want to destroy the company to stop the AI gods they believe in, or shut off ChatGPT every Sunday to avoid living in sin. But it can be an issue when you start viewing your personal beliefs as fundamental truths.

There's something nicely secular about profit motives. I'm not sure I want technological advancement to be under the control of any religious belief.


Unless she has equity in Anthropic (which would be a major conflict of interest), I don't see how this is self-promotion...?


I'm guessing the reasoning is something like this...

As a CEO, I'd want your concerns brought to me so they could be addressed. But if they were addressed, that's one less paper Ms. Toner could publish. For a member of the OpenAI board, seeing the problem solved matters more to OpenAI than her publishing career does.


https://openai.com/our-structure

"Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s pzrincipal beneficiary is humanity, not OpenAI investors."

I see. I don't know whether she discussed any issues with Sam beforehand, but it really does not sound like she had any obligation to do so (this isn't your typical for-profit board, so her duty wasn't to OpenAI as a company but to what OpenAI is ultimately trying to do).


> but it really does not sound like she had any obligation to do so

The optics don't look good, though, if a board member is complaining publicly.


Frankly, that's irrelevant first-order thinking.

If Sam had let it go, what would have happened? Nothing. Criticism and comparisons already exist and will continue to exist. Having it come from a board member at least provides the counterargument that they're well aware of potential problems, and there's an opportunity to address gaps if they're confirmed.

If regulators find the argument in the paper reasonable and it ends up having an impact, what's wrong with that? It just means the argument was true and should be addressed.

They don't need to worry about the commercial side, because more than enough money is being poured in.

Safety research is critical by nature. You can't expect research to be constrained to speaking only in positive terms.

Both sides should have worried less and carried on.


But her job is to do exactly that. Anybody in this space knows Anthropic was formed with the goal of AI safety. Her paper just backed that up. Is she supposed to lie?


What she is supposed to do is bring the issues to the company so that they can be fixed.

That's the pro safety solution.


Is it a complaint, or a discussion of the need for caution?


It does not sound like what she did helps advance the development of AGI that is broadly beneficial. It simply helps slow down the most advanced current effort, and potentially let a different effort take the lead.


> It simply helps slow down the most advanced current effort

If she believes that the most advanced current effort is heading in the wrong direction, then slowing it down is helpful. "Most advanced" isn't the same as "safest".

> and potentially let a different effort take the lead

Sure but her job isn't to worry about other efforts, it's to worry about whether OpenAI (the non-profit) is developing AGI that is safe (and not whether OpenAI LLC, the for-profit company, makes any money).


On the other hand, if your voice is not usually heard, you create more pressure to solve the problem by publishing a paper that gets acknowledged.


She's a board member who held approximately 1/4 of the power to fire Sam, and she did eventually exert it. Why do you assume her voice was not heard?


You should assume that most of this happened before the firings.

At that point she held 1/6 of the vote.

But voting is a totally different thing from raising concerns and actually getting them onto the agenda, which would then be voted on if the board decides to do something about them.

In theory that's 1/6 * 1/6 = 1/36 of the power if you're alone in pushing for the decision to happen.


I still see no justification for assuming that the board member's voice was not heard before the publication. There's zero evidence for it, while the priors ought to favor the contrary, because she does wield a fairly material form of power. If more evidence does emerge, then we could revisit the premise.


> she does wield a fairly material form of power.

How? Being a board member is not enough. There were already likely two against her in this case, while the rest is unknown.


These guys are already millionaires. Do you think people writing these kinds of papers really are that greedy?


Instant deposits are definitely not settled funds, from the "your account -> Robinhood" perspective.


That's assuming the average meme-stock chaser trades with settled funds, though.


Right, that's my point. Why stop trading with settled funds, when it was only unsettled funds causing the issue?


This is insane. It's one thing to push for further recognition of folks who have pulled more than their weight in support of their teammates (which is deserved, and I think the parents would not protest), but it's another thing entirely to choose to express anger over teammates who are parents.

I've seen my brother try to "work" with his kids at home. It's stressful. It's a mess. It's HARD. Many of these people seem to think the parents are getting extra "time off." 24/7 child care is not a freaking vacation. Kids are great, but kids are kids. You can't just ignore them or turn them off. There's not much you can do when a baby is throwing a tantrum and demanding attention.

Think you're working hard because you're working 20 hours a day? Imagine spending 20 hours a day working but getting very little done, because you're a parent who is constantly pulled out of your focus zone, context switching from "worker" mode to "parent" mode.

If this is the thinking our society is teaching the new workforce, I fear for the future of our nation. Japan has already experienced the "not raising a family because it jeopardizes my work life" mindset, and they are NOT having a good time.


> anger over teammates who are parents.

I recognise my younger self in that. How stupid I was!

Bringing up kids is the most important job in the world, and employers have zero loyalty.

Do your eight hours, or however much you need to support your outgoings, and then focus on the important stuff.

Edit: This applies whether or not you have kids. Your employer being understaffed or having unrealistic goals is not your problem.


Here's what I don't understand (as a childless person): how did parents get by for thousands of years, always having their children by their sides?

What is it about the modern world that makes children act like starving cats all the time? Is it the sugary diets? Is it the overstimulation from media? What the heck changed in the last hundred years or so?


In many respects, it's "modern" parenting behavior.

For example, if you let a baby cry at night, it will usually stop crying after a few weeks.

Once kids hit ~7 years old, it used to be normal to let them wander around the house and neighborhood unsupervised. In most non-westernized countries it still is.

That's how the average family has 6+ kids in some countries. They're only actively caring for a few at a time.

Taking such a relaxed view of parenting in the West would get your kids taken away.


Letting babies sleep alone is a modern parenting behavior. Kids mostly used to sleep together with their parents.

Aside from that, older kids took care of younger kids, neighborhood groups took care of children together, and older relatives who could no longer do hard work supervised the children...


And women were typically in or around the home.


Not a historian, but I'm guessing the primary reason is that for much of human history, two-parent families had one parent dedicated to raising the children.

Also some other reasons:

1. There was probably more general family help with children: grandparents, the town, etc. were more involved.

2. Kids who were a bit older were effectively part of the workforce: apprenticeships, farmhands, etc.

3. It was probably safer to let kids wander around in many cases. Cities are particularly complicated, what with streets full of cars and generally unsafe areas for kids. In a small town, or on a family farm, you could let toddlers run around a lot more.


Manual labor like field work, housework, baking, tailoring, keeping a shop, etc. are things you can do while watching kids. You take the kids to work and let them play alongside you. When they're old enough, they can help, or pretend to help.

If your work requires focus, like writing or programming, then kids are a disruption and you can't do it.

Also, once factory work was developed, schools for everyone were built too; those developments happened at around the same time.


Another modern effect is that a lot of people don't live around extended family anymore, which ends up affecting childcare options, especially with daycares in the state they're in.


Cool, except people choose to be parents, and when you do that you accept the additional work involved in that.

It is unfair to offer arbitrary benefits to people with children, and then compensate them the same amount as people who are doing more work.

That's an "I chose to do something I knew would take up a lot of time; I want work to give me time off to do it, but I also want work to continue paying me as much as anyone else, including the people who are doing more work."

Paternity/maternity leave is one thing; arbitrary benefits from that point on are clearly unfair.

If I choose to sign up for 40 hours a week of volunteer work on top of my normal work hours I don’t get to ask for additional time off benefits from my employer, even though my volunteer work is likely to benefit far more people than just my immediate family.


> Cool, except people choose to be parents, and when you do that you accept the additional work involved in that.

Yes. That's why parents will arrange before- and after-school child care, daycare, a relative, or a hired babysitter to take care of the child(ren), etc., so that they can work. When most if not all of that is taken away, what do you expect?

Also, keep in mind that other than a relative or public school, you usually have to pay quite a bit of money for these services, but I don't see parents arguing that they should be paid more to compensate for those additional expenses so that their income after expenses is comparable to those who don't have children.


All that is true. I have someone on my team who is at 25% of normal due to their children.

However, what about the person who is caring for someone (perhaps an unwed partner or friend) with a mental illness worsened by the pandemic? Taking care of someone with severe mental illness (e.g. suicidality) can also be extremely draining and time consuming, yet there are almost always zero official leave benefits for those sorts of situations. And even caring for an elderly parent or a sick non-immediate family member tends to receive less support than many parental leave policies provide.


I fully agree with and respect everything you said. But for every life difficulty, time management issue, or resource problem that you have as a result of having a child, I can give you an equivalent and equally important and difficult issue in my own life, even though I don't have children (yet).


I'm sorry to hear that. I'm assuming this is a specific situation that you are in - most people without kids are not in the same boat. I hope that you are able to also receive benefits from your employer / government to help with whatever situation you are in.


I think you misunderstood me as having special needs, however that's defined. I'm talking about the normal difficulties life throws at you. Things like getting exercise, adequate sleep, and good nutrition, maintaining friendships, or having the time and resources to meet my future life partner, all of which are difficult to juggle with an extremely demanding job at Facebook or Netflix, for example. In the same way parents struggle to take care of kids while working in those careers, people without kids struggle similarly, just with different, though no less important, aspects of their lives.


I'm sure this is an unpopular opinion. My kids are grown up now and I can relate to both sides. But what's wrong is that companies focus on benefits that only help their talent branding and political correctness. Having kids was my choice, and it's a commitment, the same as starting a business. So not providing fair and equal benefits to all is an issue. As a parent of teenagers, I still get the same paid time off as parents with newborns or toddlers, which is unfair.


And they released Reels right when the executive order hit. Business and politics will forever go hand in hand.


Everything about the Instagram acquisition was brilliant. Reminder that neither Instagram nor Facebook was making a profit at the time, and Instagram's $1B price tag was both very high at the time and minuscule in hindsight. 8 years later, Instagram is the platform that allows Facebook to still be mega successful.

