OpenAI Welcomes Sarah Friar (CFO) and Kevin Weil (CPO) (openai.com)
119 points by meetpateltech on June 10, 2024 | hide | past | favorite | 86 comments



For those not in the know, these are two of the most highly regarded operators in the Valley (even if not as widely known).

Sarah was thought of as the person who "ran" Square, and Kevin is thought of as one of the strongest product leaders.

They'd both be in my top 3 worldwide hires I'd try to make for these roles.


I'm still at Block (née Square) and have been for over 10 years. I'm not in anything related to finance, but I can say that during her time here she was widely respected and valued within the company at all levels. I don't know of a single person that wasn't saddened to hear when she was leaving.

Landing her is a huge victory for OpenAI, and I wish her all the best.


Nextdoor shares don't look like they did too well under her management.


She was an awful CEO; she was basically a CFO hired to shepherd the company through the IPO (SPAC) process, but the years following were awful. I left right before the board fired her (along with many, many senior people) and hope the company is doing better without her there. Not surprised to see her failing upward.


If your complaint is that she was a fine CFO but a terrible CEO, then I can hardly see her going back to a CFO role as failing upwards.


> If your complaint is that she was a fine CFO but a terrible CEO then i can hardly see her going back to a CFO role as failing upwards

Failing upwards would be her announcing she's raised money for a new start-up at which she will be CEO. (EDIT: To be clear, I'm agreeing with you.)

Instead, she's going back to where she has a track record. It would be like Musk selling Twitter. Or David Marcus going back to 2001 [1].

[1] https://en.wikipedia.org/wiki/David_A._Marcus


right, you're agreeing with GP


Yeah I agree with that. OpenAI is obviously a much bigger and more relevant company than Nextdoor, but she is going back to a role she has a proven track record at.


>Nextdoor shares don't look like they did too well under her management.

Isn't it fair to look at share price under their leadership? What other objective measures do we have of their management abilities? We take the OP's word for it that they're the best? Or is this like sports teams recycling the same 30 coaches?


> Or is this like sports teams recycling the same 30 coaches?

This.

The valley is nothing but a machine. Feed young engineers in one end and spit unicorns out the other. The same players make it all go.


I don't think she's someone to be admired. She was CEO of an app that is, IMO, rife with bigotry, xenophobia, and scaremongering.


A bit unfair. A neighbourhood-oriented social media platform is always going to have a vocal tiny minority of the people you described. I'd like to hear suggestions on how she as a CEO could have prevented those issues.


Nextdoor is close to the worst example I can think of in terms of dark patterns and user experience, and I don't say that lightly. I think it is quite fair to lay that at the feet of the executives.


I would be sympathetic if it were not for their insane level of dark patterns, spam emails and growth hacking schemes. I had to make a blanket ban on @nextdoor.com email addresses because a dying relative was unable to parse their actual emails over nextdoor’s spam.


A UI can promote positive interactions, and a moderation team can mitigate negative ones. Quite tired of hearing implications that social media platforms aren't shaping the way people communicate on them.


I.e. the app can brainwash people into reflecting some specific sensibilities. That's some Orwellian-level stuff there.

The platform shaping communication is a fact, but not a license to use it to program a worldview. The job of a general-purpose platform is to enable communication, but otherwise stay out of the way.


Weak and paranoid take


> an app that is IMO rife with bigotry, xenophobia, and scare mongering

I don't use it a lot. But my Nextdoor is basically pictures of animal sightings, pictures of pets who got out and pictures, a few hours later, of them being found.


Aren’t those things the point of the app?


I would think you should admire her for taking a strong moral stand at the expense of the business. She added a number of grievance-oriented features to reduce "bigotry", "xenophobia", etc; thereby making the product less able to render reality; thereby hurting its value proposition; thereby losing users; thereby cratering the share price.


Not really. They just have a bit of a cult following.


I mean, isn't that executive culture these days? The myth >> the person.


Who's the third?


Majorly cult-y.


really interesting to see them hire two folks from consumer-tech backgrounds, rather than B2B or developer tools. I wonder if that signals an increasing focus on ChatGPT as a consumer product rather than the API / Enterprise products? (Edit: Sarah actually has a background in both Enterprise and Consumer)


In my opinion, OpenAI is building a platform business [1]. OpenAI is primarily an innovation platform. It provides tools, models, and APIs that enable developers and businesses to create new applications and services, fostering a wide ecosystem of innovation where OpenAI’s technology is the beating heart, driving high-margin revenue. While it also sells API access, its main role aligns with enabling third-party innovation, similar to platforms like Microsoft and Google Android.

However, driving third-party innovation does not relieve OpenAI of the need to create consumer-facing applications like ChatGPT. If OpenAI were to remain exclusively a back-end API business, it would leave itself vulnerable to disintermediation as competitors like Meta make it easy to swap out the back end. I believe Kevin was brought in to make sure OpenAI has its own relevant front-end innovation so that disintermediation is less of a threat.

[1] https://www.hbs.edu/faculty/Pages/item.aspx?num=56021


> In my opinion, OpenAI is building a platform business [1]. OpenAI is primarily an innovation platform. It provides tools, models, and APIs that enable developers and businesses to create new applications and services, fostering a wide ecosystem of innovation where OpenAI’s technology is the beating heart, driving high-margin revenue.

This could have been a description of Facebook circa 2010, for a brief moment, everyone was happy with the arrangement.

Inevitably, the platform will start competing with the developers who "innovate" on top of it in order to gain additional revenue; see 2024 Facebook which is no longer a platform by most measures, or the many apps that got Sherlocked by Apple. The countdown clock has started ticking for OpenAI-wrapper products/startups, starting with the ones with the most revenue and/or utility.


This is the most insightful nugget into OAI's consumer product strategy I've read, thank you.


I’m a passionate student of Prof. David Yoffie at HBS. His many case studies on the tech industry are worth buying, as is his book on platforms referred to in my previous comment.


So OpenAI is going to compete directly with Azure, as a PaaS/IaaS? Doesn't make sense. At best they will be a thin layer over Azure.


In this case Azure is responsible for the datacenters, billing, and support.


Why does the existence of ChatGPT reduce the risk of disintermediation?


By creating the best chat interface for its own models - presumably using capabilities that may not be shared with others via APIs - OpenAI ensures that their world class model is harder to replace.


Thought the same when I read it - and honestly I think the larger growth vector for OpenAI is their B2B and API potential. Good product leaders can span both, but curious nonetheless.


Do these people actually even believe in AI? If not, they're there solely for the financialization and value-extraction, which means it goes downhill from here.


> Do these people actually even believe in AI?

The moment that everyone who gave a shit about safety quit OpenAI, you should have understood that there was no AI coming.

This has always been, and will always be about the money.


Implying that these people quit because they realized they weren't needed (no need for safety if AI isn't coming), rather than the common perspective that safety initiatives always lost out to money-making opportunities?


Rewind the clock a few years.

The Google engineer who was saying Google had AGI. The MS paper saying GPT had hints of AGI. The MS deal "you can have everything up to AGI".

The approach was one where a big enough model could be made to feed back in on itself (Q*). If we feed enough data into a big enough model, then it will "tip over" and turn into AGI, or become sentient or sapient... The problem is that this doesn't work; it never would have worked; the math does not support it. Pointing it back at itself doesn't get you more, it gets you an ouroboros.

Why do you sign a deal "up to AGI" if you don't think you're close to AGI? They did, they were high on their own supply, reality set in, people looked foolish, and they moved on.


Seems to me like you’d be more likely to sign a deal “up to AGI” if you knew that was a pipe dream that would never happen. It’s easy to sign away your firstborn child if you’ll never have any children to begin with.

Edit: Nevermind, looks like it's just a way to hype then!


You got this backwards.

M$ gets everything OpenAI does until they reach AGI.

The deal is "give me everything till you have that baby".


That does not make sense: the deal is MSFT gets everything except AGI. They do believe they'll reach AGI - whether that's a pipe dream or not is a different story. Because they believe in AGI, they are fine with giving up everything up to that point.


Kinda weird to expect a for-profit company to "care" about a technology outside of it making them more money TBH.


Wasn't their goal to make AGI (and to make it available)? It's why I said what I did. Financialization runs in opposition to such altruistic goals. Someone has to now pay their hundred million dollar pay packages, and that someone is the customer.


Well, when they're owned by a non-profit, it's not at all weird to expect that.


OpenAI is a "for profit" organization.


But if you actually believe there'll be super AI in a year or two, why even bother with a product till then? Seems like the only focus should be getting there first.

Unless of course you don't actually believe that.


They've disbanded their superalignment group, and their staff who were able to work on the problem more deeply (STEM people) have since quit; see

https://scottaaronson.blog/?p=8047

(N.b. in the ex-employee author's pdf article mentioned above, a page dedicates the work to Ilya, who is also out of the company, so in context writing that dedication is a pointed statement IMO.)


This appears to project the last few years' growth continuously into the future, whereas a great many experts seem to be suggesting that we've hit a plateau.

I tend to believe the plateau theory, given how AI development has occurred over the last several decades with huge leaps forward followed by winters.


It also talks about "straight lines on a graph"...and promptly proceeds to illustrate this concept using a graph with a logarithmic scale.

Combined with the author's evident belief that facilely restating the concept of a hard-takeoff singularity (as "AI can replace a machine learning engineer by...") suffices to change the nature of the basic claim he's making, I didn't see any pressing need to read further. Singularitarian sophistry is hardly novel in 2024, and at this late date it retains no meaningful capacity even to entertain.


Your collection of experts seem to be different than mine, then (so I'm curious who you read instead, please let me know):

Scott Aaronson (the blogger above) is a professor and definitely is more sanguine towards the written article.

Dave Patterson (the Turing award professor of computer architecture) was interviewed last week, he said "We don't know what will happen!" or something like that: https://www.youtube.com/watch?v=YxVQsLA2ats&t=2045s

One of my own professors (a CS theoretician) said, at an AI seminar last year, that there seem to be no known barriers left to AGI (paraphrasing).

Actually I'm personally on the fence, so while the pdf article discussed is not rigorous enough, it makes some interesting high-level arguments. One of them is that the recent growth only needs to hit some threshold - that is not a continuous-extrapolation argument the way you had described.


> disbanded their superlignment group

They were, for all practical purposes, acquired. The alignment group served its role in underlining that OpenAI was building Really Serious Stuff. There's no marginal benefit to having them around at this point, given their PR purpose has been fulfilled.


I think even the most bullish e/acc AI fanatic is probably not 100% sure AGI will be developed in a year or two, and so even in the most optimistic case you would want to hedge your bets.


Maybe not a year or two, but the e/acc fanatics I know are 99.9% convinced it's happening within the next 4-5. As in, anything short of World War 3 and it's happening. Maybe even then still happening.


These don't seem like the brightest bulbs.


Of course they're convinced. They're rubes. That's their job.


Predictions aren't binary, they're probabilistic. If anyone "actually believes" we'll have AGI in 2 years, without any room for doubt, they're wrong.


You think you need 7 trillion dollars to build chips to make that AI? You think you need an appealing product so you remain the market leader while you build that AI? You believe you'll build the super AI, but your investors want something more than "it's coming" for the next few years? You come from a background of building products and you want to build a product?

Seems like a lot of reasons why you would build a product, even if you think you're building a super AI.


The way OpenAI protect their IP suggests to me that the USP is not that U.


Curious choice re Sarah Friar - was Nextdoor that successful? Or is this more based on her Square work?


Sarah was well respected at Square as the CFO, and left when it was obvious Jack wouldn't give up the CEO role.


Would sama?


No, but CFO at OpenAI >>>>> CFO at Square. It's like being CFO of Etsy (which Square is). Ok I guess, but not epic (which OpenAI is).


It looks like Silicon Valley really values the experience of execs from Instagram:

OpenAI CPO: Kevin Weil, Instagram VP of Product 2016-2018 [1]

Anthropic CPO: Mike Krieger, Instagram Cofounder and CTO 2010-2018 [2]

OpenAI VP Consumer Product: Peter Deng, Instagram Head of Product 2013-2015 [3]

[1] https://www.linkedin.com/in/kevinweil/

[2] https://www.linkedin.com/in/mikekrieger/

[3] https://www.linkedin.com/in/peterxdeng/


Lots of experience in how to make money with data :)


The Instagram mafia.


I can't think of another consumer software product as successful in users and $ as Instagram in the '10s.


I'm amazed they were able to pull from Planet. As impressive as OpenAI is, Planet is such a stellar company with the right mix of good intentions/impact and profitable product market fit that it's hard to imagine a better place to work right now. That said, maybe Planet's success made it less attractive to someone looking to take on new and monumental challenges.


so tl;dr OpenAI sees itself as a platform business and is hiring top-level platform leaders to fill out their space.

Kevin is known mainly for his work at Instagram and Sarah for her work at Square.

Also implies that AGI and some of the broader ideas might not come online for a few years.


So long, "Open" AI - this screams more push to turn a profit. Maybe had they hired leaders in the scientific fields it would read differently, but a cash-flow person and a top-tier product owner? (tho Instagram? woof, don't count that as a win except a win for greed).


Honest question: Do they not worry that they are each, in their own way, going to be Linda Yaccarino?


The thing with Linda is that while she's theoretically the boss, and Musk reports to her as CTO/CPO, everybody knows that Musk is the boss and she's there to... sell ads? Actually I don't know what she's supposed to do, but it doesn't matter: she's certainly not allowed to touch either the product or the technology, and Musk can fire her at any point as owner/chairman.

This, OTOH is a just a simple straightforward C-Suite. Sam is the boss, and he hired two reports to handle areas where they have expertise. This might be the simplest corporate thing OpenAI has ever done, lol.


I understand they have each taken steps to avoid that, notably by installing the OpenAI app on their phone's home screen.


In what way?


I think he's insinuating that with a figurehead such as Sam Altman, Sarah and Kevin might be more showpieces than executives given autonomy to operate.


I'm not insinuating. I'm asking.

It's a pretty reasonable question for both of them. Perhaps more for the CFO, given what we know of Altman's, er, lack of candour with the board where money is concerned. But maybe given the "Her" debate it applies to both of them, in different ways.

Is this a company where the executives can really be more than rubber stamps?


I don't think anyone in an executive position cares about the Her "debate". It's Twitter-level gossip.


Yeah, they don't care so much that there were direct follow-ups to the uproar.


Sure. Sam and other executives probably took an hour of their time to decide what they would write in the blog post to appease the critics (which is par for the course for any public-facing company as widely known as OpenAI †), and then maybe another hour meeting with their lawyers to prepare in case of a suit, three weeks ago.

But in the grand scheme of things, I don't think it's in their top 15 priorities. People here talk as if the Her thing was almost an existential threat to Altman's leadership.

† See Apple's recent apology for their ad. Do you think Tim Cook or his executives lose sleep because of the backlash? https://edition.cnn.com/2024/05/09/tech/apple-apologizes-for...


I don't think losing sleep is a requirement. However, you know that the next time an ad is about to be released, what backlash might follow will be on people's minds before it gets greenlit. At least, one would hope that would be the takeaway.


Is this what caused AAPL stock to drop ~$3 rapidly? (mostly recovered since then)


WWDC is today; any major stock price movement is almost certainly due to whatever they are announcing.


Yes, I know, I was tracking the stock while watching WWDC. But there was a very sudden precipitous fall right when this was announced.


We expected Apple to unveil world-class local models and announce a framework where you can train and run your own models on the M4.

And we got a wrapper around GPT-4 + a cloudy-ish, dystopian, spyware-like copy of Copilot+ that sends your screenshots and activity to a "secure cloud".


I also went to the bathroom around that time, so maybe that had an impact?


It only counts if you're placing buy/sell orders based on your breaks.


who cares.. everything US and everything OpenAI is incredibly overblown



