I'm still at Block (née Square) and have been for over 10 years. I'm not in anything related to finance, but I can say that during her time here she was widely respected and valued within the company at all levels. I don't know of a single person who wasn't saddened to hear she was leaving.
Landing her is a huge victory for OpenAI, and I wish her all the best.
She was an awful CEO; she was basically a CFO hired to shepherd the company through the IPO (SPAC) process, but the years following were awful. I left right before the board fired her (along with many, many senior people) and hope the company is doing better without her there. Not surprised to see her failing upward.
Yeah I agree with that. OpenAI is obviously a much bigger and more relevant company than Nextdoor, but she is going back to a role she has a proven track record at.
>Nextdoor shares don't look like they did too well under her management.
Isn't it fair to look at share price under their leadership? What other objective measures do we have of their management abilities? We take the OP's word for it that they're the best? Or is this like sports teams recycling the same 30 coaches?
A bit unfair. A neighbourhood-oriented social media platform is always going to have a vocal tiny minority of the people you described. I’d like to hear suggestions on how she as a CEO could have prevented those issues.
Nextdoor is close to the worst example I can think of in terms of dark patterns and user experience, and I don't say that lightly. I think it is quite fair to lay that at the feet of the executives.
I would be sympathetic if it were not for their insane level of dark patterns, spam emails and growth hacking schemes. I had to make a blanket ban on @nextdoor.com email addresses because a dying relative was unable to parse their actual emails over nextdoor’s spam.
A UI can promote positive interactions, and a moderation team can mitigate negative ones. Quite tired of hearing implications that social media platforms aren't shaping the way people communicate on them.
I.e. the app can brainwash people into reflecting some specific sensibilities. That's some Orwellian-level stuff there.
The platform shaping communication is a fact, but not a license to use it to program a worldview. The job of a general-purpose platform is to enable communication, but otherwise stay out of the way.
> an app that is IMO rife with bigotry, xenophobia, and scare mongering
I don't use it a lot. But my Nextdoor is basically pictures of animal sightings, pictures of pets who got out and pictures, a few hours later, of them being found.
I would think you should admire her for taking a strong moral stand at the expense of the business. She added a number of grievance-oriented features to reduce "bigotry", "xenophobia", etc; thereby making the product less able to render reality; thereby hurting its value proposition; thereby losing users; thereby cratering the share price.
really interesting to see them hire two folks from consumer-tech backgrounds, rather than B2B or developer tools. I wonder if that signals an increasing focus on ChatGPT as a consumer product rather than the API / Enterprise products? (Edit: Sarah actually has a background in both Enterprise and Consumer)
In my opinion, OpenAI is building a platform business [1]. OpenAI is primarily an innovation platform. It provides tools, models, and APIs that enable developers and businesses to create new applications and services, fostering a wide ecosystem of innovation where OpenAI’s technology is the beating heart, driving high-margin revenue. While it also sells API access, its main role aligns with enabling third-party innovation, similar to platforms like Microsoft and Google Android.
However, driving third-party innovation does not relieve OpenAI of the need to create consumer-facing applications like ChatGPT. If OpenAI were to remain exclusively a back-end API business, it would leave itself vulnerable to disintermediation as competition from other model providers like Meta enables swapping out the back end. I believe Kevin was brought in to make sure OpenAI has its own relevant front-end innovation so that disintermediation is less of a threat.
> In my opinion, OpenAI is building a platform business [1]. OpenAI is primarily an innovation platform. It provides tools, models, and APIs that enable developers and businesses to create new applications and services, fostering a wide ecosystem of innovation where OpenAI’s technology is the beating heart, driving high-margin revenue.
This could have been a description of Facebook circa 2010; for a brief moment, everyone was happy with the arrangement.
Inevitably, the platform will start competing with the developers who "innovate" on top of it in order to gain additional revenue; see 2024 Facebook which is no longer a platform by most measures, or the many apps that got Sherlocked by Apple. The countdown clock has started ticking for OpenAI-wrapper products/startups, starting with the ones with the most revenue and/or utility.
I’m a passionate student of Prof. David Yoffie at HBS. His many case studies on the tech industry are worth buying, as is his book on platforms referred to in my previous comment.
By creating the best chat interface for its own models - presumably using capabilities that may not be shared with others via APIs - OpenAI ensures that their world class model is harder to replace.
Thought the same when I read it - and honestly I think the larger growth vector for OpenAI is their B2B and API potential. But good product leaders can span both; curious nonetheless
Do these people actually even believe in AI? If not, they're there solely for the financialization and value-extraction, which means it goes downhill from here.
Implying that these people quit because they realized they weren't needed (no need for safety if AI isn't coming), rather than the more common perspective that safety initiatives always lost out to money-making opportunities?
The Google engineer who was saying Google had AGI. The MS paper saying GPT had hints of AGI. The MS deal "you can have everything up to AGI".
The approach was one where a big enough model could be made to feed back in on itself (Q*). If we feed enough data into a big enough model then it will "tip over" and turn into AGI, or sentient or sapient... The problem is that this doesn't work, it never would have worked, and the math does not support it. Pointing it back at itself doesn't get you more; it gets you an ouroboros.
Why do you sign a deal "up to AGI" if you don't think you're close to AGI? They did, they were high on their own supply, reality set in, people looked foolish, and they moved on.
Seems to me like you’d be more likely to sign a deal “up to AGI” if you knew that was a pipe dream that would never happen. It’s easy to sign away your firstborn child if you’ll never have any children to begin with.
Edit: Nevermind, looks like it's just a way to hype then!
That does not make sense; the deal is MSFT gets everything except AGI. They do believe they'll reach AGI. Whether that's a pipe dream or not is a different story; because they believe in AGI, they are fine with giving up everything up to that point.
Wasn't their goal to make AGI (and to make it available)? It's why I said what I did. Financialization runs in opposition to such altruistic goals. Someone has to now pay their hundred million dollar pay packages, and that someone is the customer.
But if you actually believe there'll be super AI in a year or two, why even bother with a product till then? Seems like the only focus should be getting there first.
(N.b. in the ex-employee author's pdf article mentioned above, there's a page dedicating the pdf to Ilya, who is also out of the company, so in context that dedication reads as a pointed statement IMO)
This appears to project the last few years' growth continuously into the future, whereas a great many experts seem to be suggesting that we've hit a plateau.
I tend to believe the plateau theory, given how AI development has occurred over the last several decades with huge leaps forward followed by winters.
It also talks about "straight lines on a graph"...and promptly proceeds to illustrate this concept using a graph with a logarithmic scale.
Combined with the author's evident belief that facilely restating the concept of hard-takeoff singularity (as "AI can replace a machine learning engineer by...") suffices to change the nature of the basic claim he's making, I didn't see any pressing need to read further. Singularitarian sophistry is hardly novel in 2024, and at this late date retains no meaningful capacity even to entertain.
Your collection of experts seem to be different than mine, then (so I'm curious who you read instead, please let me know):
Scott Aaronson (the blogger above) is a professor and is definitely more sanguine about the article in question.
Dave Patterson (the Turing award professor of computer architecture) was interviewed last week, he said "We don't know what will happen!" or something like that: https://www.youtube.com/watch?v=YxVQsLA2ats&t=2045s
One of my own professors (a CS theoretician) said, at an AI seminar last year, that there seem to be no known barriers left to AGI (paraphrasing).
Actually I'm personally on the fence, so while the pdf article under discussion is not rigorous enough, it makes some interesting high-level arguments. One of them is that the recent growth only needs to hit some threshold - that is not the kind of continuous-extrapolation argument you mentioned.
They were, for all practical purposes, acquired. The alignment group served its role in underlining that OpenAI was building Really Serious Stuff. There's no marginal benefit to having them around at this point, given their PR purpose has been fulfilled.
I think even the most bullish e/acc AI fanatic is probably not 100% sure AGI will be developed in a year or two, and so even in the most optimistic case you would want to hedge your bets.
Maybe not a year or two, but the e/acc fanatics I know are 99.9% convinced it's happening within the next 4-5. As in, anything short of World War 3 and it's happening. Maybe even still happening.
You think you need 7 trillion dollars to build chips to make that AI? You think you need a product that is appealing so you remain the market leader while you build that AI? You believe you'll build the super AI but your investors want something more than, "its coming" for the next few years? You come from a background of building products and you want to build a product?
Seems like a lot of reasons why you would build a product, even if you think you're building a super AI.
I'm amazed they were able to pull from Planet. As impressive as OpenAI is, Planet is such a stellar company with the right mix of good intentions/impact and profitable product market fit that it's hard to imagine a better place to work right now. That said, maybe Planet's success made it less attractive to someone looking to take on new and monumental challenges.
So long "Open" AI, this screams more push to turn a profit. Maybe had they hired leaders in the scientific fields it would read differently, but a cash flow perp and a top tier product owner? (tho instagram? woof don't count that as a win except a win for greed).
The thing with Linda is that while she's theoretically the boss, and Musk reports to her as CTO/CPO, everybody knows that Musk is the boss and she's there to... sell ads? Actually I don't know what she's supposed to do, but it doesn't matter; she's certainly not allowed to touch either the product or the technology, and Musk can fire her at any point as owner/chairman.
This, OTOH, is just a simple, straightforward C-suite. Sam is the boss, and he hired two reports to handle areas where they have expertise. This might be the simplest corporate thing OpenAI has ever done, lol.
It's a pretty reasonable question for both of them. Perhaps more for the CFO, given what we know of Altman's, er, lack of candour with the board where money is concerned. But maybe given the "Her" debate it applies to both of them, in different ways.
Is this a company where the executives can really be more than rubber stamps?
Sure. Sam and other executives probably took an hour of their time to decide what they would write in the blog post to appease the critics (which is par for the course for any public-facing company as widely known as OpenAI †), and then maybe another hour meeting with their lawyers to prepare in case of a suit, three weeks ago.
But in the grand scheme of things, I don't think it's in their top 15 priorities. People here talk as if the Her thing was almost an existential threat to Altman's leadership.
I don't think losing sleep is a requirement. However, you know that the next time an ad is about to be released, the backlash this one caused will be in people's minds before it gets green-lit. At least, one would hope that would be the takeaway.
We expected Apple to unveil world-class local models, and to announce a framework where you can train and run your own models on the M4.
And we got a wrapper around ChatGPT 4 + a cloudy-ish dystopian spyware-like copy of Copilot+ that sends your screenshots and activity to a "secure cloud".
Sarah was thought of as the person who "ran" Square, and Kevin is thought of as one of the strongest product leaders.
They'd both be in my top 3 worldwide hires I'd try to make for these roles.