OpenAI Discusses Giving Altman 7% Stake in For-Profit Shift (bloomberg.com)
78 points by dataminer 7 months ago | 91 comments



> The company is considering becoming a public benefit corporation, tasked with turning a profit and also helping society.

What a joke.


Make everyone unemployed. Offer the unemployed a pittance for having their retinas scanned with "the orb". Claim that you are helping society.


We’re so interested in the public good that we can’t be hampered by the restrictive chains of legal obligations to the public! Don’t worry, we’ll just make rhetorical promises instead, it’s much more freeing.

After all, who cares about nonprofit law in a new era with super high stakes? It’s all about agility now that we’re irreversibly in the Intelligence Age, and if you don’t let us abandon our obligations, we won’t be able to successfully bring about the Intelligence Age!


Non-profits can make as much profit as they want; in fact, they're encouraged to. The part they left out was "tasked with turning a profit (for already extremely wealthy individuals)".


There is no moat in LLMs, I don't see this ending well. NVIDIA might buy them, but not for the price they want.


If Meta keeps giving the model away, I wonder how that impacts their valuation.


Giving it away is essentially beta testing and free marketing to drive adoption. Just like OpenAI (whose last ~open release was GPT-2), Meta can close things up if Llama becomes valuable. However, Meta already has a moat (user data), so I somewhat expect Llama to end up something like ReactJS, where it becomes the industry de facto standard rather than a product for sale.


The moat is in the sheer cost of training models.

At most you’ll have like 5 major players.


> At most you’ll have like 5 major players.

5 companies doing a repeated leapfrog game where consumers can flip without much effort seems like the opposite of a moat, unless and until they consolidate into a monopoly.

I'm starting to feel some diminishing returns on the new releases, too. The wow moments are still there but fundamentally we're reaching a point wherein even open models can come close enough on some tasks that we can eschew paid models for not-insignificant pieces here and there.


> 5 companies doing a repeated leapfrog game where consumers can flip without much effort seems like the opposite of a moat

One could say the same about search, browsers, image/video/text sharing, smartphones and yet, moats were built.


> One could say the same about search, browsers, image/video/text sharing, smartphones and yet, moats were built.

Most of these built moats through widespread adoption by being first or best.

And in almost every case adoption requires an investment by a consumer, who is unlikely to switch because there's a cost incurred. Switching phones, browsers, or social media platforms comes with real costs that we don't see with LLM vendors, where you can swap one for another with twenty minutes of work.


> Switching phones, browsers, or social media platforms comes with real costs that we don't see with LLM vendors

Unfortunately, these LLMs are marketed to enterprises (B2B), not regular consumers (who only beta test them at exorbitant cost).

In the world of enterprise, switching out to save costs takes many years or even decades, as the user of the product is far removed from the decision maker who buys it.


What about search engines?


Google paid for browser deals. The address bar is what customers couldn't easily switch away from.


We don’t know if they will stay exactly interchangeable forever.

Google has a deal for Reddit’s data, so its models will have that information. I can see this kind of thing being a moat as well.

And yeah obviously you’re going to feel diminishing returns.

Going from 0 to 1 always feels like a bigger step than from 1 to 2.


> Google has a deal for Reddit’s data, so its models will have that information. I can see this kind of thing being a moat as well.

I don't see this as a silver bullet but let's assume this is true and a differentiator. This kind of validates that building a robust model hits a data saturation wall at some point and the moat dries up.


Maybe Reddit specifically won’t make a difference, but think of it like this.

Imagine OpenAI makes a deal with GitHub, Stack Overflow, and O’Reilly gaining exclusive AI rights to their content.

Now there’s no reason for anyone in software to use someone else’s models.

Anthropic could make a music generation model with an exclusive license from Warner Music or whoever and own the music producer market.

This seems like a pretty straightforward plan assuming those rights could be obtained.


Fresh human data feeds at scale are always going to be valuable.

   - Meta has its own
   - Reddit is monetizing its
   - Twitter was bought for its
   - MS is trying to create one
   - Amazon has a funhouse mirror version (reviews)
Apple seems oddly without a dance partner, but I'm guessing they figure they can leverage pre-built / open models for the features they're targeting.


> Fresh human data feeds at scale are always going to be valuable.

I don't disagree with this but this is a losing battle assuming the perpetual cat-and-mouse game of web scraping continues to lean to the scrapers.

To beat it means raising the walls of the gardens, making "data deals," hoping users don't move to the next thing, and aggressively protecting data via third party legal action.

This all strikes me as a very vulnerable and complicated moat.


Apple has iCloud, at minimum, to use as training data. Maybe they will get other partnerships later.


They do, though using it as training data for a model cuts against their privacy-first branding.

I don't see a way Apple engineers its way out of that -- but they won't need to if open source models are good enough, or if capable models remain accessible (e.g. not first-party locked).

Ironically, it looks a lot like the Maps situation again...


Well, the idea of Apple being privacy-first is entirely in our heads. They are relatively more pro-privacy, but I wouldn't call them privacy-first; they're just like any other corporation. There are maybe a handful of such companies in the world, like Signal, Tuta or Proton.

Also, let's not forget how corporations invent doublespeak and twist meanings. For example, Apple may not resell private data but still use it internally. Or it may use it internally only via automated processing, and claim it is private because no other human ever saw it. Many other terminology manipulations can be invented when the stake is a few billion in cash.


They seem to have gone to a lot of actual technical, infrastructure-level trouble in their cloud AI approach for it to be mere lip service.

I expect, if they do find they need it, they'll pitch it as an opt-in paid feature.

Pay us for these extra capabilities, and know that we're training on your data.


Every single one of those is flooded with bots and spam.


That there are bots and spam doesn't matter: what matters is that the ratio of human content to noise is higher than in the alternatives.

Which it is.

The public web is going to turn into even more of a dumpster fire once GenAI is fully digested by spammers. (Which is to say, once a 14-year-old in a random country with third-world GDP per capita can use it at scale.)


SEO spammers will gain big from GenAI, because now the SEO spam is encoded into the model, and guess what the model will produce when a layperson asks “best 5 of X” or “top 10 of Y” questions? From my observation, laypeople do not understand the SEO/spam games; they search for something, take the top result, and treat it as gospel. And as the studies we see frequently show (a few were posted here on HN in the last few months), people do not question “the internet” (soon to be the AI/LLM).

Just my 2c.


Isn’t their training subsidized by Microsoft? Sounds like Microsoft has the moat.


Microsoft is extremely likely to get rugged:

“Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.”

https://openai.com/our-structure/


“If we discover the holy grail in the unicorn meadow, you can’t have it” Is a fine clause for MS. Hardly a rug pull.


Sorry but that quote just makes it seem more likely that Microsoft will be able to hold on to their current agreement for a fairly long time.


Sure, whatever.

The moat is still the cost.


I hear ya. But with great open source models like Llama 3.2 dropping near-daily, I wonder if/when we’ll reach a point where free models are good enough for 99% of tasks.


Llama is NOT open source. It’s freeware. We can’t reproduce it and its license has commercial restrictions.

That being said, most models are based on Llama, so it may seem like a lot of movement is happening, but it's not, really.

RWKV and DeepSeek are the only LLMs that are fully open that I'm aware of.

If Meta decided to stop releasing Llama, that's that.


If Llama achieves valuable strides, it will definitely be put behind a paywall. The play has always been the same: release something semi-polished until you gain mass adoption, then build a paywall in front of it once people are reliant on it and leaving is expensive. Look at the Twitter API, for example, or Reddit, or whatnot.


When there's like 5 essentially interchangeable products from your competitors, you don't have a moat.


We don’t know that they will be interchangeable forever.

Just because they share a near-identical interface does not mean they will always have identical capabilities.


The moat is the stolen copyrighted data used for training LLMs. For now you can probably repeat everything OpenLLM and others did, but the window of opportunity is closing fast. Some companies are making one-to-one deals with existing LLM providers regarding their data, some are closing their data off, and laws are being written, slowly but surely.


Can we stop calling it stolen? It wasn't stolen. Copyright infringement is never stealing and it's not even well established infringement is happening.


I once thought like that too: "we need a balanced, moderate approach", etc. But seeing blatant copyright violations and abuse by megacorps, I tend to skew toward calling it theft, purely due to the imbalance of power between individuals and megacorps. The problem we have is that when people hear "copyright infringement", they think of different aspects of it. Most people think about how individuals infringe on corporations, like pirating software or a movie, and that is indeed a morally grey zone.

I was talking purely about the reverse situation: when corporations infringe on the copyright of individuals. That is unquestionably an abuse of power, and in my opinion it is theft. This part at least.


As they make progress, they slowly remove the “Open” from “OpenAI”. The “o1” models give almost no details other than RL on chain of thought.

They haven't demonstrated a clear moat yet but I wouldn’t be surprised to see this restructuring occur and then they show the product that justifies the valuation.


Slowly? Their last open-source LLM (including both weights and source code, but not training data, I believe) was GPT-2, released in 2019!


Or the restructuring occurs to make them more palatable to a buyer (and incentivize Altman to sell).


Idk OpenAI sure makes it seem like they have some secret sauce that others have been unable to catch up to.


Is that why they've been barely ahead of, or losing to, Anthropic in benchmarks lately?


Last I checked, Anthropic has no image, video, or audio generation. No multimodal model. Does it have an answer to o1?


Anthropic has had multimodal models since at least Claude 3.


Where? I only see image input, nothing else. And nothing other than text available through the API.


Multimodal generally refers to _just_ two (or more) modes; omni-modal was invented by OpenAI to describe their approach. I'm sure Anthropic is working on something similar to gpt-4o, but I don't see how that invalidates my argument about Anthropic doing well on benchmarks, in particular because their model was released several months _before_ gpt-4o and still holds its own fairly well.


Personally I like Anthropic; they do one thing well. But there are a lot more applications for AI that OpenAI is offering solutions for; Anthropic, not so much.




This analysis from over a week ago may provide context as to why the shift is happening:

https://www.wheresyoured.at/subprimeai/


To me, it seems like they're choreographing an exit/acquisition or going public.


OpenAI valuations this year have ranged from $80bn to $150bn. Since it's funny money anyway, let's call it $100bn for easier math.

That’s a $7bn stake.


I'd like to see him go back to congress and explain his previous angelic story of not profiting by one penny from OpenAI.

Current OpenAI raise is rumored to be at $150B, so 7% is over $10B.


> I'd like to see him go back to congress and explain his previous angelic story of not profiting by one penny from OpenAI.

A businessman lied? Pikachus everywhere were shocked.


It's incredible that a company that has been bleeding senior staff like a stuck pig is capable of getting these kinds of valuations.


I think it's at least in part FOMO from the VCs (maybe a bit like the dot-com mania), as well as them not understanding the technology and therefore having to believe the breathless hype and cherry-picked demos from the likes of Sam Altman.


I can also value anything I want at $1T. It’s just a question of who agrees with me.


If a company can make ASI/AGI I think it will be worth a trillion dollars extremely easily. Probably more. I think OpenAI is probably the most likely to make it happen right now.


Truly human-level AGI, able (amongst other things) to replace many jobs where a physical presence is not required or helpful, will obviously be very valuable, but it does not appear that LLMs are on the path to that, certainly not anytime soon (and not without extending the architecture with new capabilities).

However, OpenAI don't even seem to have the goal of achieving this type of truly human level AGI - their own definition of AGI is along the lines of "able to perform most economically valuable tasks in a mostly autonomous fashion".


Karpathy himself believes that neural networks are perfectly plausible as a key component of AGI. He has said that they don't need to be superseded by something better; it's just that everything else around them (especially infrastructure) needs to improve. Since his is one of the most valuable opinions in the world on the subject, I tend to trust what he says.

source: https://youtu.be/hM_h0UA7upI?t=973

I think the consensus of people with the most valid opinions is that AGI is actually not that far off, definitely in our lifetimes, and likely in the next decade.

I'm personally not concerned if we recreate a human in a computer, in fact we might be better off not doing that. If we can replace nearly all manual labor, all software development, and all driving, then I would be really happy with the state of automation of "AGI" (societal ramifications aside).


Karpathy stands to gain enormous wealth and prestige if he's right, so he's obviously biased.


If Karpathy truly believed that LLMs could be extended into trillion-dollar AGI, and had a workable plan for how this might be achieved, then why is he not raising money to do it, or working at a company pursuing AGI? His own background is more around vision (both his PhD and Tesla) than SOTA transformers...

It's interesting to see what Noam Shazeer (largely responsible for the transformer design) did with it after leaving Google.. not pursuing AGI but instead entertainment chatbots (character.ai).

AGI may well be achieved in our lifetimes (next 50 years), but 10 seems short considering that it's already been 8 years since "attention is all you need", and so far that's all we've got! Maybe o1 is a baby step beyond transformers (or is it really just an agent?), but I think we need multiple very large steps (transformer-level innovation) to get closer to AGI.

If one wants to appeal to authority, then how about LeCun, privy to SOTA LLM research at Meta, yet confident this is entirely the wrong approach to achieving animal/human intelligence.


It's not incredible at all: the "valuations" are just numbers that associates of OpenAI feed to the media, which then get republished.

It truly is the ultimate triumph of the VC class.


There is some reality to them: they are extrapolations from the latest investment round. Say OpenAI had been valued at $100B, but some investors can be convinced to buy a 10% stake for $15B; the company is now "valued" at $150B (10% = $15B => 100% = $150B).
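
A minimal sketch of that extrapolation in Python (hypothetical numbers only, matching the example above, not actual OpenAI figures):

    # Implied post-money valuation extrapolated from a minority stake.
    def implied_valuation(investment, stake_fraction):
        # Whole-company "value" implied by selling stake_fraction for investment
        return investment / stake_fraction

    # A 10% stake bought for $15B implies a $150B valuation.
    print(implied_valuation(15e9, 0.10))  # 150000000000.0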


It's a bit like the tail wagging the pig though, since reality is probably more like a 1% (tail) share being bought for $1.5B, and that determining the "value" of the remaining 99% (the whole pig).


Microsoft owns 50%, other investors presumably own 20-30%(?), and there's a 10% employee pool; that means all the other OpenAI founders etc. basically get nothing compared to Altman...


Well, tbf, he’s one of the only ones left. There’s so much ethical/political/cultural attrition at these companies that even the company founded by attrition (Anthropic) looks like Theseus’ ghost ship these days. And Altman is, somehow, seen as an indispensable mascot for the general public and/or politicians worldwide.

After all, what are the engineers gonna do? Quit? They’d be replaced in a month, I’m sure every open position OAI posts gets thousands of qualified applicants from people dreaming of FIRE and/or terrified of automation. And obviously they’re way too proud to consider unionization.


I guess we’ll see soon enough who gets the most credit for OpenAI’s success and who’s disposable/replaceable. Sutskever in particular has quite a bit of mythology about him. Let’s see if he can fulfill all that.


> And Altman is, somehow, seen as an indispensable mascot for the general public and/or politicians worldwide.

It's not rocket science.

He figures out what people want to hear, tells them that, then does whatever he thinks is best regardless of what he said.

When put that way, it does sound a bit sociopathic though...


His sincerity and seriousness are so over the top, it comes off looking like an act, even if it is not.


Sincerity? I don't know. He seems to really believe in AGI and the singularity and all that, but everything else seems to be lie after lie. This article [0] in New York Magazine paints him as an incredibly insincere manipulator, which lines up with accusations made by Helen Toner and others. Things he repeatedly says (not caring about money, caring a lot about safety) are often at odds with his actions.

https://nymag.com/intelligencer/article/sam-altman-artificia...


“He’s smart, like for a flyover-state community college,” said a Bay Area VC. “Do you watch Succession? You could make a Tom analogy.”

Wow.


Yeah, that's what you get when you pack the board with your sycophants.


I mean Musk wouldn't do it for that, why would Altman?


This is ridiculous. It's hard to see the source of the leverage.


I still have no idea what the "open" in OpenAI stands for, but at least we now know that it rhymes with the "non" in "nonprofit".


I lol


Where are those jobs that AI was supposed to take over? Especially programming?


At IBM[1] and eventually to all the businesses that buy from IBM

[1]https://news.ycombinator.com/item?id=41646967


Have you seen the software development job market? It's absolutely awful, the worst it's been since the internet became mainstream.


I'm confused why people are so outraged on here. Usually everyone cheers on the CEO who becomes a billionaire by leading a successful startup.

If the complaints here are because you invested in OpenAI and feel deceived and betrayed, that's one thing. But for everyone here who is just an observer, and didn't contribute to or trade with them, what are your grounds for being upset?


Why do I need to be personally invested to think this is a deception? The company was founded as a non-profit and named "Open" AI. It's neither non-profit nor open, and is relentlessly pursuing AGI without regard for regulations or guardrails, which is a pretty clear violation of its stated goal to build "safe and beneficial" AGI.

We're talking about a CEO and a company actively trying to usher in a new age of humanity while demonstrating a lack of trustworthiness. I'm supposed to cheer this on?


I think Sam is doing dumb stuff right now, but can we stop with the hyperbole? OpenAI and Sam are very clearly pushing for regulation and working with lawmakers in tons of countries. There are definitely open aspects of OpenAI, but this is such a tired argument I can't be bothered to repeat it for the fifth time.

I agree this move was really the nail in the coffin of trust, though. To go from claiming zero financial interest in OpenAI as a reason we should trust him, to (from the sounds of it) being about to become nearly ten billion dollars richer from OpenAI, just reeks of bullshit.


Speaking for myself, I'm worried that this adventure might lead to human extinction (among other possible negative outcomes), and one of the points of making it a non-profit was to help remove some of the incentives to charge recklessly forward. To that extent, this seems like a step in the wrong direction.


I agree with most of your opinion, but

> I'm worried that this adventure might lead to human extinction

Is buying into Sam’s marketing and the dreams being sold to MBAs. Recall that crypto in the last decade promised to make the current global financial system and banks go extinct… Corona lockdowns were expected to be the new norm… IBM Watson was going to solve cancer… we were supposed to go nuclear+renewable… the Internet was going to make everyone own their own little business… Uber was going to make the gig economy lucrative… Facebook was going to cure global loneliness… Theranos was… welp. Marketers will beat their drums, money will be made, innovation will go on.

At this moment, at the risk of getting political, OpenAI has far less potential to make the human species extinct than the looming wars in Europe and the Middle East.


My opinion on this question wasn't formed based on anything that Sam or OpenAI has said on the topic, so I don't think I've been persuaded by their marketing.


Totally fair POV. I get the sense, every time this topic comes up, that people feel OpenAI somehow owes it to them to remain a non-profit.


Because either Sam is a future world-class tyrant or a future world-class liar, and both alternatives are appalling. We can dislike Elon all we want, but at least he has led the creation of multiple useful, progressive products from which individual humans will benefit. Sam promises to either screw all of us, or fail while trying to. It's kinda less appealing to regular non-billionaires.


Because if OpenAI's technology is as powerful and world-shaking as Altman says, it would be nice for it to be under the control of a nominally pro-social and humanist non-profit organization with safety and equity as paramount goals rather than just another cutthroat for-profit Silicon Valley corporate juggernaut.



