SamA is in a hurry because he's set to lose the race. We're at peak valuation and he needs to convert something now.
If the entrenched giants (Google, Microsoft and Apple) catch up - and Google 100% has, if not surpassed - they have a thousand levers to pull and OpenAI is done for. Microsoft has realized this, which is why they're breaking up with them - Google and Anthropic have shown they don't need OpenAI. Galaxy phones will get a Gemini button, Chrome will get it built into the browser. MS can either develop their own thing, use open-source models, or just ask every frontier model provider (and there are already 3-4 as we speak) how cheaply they're willing to deliver, then chuck it right into the OS and Office as a first-class feature - which half the white-collar world spends their entire day staring at. Apple devices too will get an AI button (or gesture, given it's Apple) and just like MS they'll do it in-house or have the providers bid against each other.
The only way OpenAI David was ever going to beat the Goliaths GMA in the long run was if it were near-impossible to catch up to them, à la TSMC/ASML. But they did catch up.
It's doubtful there's even a race anymore. The last significant AI advancement in the consumer LLM space was fluent human language synthesis around 2020, followed by the assistant/chat interface. Since then, everything has been incremental — larger models, new ways to prompt them, cheaper ways to run them, more human feedback, and gaming evaluations.
The wisest move in the chatbot business might be to wait and see if anyone discovers anything profitable before spending more effort and wasting more money on chat R&D, which includes most agentic stuff. Reliable assistants or something along those lines might be the next big breakthrough (if you ask certain futurologists), but the technology we have seems unsuitable for any provable reliability.
ML can be applied in a thousand ways other than LLMs, and many will positively impact our lives and create their own markets. But OpenAI is not in that business. I think the writing is on the wall, and Sama's vocal fry, "AGI is close," and humanity verification crypto coins are smoke and mirrors.
Saying LLMs have only incrementally improved is like saying my 13-year-old has only incrementally improved over the last 5 years. Sure, it's been a set of continuous improvements, but that has taken it from a toy to genuinely insanely useful.
Personally, deep research and o3 have been transformative, taking LLMs from something I have never used to something that I am using daily.
Even if the progress ends up plateauing (which I do not believe will happen in the near term), behaviors are changing; OpenAI is capturing users and taking them from companies like Google. Google may be able to fight back and win - Gemini 2.5 Pro is great - but any company sitting this out risks being unable to capture users back from OpenAI at a later date.
> any company sitting this out risks being unable to capture users back from OpenAI at a later date.
Why? I paid for Claude for a while, but with Deepseek, Gemini and the free hits on Mistral, ChatGPT, Claude and Perplexity I'm not sure why I would now. This is anecdotal of course, but I'm very rarely unique in my behaviour. I think the best the subscription companies can hope for is that their subscribers don't realize that Deepseek and Gemini can basically do all you need for free.
I doubt it. Google is shoving Gemini in everyone’s face through search, and Meta AI is embedded in every Meta product. Heck, Instagram created a bot marketplace.
They might not “know” the brand as well as ChatGPT, but the average consumer has definitely been exposed to those at the very least.
DeepSeek also made a lot of noise, to the point that, anecdotally, I’ve seen a lot of people outside of tech using it.
I can't square this: how can OpenAI capture users and presumably retain them, when the incumbents - who have been capturing users for multiple decades - supposedly can't retain theirs?
If every major player has an AI option, I'm just not understanding how, because OpenAI moved first or got big first, the hugely successful companies that did the same thing for multiple decades don't have the same advantage.
Who knows how this will play out, but user behavior is always somewhat sticky and OpenAI now has 400M+ weekly active users. Currently, I'm not sure there is much of a moat, as many would jump if, say, Google released a model that is 10x better. However, there are myriad ways that OpenAI could slowly try to make their userbase even stickier:
1. OpenAI is apparently in the process of building a social network.
2. OpenAI is apparently working with Jonny Ive on some sort of hardware.
3. OpenAI is increasingly working on "memory" as a LLM feature. Users may be less likely to switch as an LLM increasingly feels like a person that knows you, understands you, has a history with you, etc.
4. Google and MSFT are leveraging their existing strengths. Perhaps you will stick with Gemini given deep integration with Android, Google Drive, Sheets, Docs, etc.
5. LLMs, as depressing as this sounds, will increasingly be used for romantic/friend purposes. These users may not want to switch, as it would be like breaking up and finding a new partner.
6. Your chat history, if it can't be easily exported/imported, may be a sticky feature, especially if it can be improved (e.g. easily search and cross-reference chats, like a supercharged interconnecting note app with brains).
I could list 100 more of these. Perhaps none of the above will happen, but again, they have 400M weekly users and they will find ways to keep them. It's a lot easier to keep users that have a habit of showing up than to get them in the first place. There's a reason that Google is treating this like an emergency; they are at serious risk of having their search cash cow permanently disrupted if they don't act fast to win.
Very thought-provoking reply. #3 sounds the most sticky to me, in the product sense that you'd build "your own LLM/agent" and plug it into other services. I heard this on a product podcast [1]; think of it like Okta SSO integration: access controls for your personal/sensitive LLM stuff vs all other services trying to get you to use their LLM.
#5 stands out as well as a substantial barrier.
The rest to me are sticky, but no more uniquely sticky than any other service that retains data. Like the switching cost of email or a browser. It does stick, but it's not insurmountable, and once the switch is made, it's like: why did I wait so long? (I'm a Safari user!)
6 (can’t export/import chat history) is already a wrap, since every user is prohibited from using ChatGPT chat logs to “develop models that compete with OpenAI.” If you export your chats and give them to Gemini or Claude, or post them on X and Grok reads them, then you just violated the OpenAI terms - that’s grounds for a permaban or a lawsuit for breach of contract (lol) … maybe your companies accept this risk but I’m in malicious compliance mode
Google is alright, but they have a similar stupid noncompete vendor lock-in rule, and no way to opt out of training, so there’s no real reason to trust Google. Yeah, they could ship tool use in reasoning to catch up to o3, but it’ll just be catching up and not passing unless they fix the stupid legal terms.
Claude IDK how to trust, they train on feedback and everything is feedback, and they have the noncompete rule written even more broadly, dumb to use that.
Grok has a noncompete rule but also has a way to opt out of training, so it’s on the same tier as ClosedAI. I use it sometimes for jokey toy image generation crap, but there’s no way to use it for anything serious since it has a copy-pasted ClosedAI prohibition.
Mistral needs better models and simpler legalese; it’s so complicated it’s impossible to know which of the million legal contracts applies.
IMHO Meta is the only player, but they shot themselves in the foot by making Llama 4 too big for the local Llama community to even use. Super dumb - it killed their most valuable thing, which was the community.
That means the best models we can use for work without needing to worry about a lawsuit are Qwen and DeepSeek distills; no American AI is even in the same ballpark, and Gemma 3 is the refusal king if you even hint at something controversial. Basically, America is getting actively stomped by China in AI right now, because their stuff is open and interoperable and ours is closed and saddled with legal noncompete bullshit. What can we actually build that doesn’t compete with these companies? Nothing.
No, it's still just a toy. Until they can make the models actually consistently good at things, they aren't going to be useful. Right now they still BS you far too much to trust them, and because you have to double-check their work every time, they are worse than no tool at all.
It's been five years. There is no AI killer app. Agentic coding is still hot garbage. Normal people don't want to use AI tools despite them being shoved into every SaaS under the sun. LLMs are most famous among non-tech users for telling you to put glue on pizza. No one has been able to scale their chatbots into something profitable, and no one can put a date on when they'll be profitable.
Why are you still pretending anything is going to come out of this?
To extend your illustration, 5 years ago no one could train an LLM with the capabilities of a 13 year old human; now many companies can both train LLMs and integrate them into products.
> taken it from a toy to genuinely insanely useful.
Just to get things right. The big AI LLM hype started end of 2022 with the launch of ChatGPT, DALL-E 2, ....
Most people in society connect AI directly to ChatGPT and hence OpenAI. And there has been a lot of progress in image generation, video generation, ...
So I think your timeline and views are slightly off.
> Just to get things right. The big AI LLM hype started end of 2022 with the launch of ChatGPT, DALL-E 2, ....
GPT-2 was released in 2019, GPT-3 in 2020. I'd say 2020 is significant because that's when people seriously considered the Turing test passed reliably for the first time. But for the sake of this argument, it hardly matters what date years back we choose. There's been enough time since then to see the plateau.
> Most people in society connect AI directly to ChatGPT and hence OpenAI.
I'd double-check that assumption. Many people I've spoken to take a moment to remember that "AI" stands for artificial intelligence. Outside of tongue-in-cheek jokes, OpenAI has about 50% market share in LLMs, but you can't forget that Samsung makes AI washing machines, let alone all the purely fraudulent uses of the "AI" label.
> And there has been a lot of progress in image generation, video generation, ...
These are entirely different architectures from LLM/chat though. But you're right that OpenAI does that, too. When I said that they don't stray much from chat, I was thinking more about AlexNet and the broad applications of ML in general. But you're right, OpenAI also did/does diffusion, GANs, transformer vision.
This doesn't change my views much on chat being "not seeing the forest for the trees" though. In the big picture, I think there aren't many hockey sticks/exponentials left in LLMs to discover. That is not true about other AI/ML.
>In the big picture, I think there aren't many hockey sticks/exponentials left in LLMs to discover. That is not true about other AI/ML.
We do appear to be hitting a cap on the current generation of auto-regressive LLMs, but this isn't a surprise to anyone on the frontier. The leaked conversations between Ilya, Sam and Elon from the early OpenAI days acknowledge they didn't have a clue as to architecture, only that scale was the key to making experiments even possible. No one expected this generation of LLMs to make it nearly this far. There's a general feeling of "quiet before the storm" in the industry, in anticipation of an architecture/training breakthrough, with a focus on more agentic, RL-centric training methods. But it's going to take a while for anyone to prove out an architecture sufficiently, train it at scale to be competitive with SOTA LLMs, and perform enough post-training, validation and red-teaming to be comfortable releasing it to the public.
Current LLMs are years and hundreds of millions of dollars of training in. That's a very high bar for a new architecture, even if it significantly improves on LLMs.
ChatGPT was not released to the general public until November 2022, and the mobile apps were not released until May 2023. For most of the world, LLMs did not exist before those dates.
This site and many others were littered with OpenAI stories calling it the next Bell Labs or Xerox PARC and other such nonsense going back to 2016.
And GPT stories kicked into high gear all over the web and TV in 2019 in the lead-up to GPT-2 when OpenAI was telling the world it was too dangerous to release.
Certainly by 2021 and early 2022, LLM AI was being reported on all over the place.
>For most of the world, LLMs did not exist before those dates.
Just because people don't use something doesn't mean they don't know about it. Plenty of people were hearing about the existential threat of (LLM) AI long before ChatGPT. Fox News and CNN had stories on GPT-2 years before ChatGPT was even a thing. Exposure doesn't get much more mainstream than that.
As another proxy, compare Nvidia revenues - $26.91bln in 2022, $26.97bln in 2023, $60bln 2024, $130bln 2025. I think it's clear the hype didn't start until 2023.
You're welcome to point out articles and stories before this time period "hyping" LLMs, but what I remember is that before ChatGPT there was very little conversation around LLMs.
If you're in this space and follow it closely, it can be difficult to notice the scale. It just feels like the hype was always big. 15 years ago it was all big data and sentiment analysis and NLP, machine translation buzz. In 2016 Google Translate switched to neural nets (LSTM) which was relatively big news. The king+woman-man=queen stuff with word2vec. Transformer in 2017. BERT and ELMo. GPT-2 was a meme in techie culture, there was even a joke subreddit where GPT-2 models were posting comments. GPT-3 was also big news in the techie circles. But it was only after ChatGPT that the average person on the street would know about it.
Image generation was also a continuous slope of hype all the way from the original GAN, then thispersondoesnotexist, the sketch-to-photo toys by Nvidia and others, the avocado sofa of DALL-E. Then DALL-E 2, etc.
The hype can continue to grow beyond our limit of perception. For people who follow such news their hype sensor can be maxed out earlier, and they don't see how ridiculously broadly it has spread in society now, because they didn't notice how niche it was before, even though it seemed to be "everywhere".
There's a canyon of a difference between excitement and buzz vs. hype. There was buzz in 2022, there was hype in 2023. No one was spending billions in this space until a public demarcation point that, not coincidentally, happened right after ChatGPT.
I'd say Chain-of-Thought has massively improved LLM output. Is that "incremental"? Why is that more incremental than the move from GPT-2 to GPT-3? Sure, you can say that this is when LLMs first passed some sort of Turing test, but fundamentally there was no technological difference from GPT-3 to GPT-4. In fact, I would say the quality of GPT-4 unlocked thousands (millions?) more use-cases that were not very viable with the quality delivered by GPT-3. I don't see any reason why more use-cases won't keep being unlocked by LLM improvements.
Yes. But they have also improved a lot. Incremental just means that the function goes up without breaking points. We haven't seen anything revolutionary in the last 3 years, just evolutionary. But the models do provide 2 or 3 times more value, so their pace of advancement is not slow.
The better you know a field the more it looks incremental. In other words, incrementalness is more a function of how much attention you pay or how deeply you research it. Relativity and quantum mechanics were also incremental. Copernicus and Kepler were incremental. Deep learning itself was incremental. Based on almost identical networks from the 90s (CNN), which were using methods from the 80s (backprop) on architectures from the 70s (neocognitron) using activation functions from the 60s and the basic neuron model from the 40s (McCulloch and Pitts), which was just a mathematization of observations in biology via microscopy, integrated with the mathematical logic and electrical logic gates developed around the same time (Shannon), so it's just logic as formalized by Gödel and others and it goes back to Hilbert's program, which can be extrapolated from Leibniz etc. etc. It's not hard to say that "it's really just previous thing X plus previous thing Y, nothing new under the sun" to literally anything.
"It just suddenly appeared out of nowhere" is just a perception based on missing info. Many average people think ChatGPT was a sudden innovation specifically by OpenAI seemingly out of nowhere. Because they didn't follow it.
Well I think you’re correct that they know the jig is up, but I would say they know the AI bubble is about to burst so they want to cash out before that happens.
There is little to no money to be made in GAI, it will never turn into AGI, and people like Altman know this, so now they’re looking for a greater fool before it is too late.
AI companies are already automating huge swaths of document analysis, customer service. Doctors are straight up using ChatGPT to diagnose patients. I know it’s fun to imagine AI is some big scam like crypto, but you’d have to be ignoring a lot of genuine non hype economic movement at this point to assume GAI isn’t making any money.
Why is the forum of an incubator whose portfolio is now something like 80% AI so routinely bearish on AI? Is it a fear of irrelevance?
> AI companies are already automating huge swaths of document analysis, customer service. Doctors are straight up using ChatGPT to diagnose patients
I don't think there is serious argument that LLMs won't generate tremendous value. The question is who will capture it. PCs generated massive value. But other than a handful of manufacturers and designers (namely, Apple, HP, Lenovo, Dell and ASUS), most PC builders went bankrupt. And out of the value generated by PCs in the world, the vast majority was captured by other businesses and consumers.
Doctors were using Google to diagnose patients before. The thing is, it's still the doctor delivering the diagnosis, the doctor writing the prescription, and the doctor billing insurance. Unless and until patients or hospitals are willing and legally able to use ChatGPT as a replacement for a doctor (unwise), ChatGPT is not about to eat any doctor's lunch.
Not OP, but I think this makes the point, not argues against it. Something has come along that can supplant Google for a wide range of things. And it comes without ads (for now). It’s an opportunity to try a different business model, and if they succeed at that then it’s off to the races indeed.
When the Wright brothers made their plane, they didn't expect that today there would be thousands of planes flying at a time.
When the Internet was developed, they didn't imagine the World Wide Web.
When cars started to get popular, people still thought there would be those who would stick with horses.
I think you're right on the AI - we're just on the cusp of it, and it'll be a hundred times bigger than we can imagine.
Back when oil was discovered and started to be used, it was about equal to the work of 500 laborers, now automated. One AI computer with some video cards is now worth some number of knowledge workers - ones that never stop working as long as the electricity keeps flowing.
They did actually imagine the World Wide Web at the time of developing the first computer networks. This is one of the most obvious outcomes of a system of networked devices.
Even five years into this "AI revolution," the boosters haven't been able to paint a coherent picture of what AI could reasonably deliver – and they've delivered even less.
Lol, they are not using ChatGPT for the full diagnosis. It's used for steps like double-checking knowledge - drug interactions and such. If you're gonna speak on something like this in a vague manner, I'd suggest you google this stuff first. I can tell you for certain that that part in particular is a highly inaccurate statement.
The article you posted describes a patient using ChatGPT to get a second opinion from what their doctor told them, not the doctor themself using ChatGPT.
The article could just as easily be about “Delayed diagnosis of a transient ischemic attack caused by talking to some rando on Reddit” and it would be just as (non) newsworthy.
People aren't saying that AI as a tool is going to go bust. Instead, people are saying that this practice of spending 100s of millions, or even billions of dollars on training massive models is going bust.
AI isn't going to be the world-changing AGI that was sold to the public. Instead, it will simply be another B2B SaaS product. Useful, for sure. Even profitable for startups.
They made $4 billion last year, not really "little to no money". I agree it's not clear they can justify their valuation but it's certainly not a bubble.
But didn't they spend $9 billion? If I have a machine that magically turns $9 billion of investor money into $4 billion in revenue, I need to have a pretty awesome story for how in the future I am going to be making enormous piles of money to pay back that investment. If it looks like frontier models are going to be a commodity and it is not going to be winner-take-all... that's a lot harder story to tell.
There is a pretty significant difference between “buying $9 for $4” and selling a service that costs $9 per year to build and run for $4 per year. Especially when some people think that service could be an absolute game changer for the species.
It’s ok to not buy into the vision or think it’s impossible. But it’s a shallow dismissal to make the unnuanced comparison, especially when we’re talking about a brand new technology - who knows what the cost optimization levers are. Who knows what the market will bear after a few more revs.
When the iPhone first came out, it was too expensive, didn’t do enough, and many people thought it was a waste of Apple’s time when they should have been making music players.
It's a commodity technology and VCs are investing as if this were still a winner-takes-all play. It's obviously not, if there were any doubt about that, Deepseek's R1 release should have made it obvious.
> But it’s a shallow dismissal to make the unnuanced comparison, especially when we’re talking about a brand new technology - who knows what the cost optimization levers are. Who knows what the market will bear after a few more revs.
You're acting as if OpenAI is still the only player in this space. OpenAI has plenty of competitors who can deliver similar models for cheaper. Gemini 2.5 is an excellent and affordable model, and Google has a substantially better capacity to scale because of a multi-year investment in its TPUs.
Whatever first mover advantage OpenAI had has been quickly eliminated, they've lost a lot of their talent, and the chief hypothesis they used to attract the capital they've raised so far is utterly wrong. VCs would be mad to be continuing to pump money into OpenAI just to extend their runway -- at 5 Bln losses per year they need to actually consider cost, especially when their frontier releases are only marginal improvements over competitors.
... this is a bubble despite the promise of the technology and anyone paying attention can see it. For all of the dumb money employed in this space to make it out alive, we'll have to at least see a fairly strong form of AGI developed, and by that point the tech will be threatening the general economic stability of the US consumer.
Every new tech has companies start and fail, as the consumer market changes and things are tried and fail. There’s no way to predict ahead of time what will work, what won’t - and so a thousand ships are launched with only a few reaching shore.
Is that a bubble? I suppose it is; it’s also probably the right strategy.
> When the iPhone first came out, it was too expensive, didn’t do enough, and many people thought it was a waste of Apple’s time when they should have been making music players.
This comparison is always used when people are trying to hype something. For every "iPhone" there are thousands of failures
> I started a business that would give people back $9 if they gave me $4
I feel like people overuse this criticism. That's not the only way that companies with a lot of revenue lose money. And this isn't at all what OpenAI is doing, at least from their customers' perspective. It's not like customers are subscribing to ChatGPT simply because it gives them something they were going to buy anyway for cheaper.
Facebook had immense network effects working for it back then.
What network effect does OpenAI have? Far as I can tell, moving from OpenAI to Gemini or something else is easy. It’s not sticky at all. There’s no “my friends are primarily using OpenAI so I am too” or anything like that.
OpenAI (or, more specifically, ChatGPT) is Coca-Cola, not Facebook.
They have the brand recognition and consumer goodwill no other brand in AI has, incredibly so with school students, who will soon go into the professional world and bring that goodwill with them.
I think better models are enough to dethrone OpenAI in API, B2C and internal enterprise use cases, but OpenAI has consumer mindshare, and they're going to be the king of chatbots forever. Unless somebody else figures out something which is better by orders of magnitude and that OpenAI can't copy quickly, it's going to stay that way.
Apple had the opportunity to do something really great here. With Siri's deep device integration on one hand and Apple's willingness to force 3rd-party devs to do the right thing for users on the other, they could have had a compelling product that nobody else could copy, but it seems like they're not willing to go that route, mostly for privacy, antitrust and internal competency reasons, in that order. Google is on the right track and might get something similar (although not as polished as typical Apple) done, but Android's mindshare among tech-savvy consumers isn't great enough for it to get traction.
> Unless somebody else figures out something which is better by orders of magnitude and that OpenAI can't copy quickly, it's going to stay that way.
This will happen, and it won't be another model which Open AI can't copy, it'll be products.
I don't doubt OpenAI can create the better models, but they're no moat if they're not in better products. Right now the main product is chat, which is easy enough to build, but as integrations get deeper, how can OpenAI actually ensure it keeps traffic?
Case in point, Siri. Apple allows you to use ChatGPT with Siri right now. If Apple chooses so, they could easily remove that setting. On most devices ChatGPT lives within the confines of an app or the browser. A phone with deep AI integration is arguably a fantastic product — much better than having to open an app and chat with a model. How quickly could OpenAI build a phone that's as good as those of the big phone companies today?
To draw a parallel— Google Assistant has long been better than Siri, but to use Siri you don't have to install an app. I've used both Android and iOS, and every time I'm on iPhone I switch back to Siri because in spite of being a worse assistant, it's overall a better product. It integrates well with the rest of the phone, because Apple has chosen to not allow any other voice assistant integrate deeply with the rest of the phone.
Does Google not have brand recognition and consumer goodwill? We might read all sorts of deep opinions on Google on HN, but I think Search and Chrome market share speak for themselves. For the average consumer, I'm skeptical that OpenAI carries much weight.
> For the average consumer, I'm skeptical that OpenAI carries much weight.
My friend teaches at a Catholic girls’ high school and based on what he tells me, everyone knows about ChatGPT, both staff and students. He just had to fail an entire class on an assignment because they all used it to write a book summary (which many of them royally screwed up because there’s another book with a nearly identical title).
It’s all anecdotal and whatnot but I don’t think many of them even know about Claude or Gemini, while ChatGPT has broad adoption within education. (I’m far less clear on how much mindshare it has within the general population though)
> who will soon go into the professional world and bring that goodwill with them.
...Until their employer forces them to use Microsoft Copilot, or Google Gemini, or whatever, because that's what they pay for and what integrates into their enterprise stack. And the new employee shrugs and accepts it.
> Just like people are forced to use web Office and Microsoft Teams, and start prefering them over Google Docs and Slack? I don't think so
...yes. Office is the market leader. Slack has between a fifth and a fourth of the market. Coca-Cola's products have like 70% market share in the American carbonated soft-drink market [1].
Coca-Cola does insane amounts of advertising to maintain its position in the mind of the consumer. I don't think it is as sticky as you say it is for OpenAI.
Yep, I mostly interact with these AIs through Cursor. When I want to ask it a question, there's a little dropdown box and I can select openai/anthropic/deepseek whatever model. It's as easy as that to switch.
Yeah but I remember when search first started getting integrated with the browser and the "switch search engine" thing was significantly more prominent. Then Google became the default and nobody ever switched it and the rest is history.
So the interesting question is: How did that happen? Why wasn't Google search an easily swapped commodity? Or if it was, how did they win and defend their default status? Why didn't the existing juggernauts at the time (Microsoft) beat them at this game?
I have my own answers for these, and I'm sure all the smart people figuring out strategy at OpenAI have thought about similar things.
It's not clear if OpenAI will be able to overcome this commodification issue (personally, I think they won't), but I don't think it's impossible, and there is prior art for at least some of the pages in this playbook.
Yes, I think people severely underrate the data flywheel effects that distribution gives an ML-based product, which is what Google was and ChatGPT is. It is also an extremely capital-intensive industry to be in, so even if LLMs are commoditized, it will be to the benefit of a few players, and barring a sustained lead by any one company over the others, I suspect the first mover will be very difficult to unseat.
Google is doing well for the moment, but OpenAI just closed a $40 billion round. Neither will be able to rest for a while.
Yeah, a very interesting metric to know would be how many tokens of prompt data (that is allowed to be used for training) the different products are seeing per day.
> So the interesting question is: How did that happen? Why wasn't Google search an easily swapped commodity? Or if it was, how did they win and defend their default status? Why didn't the existing juggernauts at the time (Microsoft) beat them at this game?
Maybe the big amounts of money they've given to Apple, their direct competitor in the mobile space. Also the good amount of money given to Firefox, their direct competitor in the browser space, alongside Safari from Apple.
Most people don't care about the search engine. The default is what they will use unless said default is bad.
I don't think my comment implied that the answers to these questions aren't knowable! And indeed, I agree that the deals to pay for default status in different channels are a big part of that answer.
So then apply that to Open AI. What are the distribution channels? Should they be paying Cursor to make them the default model? Or who else? Would that work? If not, why not? What's different?
My intuition is that this wouldn't work for them. I think if this "pay to be default" strategy works for someone, it will be one of their deeper pocketed rivals.
But I also don't think this was the only reason Google won search. In my memory, those deals to pay to be the default came fairly long after they had successfully built the brand image as the best search engine. That's how they had the cash to afford to pay for this.
A couple years ago, I thought it seemed likely that Open AI would win the market in that way, by being known as the clear best model. But that seems pretty unclear now! There are a few different models that are pretty similarly capable at this point.
Essentially, I think the reason Google was able to win search whereas the prospects look less obvious for Open AI is that they just have stronger competition!
To me, it just highlights the extent to which the big players at the time of Google's rise - Microsoft, Yahoo, ... Oracle maybe? - really dropped the ball on putting up strong competition. (Or conversely, Google was just further ahead of its time.)
From talking to people, the average user relies on memories and chat history, which is not easy to migrate. I imagine that's the part of the strategy to keep people from hopping model providers.
No one has a deep emotional connection with OpenAI that would impede switching.
At best they have a bit of cheap tribalism that might prevent some incurious people, who don't care much about using the best tools, from noticing that they aren't.
IMHO "ChatGPT the default chatbot" is a meaningful but unstable first-mover advantage. The way things are apparently headed, it seems less like Google+ chasing FB, more like Chrome eating IE + NN's lunch.
OpenAI is a relatively unknown company outside of the tech bubble. I told my own mom to install Gemini on her phone because she's heard of Google and is more likely going to trust Google with whatever info she dumps into a chat. I can’t think of a reason she would be compelled to use ChatGPT instead.
Consumer brand companies such as Coca Cola and Pepsi spend millions on brand awareness advertising just to be the “default” in everyone’s heads. When there’s not much consequence choosing one option over another, the one you’ve heard of is all that matters
I know a single person who uses ChatGPT daily, and only because their company has an enterprise subscription.
My impression is that Claude is a lot more popular – and it’s the one I use myself, though as someone else said the vast majority of people, even in software engineering, don’t use AI often at all.
> OpenAI has been on a winning streak that makes ChatGPT the default chatbot for most of the planet
OpenAI has like 10 to 20% market share [1][2]. They're also an American company whose CEO got on stage with an increasingly-hated world leader. There is no universe in which they keep equal access to the world's largest economies.
Not sure if Google+ is a good analogy, it reminds me more of the Netscape vs IE fight. Netscape sprinted like it was going to dominate the early internet era and it worked until Microsoft bundled IE with Windows for free.
LLMs themselves aren't the moat, product integration is. Google, Apple and Microsoft already have the huge user bases and platforms with a big surface area covering a good chunk of our daily life, that's why I think they're better positioned if models become a commodity. OpenAI has the lead now, but distribution is way more powerful in the long run.
That's not at all the same thing: social media has network effects that keep people locked in because their friends are there. Meanwhile, most of the people I know using LLMs cancel and resubscribe to ChatGPT, Claude and Gemini constantly based on whatever has the most buzz that month. There's no lock-in whatsoever in this market, which means they compete on quality, and the general consensus is that Gemini 2.5 is currently winning that war. Of course that won't be true forever, but the point is that OpenAI isn't running away with it anymore.
And nobody's saying OpenAI will go bankrupt, they'll certainly continue to be a huge player in this space. But their astronomical valuation was based on the initial impression that they were the only game in town, and it will come down now that that's no longer true. Hence why Altman wants to cash out ASAP.
The comparison of Chrome and IE is much more apt, IMO, because the deciding factor, as others mentioned, for social media is network effects, or next-gen dopamine algorithms (TikTok). And that's unique to them.
For example, I'd never suggest that e.g. MS could take on TikTok, despite all the levers they can pull, and being worth magnitudes more. No chance.
Google+ absolutely would have won, and it was clear to me that somebody at Google decided they didn't want to be in the business of social networking. It was killed deliberately, it didn't just peter out.
Even Alibaba is releasing some amazing models these days. Qwen 3 is pretty remarkable, especially considering the variety of hardware the variants of it can run on.
On the other hand... if you had asked 100 people, 5-7 years ago, which of the following they used:
Slack? Zoom? Teams?
I'm sure you'd get a somewhat uniform distribution.
Ask the same today, and I'd bet most will say Teams. Why Teams? Because it comes with Office / Windows, so that's what most people will use.
Same logic goes for the AI / language models...which one are people going to use? The ones that are provided as "batteries included" in whatever software or platform they use the most. And for the vast majority of regular people / workers, it is going to be something by microsoft / google / whatever.
About 95% of people know the Coca Cola brand, about 70% of soda drinkers in the US drink one of its sodas, and about 40% of all people in the US drink it.
Agreed on Google dominance. Gemini models from this year are significantly more helpful than anything from OAI.. and they're being handed out for free to anyone with a Google account.
> SamA is in a hurry because he's set to lose the race.
OpenAI trained GPT-4.1 and 4.5—both originally intended to be GPT-5 but they were considered disappointments, which is why they were named differently.
Did they really believe that scaling the number of parameters would continue indefinitely without diminishing returns? Not only is there no moat, but there's also no reasonable path forward with this architecture for an actual breakthrough.
Makes for a good underdog story! But OpenAI is dominating and will continue to do so. They have the je ne sais quoi. It’s therefore laborious to speak to it, but it manifests in self-reinforcing flywheels of talent, capital, aesthetic, popular consciousness, and so forth. But hey, Bing still makes Microsoft billions a year, so there will be other winners. Underestimating focused breakout leaders in new rapidly growing markets is as cliche as those breakouts ultimately succeeding, so even if we go into an AI winter it’s clear who comes out on top the other side. A product has never been adopted this quickly, ever. AGI or not, skepticism that merely points to conventional resource imbalances misses the big picture and such opinions age poorly. Doesn’t have to be obvious only in hindsight if you actually examine the current record of disruptive innovation.
I probably need to clarify what I'm talking about, so that peeps like @JumpCrisscross can get a better grasp of it.
I do not mean the total market share of the category of businesses that could be labeled as "AI companies", like Microsoft or NVIDIA, on your first link.
I will not talk about your second link because it does not seem to make sense within the context of this conversation (zero mentions or references to market share).
What I mean is:
* The main product that OpenAI sells is AI models (GPT-4o, etc...)
* OpenAI does not make hardware. OpenAI is not in the business of cloud infrastructure. OpenAI is not in the business of selling smartphones. A comparison between OpenAI and any of those companies would only make sense for someone with a very casual understanding of this topic. I can think of someone, perhaps, who only used ChatGPT a couple of times and inferred it was made by Apple because it was there on their phone. This discussion calls for a deeper understanding of what OpenAI is.
* Other examples of companies that sell their own AI models, and thus compete directly with OpenAI in the same market, are Anthropic (w/ Claude), Google (w/ Gemini), and a few others like Meta and Mistral with open models.
* All those companies/models, together, make up some market that you can put any name you want to it (The AI Model Market TM)
That is the market I'm talking about, and that is the one that I estimated to be 90%+ which was pretty much on point, as usual :).
> that is the market that I'm talking about, and that is the one that I (correctly, as usual) estimated to be around 90% [1][2]
Your second source doesn’t say what it’s measuring and disclaims itself as from its “‘experimental era’ — a beautiful mess of enthusiasm, caffeine, and user-submitted chaos.” Your first link only measures chatbots.
ChatGPT is a chatbot. OpenAI sells AI models, including via ChatGPT. Among chatbots, sure, 84% per your source. (Not “90%+,” as you stated.) But OpenAI makes more than chatbots, and in the broader AI model market, its lead is far from 80+ percent.
TL;DR: It is entirely wrong to say the "market share of OpenAI is like 90%+."
One, you suggested OP had not “looked at the actual numbers.” That implies you have. If you were just guessing, that’s misleading.
Two, you misquoted (and perhaps misunderstand) a statistic that doesn’t match your claim. Even in your last comment, you defined the market as “companies that sell their own AI models” before doubling down on the chatbot-only figure.
> not even in Puchal wildest dreams
Okay, so what’s your source? Because so far you’ve put forward two sources, a retracted one and one that measures a single product that you went ahead and misquoted.
I have no problem with 'OpenAI', so much as the individual running it and, more generally, rich financiers making the world worse in every capitalizable way and even some they can't capitalize on.
I asked Gemini today to replace the background of a very simple logo and it refused. ChatGPT did it no problem (though it did take a long time because apparently lots of people were doing image generation).
I guess Gemini just refused because of a poor filter for sensitive content. But still, it was annoying.
Literally the founder of Y Combinator all but outright called Sam Altman a conniving dickbag. That’s the consensus view advanced by the very man who made him.
This seems like misinformation, are you talking about how Sam left YC after OpenAI took off? What PG said was "we didn't want him to leave, just to choose one or the other"[1].
That says PG thinks Sam is clever. I don't think there's any moral judgement there. The statement I posted suggests PG likes Sam and would love to keep working with him.
Google is pretty far behind. They have random one off demos and they beat benchmarks yes, but try to use Google’s AI stuff for real work and it falls apart really fast.
Anecdotally, I've switched to Gemini as my daily driver for complex coding tasks. I prefer Claude's cleaner code, but it is less capable at difficult problems, and Anthropic's servers are unreliable.