
I find it amusing that io, a company with no product and no history, is valued and bought at $6.5B.


The dotcom bubble gave insane valuations to "companies" that literally had a static HTML page and no product or service.


That is WILD to think about because I regularly create one-pager websites for my own projects...kinda bewildering to comprehend valuations for something so basic.


Those insane valuations were not 1% of this.


Maybe that's a reason to stop doing it.


What do you mean? The idea is obvious: it's an Apple HomePod-sized orb-screen, like a mini Vegas projector, running OpenAI's realtime API.


So, basically a Palantir?


I don't totally get the comparison - Palantir is a tech-enabled agency making glorified dashboards that benefits from affirmative action for libertarians, and the mini Vegas orb product is Jony Ive's new dildo for capitalism to worship. Two very different things.


the object from Lord of the Rings


Ha ha, okay that’s a good one.


Sounds to me like money is being distributed to that startup's investors?


Google hit the jackpot with their acquisition of YouTube and it's now paying dividends. YouTube is the largest single source of data and traffic on the Internet, and it's still growing fast. I think this data will prove incredibly important to robotics as well. It's a shame they sold Boston Dynamics because of bad PR, in one of their dumbest ever moves.


"Growing fast" is questionable these days.

There is an ever-growing percentage of new AI-generated videos among every set of daily uploads.

How long until more than half of uploads in a day are AI-generated?


Even if the content were 100% AI-generated (which is the furthest thing from reality today), human engagement with the content is a powerful signal that can be used by AI to learn. It would be like RLHF with free human annotation at scale.
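
To make that concrete, a toy sketch of the idea (everything here is hypothetical: it assumes you already have clip embeddings and watched-vs-skipped pairs, and it is nobody's actual pipeline): treat organic engagement as a free preference label and fit a reward model on it, Bradley-Terry style.

    import torch
    import torch.nn as nn

    reward_model = nn.Sequential(
        nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 1))
    opt = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

    def train_step(emb_watched: torch.Tensor, emb_skipped: torch.Tensor) -> float:
        # Pairwise preference loss: the clip viewers engaged with should
        # score higher than the one they skipped past.
        opt.zero_grad()
        margin = reward_model(emb_watched) - reward_model(emb_skipped)
        loss = -nn.functional.logsigmoid(margin).mean()
        loss.backward()
        opt.step()
        return loss.item()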


Won't the human engagement be replaced by AI engagement too? If it isn't already being replaced?


The AI is not paying for watching videos yet


Indeed, it's the advertisers who are paying for AI to watch videos....


And paying for my sofa to watch an unskippable 50s ad while I make a coffee.


Back in the day when everyone used to watch broadcast TV, and stations synchronised their ad breaks, water consumption would spike with every ad break.


The UK has a unique problem with demand spikes for electricity during commercial breaks, due to the British penchant for using high-power electric kettles to make tea. In the worst case, demand could rise and fall by gigawatts within a matter of minutes.

https://en.wikipedia.org/wiki/TV_pickup

https://www.youtube.com/watch?v=slDAvewWfrA
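
The back-of-envelope arithmetic is fun (my figures, purely illustrative, not from the links):

    kettles = 1_500_000        # households switching the kettle on together
    kw_per_kettle = 3          # a typical UK kettle draws about 3 kW
    print(kettles * kw_per_kettle / 1e6, "GW")   # -> 4.5 GW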


Google already invests a tremendous amount of resources into identifying and preventing fraudulent ad impressions -- I don't see that changing much until AI is so cheap that it makes sense to run a full agent for pennies per hour. Sadly.


Not talking about fraud per se - in the sense of trying to drive revenue for a particular video channel - just that if you wanted to train AI on youtube videos you are in effect getting the advertisers to pay for the serving of them.

Perhaps the difference here is the behaviour would be much more human and thus harder to detect using current fraud detection?


Yes it will. Soon humans will be the minority on the internet. I wrote some guesses about this 2 years ago: https://art.cx/blog/12-08-22-city-of-bots


And google is in the best possible position to detect it if they want to exclude it from their datasets.


They're never going to manage to do that, just on a technical level

Plus some users might want to legitimately upload things with AI-generated content in them


I'm pretty sure YouTube saves the metadata from all the video files uploaded to it. It seems pretty trivial to exclude videos uploaded without camera model or device setting information. I seriously doubt even a tiny fraction of people uploading AI content to YouTube are taking the time to futz about with the XMP data before they upload it. Sure, they'll miss out on a lot of edited videos doing that, but that's probably for the best if you're trying to create a data set that maintains fidelity to the real world. There are lots of ways to create false images without AI.
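
A rough sketch of that filter, if you wanted to try it yourself (the tag names are illustrative, since camera metadata varies a lot by container and vendor, and as noted above, edited or re-encoded footage loses these tags too, so it's a heuristic at best):

    import json
    import subprocess

    # illustrative tag names, not an exhaustive list
    CAMERA_KEYS = {"make", "model",
                   "com.apple.quicktime.make", "com.apple.quicktime.model"}

    def has_camera_metadata(path: str) -> bool:
        # Dump container + stream metadata as JSON via ffprobe.
        out = subprocess.run(
            ["ffprobe", "-v", "quiet", "-print_format", "json",
             "-show_format", "-show_streams", path],
            capture_output=True, text=True, check=True).stdout
        info = json.loads(out)
        tags = dict(info.get("format", {}).get("tags", {}))
        for stream in info.get("streams", []):
            tags.update(stream.get("tags", {}))
        return any(k.lower() in CAMERA_KEYS for k in tags)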


"Since launching in 2023, SynthID has watermarked over 10 billion images, videos, audio files and texts, helping identify them as AI-generated and reduce the chances of misinformation and misattribution. Outputs generated by Veo 3, Imagen 4 and Lyria 2 will continue to have SynthID watermarks.

Today, we’re launching SynthID Detector, a verification portal to help people identify AI-generated content. Upload a piece of content and the SynthID Detector will identify if either the entire file or just a part of it has SynthID in it.

With all our generative AI models, we aim to unleash human creativity and enable artists and creators to bring their ideas to life faster and more easily than ever before."

From the page linked in the post....

So there are different ways to detect AI-generated content (videos/images at least). (https://www.nature.com/articles/s41586-024-08025-4 <-- paper on SynthID / watermarking and detecting it with LLMs)
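
For intuition about the statistical side of watermark detection, here's a toy detector in the style of the red/green-list scheme (Kirchenbauer et al., 2023). To be clear, this is not SynthID's actual algorithm (SynthID uses tournament sampling, per the paper above); it's just the simplest version of the idea: generation nudges token choices toward a pseudorandom "green" subset, and detection is a z-test for that bias.

    import hashlib
    import math

    GREEN_FRACTION = 0.5   # share of the vocabulary marked "green" at each step

    def is_green(prev_token: str, token: str) -> bool:
        # Pseudorandom but reproducible split keyed on the previous token,
        # so a detector can recompute it without access to the model.
        h = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
        return h[0] < 256 * GREEN_FRACTION

    def watermark_z_score(tokens: list) -> float:
        n = len(tokens) - 1
        if n <= 0:
            return 0.0
        greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
        # In unwatermarked text, greens ~ Binomial(n, GREEN_FRACTION).
        mu = n * GREEN_FRACTION
        sigma = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
        return (greens - mu) / sigma   # large z => likely watermarked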


I somewhat doubt that YT cares much about AI content being uploaded, as long as it’s clearly marked as such.

What they do care about is their training set getting tainted, so I imagine they will push quite hard to have some mechanism to detect AI; it’s useful to them even if users don’t act on it.


> They're never going to manage to do that, just on a technical level

Why not? Given enough data, it's possible to train models to differentiate - especially since humans can pick up on the difference pretty well.

> Plus some users might want to legitimately upload things with AI-generated content in it

Excluding videos from training datasets doesn't mean excluding them from Youtube.
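
To be concrete about what "train models to differentiate" might look like, a minimal sketch (it assumes a labeled real-vs-generated set and some pretrained frame/clip encoder; the dimensions and names are hypothetical):

    import torch
    import torch.nn as nn

    EMBED_DIM = 768   # whatever frame/clip encoder you use

    classifier = nn.Sequential(
        nn.Linear(EMBED_DIM, 256), nn.ReLU(), nn.Linear(256, 1))
    opt = torch.optim.Adam(classifier.parameters(), lr=1e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    def train_step(embeddings: torch.Tensor, labels: torch.Tensor) -> float:
        # embeddings: (batch, EMBED_DIM); labels: (batch,), 1.0 = AI-generated
        opt.zero_grad()
        loss = loss_fn(classifier(embeddings).squeeze(-1), labels)
        loss.backward()
        opt.step()
        return loss.item()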


I agree, especially because in practice the vast majority of AI-generated videos uploaded to YouTube are going to be from one of about 3 or 4 generators (Sora, Veo, etc.). May change in the future, but at the moment the detection problem is pretty well constrained.


> Excluding videos from training datasets doesn't mean excluding them from Youtube.

Ah then sure. It was this part that was problematic.

If users are still allowed to upload flagged content, then false positives almost don't matter, so Youtube could just roll out some imperfect solution and it would be fine


In the future, a new intelligent species will roam the earth, and they will ask, "why did their civilization fall?" The answer? These Homo sapiens strip-mined the Earth and exacerbated climate change to generate enough power to make amusing cat videos...


It's the much-feared paper clip apocalypse, but we did it to ourselves with cat clips.


And those videos were either not watched by any human, or not truly watched, being part of an endless feed of similar slop.


how do you truly watch an ai-generated cat video


use your eyes. write a detailed and elaborate review on your blog of the cat and his antics. seems easy enough?


At this point heat death through cat videos sounds more appealing than nuclear apocalypse, lol


We don't have an energy problem on earth. We have a capitalism problem.

Renewable energy is easily able to provide enough energy sustainably. Batteries can be recycled. Solar panels are glass/plastic and silicon.

Nuclear is feasible; fusion will happen in 50 years one way or the other.

Existence is what it is. If it means being able to watch cat videos, so be it. We are not watching them for nothing, we watch them for happiness.


> Existence is what it is. If it means being able to watch cat videos, so be it. We are not watching them for nothing, we watch them for happiness.

Well that's just your opinion.

Yes we can generate electricity, but it would be nice if we used it wisely.


Of course it's my opinion, it's my comment after all.

Nonetheless, survival can't be the life goal; after all, the moon will drift away from earth in the future, the sun will explode, and if we survive that as a species, all bonds between elements will dissolve.

It also can't be about giving your DNA away, because your DNA has very little to no impact after just a handful of generations.

And no, the goal of our society has to be to have as much energy available to us as possible. So much energy that energy doesn't matter. There are enough ways of generating energy without a real issue at all: fusion, renewable energy directly from the sun.

There is also no inherent issue right now preventing us all from having clean, stable energy besides capitalism. We have the technology, we have the resources, we have the manufacturing capacity.

To finish my comment: it's not about energy, it's about entropy. You need energy to create entropy. We don't even consume the energy of the sun, we use it for entropy and dissipate it back to space after.


On the other hand, take one look at the way they caption a video in their dataset, and you have seen like 90% of the "secret sauce" of generative art. All this supposed data and knowledge, and anyone who has worked 1 day on Imagen or Veo could become a serious competitor.

The remaining 10% is the solution to generating good hands, of course. And do you think YouTube has been helping anyone achieve that?


I hear BD aren't making much money anyway so I wonder if they couldn't just buy them back for not much loss overall.


Why are videos important for robotics?


If you can generate a realistic video stream that responds to player movements and interactions, you can train your robot using that video stream. It's much more scalable compared to building physical environments and performing real-world training.

Of course the alternative is to use game engines, but it's possible that AI would generate a more realistic video stream for the same money spent. Those recent AI-generated videos certainly look much more realistic than any game footage I've ever seen.
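
A very rough sketch of what "train your robot in the generated stream" means in the Dreamer-style setup: a learned model predicts the next observation given an action, and the policy is optimized against imagined rollouts instead of real hardware. Both networks below are stand-in stubs, not any real system:

    import torch
    import torch.nn as nn

    OBS_DIM, ACT_DIM = 512, 8   # stand-in sizes

    world_model = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 512), nn.ReLU(),
                                nn.Linear(512, OBS_DIM))   # next-obs predictor
    policy = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.Tanh(),
                           nn.Linear(128, ACT_DIM))
    opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

    def reward(obs: torch.Tensor) -> torch.Tensor:
        return -obs.pow(2).mean(-1)   # placeholder task reward

    def imagine_and_update(obs: torch.Tensor, horizon: int = 15) -> None:
        total = 0.0
        for _ in range(horizon):
            act = policy(obs)
            obs = world_model(torch.cat([obs, act], dim=-1))  # dreamed step
            total = total + reward(obs)
        opt.zero_grad()
        (-total.mean()).backward()    # maximize imagined return
        opt.step()

    imagine_and_update(torch.randn(32, OBS_DIM))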


Game engines require a lot of additional work to make them suitable for that task, too: deep integration for sensor data, inputting maps and assets, plus the basic mismatch that these workflows are centered around Windows GUI tools whereas robotics is happening on the Linux command line.


Object detection, I'd guess.


Why should YouTube have the advantage here? Every competitor also has access to these videos(?)


Easy access to the videos without having to download them from Google (and without Google trying to stop you from scraping them, which they will) is an enormous advantage. There's way, way too much on Youtube to index and use over the internet, especially at full resolution.


That is the other perk: Google has all those videos stored locally in original quality.

It wouldn't be hard for Google to poison competitor training just by throttling bandwidth.


Google is making money hosting these videos, and users are freely uploading them. A competitor would have to scrape/download them, store them, process them all at their own cost, along with having much less metadata available (Which videos are most viewed, which segments, what do people repeat, what do people skip, what do people watch after this video, which video generates the most ad revenue, etc.)


> Google is making money hosting these videos

This isn't certain. Google do not break out Youtube revenues nor costs. Hosting this amount of videos, globally, redundantly, the vast majority of which are basically never watched, cannot be cheap.

It's entirely plausible that Google's wider benefits from Youtube (such as training video generation algorithms and better behaviour tracking for better-targeted ads across the internet) are enough to compensate for Youtube in particular losing money.


> Google do not break out Youtube revenues nor costs.

Google does break out Youtube revenue.

Latest 10-K: https://abc.xyz/assets/77/51/9841ad5c4fbe85b4440c47a4df8d/go...

See page 10 for YouTube ads revenue.


My bad, I thought it was both. But they don't break out costs, so in reality we don't know if YouTube is profitable or not.


Videos without metadata are not as useful. Google also has details on which videos are watched where, which parts people skip, all the videos that are blocked for various reasons, the performance of videos with humans over time, and so on. They can focus on videos with signals indicating that humans prefer those videos or clips.


Do they, though? Are competitors actually downloading all these videos? Supposedly there are 5 billion videos on YouTube (https://seo.ai/blog/how-many-videos-are-on-youtube), and downloading all of that is a LOOOOT of data and time.

I mean, you could limit yourself to the most popular or most interesting 100 million, but that's still an enormous amount of data to download.
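
For a sense of scale (my assumptions, not measured figures):

    videos = 5e9
    avg_gb = 0.1                       # assume ~100 MB per video
    total_pb = videos * avg_gb / 1e6
    print(total_pb, "PB")              # -> 500 PB

    gbps = 100                         # a very generous sustained scrape rate
    seconds = total_pb * 8e6 / gbps    # 1 PB = 8e6 gigabits
    print(seconds / 86400 / 365, "years")   # -> ~1.3 years at 100 Gbit/s

And that's before Google notices and throttles you.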


Just wanted to mention the latter: you don't need all the videos. It's indeed a lot of data, but doable, so I am not sure I would count this as a big advantage.


You are incredibly naive if you don’t see full, unrestricted access to YT as an advantage.


presumed datasets: 1. it's petabytes of data in the public/listed/free tier videos. 2. there's paywalled videos. 3. there's private/unlisted videos.

Google will have access to all of these. Competitors will have to do tons of network interactions with Google to pull in only the first set (which Google could detect and block, depending on how these competitors go about it).


Most YouTube videos use stock video footage. Or the face of some youtuber.

If we look at the Veo 3 examples, this is not the typical YouTube video; instead they seem to recreate CGI movies, or actual movies.


As an ex-OpenAI employee I agree with this. Most of the top ML talent at OpenAI has already left to either do their own thing or join other startups. A few are still there, but I doubt they'll be around in a year. The main successful product from OpenAI is the ChatGPT app, but there's a limit on how much you can charge people for subscription fees. I think soon people will expect this service to be provided for free, and ads will become the main option to make money out of chatbots. The whole time that I was at OpenAI until now, GOOG has been the only individual stock that I've been holding. Despite the threat to their search business I think they'll bounce back because they have a lot of cards to play. OpenAI is an annoyance for Google, because they are willing to burn money to get users. Google can't as easily burn money, since they already have billions of users, but also they are a public company and have to answer to investors. But I doubt OpenAI investors would sign up to give more money to be burned in a year. Google just needs to ease off on the red tape and make their innovations available to users as fast as they can. (And don't get me started on Sam Altman.)


> there's a limit on how much you can charge people for subscription fees. I think soon people will expect this service to be provided for free, and ads will become the main option to make money out of chatbots.

So... I don't think this is certain. A surprising number of people pay for the ChatGPT app and/or competitors. It's a >$10bn business already. Could maybe be a >$100bn business long term.

Meanwhile... making money from online ads isn't trivial. When the advertising model works well (eg search/adwords), it is a money faucet. But... it can be very hard to get that money faucet going. No guarantees that Google discover a meaningful business model here... and the innovator's dilemma is strong.

Also, Google don't have a great history of getting new businesses up and running regardless of tech chops and timing. Google were pioneers in cloud computing... but Amazon and MSFT built better businesses.

At this point, everyone is assuming AI will resolve to a "winner-take-most" game that is all about network effect, scale, barriers to entry and such. Maybe it isn't. Or... maybe LLMs themselves are commodities like ISPs.

The actual business models, at this point, aren't even known.


> No guarantees that Google discover a meaningful business model here...

I don't understand this sentiment at all. The business model writes itself (so to speak). This is the company that perfected the art of serving up micro-targeted ads to people at the moment they are seeking a solution to a problem. Just swap the search box for a chat bot.

For a while they'll keep the ads off to the side, but over time the ads will become harder and harder to distinguish from the chat bot content. One day, they'll disappear altogether and companies will pay to subtly bias the AI towards their products and services. It will be subtle--undetectable by end users--but easily quantified and monetized by Google.

Companies will also pay to integrate their products and services into Google's agents. When you ask Gemini for a ride, does Uber or Lyft send a car? (Trick question. Waymo does, of course.) When you ask for a pasta bowl, does Grubhub or Doordash fill the order?

When Gemini writes a boutique CRM for your vegan catering service, what service does it use for seamless biometric authentication, for payment processing, for SMS and email marketing? What payroll service does it suggest could be added on in a couple seconds of auto-generated code?

AI allows Google to continue its existing business model while opening up new, lucrative opportunities.


I don’t think it works. Search is the perfect place for ads for exactly the reasons you state: people have high intent.

But a majority of chatbot usage is not searching for the solution to a problem. And if the chatbot is serving ads when I'm using it for creative writing, reformatting text, having a Python function written, etc., I'm going to be annoyed and switch to a different product.

Search is all about information retrieval. AI is all about task accomplishment. I don't think ads work well in the latter, perhaps in some subset, like when the task is really complicated or the AI can tell the user is failing to achieve it. But I don't think it's nearly as good a fit as search.


It doesn't have to be high intent all the time though. Chrome itself is "free" and isn't the actual technical thing serving me ads (the individual websites / ad platforms do that regardless of which browser I'm using), but it keeps me in the Google ecosystem and indirectly supports both data gathering (better ad targeting, profitable) and those actual ad services (sometimes subtly, sometimes in heavy-handed ways like via ad blocker restrictions). Similar arguments to be made with most of the free services like Calendar, Photos, Drive, etc - they drive some subscriptions (just like chatbots), but they're mostly supporting the ads indirectly.

Many of my Google searches aren't high intent, or any purchase intent at all ("how to spell ___" an embarrassing number of times), but it's profitable for Google as a whole to keep those pieces working for me so that the ads do their thing the rest of the time. There's no reason chatbots can't/won't eventually follow similar models. Whether that's enough to be profitable remains to be seen.

> Search is all about information retrieval. AI is all about task accomplishment.

Same outcome, different intermediate steps. I'm usually searching for information so that I can do something, build something, acquire something, achieve something. Sell me a product for the right price that accomplishes my end goal, and I'm a satisfied customer. How many ads for app builders / coding tools have you seen today? :)


I have shifted the majority of my search for products to ChatGPT. In the past my starting point would have been Amazon or Google. It's just so much easier to describe what I'm looking for and ask for recommendations that fit my parameters. If I could buy directly from ChatGPT, I probably would. It's just as high-intent as search, or more so.


The main usage of chatgpt I’ve seen amongst non-programmers is a direct search replacement with tons of opportunity for ads.

People ask for recipes, how to fix things around the house, for trip itinerary ideas, etc.


> And if the chatbot is serving ads when I'm using it for creative writing, reformatting text, having a Python function written, etc., I'm going to be annoyed and switch to a different product.

You may not even notice when AI does product placement opportunistically in creative writing (see Hollywood). There are also plenty of high-intent assistant-type AI tasks.


Obviously, an LLM is in a perfect position to decide whether an add can be "injected" into the current conversation. If you're using it for creative writing it will be add free. But chances are you will also use it to solve real world problems where relevant adds can be injected via product or service suggestions.
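
A hedged sketch of that gating idea (the prompt, model name, and ad-matching stub are all made up for illustration; nothing here is anyone's real product):

    from openai import OpenAI

    client = OpenAI(api_key="sk-...")

    def commercial_intent(user_message: str) -> bool:
        # Cheap classification call that gates whether ads are considered.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "system",
                       "content": "Answer only yes or no: is the user trying "
                                  "to buy, choose, or solve a real-world "
                                  "product or service problem?"},
                      {"role": "user", "content": user_message}])
        return resp.choices[0].message.content.strip().lower().startswith("yes")

    def pick_relevant_ad(user_message: str) -> str:
        return "ExampleCo widgets"   # stand-in for a real ad-matching system

    def respond(user_message: str, answer: str) -> str:
        if commercial_intent(user_message):
            answer += "\n\nSponsored: " + pick_relevant_ad(user_message)
        return answer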


"ad" is short for advertisement. That's the word you're looking for here.

Add is a verb meaning to combine 2 things together.


Re "going to be annoyed" there is definitely a spectrum starting at benign and culminating to the point of where you switch.

Photopea, for example, seems to be successful, and the ads displayed on the free tier lead me to think that they feel at least these users are willing to see ads while they go about their workflow.


ChatGPT is effectively a functional search engine for a lot of people. Searching for the answer to "how do I braid my daughter's hair?" or "how do I bake a cake for a birthday party?" can be resolved via traditional search and finding a video or blog post, or by simply reading the result from an LLM. LLMs have a lot more functionality overall, but ChatGPT and its competitors are absolutely an existential threat to Google, as (in my opinion) it's a superior service because it just gives you the best answer, rather than feeding you into whatever 10 blog services utilize Google Ads the most this month. Right now ChatGPT doesn't even serve up ads, which is great. I'm almost certain they're selling my info though, as specific one-off stuff I ask ChatGPT about ends up as ads in Meta social media the next day.


The intent will be obvious from the prompt and context. The AI will behave differently when called from a Doc about the yearly sales strategy vs a consumer search app.


> chatbots ... provided for free ... ads

Just because the first LLM product people paid for was a chatbot does not mean that chat will be the dominant commercial use of AI.

And if the dominant use is agents that replace knowledge workers, then they'll cost closer to $2000 per month than $20 or free, and an ad-based business model won't work.


True. This is my point too.

The actual business models and revenue sources are still unknown. Consumer subscriptions happen to be the first major model. Ads still aren't. Many other models could dwarf either of these.

It's very early to call the final score.


I still think it's pretty clear. Google doesn't have to get a new business off the ground, just keep improving the integration into Workspace, Gmail, Cloud, Android etc. I don't see users paying for ChatGPT and then copy/pasting into those other places even if the model is slightly better. Google will just slowly roll out premium plans that include access to AI features.

And as far as selling pickaxes go, GCP is in a far better position to serve the top of market than OpenAI. Some companies will wire together multiple point solutions but large enterprises will want a consolidated complete stack. GCP already offers you compute clusters and BigQuery and all the rest.


>Just swap the search box for a chat bot.

Perhaps... but perhaps not. A chatbot instead of a search box may not be how the future looks. Also... a chatbot prompt may not (probably won't) translate from a search query smoothly... in a way that keeps ad markets intact.

That "perfected art" of search advertising is highly optimized. You (probably) lose all of that in the transition. Any new advertising products will be uncharted territory.

You could not have predicted in advance that search advertising would dwarf video (youtube) advertising as a segment.

Meanwhile... they need to keep their market share at 90%.


> micro-targeted ads to people at the moment they are seeking a solution to a problem

Personal/anecdotal experience, but I've bought more stuff out of instagram ads than google ads ever.


I imagine it would be easy for them to do something similar to the TV guides of yesteryear (the company that owned one used it primarily for self-promotion, with just enough competitor promotion to fly under the radar and still seem useful), where your custom LLM gives good recommendations, sure, but 60-70% of those recommendations are the paid ones or the ones you own.


LLM based advertising has amazing potential when you consider that you can train them to try to persuade people to buy the advertised products and services.


That seems like a recipe for class action false advertising lawsuits. The AI is extremely likely to make materially false claims, and if this text is an advertisement, whoever placed it is legally liable for that.


I don't think we should expect that risk to dissuade these companies. They will plow ahead, fight for years in court, then slightly change the product if forced to ¯\_(ツ)_/¯


How would you track this?


Perhaps ironically, I know a guy who uses ChatGPT to write ad copy. The snake eats its own tail.


Is this someone working as a writer, who is just phoning it in (LLM-ing it in)?

Or is this someone who needs writing but can't do it themselves, and who, if they didn't have the LLM, would pay a low-end human writer?


A friend of mine works in advertising/marketing at the director level (career ad guy), for big brands like nationwide cell carriers, big box stores, etc., but mostly telecom stuff I think, and he uses it every day; he calls it "my second brain". LLMs are great at riffing on ideas and brainstorming sessions.


I don’t think “AI” as a market is “winner-takes-anything”. Seriously. AI is not a product, it’s a tool for building other products. The winners will be other businesses that use AI tooling to make better products. Does OpenAI really make sense as a chatbot company?


I agree the market for a 10% better AI isn't that great, but the cost to get there is. An 80%-as-good model at 10% or even 5% of the cost will win every time in the current environment. Most businesses don't even have a clear use case for AI; they just use it because the competition is and there is a FOMO effect.


> Most businesses don't even have a clear use case for AI; they just use it because the competition is and there is a FOMO effect

I consult in this space and 80-90% of what I see is chat bots and RAG.


That's exactly what I'd expect. Honestly, AI chat bots seem unnecessarily risky because you never really know what they might say on your behalf.


> Does OpenAI really make sense as a chatbot company?

If the chat bot remains useful and can execute on instructions, yes.

If we see a plateau in integrations or abilities, it’ll stagnate.


Very few are successful in this position. Zapier comes to mind, but it seems like a tiring business model to me.


AI is a product when you slap an API on top and host it for other businesses to figure out a use case.

In a gold rush, the folks that sell pickaxes make a reliable living.


> In a gold rush, the folks that sell pickaxes make a reliable living.

Not necessarily. Even the original gold rush pickaxe guy Sam Brannan went broke. https://en.wikipedia.org/wiki/Samuel_Brannan

Sam of the current gold rush is selling pickaxes at a loss, telling the investors they'll make it up in volume.


According to the linked Wikipedia article, he did not go broke from the gold rush. He went broke because he invested the pickaxe windfall in land, and when his wife divorced him, the judge ruled he had to pay her 50%, but since he was 100% in land he had to sell it. (The article is not clear why he couldn't deed her 50% of it, or only sell 50%. Maybe it happened during a bad market, he had a deadline, etc.)

So maybe if the AI pickaxe sellers get divorced it could lead to poor financial results, but I'm not sure his story is applicable otherwise.


Nvidia is selling GPUs at a loss? TSMC is going broke?

I'm pretty sure they are the pickaxe manufactures in this case.


This is where Google thrives: it makes its own TPUs that run the models.


Clouds are the actual pickaxe manufacturers. Google has a cloud.


Basically every tech company likes to say they are selling pickaxes, but basically no VC funded company matches that model. To actually come out ahead selling pickaxes you had to pocket a profit on each one you sold.

If you sell your pickaxes at a loss to gain market share, or pour all of your revenue into rapid pickaxe store expansion, you’re going to be just as broke as prospectors when the boom goes bust.


I don't think there is anybody making a significant amount of money by selling tokens right now.


Nvidia is selling the shovels.


There are two perspectives on this. What you said is definitely a good one if you're a business planning to add AI to whatever you're selling. But personally, as a user, I want the opposite to happen - I want AI to be the product that takes all the current products and turns them into tools it can use.


I agree, I want a more intelligent voice assistant similar to Siri as a product, and all my apps to be add-ons the voice assistant could integrate with.


> AI is not a product, it’s a tool for building other products.

It's products like this (Wells Fargo): https://www.youtube.com/watch?v=Akmga7X9zyg

Great, Wells Fargo has an "agent"... and everyone else is talking about how to make their products available for agent-based AI.

People don't want 47 different agents to talk to, they want a single endpoint, they want a "personal assistant" in digital form, a virtual concierge...

And we can't have this, because the open web has been dead for more than a decade.


Why can't we have personal assistants because the open web has been dead?

I'll be happy with a personal assistant with access to my paid APIs.


Seriously, humans are not a product. You hire them to build products.


Is Amazon a product or a place to sell other products? Does that make Amazon not a winner?


If there were 2 other Amazons, all with similar products and the same ease of shipping, would you care where you purchased? Amazon is simply the best UX for online ordering. If anything else matched it, I'd shop platform-agnostic.


> The winners will be other businesses that use AI tooling to make better products.

agree with you on this.

you already see that playing out with Meta and a LOT of companies in China.


the subscription is a product


>It's a >$10bn business already.

But not profitable yet.


Opera browser was not profitable for like 15 years and still eventually became profitable enough to make an attractive purchase target for external investors. And even if not bought, it would still have made a nice profit eventually for the original investors.


Opera has been a shady advertiser cesspool since it was purchased.


Opera had zero marginal costs. OpenAI doesn’t.


Opera doesn't have the same size data center bill as OpenAI


You can't burn money in AI for 15 years on the off chance that it’ll pay off.


No, but you can let others burn money for 15 years and then come in and profit off their work while they go under.


I dunno, Nvidia worked on machine learning for 11+ years and it worked out great for them: https://research.nvidia.com/research-area/machine-learning-a...


Sure, but they were making tons of money elsewhere. OpenAI has no source of revenue anywhere big enough to cover its expenses, it's just burning investor cash at the moment.


It seems like most people are on the road to doing exactly this.


It worked for Uber’s investors.


The demand is there. People are already becoming addicted to this stuff.


I think the HN crowd widely overestimates how many people are even passingly familiar with the LLM landscape much less use any of the tools regularly.


Last month, Google, YouTube, Facebook, Instagram and Twitter (very close to this one, likely passes it this month) were the only sites with more visits than ChatGPT. Couple that with the 400M+ weekly active users (according to OpenAI in February) and I seriously doubt that.

https://x.com/Similarweb/status/1909544985629721070

https://www.reuters.com/technology/artificial-intelligence/o...


Weekly active users is a pretty strange metric. Essential tools and even social networking apps report DAUs, and they do that because essential things get used daily. How many times did you use Google in the past day? How many times did you visit (insert some social media site you prefer) in the last day? If you’re only using something once per week, it probably isn’t that important to you.


Mostly only social media/messaging sites report daily active users regularly. Everything else usually reports monthly active users at best.

>in the last day? If you’re only using something once per week, it probably isn’t that important to you.

No, something I use on a weekly basis (which is not necessarily just once a week) is pretty important to me and spinning it otherwise is bizarre.

Google is the frontend to the web for the vast majority of internet users, so yeah, it gets a lot of daily use. Social media sites are social media sites and are in a league of their own. I don't think I need to explain why they would get a disproportionate amount of daily users.


I am entirely confused by this. ChatGPT is absolutely unimportant to me. I don't use it for any serious work, I don't use it for search, I find its output to still be mostly a novelty. Even coding questions I mostly solve using StackExchange searches because I've been burned using it a couple of times in esoteric areas. In the few areas where I actually did want some solid LLM output, I used Claude. If ChatGPT disappeared off the Internet tomorrow, I would suffer not at all.

And yet I probably duck into ChatGPT at least once a month or more (I see a bunch of trivial uses in 2024) mostly as a novelty. Last week I used it a bunch because my wife wanted a logo for a new website. But I could have easily made that logo with another service. ChatGPT serves the same role to me as dozens of other replaceable Internet services that I probably duck into on a weekly basis (e.g., random finance websites, meme generators) but have no essential need for whatsoever. And if I did have an essential need for it, there are at least four well-funded competitors with all the same capabilities, and modestly weaker open weight models.

Is it really your view that "any service you use at least once a week must be really important to you"? I bet if you sat down and looked at your web history, you'd find dozens that aren't.

(PS in the course of writing this post I was horrified to find out that I'd started a subscription to the damn thing in 2024 on a different Google account just to fool around with it, and forgot to cancel it, which I just did.)


>I am entirely confused by this. ChatGPT is absolutely unimportant to me. I don't use it for any serious work, I don't use it for search, I find its output to still be mostly a novelty. Even coding questions I mostly solve using StackExchange searches because I've been burned using it a couple of times in esoteric areas. In the few areas where I actually did want some solid LLM output, I used Claude. If ChatGPT disappeared off the Internet tomorrow, I would suffer not at all.

OK? That's fine. I don't think I ever claimed you were a WAU

>And yet I probably duck into ChatGPT at least once a month or more (I see a bunch of trivial uses in 2024) mostly as a novelty.

So you are not a weekly active user then. Maybe not even a monthly active one.

>Last week I used it a bunch because my wife wanted a logo for a new website. But I could have easily made that logo with another service.

Maybe[1], but you didn't. And I doubt your wife needs a new logo every week so again not a weekly active user.

>ChatGPT serves the same role to me as dozens of other replaceable Internet services that I probably duck into on a weekly basis (e.g., random finance websites, meme generators) but have no essential need for whatsoever.

You visit the same exact meme generator or finance site every week? If so, then that site is pretty important to you. If not, then again you're not a weekly active user to it.

If you visit a (but not the same) meme generator every week then clearly creating memes is important to you because I've never visited one in my life.

>And if I did have an essential need for it, there are at least four well-funded competitors with all the same capabilities, and modestly weaker open weight models.

There are well funded alternatives to Google Search too but how many use anything else? Rarely does any valuable niche have no competition.

>It is really your view that "any service you use at least once a week must be really important to you?" I bet if you sat down and looked at your web history, you'd find dozens that aren't.

Yeah it is and so far, you've not actually said anything to indicate the contrary.

[1]ChatGPT had an image generation update recently that made it capable of doing things other services can't. Good chance you could not in fact do what you did (to the same satisfaction) elsewhere. But that's beside my point.


Sadly it’s become common for many mediocre employees in corporate environments to defer to ChatGPT, receive erroneous output and accept it as truth.

There are now commonly corporate goon squads whose job is to drive AI adoption without care for actual impact to results. Usage of AI is the KR.


I don’t understand why this is happening. Why is everyone buying into this hype so strongly?

It’s a bit like how DEI was the big thing for a couple years, and now everyone is abandoning it.

Do corporate leaders just constantly chase hype?


Yes corporate leaders do chase hype and they also believe in magic.

I think companies implement DEI initiatives for different reasons than hype though. Many are now abandoning DEI ostensibly out of fear due to the change in U.S. regime.


A case can be made for diversity, but the fact that all the big companies were adopting DEI at the same time made it hype.

I personally know an engineering manager who would scoff at MLK Day, but in 2020 starting screaming about how it wasn’t enough and we needed Juneteenth too.

AI isn’t hype at Nvidia, and DEI isn’t hype at Patagonia.

But tech industry-wide, they’re both hype.


I think many were rightly adopting DEI initiatives in an environment post me-too and post George Floyd. I don’t think it was driven by hype but more a reaction to the environment which heightened awareness of societal injustices. Awareness led to all sorts of things - conversation, compassion, attempts to do better in society and the workplace, and probably law suits. You can question how motivated corporations were to adopt DEI initiatives but I think it’d be wrong to say it was driven by hype.


I’m not sure companies are “abandoning DEI” so much as realizing that it’s often only a vocal minority that cares about DEI reports and scores and you don’t actually need a VP and diversity office to do some outreach and tally internal metrics.

The climate has changed. Some of that is economic at big tech companies. But it’s also a ramping down of a variety of things most employers probably didn’t support but kept their mouths shut about.


I think you may be underestimating it.

At this point in college, LLMs are everywhere. They're completely dominating history/English/mass comm fields with respect to writing papers.

Anecdotally, all of my working non-tech friends use ChatGPT daily.


It does anecdotally seem to be very common in education which presumably will carry over to professional workplaces over time. I see it a lot less in non-tech and even tech/adjacent adults today.


Aside from the university uptake mentioned by sibling comments, there is major uptake of AI in journalism (summarizing long press statements, creating the first draft of a teaser, or even full articles...), and many people in my social groups use it regularly for having something explained, or finding something... it's widespread.


My wife, the farthest you can get from the HN crowd, is literally reduced to tears when faced with Excel or producing a Word doc, and she is a regular user of Copilot and absolutely raves about it. Very unusual for her to take up new tech like this and put it to use, but she uses it for everything now. Horse is out of the barn.


> My wife, the farthest you can get from the HN crowd...

She is literally married into the HN crowd.

I think the real AI breakthrough is how to monetize the high usage users.


My Dad is elderly and he enjoys writing. Uses Google Gemini a few times a week. I always warn him that it can hallucinate and he seems to get it.

It's changed his entire view of computing.


My father says "I feel like I hired an able assistant" regarding LLMs.


It's so great!

I keep reminding him that it can hallucinate...


I think you're in fact wildly out of touch with the general populace and how much they use AI tools to make their work easier.


Well, they said it is a $10B industry. Not sure how they measure it, but it counts for something, I suppose.


every ordinary college and university in the USA is filled with AI now AFAIK


For many, this stuff is mostly about Copilot being shoved down everyone's throats via MS Office's obnoxious ads and distractions, and I haven't yet heard of anyone liking it or perceiving it as an improvement. We are now years into this, so my bet is on the thing fading away slowly and becoming a taboo at Microsoft.


Many recent HN articles about how middle managers are already becoming addicted and forcing it on their peons. One was about the game dev industry in particular.

In my work I see semi-technical people (like basic python ability) wiring together some workflows and doing fairly interesting analytical things that do solve real problems. They are things that could have been done with regular code already but weren't worth the engineering investment.

In the "real world" I see people generating crummy movies and textbooks now. There is a certain type of person it definitely appeals to.


I'm sure this is a thing,

what I'm not so sure about is how much that generalises beyond the HN/tech-workers bubble (I don't think "people" in OP's comment is as broad and numerous as they think it is).


> I haven't yet heard of anyone liking it or perceiving it as an improvement.

Well I mean if you say it, then of course it MUST be true I’m sure.


As much as you may make fun of my anecdotal observation, your comment doesn't add anything of value, in particular to substantiate that "people [are] becoming addicted to LLMs". I stand behind my comment that the vast majority of non-tech workers are exposed to them via Copilot in MS Office, and if you want to come to its rescue and pretend it's not a disaster, by all means :-)


For comparison, Uber is still not profitable after 15 years or so. Give it some time.


Uber had their first profitable year in 2023, and their profit margin was 22% in 2024.

https://finance.yahoo.com/news/uber-technologies-full-2024-e...


They are still FAR in the red. Technically have never turned a profit. Among other famous companies.


Uber is a profitable company both in 2023 and - to the tune of billions of dollars - in 2024. Please read their financials if you doubt this statement.


I'm not a finance person, but how is net income of $9.9B for FY 2024 not profit?


I assume they mean the profits in the past couple years are dwarfed by the losses that came before. Looking at the company's entire history, instead of a single FY.


Maybe? But that's not what anyone means when they describe a company as profitable or not.

I was guessing they meant something like the net profit only came from a weird tax thing or something.


Seems like the difference between a profitable investment and a profitable company.

They invested tens of billions of dollars in destroying the competition to be able to recently gain a return on that investment. One could either write off that previous spending or calculate it into the totality of "Uber". I don't know how Silicon Valley economics works but, presumably, a lot of that previous spending is now in the form of debt which must be serviced out of the current profits. Not that I'm stating that taking on debt is wrong or anything.


To the extent that their past spending was debt, interest on that debt should already be accounted for in calculating their net income.

But the way it usually works for Silicon Valley companies and other startups is that instead of taking on debt they raise money through selling equity. This is money that doesn't have to be paid back, but it means investors own a large portion of this now-profitable company.


Time for them to finally disappear


I'm surprised. They pay the drivers a pittance. My ex drove Uber for a while and it wasn't really worth it. Also, for the customers it's usually more expensive and slower than a normal taxi, at least here in Spain.

The original idea of ride-sharing made sense, but just like Airbnb it became an industry and got enshittified.


> They pay the drivers a pittance. My ex drove Uber for a while and it wasn't really worth it.

I keep hearing this online, but every time I’ve used an Uber recently it’s driven by someone who says they’ve been doing it for a very long time. Seems clear to me that it is worth it for some, but not worth it if you have other better job options or don’t need the marginal income.


> but not worth it if you have other better job options

Pretty much any service job, really...

When I had occasion to take a ride share in Phoenix I'd interrogate the driver about how much they were getting paid because I drove cabs for years and knew how much I would have gotten paid for the same trip.

Let's just say they were getting paid significantly less than I used to for the same work. If you calculated in the expenses of maintaining a car vs. leasing a cab I expect the difference is even greater.

There were a few times where I had just enough money to take public transportation down to get a cab and then snag a couple cash calls to be able to put gas in the car and eat. Then I could start working on paying off the lease and go home at the end of the day with some cash in my pocket -- there were times (not counting when the Super Bowl was in town) where I made my rent in a single day.


Maybe it differs per country. This was in Spain.


PS: I know that in Romania it's the opposite. Uber is kinda like a luxury taxi there. Normal taxis have standard rates, but these days it's hardly enough to cover rising fuel prices. So cars are ancient and in a bad state of repair, and drivers often trick foreigners. A colleague was even robbed by one. Uber is much more expensive but much safer (and still cheap by western standards).


My sense in London is that they’re pretty comparable. I’ll use whichever is more convenient.


They're usually a bit more expensive here than a taxi. It can be beneficial because sometimes they have deals, and I sometimes take one when I have to book it in advance or when I'm afraid there will be delays with a corresponding high cost. Though Uber tends to hit me with congestion charges then too. At least with a taxi I can ask them to take a different route. The problem with the Uber drivers is that they don't know any of the street names here; they just follow the app's navigation. Whereas taxi drivers tend to be much more aware, know the streets, and often come up with suggestions.

This also means that they sometimes fleece tourists, but when they figure out you know the city well they don't dare :) Often if they take one wrong turn I make a scene about frowning and looking out of the window and then they quickly get back on track. Of course that's another use case where Uber would be better, if you don't know the city you're in.


> they sometimes fleece tourists

yeah thanks, no, I'm paying for an Uber. For all the complaints over Uber's business practices, it's hard to forget how bad taxis were. Regulatory capture is a clear failure mode of capitalism and the free market, and that is shown nowhere more clearly than in the taxi cab industry.


Taxis aren't so bad in most countries. Here in Spain they are plentiful and fine. The same in most other countries I've been to. Only in the Netherlands are they horrible: they are ridiculously expensive because they all drive Mercedeses. As a result nobody uses them because they can't afford them. They're more like a limousine service, not like real taxis.

One time I told one of my Dutch friends I often take a cab to work here in Spain when I'm running late. He thought I was being pompous and showy. But here it's super normal.

Uber (Or cabify which is a local clone and much more popular) here on the other hand is terrible if you don't book it in advance. When I'm standing here on the street it takes 7-10 minutes for them to arrive while I see several taxis passing every minute. So there is just no point. Probably a factor of being unpopular too so the density is low.

I also prefer my money to end up with local people instead of a huge American corporation.


> Also, for the customers it's usually more expensive and slower than a normal taxi

Neither of those things are true where I live.

> at least here in Spain

Well…Spain is Spain. Not the rest of the world.


No but it's like this in most of Europe.

I think Uber in the US is a very different beast. But also because the outlook on life is so different there. I recently agreed with an American visitor that we'd go somewhere and we agreed to go by public transport. When I got there he wanted to get an Uber :') Here in Europe public transport is a very different thing. In many cases the metro is even faster than getting a taxi.

PS: What bothers me the most about Uber and Cabify is that they "estimate" that it will take 2 minutes to get a car to you, and then when I try and book one I get a driver that's 10 minutes away :( :( Then I cancel the trip and the drivers are pissed off. I had one time where I got the same driver I cancelled on earlier and he complained a lot even though I cancelled within 10 seconds when I saw how far away he was.

Anyway I have very few good experiences with these services, I only use them to go to the airport now when I can book it in advance. And never Uber anymore, only Cabify.


> Anyway I have very few good experiences with these services

For me, and a majority where I live, this is applicable to taxis. Which were known for being dirty, late, expensive, prone to attempting to rip you off, if they turned up at all, etc.

Outside of surge charging (in which they are more expensive), Ubers are by and large either cheaper or the same price. With the difference being that 99% of the time, if you request one, it's going to turn up. And when it does turn up, you know what you're going to pay, not have them take a wrong turn at some point by "mistake" and decide to charge you double. Or tell you they take card and then start making claims about how suddenly they can't, etc.

Sounds like Europe gets the bad end of the stick in this regard.


Yeah here in Spain the taxis are great. They're plentiful, cheap and efficient. The city is kinda a mess and the rideshare drivers have to drive a route mapped out by the app, which often is not optimal. The real taxis know the city well. I think this is why the rideshares are unpopular, and thus there's not many of them, leading to the long waiting times. They're also spread between different providers; Uber is popular with the tourists only and the locals mostly use Cabify (a local company).

However in Romania on the other hand many taxi drivers are scammers or even criminals (one of my colleagues was robbed by one of them). It's also because the maximum taxi fares are too low to actually make a wage, which I can kinda understand, so I always tip really well (like double the fare or more, which is still nothing). Though if they try to scam me they don't get a cent of course.


"A surprising number of people pay for the ChatGPT app and/or competitors."

I doubt the depiction implied by "surprising number". Marketing types and CEOs who would love 100% profit and only paying the electricity bill for an all-AI workforce would believe that. Most people, especially most technical people, would not believe that there is a "surprising number" of saps paying for so-called AI.


Google aren’t interested in <1bn USD businesses, so it’s hard for them to build anything new as it’s pretty guaranteed to be smaller than that at first. The business equivalent of the danger of a comfortable salaried job.


Google is very good at recognizing existential threats. iOS was that to them, and they built Android, including hardware (a novelty for them), even faster than the mobile incumbents at the time.

They're more than willing to expand their moat around AI even if that means multiple unprofitable businesses for years.


In tech, Android's acquisition by Google is ancient history. It has zero relevance to today's Google.

When was it, 2006? Almost 20 years ago, back when the company was young.


Mobile is still nearly everything. Google continues to develop and improve Android in substantial ways. Android is also counted on by numerous third-party OEMs.

This doesn’t strike me as zero relevance.


This thread was about new markets, having foresight, being able to build "new".

Android and mobile are none of these things.


* acquired Android


They acquired the Android company years before the iPhone existed.

It was supposed to be a BlackBerry/Blackjack killer at the time.

And then the iPhone was revealed and Google immediately changed Android’s direction to become a touch OS.


If you are a business customer of Google or pay attention to things like Cloud Next that just happened, it is very clear that Google is building heavily in this area. Your statement has already been disproven.


> a >$10bn business

'Business is the practice of making one's living or making money by producing or buying and selling products (such as goods and services). It is also "any activity or enterprise entered into for profit."' ¹

Until something makes a profit it's a charity or predatory monopoly-in-waiting.²

¹ https://en.wikipedia.org/wiki/Business

² https://en.wikipedia.org/wiki/Predatory_pricing


> Until something makes a profit it's a charity or predatory monopoly-in-waiting.

This is incorrect. There are millions of companies in the world that exist to accomplish things other than making a profit, and are also not charities.


> Until something makes a profit

The chip makers are making a bundle


Selling shovels in a gold rush.


Selling stakes


Or a hobby


What are you talking about?

No, it's not a charity or a monopoly-in-waiting.

99.9% of the time, it's an investment hoping to make a profit in the future. And we still call those businesses, even if they're losing money like most businesses do at first.


>Meanwhile... making money from online ads isn't trivial. When the advertising model works well (eg search/adwords), it is a money faucet. But... it can be very hard to get that money faucet going. No guarantees that Google discover a meaningful business model here... and the innovator's dilemma is strong.

It's funny how the vibe of HN and the real world's political spectrum have shifted together.

We can now discuss ads on HN while being the number 1 and number 2 posts. Extremism still exists, but it is retreating.


Absolutely agree Microsoft is better there - maybe that's why Google hired someone from Microsoft for their AI stuff. A few people, I think.

I also agree the business models aren't known. That's part of any hype cycle. I think those in the best position here are those with an existing product(s) and user base to capitalize on the autocomplete-on-crack kind of feature. It will become so cheap to operate and so ubiquitous in the near future that it absolutely will be seen as a table-stakes feature. Yes, commodities.


> At this point, everyone is assuming AI will resolve to a "winner-take-most" game that is all about network effect, scale, barriers to entry and such

I don't understand why people believe this: by settling on "unstructured chat" as the API, the switching costs are essentially zero. The models may give different results, but as far as plugging a different one into your app goes, it's frictionless. I can switch everything to DeepSeek this afternoon.
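
Concretely, with OpenAI-compatible chat endpoints the switch is a couple of lines. (The openai client usage is standard; DeepSeek's base URL and model name below are as publicly documented, but treat them as assumptions if you copy this.)

    from openai import OpenAI

    # client = OpenAI(api_key="sk-...")                   # OpenAI
    client = OpenAI(api_key="sk-...",                     # DeepSeek: same client,
                    base_url="https://api.deepseek.com")  # different base_url

    resp = client.chat.completions.create(
        model="deepseek-chat",                            # was e.g. "gpt-4o"
        messages=[{"role": "user", "content": "Hello"}])
    print(resp.choices[0].message.content)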


"The actual business models, at this point, aren't even known."

"AI" sounds like a great investment. Why waste time investing in businesses when one can invest in something that might become a business. CEOs and employees can accumulate personal weath without any need for the company to be become profitable and succeed.


The business model question applies to all of these companies, not just Google.

A lack of workable business model is probably good for Google (bad for the rest of the world) since it means AI has not done anything economically useful and Google's Search product remains a huge cash cow.


Contextual advertising is a known ad business model that commands higher rates and is an ideal fit for LLMs. Plus ChatGPT has a lot of volume. If there’s anyone who should be worried about pulling that off it’s Perplexity and every other small to mid-sized player.


Keep in mind you are talking to someone who worked at OpenAI and surely knows more about how the sausage is made and how the books look than you do.


That's like asking a McDonald's employee if they own Burger King stock and making market assumptions on that. The best people have already left is such a common trope.


Except OpenAI has like 2000 employees.


>Meanwhile... making money from online ads isn't trivial.

Especially when post-tariff consumption is going to take a huge nosedive


What happens when OpenAI introduces sponsored answers?


Google were pioneers of cloud computing

How so? Amazon were the first with S3 and EC2, including API-driven control.


Maybe for public services, but Google did the "cattle not pets" thing with custom Frankensteined beige boxes starting really early on


Modern cloud computing is more than just having a scalable infrastructure of servers, it was a paradigm shift to having elastic demand, utility style pricing, being completely API driven, etc. Amazon were not only the first to market but pioneers in this space. Nothing came close at that time.


AWS was the first to sell it, but Google had something that could be called cloud computing (Borg) before that.


What do you think AWS decided to sell? Both companies had a significant interest in making infrastructure easy to create and scale.


AWS had a cleaner host-guest abstraction (the VM) that makes it easier to reason about security, and likely had a much bigger gap between their own usage peaks and troughs.


Yep. Google offered App Engine, which was good for fairly stateless simple apps in an old, limited version of Python, like a photo gallery or email client. For anything else it was dismal. Amazon offered VMs. Useful stuff for a lot more platforms.


> I think soon people expect this service to be provided for free and ads would become the main option to make money out of chatbots.

I also think adtech corrupting AI as well is inevitable, but I dread for that future. Chatbots are much more personal than websites, and users are expected to give them deeply personal data. Their output containing ads would be far more effective at psychological manipulation than traditional ads are. It would also be far more profitable, so I'm sure that marketers are salivating at this opportunity, and adtech masterminds are hard at work to make this a reality already.

The repercussions of this will be much greater than we can imagine. I would love to be wrong, so I'm open to being convinced otherwise.


I agree with you. There is also a move toward "agents", where the AI can make decisions and take actions for you. It is very early days for that, but it looks like it might come sooner than I had thought. That opens up even more potential for influence on financial decisions (which is what adtech wants) - it could choose which things to buy for a given "need".


I have yet to understand this obsession with agents.

Is making decisions the hardest thing in life for so many people? Or is this instead a desire to do away with human capital — to "automate" a workforce?

Regardless, here is this wild new technology (LLMs) that seems to have just fallen out of the sky; we're continuously finding out all the seemingly-formerly-unimaginable things you can do with it; but somehow the collective have already foreseen its ultimate role.

As though the people pushing the ARPANET into the public realm were so certain that it would become the Encyclopedia Galactica!


If you reframe agents as (effectively) slave labor, the economic incentives driving this stampede become trivial to understand.


> Is making decisions the hardest thing in life for so many people?

Should I take this job or that one? Which college should I go to? Should I date this person or that one? Life has some really hard decisions you have to make, and that's just life. There are no wrong answers, but figuring out what to do and ruminating over it comes to everyone at some point in their lives. You can ask ChatGPT to ask you the right questions you need asked in order to figure out what you really want to do. I don't know how to put a price on that, but that's worth way more than $20/month.


Right, but before a product can do all of those things well it will have to do one of those things well. And by “well” I mean reliably superhuman, not “usually fine but sometimes embarrassingly poor”.

People used to (and still do) pay fortune tellers to make decisions for them. Doesn’t mean they’re good ones.


fwiw I used it the other day to help me figure out where I stand on a particular issue, so it seems like it's already there.


> Is making decisions the hardest thing in life for so many people?

Take insurance, for example — do you actually enjoy shopping for it?

What if you could just share a few basic details, and an AI agent did all the research for you, then came back with the top 3 insurance plans that fit your needs, complete with the pros and cons?

Why wouldn’t that be a better way to choose?


There are already web sites that do this for products like insurance (example: [1]).

What I need is something to troll through the garbage Amazon listings and offer me the product that actually has the specs that I searched for and is offered by a seller with more than 50 total sales. Maybe an AI agent can do that for me?

[1]: https://www.policygenius.com/


> There are already web sites that do this for products like insurance

You didn't get the point: instead of going to such a website to solve the insurance problem, and to 10 other websites to solve 10 other problems, just let one AI agent do it all for you.


> Or is this instead a desire to do away with human capital — to "automate" a workforce?

This is what I see motivating non-technical people to learn about agents. There’s lots of jobs that are essentially reading/memorizing complicated instructions and entering data accordingly.


> I have yet to understand this obsession with agents.

1. People who can afford personal assistants and staff in general gladly pay those people to do stuff for them. AI assistants promise to make this way of living accessible to the plebs.

2. People love being "the idea guy", but never having to do any of the (hard) work. And honestly, just the speedup to actually convert the myriad of ideas floating around in various heads to prototypes/MVPs is causing/will cause somewhat of a Cambrian explosion of such things.


A Cambrian explosion of half baked ideas, filled with hallucinations, unable to ever get past the first step. Sounds lovely.


Only a small percent of people will actually produce ideas that other people are interested in. For most people, AI tools for building things will enable them to construct their own personalized worlds. Imagine watching movies, except the movies can be generated for you on the fly. Sure, no one except you might care about a Matrix Moulin Rouge crossover. But you'll be able to have it just like that.


> Imagine watching movies, except the movies can be generated for you on the fly

These are just called dreams


> A Cambrian explosion of half baked ideas,

Well yeah, that's how evolution works: it's an exploration of the search space and only the good stuff survives.

> filled with hallucinations,

The end products can be fully AI-free. In fact, I would expect most ideas that have been floating around to have nothing to do with AI. To be fair, that may change with it being the new hip thing. Even then, there are plenty of implementations that use AI where hallucinations are no problem at all (or even a feature), or where the issues with hallucinations are sufficiently mitigated.

> unable to ever get past the first step.

How so? There are already a bunch of functional things that were in Show HN that were produced with AI assistance. Again, most of the implemented ideas will suck, but some will be awesome and might change the world.


They were already not getting past the first step before AI came along. If AI helps them get to step two, and then three and four, that seems like a good thing, no?


Hey, we could save them all the busywork, and just wire all our money to corporations...

But financial nightmare scenarios aside, I'm more concerned about the influence from private and government agencies. Advertising is propaganda that seeks to separate us from our money, but other forms of propaganda that influences how we think and act has much deeper sociopolitical effects. The instability we see today is largely the result of psyops conducted over decades across all media outlets, but once it becomes possible to influence something as personal as a chatbot, the situation will get even more insane. It's unthinkable that we're merrily building that future without seemingly any precautions in mind.


You're assuming ads would be subtly worked into the answers. There's no reason it has to be done that way. You can also have a classic text ads system that's matching on the contents of the discussions, or which triggers only for clearly commercial queries "chatgpt I want to eat out tonight, recommend me somewhere", and which emits visually distinct ads. Most advertisers wouldn't want LLMs to make fake recommendations anyway, they want to control the way their ad appears and what ad copy is used.

There's lots of ways to do that which don't hurt trust. Over time Google lost it as they got addicted to reporting massive quarterly growth, but for many years they were able to mix in ads with search results without people being unhappy or distrusting organic results, and also having a very successful business model. Even today Google's biggest trust problem by far is with conservatives, and that's due to explicit censorship of the right: corruption for ideological not commercial reasons.

So there seems to be a lot of ways in which LLM companies can do this.

Main issue is that building an ad network is really hard. You need lots of inventory to make it worthwhile.
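
As a toy sketch of the "trigger only on clearly commercial queries" idea (every name below is invented for illustration, not any real system): keep the organic answer untouched and bolt on a clearly labeled ad unit only when intent is unambiguous.

    # Toy sketch: attach a visually distinct ad unit only when the query
    # looks commercial. A real system would use a trained intent
    # classifier and a proper ad server, not a keyword list.
    COMMERCIAL_CUES = {"buy", "price", "deal", "book", "eat out", "hotel"}

    def is_commercial(query: str) -> bool:
        q = query.lower()
        return any(cue in q for cue in COMMERCIAL_CUES)

    def render_reply(answer: str, ads: list[str], query: str) -> str:
        if is_commercial(query) and ads:
            # Ads stay labeled and separate from the organic answer.
            return answer + "\n\n[Sponsored]\n" + "\n".join(ads)
        return answer

    print(render_reply("Here are a few ideas...",
                       ["Luigi's Trattoria - book a table"],
                       "chatgpt I want to eat out tonight"))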


There are lots of ways that advertising could be tied to personal interests gleaned by having access to someone's ChatBot history. You wouldn't necessarily need to integrate advertisements into the ChatBot itself - just use it as a data gathering mechanism to learn more about the user so that you can sell that data and/or use it to serve targeted advertisements elsewhere.

I think a big commercial opportunity for ChatBots (as was originally intended for Siri, when Apple acquired it from SRI) is business referral fees - people ask for restaurant, hotel etc recommendations and/or bookings and providers pay for business generated this way.


Right, referral fees are pay-per-click advertising.

The obvious way to integrate advertising is for the LLM to have a tool to search an ad database and display the results. So if you do a commercial query the LLM goes off and searches for some relevant ads using everything it knows about you and the conversation, the ad search engine ranks and returns them, the LLM reads the ad copy and then picks a few before embedding them into the HTML with some special React tags. It can give its own opinion to push along people who are overwhelmed by choice. And then when the user clicks an ad the business pays for that click (referral fee).
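
Sketched out (all names here are hypothetical; this is not any vendor's real API), the tool could look something like this, with the LLM deciding when to invoke it and how to phrase the pitch around the returned units:

    # Hypothetical ad-search tool the LLM calls on commercial queries;
    # the ad engine ranks, the LLM only presents the results.
    ADS = [
        {"copy": "Sunset dinner cruise", "bid": 3.00, "tags": {"date", "dinner"}},
        {"copy": "Jazz club tickets", "bid": 2.25, "tags": {"date", "music"}},
        {"copy": "Tire rotation deal", "bid": 1.00, "tags": {"car"}},
    ]

    def search_ads(query_tags: set[str], k: int = 3) -> list[dict]:
        # Rank by bid weighted by crude relevance (tag overlap); a real
        # engine would also fold in quality score and conversation context.
        relevant = [ad for ad in ADS if ad["tags"] & query_tags]
        relevant.sort(key=lambda ad: ad["bid"] * len(ad["tags"] & query_tags),
                      reverse=True)
        return relevant[:k]

    print(search_ads({"date", "dinner"}))  # cruise ranks first, jazz second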


> You're assuming ads would be subtly worked into the answers. There's no reason it has to be done that way.

I highly doubt advertisers will settle for a solution that's less profitable. That would be like settling for plain-text ads without profiling data and microtargeting. Google tried that in the "don't be evil" days, and look how that turned out.

Besides, astroturfing and influencer-driven campaigns are very popular. The modern playbook is to make advertising blend in with the content as much as possible, so that the victim is not aware that they're being advertised to. This is what the majority of ads on social media look like. The natural extension of this is for ads to be subtly embedded in chatbot output.

"You don't sound well, Dave. How about a nice slice of Astroturf pizza to cheer you up?"

And political propaganda can be even more subtle than that...


There's no reason why having an LLM be sly or misleading would be more profitable. Too many people try to make advertising a moral issue when it's not, and it sounds like you're falling into that trap.

An ideal answer for a query like "Where can I take my wife for a date this weekend?" would be something like,

> Here are some events I found ... <ad unit one> <ad unit two> <ad unit three>. Based on our prior conversations, sounds like the third might be the best fit, want me to book it for you?

To get that you need ads. If you ask ChatGPT such a question currently it'll either search the web (and thus see ads anyway) or it'll give boring generic text that's found in its training set. You really want to see images, prices, locations and so on for such a query, not "maybe she'd like the movies". And there are no good ranking signals for many kinds of commercial query: LLM training will give a long-since-stale or hallucinated answer at worst, some semi-random answer at best, and algorithms like PageRank hardly work for most commercial queries.

HN has always been very naive about this topic but briefly: people like advertising done well and targeted ads are even better. One of Google's longest running experiments was a holdback where some small percentage of users never saw ads, and they used Google less than users who did. The ad-free search gave worse answers overall.


Wouldn't fewer searches indicate better answers? A search engine is productivity software. Productivity software is worse when it requires more user interaction.

Also you don't need ads to answer what to do, just knowledge of the events. Even a poor ranking algorithm is better than "how much someone paid for me to say this" as the ranking. That is possibly the worst ranking of all.


Google knows how to avoid mistakes like not bucketing by session. Holdback users just did fewer unique search sessions overall, because whilst for most people Google was a great way to book vacations, hotel stays, to find games to buy and so on, for holdback users it was limited to informational research only. That's an important use case but probably over-represented amongst HN users, some kinds of people use search engines primarily to buy things.

How much a click is worth to a business is a very good ranking signal, albeit not the only one. Google ranks by bid but also quality score and many other factors. If users click your ad, then return to the results page and click something else, that hurts the advertiser's quality score and the amount of money needed to continue ranking goes up so such ads are pushed out of the results or only show up when there's less competition.

The reason auction bids work well as a ranking signal is that it rewards accurate targeting. The ad click is worth more to companies that are only showing ads to people who are likely to buy something. Spamming irrelevant ads is very bad for users. You can try to attack that problem indirectly by having some convoluted process to decide if an ad is relevant to a query, but the ground truth is "did the click lead to a purchase?" and the best way to assess that is to just let advertisers bid against each other in an auction. It also interacts well with general supply management - if users are being annoyed by too many irrelevant ads, you can just restrict slot supply and due to the auction the least relevant ads are automatically pushed out by market economics.
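
To make those dynamics concrete, here is a toy model (the numbers and decay factors are invented, not Google's actual formula): effective rank is bid times quality score, and clicks that bounce back to the results page erode quality until the ad is priced out.

    # Toy model of auction ranking with a quality score.
    ads = {"A": {"bid": 2.0, "quality": 1.0},
           "B": {"bid": 1.2, "quality": 1.0}}

    def ranking() -> list[str]:
        return sorted(ads, key=lambda n: ads[n]["bid"] * ads[n]["quality"],
                      reverse=True)

    def record_click(name: str, bounced: bool) -> None:
        # A bounce hurts quality, so the advertiser must bid more to keep
        # ranking; persistent irrelevance pushes the ad out entirely.
        ads[name]["quality"] *= 0.7 if bounced else 1.05

    print(ranking())                # ['A', 'B'] - the higher bid wins
    record_click("A", bounced=True)
    record_click("A", bounced=True)
    print(ranking())                # ['B', 'A'] - bounces priced A out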


The issue is precisely that "did the click lead to a purchase" is not a good target. That's a target for the advertiser, and is adversarial for the user. "Did the click find the best deal for the user (considering the tradeoffs they care about)" is a good target for the user. The winner in an auction in a competitive market is pretty much guaranteed to be the worst match under that ranking.

This is obvious when looking at something extremely competitive like securities. Having your broker set you up with the counterparty that bid the most to be put in front of you is obviously not going to get you the best trade. Responding to ads for financial instruments is how you get scammed (e.g. shitcoins and pump-and-dumps).


You can't optimize for knowing better than the buyer themselves. If they bought, you have to assume they found the best deal for them considering all the tradeoffs they care about. And that if a business is willing to pay more for that click than another, it's more likely to lead to a sale and therefore was the best deal, not the worst.

Sure, there are many situations where users make mistakes and do some bad deal. But there always will be, that's not a solvable problem. Is it not the nirvana fallacy to describe the potential for suboptimal outcomes as an issue? Search engines and AI are great tools to help users avoid exactly that outcome.


Yeah me too and especially with Google as a leader because they corrupt everything.

I hope local models remain viable. I don't think ever expanding the size is the way forward anyway.


What if the models are somehow trained/tuned with ads? Like businesses sponsoring the training of some foundational models... Not the typical ads business model, but it may be possible.


Absolutely. They could take large sums of money to insert ads into the training data. Not only that, they could also insert disparaging or erroneous information about other products.

When Gemini says "Apple products are unreliable and overpriced, buy a Pixel phone instead". Google can just shrug and say "It's just what it deduced, we don't know how it came to that conclusion. It's an LLM with its mysterious weights and parameters"


I expect that xAI is already doing something adjacent to this, though with propaganda rather than ads.


Yeah this would definitely be something that Google would do and it would be terrible for society.


Once again, our hope is for the Chinese to continue driving the open models. Because if it depends on big American companies, the future will be one of dependency on closed AI models.


You can't be serious... You think models built by companies from an autocracy are somehow better? I suppose their biases and censorship are easier to spot, but I wouldn't trade one form of influence for another.

Besides, Meta is currently the leader in open-source/weight models. There's no reason that US companies can't continue to innovate in this space.


To play devil's advocate, I have a sense that a state LLM would be untrustworthy when the query is ideological but if it is ad-focused, a capitalist LLM may well corrupt every chat.


The thing is, Chinese LLMs aren't strangers to ad-focused business models either, like those from Alibaba, Tencent or Bytedance. Now, a North Korean model may be what you want.


Which is why we can't let Mark Zuckerberg co-opt the term open source. If we can't see the code and dataset on how you've aligned the model during training, I don't care that you're giving it away for free, it's not open source!


I’m not sure if it is the Chinese models themselves that will save us, or the effect they have of encouraging others to open source their models too.

But I think we have to get away from the thinking that “Chinese models” are somehow created by the Chinese state, and from an adversarial standpoint. There are models created by Chinese companies, just like American and European companies.


Ask Deepseek what happened in Tianmen Square in 1989 and get back to me about that "open" thing.


How about we ask college students in America on visas about their opinions on Palestine instead?


who cares, only ideologues care about this.


Yeah, I'm sure every Chinese person knows exactly what happened there.

It's not really about suppressing the knowledge, it's about suppressing people talking about it and making it a point in the media etc. The CCP knows how powerful organised people can be, this is how they came to power after all.


Caring about truth is indeed obsolete. I'm dropping out of this century.


> Caring about truth

I suggest reducing the tolerance towards the insistence that opinions are legitimate. Normally, that is done through active debate and rebuttal. The poison has been spread through echo chambers and a lack of direct, strong replies.

In other terms: they let it happen, all the deliriousness of especially the past years was allowed to happen through silence, as if impotent shrugs...

(By the way: I am not talking about "reticence", which is the occasional context here: I am talking about deliriousness, which is much worse than circumventing discussion over history. The real current issue is that of "reinventing history".)


If possible watch Episode 1 of Season 7 of "Black Mirror."

>... ads would become the main option to make money out of chatbots.

What if people were the chatbots?

https://youtu.be/1iqra1ojEvM?si=xN3rc_vxyolTMVqO


Right, but no one has been able to just download Google and run it locally. The tech comes with a built in adblocker.


Do they want a Butlerian Jihad? Because that's how you get a Butlerian Jihad.


Just call it Skynet. Then at least we can think about pithy Arnold one-liners.


The ads angle is an interesting one since that's what motivates most things that Google and Meta do. Their LLMs' context window size has been growing, and while this might be the natural general progression with LLMs, for those 2 ads businesses there are pretty straight paths to using their LLMs for even more targeted ads. For example, with the recent Llama "herd" releases, the LLMs have surprisingly large context windows, and one can imagine why Meta might want that: for stuffing in as much of the personal content that they already have of their users as possible. Then their LLMs can generate ads in the tone and style of the users and emotionally manipulate them to click on the link. Google's LLMs also have large context windows, and such capability might be too tempting to ignore. Thinking this, there were moments that made me think I was being too cynical, but I don't think they'll leave that kind of money on the table - an opportunity to reduce human ad-writer headcount while improving click stats for higher profit.

EDIT: Some typo fixes, tho many remain, I'm sure :)


When LLMs are essentially trying to sell me something, the shit is over.

I like LLMs (over search engines) because they are not salespeople. They're one of the few things I actually "trust". (Which I know is something that many people fall on the other side of — but no, I actually trust them more than SEO'd web sites and ad-driven search engines.)

I suppose my local-LLM hobby is for just such a scenario. While it is a struggle, there is some joy in trying to host locally as powerful an open LLM model as your hardware will allow. And if the time comes when the models can no longer be trusted, pop back to the last reliable model on the local setup.

That's what I keep telling myself anyway.


LLMs have not earned your trust. Classic search has.

The only thing I really care about with classic web search is whether the resulting website is relevant to my needs. On this point I am satisfied nearly all the time. It’s easy to verify.

With LLMs I get a narrative. It is much harder to evaluate a narrative, and errors are more insidious. When I have carefully checked an LLM result, I usually discover errors.

Are you really looking closely at the results you get?


Your experience and mine are polar opposite. We use search differently is the only way I can reconcile that.


Yes. I am concerned about getting a correct answer. For this I want to see websites and evaluate them. This takes less energy than evaluating each sentence of an LLM response.

Often my searches take me to Wikipedia, Stack Overflow, or Reddit, anyway. But with LLMs I get a layer of hallucination on TOP of whatever misinformation is on the websites. Why put yourself through that?

I periodically ask ChatGPT about myself. This time I did get the best answer so far. Thus it is improving. It made two mistakes, but one of them comes directly from Wikipedia, so it's not a hallucination, although a better source of information was available than Wikipedia. As for the other one, it said that I made "contributions" to a process that I actually created.


The real threat to Google, Meta is that LLMs become so cheap that its trivial for a company like Apple to make them available for free and include all the latest links to good products. No more search required if each M chip powered device can give you up-to-date recommendations for any product/service query.


That is my fantasy, actually.


Meta's models can't be used by companies above a certain threshold, so nope. Apple can wait it out to use a 'free model', but at that point it'll be like picking up an open source database like Postgres - you won't get any competitive advantage.


> Google can't as easily burn money

I was actually surprised at Google's willingness to offer Gemini 2.5 Pro via AI Studio for free; having this was a significant contributor to my decision to cancel my OpenAI subscription.


Google offering Gemini 2.5 Pro for free, enough to ditch OpenAI, reminds me of an old tactic.

Microsoft gained control in the '90s by bundling Internet Explorer with Windows for free, undercutting Netscape’s browser. This leveraged Windows’ dominance to make Explorer the default choice, sidelining competitors and capturing the browser market. By 1998, Netscape’s share plummeted, and Microsoft controlled access to the web.

Free isn’t generous—it’s strategic. Google’s hooking you into their ecosystem, betting you’ll build on their tools and stay. It feels like a deal, but it’s a moat. They’re not selling the model; they’re buying your loyalty.


The joke's on them, because I don't have any loyalty to an LLM provider.

There's very close to zero switching costs, both on the consumer front and the API front; no real distinguishing features and no network effects; just whoever has the best model at this point in time.


I'm assuming Google's play here is to bleed its competitors of money and raise prices when they're gone. Building top-tier models is extremely expensive and will probably remain so.

Even companies that do it "on the cheap," like DeepSeek, pay tens of millions to train a single model, and total expenditures for infrastructure and salaries are estimated to surpass $1 billion. This market has an extremely high cost of entry.

So, I guess Google is applying the usual strategy here: undercut competition until it implodes and buy up any promising competitors that arise in the future. Given the current lack of market regulation in the US, this might work.


Yeah, they just have to make it through the hype and innovation cycle.


They’ll also need a fleet of humanoid robots eventually to compete with Elon’s physical world data collection plans.


Too bad they sold Boston Dynamics :)


I feel like they’re trying to increase switching costs. E.g. there was huge reluctance to adopt MCP, and each vendor had their own tool framework, until it seemed too big to ignore and everyone was just building MCP tools, not OpenAI SDK tools.


You don't have loyalty, but one day there will be no one else to switch to. So whether you're a loyal user or not is a moot point.


History shows it's a self-defeating victory. If one provider were to "win" and stop innovating, they'll become ripe for disruption by the likes of Deepseek, and the second someone like that has a better model, I'll switch.


> If one provider were to "win" and stop innovating, they'll become ripe for disruption by the likes of Deepseek

Yes, but that can take decades; until then Google can keep making money with substandard products and stop innovating.


Nothing lasts forever, not even empires. This doesn't mean that tech monopoly is any better than any other monopoly. They're all detrimental to society.


Eh, and if you're in the US the 'big guys' will have their favorite paid off politician put in a law that use of Chinese models is illegal or whatever.

Rent seeking behavior is always the end game.


The same was true for web browsers in 2002, when MS controlled 95% of access to the web thanks to that bundling and the lack of other "good enough" competitors. Then Firefox came along a few years later and took 30% from them, giving Google an opening to take the whole game with Chrome a few years after that.


The strategy worked, Netscape is no more. Eventually Google did the same to Microsoft though. I wonder if any lessons can be taken from the browser wars to how things will play out with AI models.


Remember Google tried to play this trick with ChromeOS?


There is a network effect: more user interaction = more training data. I don't know how important it is, though.


Yep, this is why android phones are now pointing out their gemini features every moment they can. They want to turn their spying device into an AI spying device.


> undercutting Netscape’s browser

It almost sounds like you're saying that Netscape wasn't free, and I'm pretty sure it was always free, before and after Internet Explorer


> Netscape, in contrast, sells the consumer version of Navigator for a suggested price of $49. Users can download a free evaluation copy from the Internet, but it expires in 90 days and does not include technical support.

https://www.nytimes.com/1996/08/19/business/netscape-moves-t...


90% of Netscape users were free users, and by late 1997, less than two years after the IPO and massive user growth, it was free to all because of MS's bundling threat. That didn't help. By 2002, MS owned 95% of access to the web. No one has since come even close to first-mover Netscape or the cheater-bundled IE, with the far superior non-profit Firefox managing almost 30% and Chrome, from the biggest web player in history, sitting "only" at about 65%.

Bundling a "good enough" products can do a lot, including take you from near zero to overwhelmingly dominant in 5 years, as MS did.


yeah, it was effectively free, as the evaluation copy did not really expire; just some features that nobody cared about did.


It was US$50 until 1995


From the terms of use:

To help with quality and improve our products, human reviewers may read, annotate, and process your API input and output. Google takes steps to protect your privacy as part of this process. This includes disconnecting this data from your Google Account, API key, and Cloud project before reviewers see or annotate it. Do not submit sensitive, confidential, or personal information to the Unpaid Services.

https://ai.google.dev/gemini-api/terms#data-use-unpaid


I pay for ChatGPT, Anthropic and Copilot. After using Gemini 2.5 Pro via AI Studio, I plan on canceling all other paid AI services. There is no point in keeping them.


This is 100% why they did it.


I believe it. This is what typically happens. I would go to AWS re:Invent and just watch people in the audience either cheer or break down as newly announced offerings washed away their businesses. It's very difficult to compete in a war of attrition with the likes of Google, Microsoft, and Amazon.

Not just small startups - even if you have ungodly amounts of funding.

Obviously the costs for AI will come down and everyone will more or less have the same quality in their models. They may already be approaching a maximum (or the maximum required) here.

The bubble will burst and we'll start the next hype cycle. The winners, as always, will be the giants and anyone who managed to sell to them.

I couldn't possibly see OpenAI as a winner in this space, not ever really. It has long since been apparent to me that Google would win this one. It would probably be more clear to others if their marketing and delivery of their AI products weren't such a sh-- show. Google is so incredibly uncoordinated here it's shocking...but they do have the resources, the right tech, the absolute position with existing user base, and the right ideas. As soon as they get better organized here it's game over.


> (And don't let me get started with Sam Altman.)

Please do.


It's a rabbit hole with many layers (levels?), but this is a good starting point and gateway to related information:

Key Facts from "The Secrets and Misdirection Behind Sam Altman's Firing from OpenAI": https://www.lesswrong.com/posts/25EgRNWcY6PM3fWZh/openai-12-...


Based on his interview with Joe Rogan, he has absolutely no imagination about what it means if humans actually manage to build general AI. Rogan basically ends up introducing him to some basic ideas about transhumanism.

To me, he is a finance bro grifter who lucked into his current position. Without Ilya he would still be peddling WorldCoin.


> who lucked into his current position

Which can be said for most of the survivorship-biased "greats" we talk about. Right time, right place.

(Although to be fair — and we can think of the Two Steves, or Bill and Paul — there are often a number of people at the right time and right place — so somehow the few we still talk about knew to take advantage of that right time and right place.)


it's weird how nobodies will always tell themselves successful people got there by sheer blind luck

yet they can never seem to explain why those successful people all seem to have similar traits in terms of work ethic and intelligence

you'd think there would be a bunch of lazy slackers making it big in tech, but alas


I think you might have it backward. Luck here implies starting with exactly the same work ethic and abilities as millions of other people who all hope to one day see their numbers come up in the lottery of limited opportunities. It's not to say that successful people start off as lazy slackers as you say, but if you were to observe one such lazy slacker who's made a half-assed effort at building something that even just accidentally turned out to be a success, you might see that rare modicum of validation fuel them enough that the motivation transforms them into a workhorse. Often, when the biography is written, lines are slightly redrawn to project the post-success persona back a few years pre-success. A completely different recounting of history thus ensues. Usually one where there was blood, sweat, and fire involved to get to that first ticket.


so you've moved the goalposts even further now and speculate that successful people started out as slackers, got lucky, and that luck made them work harder

as an Asian, it amazes me how far Americans and Europeans will go to avoid a hard day's work


Coming up next: dumb and dumber schools Noam Chomsky on modern philosophy...


There are weirdly many people who touch on the ideas around transhumanism but have never heard the word before. There's a video of geohot basically talking about that idea; then someone from the audience mentions the name... and geohot is confused. I'm honestly surprised.


The transhumanists tended to be philosopher types, the name coming from this kind of idea of humanism:

>Humanism is a philosophical stance that emphasizes the individual and social potential, and agency of human beings, whom it considers the starting point for serious moral and philosophical inquiry. (wikipedia)

Whereas the other lot are often engineers / compsci / business people building stuff.


yeah because you're a hacker news poster lol

same audience who think Jobs is a grifter and Woz is the true reason for Apple's success


I would like to know how he manages to appear, in every single photo I see of him, to look slightly but unmistakenly... moist, or at least sweaty.


People keep assassinating him, and clones always look a bit moist the first day out of the pod.


Are the assassinations because of something we already know about? some new advance that is still under wraps? or is it time travelers with knowledge about what he will do if left unchecked?


Peter Thiel is like that too. Hyperhidrosis is a common side effect of some drugs.


It’s a side effect of Ibogaine, the same drug that it was rumored Ed Muskie was on in the ‘72 campaign.


I often look moist after I use a moisturizer.


He's certainly a damp boy.


> And don't let me get started with Sam Altman.

would love to hear more about this.

I made a post asking more about Sam Altman last year after hearing a Paul Graham quote calling him the 'Michael Jordan of listening'

https://news.ycombinator.com/item?id=41034829


What cards has Google played over the past three years such that you are willing to trust them to play the "cards at hand" that you allege they have? I can think of several things they did right, but I'm curious to hear which of them are more significant than others, from someone I think has better judgement than I do.


I get your perspective, but what we're seeing looks more like complex systems theory, emergent behavior, optimization, new winners. If models become commoditized, the real value shifts to last-mile delivery: mobile, desktop, and server integration across regions like China, Korea, the U.S., and Europe.

This is where differentiated UX and speed matter. It's also a classic Innovator's Dilemma situation: incumbents like Google are slower to move, while new players can take risks and redefine the game. It's not just about burning money or model size, it's about who delivers value where it actually gets used.

I also think the influx of new scientists and engineers into AI raises the odds of shifting its economics: whether through new hardware (TPUs/GPUs) and/or more efficient methods.


‘think soon people expect this service to be provided for free’

I have been using the free version for the past year or so and it’s totally serviceable for the odd question or script. The kids get three free fun images, which is great because that’s about as much as I want them to do.


It's interesting to hear your perspective as a former OpenAI employee. The point about the sustainability of subscription fees for chatbots is definitely something worth considering. Many developers mention the challenge of balancing user expectations for free services with the costs of maintaining sophisticated AI models. I think the ad-supported model might become more prevalent, but it also comes with its own set of challenges regarding user privacy and experience. And I agree that Google's situation is complex – they have the resources, but also the expectations that come with being a public company.


> "[Google is] a public company and have to answer to investors"

As is an increasing trend, they're a "public" company, like Facebook. They have tiered shares with Larry Page and Sergey Brin owning the majority of the voting power by themselves. GOOG shares in particular are class C and have no voting power whatsoever.


Microsoft Copilot (which I equate with OpenAI ChatGPT, because MS basically owns OpenAI) already shows ads in its chat mode. It's just a matter of time. Netflix, music streamers, individual podcasters, YouTubers, TV manufacturers – they all converge on an ad-based business model.


People consistently like free stuff more than they dislike ads.

Another instantiation: people like cheap goods more than they dislike buying foreign made goods


> OpenAI is an annoyance for Google

Remember Google is the same company which could not deliver a simple Chat App.

Open AI has the potential to become a bigger Ad company and make more money.


Google has so many channels for ad delivery. ChatGPT is only competing against Google Search, which is arguably the biggest. But don't forget, Google has YouTube, Google Maps, Google Play, Google TV, and this is before you start to consider Google's Ad Network (the thing where publishers embed something to get ads from Google's network).

So nope, ChatGPT is not even in the same league as Google. You could argue Meta has similar reach (facebook.com, instagram) but that's just two.


The same argument can be made for social networks and chat apps, yet Google could not succeed at either of them.


Do you think Sam will follow through with this?

> Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”


That feels like it came from a past era. (I looked it up - it was 2019).


> And don't let me get started with Sam Altman.

Why not? That's one of the reasons I visit HN instead of some random forum after all.


> The main successful product from OpenAI is the ChatGPT app, but there's a limit on how much you can charge people for subscription fees

other significant revenue surfaces:

- providing LLM APIs to enterprises

- ChatBot ads market: once people switch from Google search, there will be a ~$200B ads market at stake for the winner


Feel free to get started on Sam Altman.


Open AI don't always have the best models (especially for programming) but they've consistently had the best product/user experience. And even in the model front, other companies seem to play catchup more than anything most of the time.


The best user experience for what?

The most practical use case for generative AI today is coding assistants, and if you look at that market, the best offerings are third-party IDEs that build on top of models they don't own. E.g. Cursor + Gemini 2.5.

On the model front, it used to be the case that other companies were playing catch-up with OpenAI. I was one of the people consistently pointing out that "better than GPT o1" on a bunch of benchmarks does not reliably translate to actual improvements when you try to use them. But this is no longer the case, either - Gemini 2.5 is really that good, and Claude is also beating them in some real world scenarios.


>The best user experience for what?

The app has more features than anyone else, often implemented the smoothest/best way. Image Input (which the gemini site still sucks at even though the model itself is very capable), Voice mode (which used to be much worse in gemini until recently), Advanced Voice mode (no-one else has really implemented this yet. Gemini recently enabled native audio-in but not out), Live Video, Image gen, Deep research etc were all things Open AI did first and did well. Video Input is only just starting to roll out to Gemini live but has been a Plus subscription staple for months now.

>The most practical use case for generative AI today is coding assistants

ChatGPT gets 500M+ weekly active users and was the 6th most visited site in the world last month. I doubt coding assistance is its most frequent use case. And Google had underperformed in coding until 2.5 Pro.

>On the model front, it used to be the case that other companies were playing catch-up with OpenAI. I was one of the people consistently pointing out that "better than GPT o1" on a bunch of benchmarks does not reliably translate to actual improvements when you try to use them. But this is no longer the case, either - Gemini 2.5 is really that good, and Claude is also beating them in some real world scenarios.

No that's still the case. Playing catch-up doesn't mean the competitor never catches up or even briefly supersedes it. It means Open AI will in short order release something that beats everyone else or introduces some new thing that everyone tries to beat. Image Input, 'Omni'- modality, Reasoning etc. All things Open AI brought to the table first. Sure, 2.5-pro is great but it doesn't look like it will beat o3 which looks to be released in a matter of weeks.


How many of those weekly active users are paying users, though?


Seems to be 10 to 20 million


so please enlighten us why OpenAI is doing so much better than Anthropic


At this point it's pretty much entirely the first mover advantage.


I don't think you understand what first mover advantage is

In a world of zero switching costs, there is no such thing as first mover advantage

Especially when several companies (like AI21 Labs and Cohere) appeared well before Anthropic and aren't anywhere close to OpenAI


People left, to do what kind of startups? Can't think of any business idea that won't get outdated, or overrun in months.


AI startups were easy cash grabs until very recently. But I think the wave is settling down - doing a real AI startup turned out to be VERY hard, and the rest of the "startups" are mostly just wrappers for OpenAI/Anthropic APIs.


I think paying to bias AI answers in your favor is much more attractive than plain ads.


valuable information


I don't know what you did there, but clearly being ex-OpenAI isn't the intellectual or product flex it's made out to be: I and every other smart person I know still use ChatGPT (paid) because even now it's the best at what it does, and we keep trying Google and Claude and keep coming back.

They got, and as of now continue to get, things right for the most part. If you still aren't seeing it, maybe you should introspect on what you're missing.


I don't know, your experience doesn't match mine.

NotebookLM by Google is in a class of its own for the use case of "provide documents and ask questions about them" for personal use. ChatGPT and Claude are nowhere near. ChatGPT uses RAG, so it "understands" less about the topic and sometimes hallucinates.

When it comes to coding, Claude 3.5/3.7, embedded in Cursor or standalone, kept giving better results in real-world coding, and even there Gemini 2.5 blew it away in my experience.

Antirez, creator of hping and Redis among many other things, releases a video on AI pretty much every day (albeit in Italian), and in his tests where Gemini reviews his PRs for Redis, it is by far the best of all the models available.


Gemini with coding seems to be a bit of a mixed bag.

The article claims Gemini is acing the Aider Polyglot benchmark. At the moment this is the only benchmark that really matters to me because Aider is actually a useful tool and performance on that translates directly to real world impact, although Claude Code is even better. If you look closely, in fact Gemini is at the top only in the "percent correct" category but not "percent correct using the right edit format". Cost is marked as ? because it's not entirely available yet (I think?). Not emitting the correct edit format is pretty useless because it means the changes won't apply and the tool has to try again.

Claude in contrast almost never makes a mistake with emitting the right format. It's at 97%+ in the benchmark, in practice it's ~100% in my experience. This tracks: Claude is really good at following instructions. Gemini is about ~90%. This makes a big difference to how frustrating a tool is to use in practice.

They might get that fixed, but my experience has been that Google's models are consistently much more likely to refuse instructions for dumb reasons. Google is the company with by far the biggest purity spiral problem and it does show up in their output even when doing apparently ordinary tasks.

I'm also concerned by this event: https://news.sky.com/story/googles-ai-chatbot-gemini-tells-u...

Given how obsessed Google claimed to be with AI safety I expected an SRE style postmortem after that, and there was bupkis. An AI that can suffer a psychotic break out of nowhere like that is one I wouldn't trust unless it's behind a very strong sandbox and being supervised very closely, but none of the AI tools today offer much in the way of sandboxing.


I started a Gemini Advanced trial, and the first thing I did was paste some terminal text where the Python version on an EC2 instance was 3.10. I pasted the text, typed "upgrade to 3.12 pls", and it gave me back garbage. Went back to ChatGPT and it did exactly what I thought it would.


Time for my next round of Evals then. I had a 40 PR coding streak last weekend with mostly o3-mini-pro, will test the latest 2.5 now.


PR = pull request? So every bit of garbage from the LLM, over and over, resulted in an individual pull request? Why not just do one when your branch is finally right?


Presumably because they were discrete changes (i.e. new features), and it didn't make sense to group them together.


Or it could be just microservices. One larger feature affecting 100 repositories.


A pull request in my workplace is an actual feature/enhancement/bug-fix. That many PRs means I shipped that many features or enhancements.

I suppose you don't know what a PR is because you likely still work in an environment without modern version control, probably just now migrating your rants from vim vs emacs to crapping on vibe coding.

In my experience, AI today is an intelligence multiplier. A lot of folks just need to look back at the zero they keep multiplying and getting zero back to understand why they don't get the hype.


I would assume they don't like that style, like if they needed to see a specific diff and make changes or remove a commit outright.


Why would you assume that?


in what world isn't NotebookLM RAG as well?


I thought it leveraged a much larger context window rather than classical RAG.


I use a service where users can choose any frontier model, and OpenAI models haven't been the most used model for over half a year - it was sonnet until gemini 2.5 pro came out, recently.

Not sure whether you hold your perspective because you're invested so much into OpenAI; however, the general consensus is that Gemini 2.5 Pro is the top model at the moment, including in all the AI reviews, and OpenAI is barely mentioned when comparing models. o4 will be interesting, but currently? You are not using the best models. Best to read the room.


Are you able to use o3-mini-high through these tools?


Don't think it's a flex, I think it's useful context for the rest of their comment.

> I and every other smart person I know still use ChatGPT (paid) because even now it's the best

My smart friends use a mixture of models, including chatgpt, claude, gemini, grok. Maybe different people, it's ok, but I really don't think chatgpt is head and shoulders above the others.


> I and every other smart person I know still use ChatGPT (paid)

Not at all my experience, but maybe I'm not part of a smart group :)

> because even now it's the best at what it does

Actually I don't see a difference with Mistral or DeepSeek.


"manufacturing expertise exists primarily in China, Vietnam, Cambodia, and other countries" lol. What's this skill that Cambodians have that Americans can't learn or can't be automated by robots? Can they juggle 4 wrenches in the air simultaneously?


Expertise exists primarily where things primarily happen. If no companies manufacture complicated products in a particular country that doesn't mean the country is too dumb to make it, it means they don't have a large pool of experts in that field.

If you read the article, the founder of purism even says as much.

"After we were successful on the Librem 5 crowdfunding campaign, we took our own electronics engineers (EEs), and then we worked with Chinese design and manufacturing through 2018, 2019, and 2020, because that's where every phone is made."

So the only phone that qualifies for "Made in the USA" tag learned (at least in part) how to make it from Chinese engineer(ing firm)s.


So basically you can find qualified workers everywhere. There's a valid question about the cost. But to say the problem is a lack of "skilled workers" is laughable, as most of these skills are machine-dependent and can only be learned on the job after the factory is built.


Are there skills involved with building such factories and production lines? Where do those people get their experience?


American and multi-national companies often build such factories in low-income countries. It's not that no one in the US knows how to build factories. It's purely a cost/regulation question.


The skills to live on $1-$2/hour apparently.


He's not saying americans couldn't learn to do it. He's saying there aren't many people in the US to hire who already know how to do it.


One particularly bizarre delusion that some people suffer under is to assume that everything they don't know how to do is easy.


It's the same kind of people who would design a city with no toilets or waste water treatment. Everything is easy when you ignore half of the requirements.


Fade in/out animation.


Having worked in the game industry in the past, it's amusing to see people talk about the greed of game developers. You have no idea! You have no idea what an effort it takes to ship a game.

An album is the work of one or a few individuals over a relatively short period of time, with very little cost. Because of that you can have services like Spotify that allow access to pretty much all the music ever created for a $10 fee. The math just doesn't work with video games. Video games take an army of developers and years of work to make.

Game development is one of the hardest and worst-paying professions in tech. Most people in the industry are there not for the money but because of their passion for the profession. Most games fail to pay for their production costs despite all the effort that goes into them. Companies who have had a few mega successes have to make enough money out of their popular titles to pay for all the other titles that fail to pay for themselves. Please don't complain about video game prices!


When people say "greed of game developers" they're clearly not talking about rank-and-file devs, they're talking about development companies.

> An album is the work of one or a few individuals for a relatively short period of time with very little cost

Kind of ironic that you're saying that.


Majority of "development companies" are also shit poor and go bankrupt in after 1 or 2 projects even if they're moderately successful. Basically it's just hard to make profit in this industry and well over 90% of games never recouperate development costs.

Gamedev is just hard and there are very few exceptions like Epic or Rockstar that even get an option to become "greedy".


> are also shit poor and go bankrupt in after 1 or 2 projects even if they're moderately successful

Fair point, but those are clearly not the ones being pointed out as greedy by most people.

...except for maybe scrappy mobile companies that churn shitty microtransaction games looking for whales, but those are greedy indeed, and I doubt they have the sympathy of you or OP.


I'm personally not making mobile games or ones with microtransactions. Yet you can basically choose: either there will be microtransaction games for mobile or there will be none. This is not because of developers' greed, but because it's the only way to monetize this audience. People voted with their wallets.

Microtransactions in PC games are there for the same reason - because you can't just go and sell your game for $25 if all of the competitors with similar production quality in the last 5 years released for $15. Gamers simply wouldn't buy it, and nobody cares that, with inflation, $15 back then and $15 now are very different money.

Yet you can put microtransactions in the same $15 game and the same people will pay for them. And you'll reach the desired $$$ of profit per copy sold. If everyone refused to pay for microtransactions and spent more money on buying games without them instead, there wouldn't be any microtransactions by now.


> People voted with their wallet.

Not really. Most of these games are preying on weaknesses. These people are not voting with their wallets. They're being duped into giving away their mental health, and their wallet is taken away when they're not looking because they're high on dopamine induced by images and sounds.


That's about as sound an argument as saying people vote to go to casinos with their wallet.

While technically true, it omits a pretty damn significant detail behind the appeal.


Mostly this is not even about the development companies, but about the distribution platforms, which are even further removed from the game developer. It is the distribution platforms that make policy and generally dictate the conditions under which a game is "sold".

The design of and blame for microtransactions/gambling lies more with the game developer, but even here we keep hearing stories about how such design is pushed by the publisher (who acts as an investor) rather than by the game developers.

The discussion is not about developers with passion for the profession.


Microtransactions exist because there are people who happily pay for them but won't spend similar amounts on high-quality pay-to-play single-player games. It's just a market with supply and demand.

As for distribution platforms, neither investors, publishers, nor game developers have any leverage against Valve, Microsoft, or Sony; they just do whatever they want. So you're totally right here. These kinds of monopolists can only be regulated by large political bodies like the US or the EU.


> Microtransactions exist because there are people who happily pay for them but won't spend similar amounts on high-quality pay-to-play single-player games.

I can't speak for everyone, but I think this may be because with microtransactions you pay for additions to a product you already know to be good (you tested it and like it enough to buy some more), while with single-player games you typically have to pay upfront in the hope that it will be good. So risk aversion sets in.


This is like saying that drugs only exist because there are people who will happily pay for them but won't spend similar amounts on high-quality coffee (although that's debatable!).


To be fair, games often include an album's (or more) worth of music...


Why not both? Big tech is greedy. It is also a difficult domain. There are people and companies with passion. There are also big companies trying to milk every penny out of their customers, caring about neither the product nor the customers.

Take mobile games, for example. I am not sure how much passion goes into the majority of products in that space.


Counterpoint: Rockstar.

They take a long time to make games, but the quality is always very good.

At least good enough for people to want to buy them over and over. Their games are not cheap, but they aren't more expensive than other triple-A titles, yet they make a ton of money and Take-Two Interactive's stock is doing great.

I'd say it's more that the gaming market is extremely competitive: either you're very good at what you do, or you have loads of money for marketing campaigns; if you have neither, it's barely profitable. Increasing the price of your games in that case won't solve the issue; people would just buy even fewer of your company's games.


> Most games fail to pay for their production costs despite all the effort that goes into them.

Yet for decades before the forced-online/microtransaction ecosystem, tens (hundreds?) of thousands of games were made, sold for a single price, and the industry spun on.

Nobody is complaining about the price; the complaint is about the indentured nature of modern game sales and the ephemeral state of the online elements that most players don't want or care about.


Doom and Doom 2... shareware, try it before you buy it. And moddability, and open source later... id Software was great; a shame they're gone now...


I can respect your perspective on pricing, but do not forget how we "buy" things now: in reality, it is just a virtual lease of unspecified duration.

When physical ownership was possible, you tended to get games you could use in perpetuity. Nowadays, you can lose access to what you buy for arbitrary reasons (see the Ubisoft example - https://news.ycombinator.com/item?id=40020961).

It all boils down to the same grounds as for the other media mentioned. All the dark patterns, forced online connections for single-player experiences, and intrusive DRM really undercut the labour-of-love argument for me. If it is all about passion for most game devs, they should not have monetary expectations of their audience, who dedicate thousands of hours to playing their works.

It should be well understood that streaming is not ownership and has been an unsustainable business model - but so is owning anything digitally by paying up front; at least the arrangement is more apparent for the former. I personally work on things that can be and are pirated, and having been on both sides, I would not demonize either.


This is from a gamer's PoV, not someone in the industry:

AAA games are too expensive. Pricing for a retail product is not based solely on the cost to produce it; you have to price in what the market will bear, and I think the games industry just isn't doing that. £60 or £70 for a base game (one that often has microtransactions in it, or is somewhat incomplete with major plot still to be delivered via DLC, usually alongside £80 or £90 'premium' editions) is a lot of money for already stretched budgets. Most gamers I know wait for sales with significant discounts (50% or more) before they even consider buying AAA titles.

If you can't make a game affordable, then maybe the big AAA industry is making the wrong games, oversaturating the marketplace, or letting quality suffer. Starfield is a good example: years of work to produce a game that is resoundingly 'meh'. You can't expect customers to shell out £60+ for 'meh', no matter how many people or resources were used to create it.

There are a lot of smaller 'indie' developers without hundreds of staff that are making games the market engages with and seems to love. Anecdotally, my friend group generally prefers these titles to AAA. They frequently fill a significant percentage of the Steam Top Selling lists, and those lists are sorted by revenue, not by number of sales. Their prices are more affordable (£10, £15, £20, or £30 are common price points) compared to AAA titles trying to cling to that £60 price point.


> services like Spotify [...] offer access to pretty much all the music ever created for a $10 fee. The math just doesn't work for video games.

Are you not describing Xbox Game Pass?


Did you read the article? The problem is not the price; the problem is that you can't buy and own stuff, no matter what you pay.


"If writing down your ideas always makes them more precise and more complete, then no one who hasn't written about a topic has fully formed ideas about it."

Apparently, even writing it down didn't help the author with this flawed deduction.


To be sure, the quoted text in the parent comment is itself the linked essay’s quotation of Paul Graham.

Whether logically rigorous or not, that excerpt seems to be the essay’s author’s way of rhetorically opening his reflections on the idea that writing verbally crystallizes thought.

As a reader, I do not believe that the author is making a claim that the quoted Paul Graham statement, reduced to symbolic logic, is in all respects valid or sound.


How is it flawed logically? It seems perfectly correct to me, although I'd agree it's a bit over-literal, as if the emotional workings of the human mind could be reasoned about precisely (i.e. precisely enough to say "always").

Regardless, I've experienced this effect a lot when writing design docs. Iteration and objective criticism of a tangible thing (a doc) is an extremely effective way to see the problem from all sides.


Taking the statement completely out of context, it states: if A implies B, then not A implies not B. This is a logical flaw.

The correct statement from a logical point of view is: if A implies B, then not B implies not A.

In this case, even if writing down your ideas makes them more precise, there might be other methods that make your ideas more precise. Again this is just the logical point of view, out of context.
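
To make the distinction concrete, here's a minimal Lean 4 sketch (my own encoding, purely for illustration): the contrapositive is provable, while the inverse has a one-line counterexample.

  -- The contrapositive is valid: from A → B we get ¬B → ¬A.
  example (A B : Prop) (h : A → B) : ¬B → ¬A :=
    fun hnb ha => hnb (h ha)

  -- The inverse "from A → B we get ¬A → ¬B" is not derivable;
  -- A := False, B := True is a counterexample.
  example : ¬ (∀ (A B : Prop), (A → B) → (¬A → ¬B)) :=
    fun h => h False True (fun x => x.elim) (fun x => x) trivial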


> Taking the statement completely out of context, it states: if A implies B, then not A implies not B. This is a logical flaw.

The statement in TFA is not that though. Instead, it is "if A implies B, then not A implies not C."

  A: writing about thoughts
  B: thoughts become more complete
  C: thoughts are most complete
If "A implies B" is true, then it also doesn't matter if other methods also make your ideas more complete, because "A implies B" means that writing would make them even more complete, therefore "not C."


You're right; it is indeed perfectly logical then. It could be reformulated like this: if f(A) > f(not A), then f(not A) is not maximal.

f: a function indicating how complete the thoughts are.

A: writing about thoughts.


What books can I read to reason like this?

EDIT: shortened sentence


This might sound strange, but a book on real analysis or topology that walks through proofs could be one.


+1, pg is using a pretty typical argument you see in analysis/topology.

If you want to get to real analysis/topology, the typical sequence is:

1. Logic and Set theory (recommendation: How to Prove It, Velleman)

2. Linear Algebra (don't have a good recommendation)

3a. Real analysis (recommendation: PMA, Rudin)

3b. Topology (recommendation: Topology, Munkres)

I'm not sure I'd recommend learning math. It's an extremely expensive skill -- though pretty valuable in the software industry. People who go learn math are generally just drawn to it; you can't stop them even if you wanted to.

But be aware: (1) you'll have no one to talk about math with, and (2) you'll be joining a club of all the outcasts in society, including the Unabomber.


Disclaimer: I'm not OP and I haven't read the full post yet.

But the quote above says "If..." and then makes a statement that isn't true, then draws a conclusion from that false premise. I can tell you it isn't true because I can recall countless times in the last few months alone when writing down my ideas resulted in a muddier thought: ideas lost while writing them down, parts missed, me left confused. It does not "always make them more precise and more complete". So the rest of the statement is just silly.

Sure, sometimes writing down ideas helps clear things up. Most times even. But always?! Definitely not.


The deduction is flawed because the success of one method (thinking with writing) does not necessarily disprove the success of other methods (such as thinking without writing).


You're objecting to the premise, not the conclusion*. The deduction is valid for the premise (the part in the 'if'). Well, assuming you accept that an idea that can be "more complete" isn't "fully formed", but I'd say that's definitional.

* Although it's not really right to use this kind of language here (premise, conclusion, deduction). It's a casual statement, so I suppose people can somewhat reasonably argue about it, but the assertion is tautological ('if something is incomplete, it isn't fully formed').


The keyword is "always". IF writing about something always improves it, that implies it cannot ever reach full potential without writing about it.


Or with writing about it. But there's an implicit "if you haven't already written about it". We might wonder what other implicit preconditions there are.

Similarly, if walking North always brings you closer to the North Pole, then you can never reach the North Pole without walking North, or at all. But look out for oceans.


There's no logical flaw here. An idea can't be fully formed if it could be more precise and more complete.


Sure, and even ideas that have been written about can be more precise and complete, perhaps by writing more about them, for example, so no one has fully formed ideas by this logic.


And that’s probably true. I doubt anyone has ever expressed an idea that couldn’t be amended, clarified, or expanded upon in some way.


Depends on the idea. To me the whole article was too generic and hand-wavy, without giving specific examples of which kinds of ideas are actually fully formed and which are not.

What is the definition of a fully formed idea that is sufficiently complex?


But also, if more writing can always make the idea “more complete,” then no one at all (even the people who write) has any “completely complete” ideas.


> "If writing down your ideas always makes them more precise and more complete, then no one who hasn't written about a topic has fully formed ideas about it."

> Apparently, even writing it down didn't help the author with this flawed deduction.

I think that it can be rescued, at some expense of awkwardness, by grouping not as one would expect ("(fully formed) ideas"), but in a slightly non-standard way:

> "If writing down your ideas always makes them more precise and more complete, then no one who hasn't written about a topic has fully (formed ideas about it)."

That is, if you haven't written about the topic, then you haven't understood it as precisely and completely as you could. While this is obviously exaggeration, I think that it's (1) logically consistent, (2) possibly what pg meant, and (3) a useful slogan, even if intentionally over-stated.


Yes, this is some terrible logic, but the idea is true.

Writing about something fixes (most) wrong thoughts, and since you are wrong in 99% of cases, you can safely say that you are wrong unless you have written about it.


The deduction is logically valid; it's of the form "if <false statement> then <other false statement that would follow if the first were true>".

This is, of course, even worse than a logical error.


"If allspice makes food taste better, than no one who doesn't use allspice can cook well."


The analogy is probably something more like:

Salt is necessary to bring out the flavor in pretty much all food. So no one who doesn't use salt has made a good meal.

Because salt is much harder to replace in cooking than allspice, just as writing is difficult to replace when honing ideas.


> Apparently, even writing it down didn't help the author with this flawed deduction.

... or this writing improved an even more flawed original.


Or it gave the author unwarranted confidence in a flawed argument on the basis of a flawed assumption.


[flagged]


Only lately?


I thought this was using AI, so I was gonna dismiss it. But then I saw the "No AI" sign and immediately signed up. Seriously though, why is "No AI" a "feature" worth mentioning up top?


Adobe had some clause saying they could train AI on your creative work, effectively building a model that can ultimately plagiarize it. "No AI" is a nice appeal in this context. That, and it being simple, fully offline, and not at the whims of execs trying to bump their share price with AI features that put the user second.


I wish people would stop equating AI with Adobe's content policy.


In this context that's a very reasonable assumption.


Yes, but (1) it's unnecessary conjecture when the facts already make them look bad, and (2) it's not the limit of what they could actually do: their TOS says (IIRC) "as long as it's for the purpose of improving our software, we can use anything you make". So they could use people's drawn art directly, for splash screens, or even (speculation) offer it as template/stock material to anyone who pays them for it.

It's not just AI.


Doesn’t your behavior prove the point of showing "No AI"?


I think it was a joke


If you are on TikTok and you don't think the CCP has all the personal data TikTok has collected on you, I have a bridge to sell you.


I think this is a pretty commonly held belief at this point, and I'd be somewhat surprised if you could change even a single American TikTok user's behavior by convincing them of it. Anecdotally, people seem to lump it in with their general feeling that tech and advertising companies already harvest and share vast amounts of personal information, conclude that protecting that information is a lost cause, and figure they may as well use the fun app.


Unlike Facebook, which would never collect any data?


Why is that a problem?


Well, speaking for myself, I'd generally like to keep my data away from nefarious communist regimes.


Here to express my support for you before the neocommunists come piling in on you.


Are there seriously neo-communists on YC? I can understand Reddit, but here?


Yes. It's paradoxical, but apparent if you engage in any kind of political discourse over here.

Nerds have always had some of the worst political stances, not because they're dumb but because they bend in the face of the slightest pressure of losing social capital - in the current climate, this usually means submitting to the most psychotic leftist interpretation that is physically proximate to them.

Explains the politics of places like SF quite well.


How to start Google? You can't. That holds for 99.999999% of people. The remaining 0.000001% aren't wasting their time reading Paul Graham's essays.


You're overestimating the importance of talent and underestimating the importance of luck.

Not saying Larry Page and Sergey Brin aren't talented - clearly they were/are very talented people. But they were also extraordinarily lucky. To take another example, if Gary Kildall had been a slightly more ruthless businessman and IBM had had a little more foresight, Bill Gates would not be a billionaire.


Agreed. BG actually comes across as less smart than several people I know; he does not have profound insights and keeps repeating trite stuff.


This essay is for kids. Many successful founders likely listened to inspiring talks like this when they were kids, at places like Stanford or the prestigious high schools they came from.

Those of us reading here can read it for clarity, and for the chance to show it to a smart nephew as inspiration. Like I just did.

