brokencode's comments

YouTube is 20 years old now. Either the encrapification is very slow or they landed on a decent ad model.

Plus there is a subscription that eliminates ads. I think it’s a great experience for users. Many creators also seem to do well too.

I think this should be the model for a new generation of search. Obviously there will be ads/sponsored results. But there should be a subscription option to eliminate the ads.

The key part here will be monetization for content creators. People are no longer clicking links, so how do they get revenue?

I think direct payments from AI companies to content creators will be necessary or the whole internet will implode.


It's funny, I had YouTube's paid offering for a few years (I used the service a lot and want to support non ad-based revenue streams). But they changed something a while back that started giving me a degraded experience, and eventually made the site unusable. Did some digging and it turns out they were detecting my adblock and intentionally making my experience bad despite being a paid customer. I submitted a ticket or whatever but of course nobody gave a shit. I ended up upgrading my adblocker to something that worked on the new YouTube but of course at that point why keep the subscription if I have to fight some ads arms race anyway?

Ads are useful and have their place in keeping the web accessible to everyone, but Google's anti user policies really stretch that relationship.


Bullying their paying customers is such an insane choice

I've paid for Youtube Premium for a decade, use adblock in my browser, have no issues with performance on Youtube.

It's funny how experiences can be so different (likely by Google's design, of course). I've been having degraded experience with YouTube using uBlock Origin on Vivaldi. I elected to make use of a one-month trial for Premium. Suddenly these problems went away. Interestingly, after canceling the trial, the problems still haven't come back (yet). Things like, I would load a video, it'd start playing, but the browser tab itself would just block for a good 20-30 seconds. The entire time, the video is playing (well, I could hear the audio but the visuals were frozen). Then things would unblock and comments would appear, etc.

The difference between my YouTube interface with and without premium is stark. Aside from the ads, it seemed like the algorithm pushed less slop in front of me to avoid. Purely anecdotal, and likely affected by A/B bullshit (or nowadays would it be more like A/B/C/D/E/F/G/H/I/J/K/L/M/N/O/P/Q/R/S/T/U/V/W/X/Y/Z).


I only watch YouTube on my iPad or rarely my android TV, and there, the premium experience is worth it, since it's difficult to block ads on those platforms anyway.

If your experience with YouTube is primarily through browser then yeah I can see why that experience is shitty.

I'm fine with sites detecting adblock, in the sense that I will just not go to those sites. But if I already pay for an ad free experience then there's no reason for them to care about my adblock, unless they're just mad they can't track me, in which case, they can fuck all the way off.

And yes, I know that Google is in that camp, so they can indeed fuck all the way off.


I was in complete agreement until:

> Ads are useful and have their place in keeping the web accessible to everyone,

No. Advertising is a cancer on commerce.


Why would you use Adblock if you pay for premium?

There are other websites on the internet, and I don't want to/didn't consider toggling off ghostery, noscript, ublock origin, etc per domain that I choose to pay for.

Because Adblock doesn't just block ads, it also blocks invasive trackers that I consider malware.

Paying to remove Ads means I don't want ads, it doesn't mean I consent to all of the other invasive tracking they do.


I do subscribe so I don't see ads. My complaints with YouTube are: I don't want "Shorts" in my suggestions, and yes they recently added the option to remove them but it's only temporary. They always come back and I always say "don't show me this" and they say "got it, we won't show you Shorts anymore" but in a few weeks they always come back. Do they think I forgot?

And they have some kind of little games now, which I don't have any interest in, but they have no option to remove them from my suggestions.


For me the text has changed from "don't show me this" to "show less of this" and they come back about once a week now. I also have no option to remove them from the subscriptions feed.

I think a similar thing is happening with their crappy games too. They keep coming back (the games still say "don't show me this" though).


Its encrapification is real. It has been slow though, mostly affecting niche interests and smaller creators. And the ad experience has definitely gotten worse, but adblockers help. Try using YouTube without an adblocker.

I pay for the subscription and don’t see any ads. It comes with YouTube Music. It’s great.

None of your videos have in-video ads? "This segment is sponsored by NordVPN!" style stuff?

Content creators still have their embedded ads. You just avoid all the non-skippable YouTube ads.

I use Youtube on a Chromecast with the SmartTube-beta app, which skips in-video ads, if they are demarcated by the creator - and most videos with in-video ads have that. The app just skips right by the in-video ad, as well as a bunch of other non-interesting video content if it is specified in the video timecodes by the creator.

Another great feature of SmartTube-beta - and it's the feature that brought me to that app - is the ability to completely remove all "shorts" from the entire app. No more shorts. I've configured the app to eliminate them completely like they never existed.


This sounds amazing. I personally resorted to FreshRSS and an extension that allows me to spoof feeds from YouTube and socials as if they were RSS. It’s not perfect, but it is a chronological plain-text (apart from hyperlinks) list of content that I feel materially healthier for having switched to. My past experiences with alternative frontend interfaces for YouTube is that they last a few months, then Google tweaks their API just enough to break them all for a few weeks.

I also pay for YT Premium, and I have maintained a family subscription since they initially offered one. I wish they would just provide Premium users with options for turning off shorts, comments (per-channel ideally, but across the board would be fine too), games, and everything else I don’t care to ever engage with.

I also run a self-hosted AdGuard service for DNS-level adblocking, but it sounds like Google’s getting around that as well. Next stop will be DNS over TLS and a proxy. I am a little concerned that I am having to establish what must appear from the outside to be a very sketchy anonymizing infrastructure, and it’s all just to use the web the way I always have, whilst avoiding the increasingly intrusive and anxiety-inducing tracking and advertising.


> if they are demarcated by the creator - and most videos with in-video ads have that

I'm almost positive that SmartTube is using the SponsorBlock database, which does not depend on creator-submitted demarcation, but rather on user-generated/crowd-sourced segment tagging. https://sponsor.ajay.app/


There's another plug-in called SponsorBlock that will skip over most of those.

In the old days people would pay to host video content and now people pay Google to watch other people's hosted video content. It's funny how easily people can be brainwashed into giving companies money for nothing. I'm still waiting for the first company to start selling bottled air next!

You're talking as if video content has no intrinsic value of its own. Of course it does.

"Now people pay cable companies to watch TV shows. It's funny how easily people can be brainwashed into giving companies money for nothing."


I mean, when it launched the point of paying for cable instead of getting TV for free via broadcast was no ads

Now cable has ads and costs a fortune; I don't know anyone who has it. I do still watch a little broadcast though; the price is right even if the programming isn't great.

If there's nothing on I turn it off and look at my phone


> point of paying for cable instead of getting TV for free via broadcast was no ads

No, the point of paying for cable was to get more TV. Most cable stations have always had ads. You're probably thinking of HBO, which is a tiny subset of overall cable output.


The original point of cable was Community Antenna TV, where you'd get a much better quality signal (and often even additional out-of-market but nearby channels). Then broadcasters decided to go into specifically seeking nationwide coverage (Ted Turner was a pioneer in this area). They also decided, due to the sports leagues, that cable should only deliver local stations in the same market as your location through blackouts (through my childhood I went from getting three ABC affiliates and two CBS affiliates, to one of each). It became unprofitable to manage blacking out the out-of-market station any time they were both running network or sports programming, so the out-of-market stations were generally removed (I also wouldn't be surprised if negotiations for retransmission consent included terms preventing carriage of out-of-market stations).

I don't think there was a time Cable didn't have ads, certain channels like HBO yeah, but never cable as a whole. The attraction was just having way more content.

In the 1950s when Cable started in the US, there were no Cable channels. Cable was literally renting a pipe to a big antenna instead of your own small antenna in your house, so you got broadcast with better signal strength.

The first Cable channel was HBO. The second was TBS, it had ads from the beginning.


There are tons of companies selling bottled air. Here's a story from 7 years ago. There are lots more now: https://www.theguardian.com/global/2018/jan/21/fresh-air-for...

I get a lot of value from youtube - hours of entertainment. Also I don't pay and use an ad blocker which is maybe a bit unfair but thanks to the people who do pay.

Using an adblocker certainly won't help smaller creators and niche interests. If you don't want ads but want to support creators, pay for Premium.

A lot of them have sponsors which pay more than the ads, or are on Patreon, or are also on other platforms that pay them a higher proportion or allow videos that risk demonetisation on Youtube, or sell merch, or something else.

Sure, and if you're a patreon supporter, or support them on a non-YouTube platform, great. But if they're monetized and you're just watching them on YouTube, which probably 90% of people do, then running an adblocker is preventing them from earning money they would otherwise have earned. Whether or not they're _also_ earning money via other means is irrelevant.

Lol, if you want to support them pay their patreon. The few cents they get from you paying premium won't support them.

I don't understand what point you're attempting to make. Yes, of course if you pay someone $5 a month on Patreon, they'll be getting more money from you than if you just used Premium or disabled your adblocker. And if you paid them $100/month, that would be more than $5. So?

Why does that make it okay to use an adblocker?


I have a question. How much do small creators get for views from Premium users? Say they get a few thousand views per video, would they get anything from Premium users?

I've seen some breakdowns, and (depending on the content, because different ad segments can be more or less lucrative) view time from Premium users tends to be worth more, and often way more.

As I understand it, a chunk of your membership fee is divided amongst all monetized creators you watch on a monthly basis, proportional to your watch time. A different chunk of your membership fee is divided between the creators and record labels, for your watch/listen time of Shorts and Youtube Music.

So the size of the creator is only relevant insofar as it can determine whether the channel is eligible for monetization. View time is not worth a different amount depending on the size of the creator.
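A rough sketch of that proportional split, with entirely hypothetical numbers (YouTube does not publish what share of the fee reaches the creator pool):

```python
# Hypothetical illustration of the proportional watch-time split described
# above; the real percentages and pool mechanics are not public.
def split_premium_fee(fee, watch_minutes):
    """Divide one subscriber's fee among creators by share of watch time."""
    total = sum(watch_minutes.values())
    return {creator: fee * minutes / total
            for creator, minutes in watch_minutes.items()}

payouts = split_premium_fee(
    fee=6.0,  # assume $6 of the subscription reaches the creator pool
    watch_minutes={"small_channel": 300, "big_channel": 900},
)
print(payouts)  # {'small_channel': 1.5, 'big_channel': 4.5}
```

The point being: a small channel you watch a lot can earn more from your subscription than a big channel you barely watch.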


Probably not much for a few thousand. My understanding is that it requires continually producing videos that attract 100k+ viewers. It doesn't pay a lot, but it attracts direct sponsors who pay better. The biggest money comes from selling your own products and services, like "How to make millions on YouTube" seminars.

I use patreon for ones I care about. And many of the niche interests I'm interested in are demonetized anyways which is the crux of the issue.

My YT premium recently expired for a payment issue and ffs the ads are absolutely insane.

The YouTube search has been unusable for me for about the last year or so (maybe longer?), since every ~5 results are interrupted with clickbait only barely related to my query (and then, past a certain point, they all become unrelated).

YouTube only still "works" because of the cat and mouse ad blocker game. I don't know how but my new ad blocker seems to fast forward through all the ads. For a little while YouTube had them licked and I was watching 10 to 20 second ads all the time so temporarily gave up on YouTube until the ad blockers caught up again. Now YouTube is still functionally broken on TVs and mobile phones but works fine on a desktop computer still.

Why not just pay for premium?

Not OP. I did pay, for 10y. But the video quality kept slowly degrading (lower bitrate). And ads in the video content kept increasing.

YouTube also keeps pushing advertising for its own paid shows, YouTube Shorts, and more. No way to say no, only "yes forever" or "no thanks, not right now". And it comes back in a few weeks.

It also constantly sneakily lowers the video quality.

So I stopped paying. I combine an ad blocker and SponsorBlock, plus another extension I forget that cleans up the UI.

Often I download the video so that I can actually seek around without buffering (because YouTube buffers as little as possible to save cost, which I can understand).

Content nowadays is 30min instead of 5min. So you better be ready to skip and seek.


YT premium has higher bit rates and sponsor block built in, but they don't call it that or advertise that it even exists. Instead they say it allows you to "skip commonly skipped segments of video" but basically it is sponsor block.

It's the bitrate we used to have before they made it a premium plus all star plus+ feature and downgraded the rest.

Netflix did the same. In fact they even silently downgraded us from 4k HDR surround sound during a software update. And nothing can get us back the max quality anymore. I stopped paying all together.

So you know what doesn't buffer, has the absolute best quality (like 4x the bitrate etc), all the languages and what not? pirated content.

It's just stupid how much easier it is to obtain predictable quality without stutter by downloading rather than actually paying a streaming service.

Plus the ads and other UX dark patterns are through the roof.


I’m amazed people can get rid of shorts for a few weeks. For me, I tell it I don’t want to see them and it’s literally back as soon as I refresh the feed. It’s aggressively anti-UX.

Pay for a global monopoly that has always subsidized its operation with infinite money from a near ad and search monopoly and private equity? Yeah, I will keep my uBlock Origin active, no thanks.

Watching things without having to log in is my use case. Not something that Google would want to ever cater for, so ad blocking it is.

>YouTube is 20 years old now. Either the encrapification is very slow or they landed on a decent ad model.

Have you seen how many ads are in a video on YouTube? On desktop it's no issue, but I use the YouTube app on my Apple TV now and then, and when I tried to watch a few relatively short videos, I easily saw 4-6 ads per video, some of which were 90+ seconds long. It's awful.


I feel like YouTube's enshittification is already here. The algorithm has long been terrible, they now punish users for disabling watch history, and the ads are more frequent, longer, and more annoying. If not for inertia (lots of video creators still uploading primarily or solely there), I'd have abandoned YouTube entirely a long time ago.

YouTube's content moderation guidelines, removing videos that so much as brush against topics they don't want discussed, are a no-go for me and the reason they don't get my money.

So websites move to the Spotify model of getting paid... that's gonna suck.

The advertising tier has gradually gotten worse on YouTube.

> …they landed on a decent ad model.

You must be joking. YT is so insufferable, I can only watch it via Firefox with uBlock Origin and Privacy Badger active. And even then only if and as long as I absolutely have to.


Just because they have a plan, doesn’t mean they have a clue. Their reasoning for this could be as simple as “government bad”.

Just look at the wild abandon with which DOGE has gone about its cuts. They literally fired the experts who inspect our nuclear arsenal, then had to scramble to hire them back because you know.. it’s literally nuclear bombs we’re talking about.


They have caused a lot of mayhem for federal employees. They have suppressed a ton of information and research. They have kept a significant amount of the public in the dark about their actions.

Yeah, they have done things we would both deem as "dumb" but their internal motive is what decides whether or not they have a clue. Furthering our motives has nothing to do with that.


Usually you try to build something, realize it’s slow, then do a combination of searching for possible solutions, trying them, and profiling again and again to gain this knowledge.

Experience is the best teacher.


Most popular business books are like this. They have an idea that could fit in like a single chapter, but have to write a whole book about it to get paid, so they pad it with numerous anecdotes and their entire life’s story.

I wish there were more reliable ways to monetize for authors between clickbait and published book. I know there are many paid Substacks and newsletters out there, some of which are really great. But I feel like you need a lot of luck or self promotion skills for this to work.


We need to return to the grand history of the pamphlet, which is exactly what we want: short-form content, 5 to 50 pages, which is exactly the right length for so many of these business, "popular science", and other nonfiction books that have good ideas but no business being 250 pages long. Essentially one-off magazines instead of a periodical, individually published and probably in a smaller format than the 8.5x11/A4-ish sizes of most modern magazines.

https://www.library.illinois.edu/rbx/2015/03/30/the-pamphlet...


Monetisation: how about a periodical publication? Edited and curated around a theme and then contributors can write to an appropriate length. Spin-off courses and perhaps even individual sessions where a general theme is applied to a specific situation?

https://www.idler.co.uk/

These people seem to be doing OK with that format for er, a different market segment.


Yes, it would not be sane to depend on implementation details of something like this.

But the sad reality is that many developers (myself included earlier in my career) will do insane things to fix a critical bug or performance problem when faced with a tight deadline.


Such is life, yes.

I find it even more sad when people come out of the woodwork on every LLM post to tell us that our positive experiences using LLMs are imagined and we just haven’t realized how bad they are yet.

Some people got into coding to code, rather than build things.

If the AI is doing the coding then that is a threat to some people. I am not sure why, LLMs can be good and you can enjoy coding...those things are unrelated. The logic seems to be that if LLMs are good then coding is less fun, lol.


Software jobs pay more than artist jobs because coding builds things. You can still be a code artist on your own time. Nobody is stopping you from writing in assembler.

¯\_(ツ)_/¯ people didn't stop playing chess because computers were better at it than them

And chess players stream as their primary income, because there's no money in Chess unless you're exactly the best player in the world (and even then the money is coming from sponsors/partners, not from chess itself).

We don't just tell you they were imagined, we can provide receipts.

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...


Certainly an interesting result, but remember that a single paper doesn’t prove anything. This will no doubt be something studied very extensively and change over time as tools develop.

Personally, I find the current tools don’t work great for large existing codebases and complex tasks. But I’ve found they can help me quickly make small scripts to save me time.

I know, it’s not the most glamorous application, but it’s what I find useful today. And I have confidence the tools will continue to improve. They hardly even existed a few years ago.


> We do not provide evidence that:

> AI systems do not currently speed up many or most software developers

> We do not claim that our developers or repositories represent a majority or plurality of software development work


Cursor is an old way of using LLMs.

Not to mention that fewer than half the participants had ever used it before the study.


The AI tooling churn is so fast that by the time a study comes out people will be able to say "well they were using an older tool" no matter what tool that the study used.

It's the eternal future. "AI will soon be able to...".

There's an entire class of investment scammers that string along their marks, claiming that the big payoff is just around corner while they fleece the victim with the death of a thousand cuts.


What is the problem with this, exactly? It's a valid criticism of the study (when applied to current agentic coding practices). That the pace of progress is so fast sucks for researchers, in some sense, but this is the reality right now.

Not really. Chatting with an LLM was the cutting edge for three years; it's only within the last 8-10 months, with Claude Code and Gemini CLI, that we've had the next big change in how we interact with LLMs.

How is Claude Code and Gemini CLI any different from using Cursor in agent mode? It's basically the same exact thing.

I can't speak to how they're technically different, but in practice, Cursor was basically useless for me, and Claude Code works well. Even with Cursor using Claude's models.

Claude Code was released in May.

Yup. But they are improvements over what cursor was releasing over the last year or so.

If there are paradigm-shattering improvements every six months, every single study that is ever released will be "behind" or "use an older tool." In six months when a study comes out using Claude Code, people dissatisfied with it will be able to point to the newest hotness, ad infinitum.

If LLMs were actually useful, there would be no need to scream it everywhere. On the contrary: it would be a guarded secret.

In my experience, devs generally aren't secretive about tools they find useful.

People are insane; you can artificially pine for the simpler, better times made up in your mind, when you could give Oracle all your money.

But I would stake my very life on the fact that the movement by developers we call open-source is the single greatest community and ethos humanity has ever created.

Of course it inherits from enlightenment and other thinking, it doesn't exist in a vacuum, but it is an extension of the ideologies that came before it.

I challenge anyone to come up with any single modern subculture that has tangibly generated more, that touches more lives, moves more weight, travels farther, affects humanity more every single day from the moment they wake up, than the open source software community (in the catholic sense, obviously).

Both in moral goodness and in measurable improvement in standard of living and understanding of the universe.

Some people's memories are very short indeed, all who pine pine for who they imagined they were and are consumed by a memetic desire of their imagined selves.


> open-source is the single greatest community and ethos humanity has ever created

good lord.


posting a plain text description of your experience on a personal blog isn't exactly screaming. in the noise of the modern internet this would be read by nobody if it wasn't coming from one of the most well known open source software creators of all time.

people who believe in open source don't believe that knowledge should be secret. i have released a lot of open source myself, but i wouldn't consider myself a "true believer." even so, i strongly believe that all information about AI must be as open as possible, and i devote a fair amount of time to reverse engineering various proprietary AI implementations so that i can publish the details of how they work.

why? a couple of reasons:

1) software development is my profession, and i am not going to let anybody steal it from me, so preventing any entity from establishing a monopoly on IP in the space is important to me personally.

2) AI has some very serious geopolitical implications. this technology is more dangerous than the atomic bomb. allowing any one country to gain a monopoly on this technology would be extremely destabilizing to the existing global order, and must be prevented at all costs.

LLMs are very powerful, they will get more powerful, and we have not even scratched the surface yet in terms of fully utilizing them in applications. staying at the cutting edge of this technology, and making sure that the knowledge remains free, and is shared as widely as possible, is a natural evolution for people who share the open source ethos.


If consumer "AI", and that includes programming tools, had real geopolitical implications it would be classified.

The "race against China" is a marketing trick to convince senators to pour billions into "AI". Here is who is financing the whole bubble to a large extent:

https://time.com/7280058/data-centers-tax-breaks-ai/


So ironic that you post this on Hacker News, where there are regularly articles and blog posts about lessons from the industry, both good and bad, that would be helpful to competitors. This industry isn’t exactly Coke guarding its secret recipe.

I think many devs are guarding their secrets, but the last few decades have shown us that an open foundation can net huge benefits for everyone (and then you can put your secret sauce in the last mile.)

If Internet was actually useful there would be no need to scream it everywhere. Guess that means the internet is totally useless?

If LLMs were actually useful, there would be no need to scream it everywhere. On the contrary: it would be a guarded secret.

LLMs are useful—but there’s no way such an innovation should be a “guarded secret” even at this early stage.

It’s like saying spreadsheets should have remained a secret when they amplified what people could do when they became mainstream.


Could it not be that those positive experiences are just shining a light that the practices before using an LLM were inefficient? It’s more a reflection on the pontificator than anything.

Tautologically so! That doesn't show that LLMs are useless, it perfectly shows how they are useful.

Sure, but even then the perspective makes no sense. The common argument against AI at this point (e.g. OP) is that the only reason people use it is because they are intentionally trying to prop up high valuations - they seem unable to understand that other people have a different experience than they do. You’d think that just because there are some cases where it doesn’t work doesn’t necessarily mean that 100% of it is a sham. At worst it’s just up to individual taste, but that doesn’t mean everyone who doesn’t share your taste is wrong.

Consider cilantro. I’m happy to admit there are people out there who don’t like cilantro. But it’s like the people who don’t like cilantro are inventing increasingly absurd conspiracy theories (“Redis is going to add AI features to get a higher valuation”) to support their viewpoint, rather than the much simpler “some people like a thing I don’t like”.


"Redis for AI is our integrated package of features and services designed to get your GenAI apps into production faster with the fastest vector database."

It’s only a hard dependency if you don’t know and never learn how to program.

For developers who read and understand the code being generated, the tool could go away and it would only slow you down, not block progress.

And even if you don’t, it really isn’t a hard dependency on a particular tool. There are multiple competing tools and models to choose from, so if you can’t make progress with one, switch to another. There isn’t much lock-in to any specific tool.


My experience has been that Claude can lay out a lot of things in minutes that would take me hours if not days. Often I can dictate the precise logic and then Claude gets most of the way there; with a little prompting, Claude can usually get even further. The amount of work I can get done is much more substantial than it used to be.

I think there is a lot of reticence to adopt AI for coding but I'm seeing it as a step change for coding the way powerful calculators/workstation computers were for traditional engineering disciplines. The volume of work they were limited to using a slide rule was much lower than now with a computer.


> For developers who read and understand the code being generated, the tool could go away and it would only slow you down

Recent research suggests it would in fact speed you up.

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...


You should actually read the paper. The sample size was 16 developers, only one of whom had used Cursor for more than 40 hours beforehand, and all were working in existing codebases where they were the primary author.

I did read the paper, and the HN discussion (which is how I found it). I recommend you read that, your comments were addressed.

https://news.ycombinator.com/item?id=44522772


Interestingly, devs felt that it sped them up even though it slowed them down in the study.

So even if it’s not an actual productivity booster on individual tasks, perhaps it still could reduce cognitive load and possibly burnout in the long term.

Either way, it’s a tool that devs should feel free to use or not use according to their preferences.


Scent is decomposable. There are many different scent receptors, but a finite number.

Hearing is quite similar: hair cells along the cochlea respond to different frequencies of sound depending on their position, so the ear effectively decomposes a sound into its component frequencies.
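That frequency decomposition is essentially what a Fourier transform computes. A minimal sketch of the analogy, assuming NumPy is available:

```python
import numpy as np

# A mixture of two pure tones, sampled at 1 kHz for one second.
fs = 1000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# The FFT splits the waveform into its component frequencies,
# much like the cochlea maps frequencies to positions.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

dominant = freqs[np.argmax(spectrum)]
print(dominant)  # strongest component: 50.0 Hz
```

The spectrum shows two clear peaks, at 50 Hz and 120 Hz, recovering the ingredients of the mixture.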


There are anywhere between 200 and 400 scent receptors in humans.

Sure, this is a finite number, but for practical purposes it's not really decomposable.


> “most people agree that the output is trite and unpleasant to consume”

That is a such a wild claim. People like the output of LLMs so much that ChatGPT is the fastest growing app ever. It and other AI apps like Perplexity are now beginning to challenge Google’s search dominance.

Sure, probably not a lot of people would go out and buy a novel or collection of poetry written by ChatGPT. But that doesn’t mean the output is unpleasant to consume. It pretty undeniably produces clear and readable summaries and explanations.


> People like the output of LLMs so much that ChatGPT is the fastest growing app ever

While people seem to love the output of their own queries they seem to hate the output of other people's queries, so maybe what people actually love is to interact with chatbots.

If people loved LLM outputs in general then Google, OpenAI and Anthropic would be in the business of producing and selling content.


> While people seem to love the output of their own queries they seem to hate the output of other people's queries

Listening or trying to read other peoples chats with these things is like listening to somebody describe a dream. It’s just not that interesting most of the time. It’s remarkable for the person experiencing it but it is deeply personal.


Low effort Youtube shorts with AI voice annoy the crap out of me.

After all this hype, they still can't do text to speech properly. It pauses at the wrong part of the sentence all the time.


Google does put AI output at the top of every search now, and sometimes it's helpful and sometimes it's crap. They have been trying since long before LLMs to not just provide the links for a search but also the content.

Google used to be interested in making sure you clicked either the paid link or the top link in the results, but for a few years now they'd prefer that a user doesn't even click a link after a search (at least to a non-Google site)


It made me switch away from Google. The push I needed.

I think the thing people hate about that is the lack of effort and attention to detail. It’s an incredible enabler for laziness if misused.

If somebody writes a design or a report, you expect that they’ve put in the time and effort to make sure it is correct and well thought out.

If you then find the person actually just had ChatGPT generate it and didn’t put any effort into editing it and checking for correctness, then that is very infuriating.

They are essentially farming out the process of creating the document to AI and farming out the process of reviewing it to their colleagues. So what is their job then, exactly?

These are tools, not a replacement for human thought and work. Maybe someday we can just have ChatGPT serve as an engineer or a lawyer, but certainly not today.


This is the biggest impact I have noticed in my job.

The inundation of verbose, low SNR text and documents. Maybe someone put thought into all of those words. Maybe they vibed it into existence with a single prompt and it’s filled with irrelevant dot points and vague, generic observations.

There is no way to know which you’re dealing with until you read it, or can make assumptions based on who wrote it.


If I cared about the output from other people's queries, then wouldn't they be my queries? I don't care about ChatGPT's response to your queries because I don't care about your queries. I don't care if they came from ChatGPT or the world's foremost expert in whatever your query was about.

> That is such a wild claim. People like the output of LLMs so much that ChatGPT is the fastest growing app ever.

The people using ChatGPT like its output enough when they're the ones reading it.

The people reading ChatGPT output that other people asked for generally don't like it. Especially if it's not disclosed up front.


Had someone put up a project plan for something that was not disclosed as LLM assisted output.

While technically correct it came to the wrong conclusions about the best path forward and inevitably hamstrung the project.

I only discovered this later when attempting to fix the mess and having my own chat with an LLM and getting mysteriously similar responses.

The problem was that the assumptions made when asking the LLM were incorrect.

LLMs do not think independently and do not have the ability to challenge your assumptions or think laterally. (yet, possibly ever, one that does may be a different thing).

Unfortunately, this still makes them as good as or better than a very large portion of the population.

I get pissed off not because of the new technology or the use of the LLM, but the lack of understanding of the technology and the laziness with which many choose to deliver the results of these services.

I am more often mad at the person for not doing their job than I am at the use of a model, the model merely makes it easier to hide the lack of competence.


> LLMs do not think

Yep.

More seriously, you described a great example of one of the challenges we haven't addressed. LLM output masquerades as thoughtful work products and wastes people's time (or worse tanks a project, hurts people, etc).

Now my job reviewing work is even harder because bad work has fewer warning signs to pick up on. Ugh.

I hope that your workplace developed a policy around LLM use that addressed the incident described. Unfortunately I think most places probably just ignore stuff like this in the faux scramble to "not be left behind".


It's even worse than you suggest, for the following reason. The rare employee that cares enough to read through an entire report is more likely to encounter false information which they will take as fact (not knowing that LLM produced the report, or unaware that LLMs produce garbage). The lazy employees will be unaffected.

> LLMs do not think independently and do not have the ability to challenge your assumptions

It IS possible for a LLM to challenge your assumptions, as its training material may include critical thinking on many subjects.

The helpful assistant, being almost by definition a sycophant, cannot.


Strong agree. If you simply ask an LLM to challenge your thinking, spot weaknesses in your argument, or what else you might consider, it can do a great job.

This is literally my favorite way to use it. Here’s an idea, tell me why it’s wrong.


> do not have the ability to challenge your assumptions or think laterally.

Particularly on the challenging your assumptions part is where I think LLMs fail currently, though I won't pretend to know enough about how to even resolve that; but right now, I can put whatever nonsense I want into ChatGPT and it will happily go along telling me what a great idea that is. Even on the remote chance it does hint that I'm wrong, you can just prompt it into submission.

None of the for-profit AI companies are going to start letting their models tell users they're wrong out of fear of losing users (people generally don't like to be held accountable) but ironically I think it's critically important that LLMs start doing exactly that. But like you said, the LLM can't think so how can it determine what's incorrect or not, let alone if something is a bad idea or not.

Interesting problem space, for sure, but unleashing these tools to the masses with their current capabilities I think has done, and is going to continue to do more harm than good.


This is why, once you are used to using them, you start asking them where the plan goes wrong. They won't tell you off the bat, which can be frustrating, but they are really good at challenging your assumptions, if you ask them to do so.

They are good at telling you what else you should be asking, if you ask them to do so.

People don't use the tools effectively and then think that the tool can't be used effectively...

Which isn't true, you just have to know how the tool acts.


I'm no expert, but the most frequent recommendations I hear to address this are:

a) tell it that it's wrong and to give you the correct information.

b) use some magical incantation system prompt that will produce a more critical interlocutor.

The first requires knowing enough about the topic to know the chatbot is full of shit, which dramatically limits the utility of an information retrieval tool. The second assumes that the magical incantation correctly and completely does what you think it does, which is not even close to guaranteed. Both assume it even has the correct information and is capable of communicating it to you. While attempting to use various models to help modify code written in a less-popular language with a poorly-documented API, I learned how much time that can waste the hard way.

If your use case is trivial, or you're using it as a sounding board with a topic you're familiar with as you might with, say, a Dunning-Kruger-prone intern, then great. I haven't found a situation in which I find either of those use cases compelling.


Especially if it's not disclosed up front, and especially when it supplants higher-value content. I've been shocked how little time it's taken for AI slop SEO optimized blogs to overtake the articles written by genuine human experts, especially in niche product reviews and technical discussions.

However, whether or not people like it is almost irrelevant. The thing that matters is whether economics likes it.

At least so far, it looks like economics absolutely loves LLMs: Why hire expensive human customer support when you can just offload 90% of the work to a computer? Why pay expensive journalists when you can just have the AI summarize it? Why hire expensive technical writers to document your code when you can just give it to the AI and check the regulatory box with docs that are good enough?


Eventually the economics will correct themselves once people yet again learn the old "you get what you pay for" lesson (or the more modern FAFO lesson)

I'm not really countering that ChatGPT is popular (it certainly is), but it's also sort of like the "fastest growing tire brand" that came along with the adoption of vehicles. The number of smartphone users is also growing at the fastest rate ever, so whatever the new most popular app is has a good chance of being the fastest growing app ever.

No… dude… it’s a new household name. We haven’t had those in software for a long time, maybe since TikTok and Fortnite.

Lots of things had household recognition. Do you fondly remember the Snuggie? The question is whether it'll be durable. The lack of network effects is one reason to be skeptical.

Lack of network effects... It's the biggest thing ever! Everyone is talking about it, all the time, nonstop! How is that not a network? Network effects do not exclusively mean multiplayer software, communications or social media. And anyway, it is almost certainly all three of these things, because content is being made (and often consumed) by ChatGPT in every digital network there is.

Anyway, I don't think it's possible in this forum to have a conversation about it, if "ChatGPT is humongous" is a controversial, downvotable POV.


> How is that not a network?

A network effect is ~ "I must use this specifically because the people I am connected to, socially or professionally, also use this".

I can trivially replace OpenAI's ChatGPT with DeepSeek or Anthropic's Claude, and indeed often do so.

For any of these providers to benefit from a network effect, it has to do to LLMs what Microsoft did to spreadsheets with Office. I think one of these businesses may well be able to, but so far, none have.


I don't know if you are joking or not, but people were talking about ChatGPT non-stop in like March of 2023 in my social group. Now it's far less frequently mentioned, basically never. In fact mostly if it is, it's in some form of a sarcastic joke or reply.

> Everyone is talking about it, all the time, nonstop! How is that not a network?

Network effect in this context means the product is successful primarily because everyone else is using it. Not easy to compete with Instagram/Tiktok because you need most users to use your new app, not just a few. Amazon can only deliver fast because they have a huge delivery network, because most people use Amazon.

No such effect or moat exists for AI companies. In fact, it is the opposite. Same prompt will give you very similar results in any AI product.

You can't compete with amazon now, even with a better product. But you can easily kill AI companies if you have a better model.


> That is such a wild claim.

Some people who hate LLMs are absolutely convinced everyone else hates them. I've talked with a few of them.

I think it's a form of filter bubble.


This isn't some niche outcry: https://www.forbes.com/sites/bernardmarr/2024/03/19/is-the-p...

And that was 18 months ago.

Yes, believe it or not, people eventually wake up and realize slop is slop. But like everything else with LLM development, tech is trying to brute force it on people anyway.


You posted an article about investors' trust in AI companies to deliver and society's strong distrust of large corporations.

Your article isn't making the point you seem to think it is.


What point do you think it makes? Seems pretty clear to me.

1. Investors are pushing a lot of hype

2. People are not trusting the hype.

Hence why people's trust in LLMs is waning.


I haven't read the article, but it sounds to me you're conflating “how much do regular users trust LLMs to produce good/correct output” with “how much do capitalists trust LLMs to become (and remain) profitable”.

Yup, any day now people will suddenly realize that LLMs suck and you were right all along. Any day now..

Yup, I can wait a while. Took some 7-8 years for people to turn on Facebook.

It's not that LLMs are bad, they're very useful. It's that the media they produce is, in fact, slop.

I want to watch Breaking Bad, not AI generated YouTube shorts. I want to listen to "On the Radio" by Donna Summer, not some Spotify generated piano solo. I want to read a high quality blog post about tech with a unique perspective, not an LLM summary of said blog post that removes all the charm.

The gap in quality, when it comes to entertainment, is truly astronomical. I mean, it's not even kind of close. I would expect literal children to produce better content; after all, Mozart was a prodigy.


Maybe he's referencing how people don't like when other humans post LLM responses in the comments.

"Here's what chatGPT said about..."

I don't like that, either.

I love the LLM for answering my own questions, though.


"Here's what chatGPT said about..." is the new lmgtfy.

lmgtfy was (from what I saw) always used as a snarky way to tell someone to do a little work on their own before asking someone else to do it for them.

I have seen people use "here's what chatGPT" said almost exclusively unironically, as if anyone else wants humans behaving like agents for chatbots in the middle of other people's discussion threads. That is to say, they offer no opinion or critical thought of their own, they just jump into a conversation with a wall of text.


Yeah I don't even read those. If someone can't be bothered to communicate their own thoughts in their own words, I have little belief that they are adding anything worth reading to the conversation.

Why communicate your own thoughts when ChatGPT can give you the Correct Answer? Saves everybody time and effort, right? I guess that’s the mental model of many people. That, or they’re just excited to be able to participate (in their eyes) productively in a conversation.

If I want the "correct answer" I'll research it, maybe even ask ChatGPT. If I'm having a conversation I'm interested in what the other participants think.

If I don't know something, I'll say I don't know, and maybe learn something by trying to understand it. If I just pretend I know by pasting in what ChatGPT says, I'm not only a fraud but also lazy.


> AI apps like Perplexity are now beginning to challenge Google’s search dominance

Now that is a wild claim. ChatGPT might be challenging Google's dominance, but Perplexity is nothing.


It’s not a wild claim, though maybe your interpretation is wild.

I never said Perplexity individually is challenging Google, but rather as part of a group of apps including ChatGPT, which you conveniently left out of your quote.


At some point, Groupon was the fastest growing company ever.

People "like" or people "suffice" with the output? This "rise of whatever" as one blog put it gives me feelings that people are instead lowering their standards and cutting corners. Letting them cut through to stuff they actually want to do.

> People like the output of LLMs so much that ChatGPT is the fastest growing app ever

And how much of that is free usage, like the parent said? Even when users are paying, ChatGPT's costs are larger than their revenue.


> That is such a wild claim. People like the output of LLMs so much that ChatGPT is the fastest growing app ever.

And this kind of meaningless factoid was immediately surpassed by the Threads app release, which IMO is kind of a pointless app. Maybe let's find a more meaningful metric before calling someone else's claim wild.


Asking your Instagram users to hop on to your ready-made TikTok clone is hardly in the same sphere as spinning up that many users from nothing.

And while Threads' growth and usage stalled, ChatGPT is very much still growing and has *far* more monthly visits than Threads.

There's really nothing meaningless about ChatGPT being the 5th most visited site on the planet, not even 3 years after release. Threads doesn't make the top 50.


I think you just precisely explained why MAU / DAU growth is a meaningless metric in such discussions.

Seems like it's only meaningless if you ignore basic context.

What basic context is being ignored? Here's how the thread has gone:

"chatGPT has the fastest growing userbase in history which shows users really like the output!"

This unsourced (and wrong) claim was offered in rebuttal to another post saying people don't like the output of LLMs. This rebuttal offers DAU/MAU as a metric of how much people like the app, I presume, and thus the output of the app. Besides that being a wild jump on its own, it's incorrect.

As I pointed out, Threads almost immediately beat that DAU/MAU record, and I'd offer a claim it hasn't exactly been a tremendous success either in popularity or monetarily. Pointing out that they got that DAU/MAU by registering their own users to it is precisely the point being made: this metric is a meaningless gauge of how popular an app is, especially when viewed from the context of this argument, which is whether the popularity of the app (as it relates to DAU/MAU growth) also suggests people love consuming its output.

No offense, but are you sure you're following this conversation?


>Pointing out that they got that DAU/MAU by registering their own users to it is precisely the point that is being made - this metric is a meaningless gauge of how popular an app is, and especially when viewed from the context of this argument.

How does that make DAU/MAU growth meaningless? Threads has special context. That's it. Almost all the other software applications that orbited that record are staples of internet life today. So because one entry had some special circumstances to take into account (that users weren't gained from scratch), growth as a concept or comparison (for users gained from scratch) is meaningless? How does that make any sense?

Also, yeah strong adoption (which is the real point here beyond just the growth) is a strong signal for satisfaction. It's very strange to claim most people don't like the output of what has half a billion weekly active users and is one of the most visited sites on the planet.

>Besides that being a wild jump on its own, it's incorrect.

It's not incorrect. Threads was the fastest to hit some early milestones (like 100M), sure, but since its growth stalled, ChatGPT is still the software application with the fastest adoption, because it reached further milestones Threads hasn't and may not reach.


I would pay $5000 to never have to read another LLM-authored piece of text ever again.

...I do wonder what percent of ChatGPT usage is just students cheating on their homework, though.

Neal Stephenson has a recent post that covers some of this. Also links to teachers talking about many students just putting all their work into chatgpt and turning it in.

https://nealstephenson.substack.com/p/emerson-ai-and-the-for...


He links to Reddit, a site where most people are aggressively against AI. So, not necessarily a representative slice of reality.

He links to a post about a teacher’s expertise with students using AI. The fact that it’s on Reddit is irrelevant.

If you're going to champion something that comes from a place of extreme political bias, you could at least acknowledge it.

This is a baffling response. The politics are completely irrelevant to this topic. Pretty much every American is distrustful of big tech and is completely unaware of what the current administration has conceded to AI companies, with larger scandals taking the spotlight, so there hasn't been a chance for one party or the other to rally around a talking point with AI.

People don't like AI because its impact on the internet is filling it with garbage, not because of tribalism.


>This is a baffling response.

Likewise.

95+% of the time I see a response like this, it's from one particular side of the political aisle. You know the one. Politics has everything to do with this.

>what the current administration has conceded to AI companies

lol, I unironically think that they're not lax enough when it comes to AI.


Based on your response and logic - no dem should read stuff written by repub voters, or if they do read it, dismiss their account because it cannot be … what?

Not sure how we get to dismissing the teacher subreddit, to be honest.


I think the implication is that because the teacher posted on Reddit, they are some kind of socialist, and therefore shouldn't be listened to. I guess their story would be worth listening to if it was posted on Truth Social instead?

Ah! Nice point on truth social.

Nah, misses the entire point of what I was saying.

But thanks for recognizing that Truth Social has a noticeable political leaning. So close, yet so far.


>they are some kind of socialist

Yes, that is accurate.

>I guess their story would be worth listening to if it was posted on truth social instead?

No, I don't take anti-AI nonsense seriously in the first place. That aside, the main point here was that Reddit has a very strong political leaning. If anyone tried to insist that the politics of Truth Social is irrelevant, you'd immediately call it out.


It really doesn't lol.

I don't get the reactionary right's hysteria about Reddit. It's so clearly not true it's just silly.

It's like when my brother let my little cousin watch a scary movie and she had hysterics about scary things for days. Y'all tell each other ghost stories and convince yourselves it's real.


Yet another one! And literally all I have to do is point out that Reddit is a far-lefty website (it obviously is) and say that I won't play along (I won't).

So instead of addressing the actual substance, you dismiss it because of your assumption of their political leanings.

Good luck navigating the world, I guess.


They have the same political leanings as you. I notice these things.

>Good luck navigating the world, I guess.

Thanks. And a hardy "F you" to you too.


Look, another one! Twist it however you want, I'm not going to accept the idea that far-lefty Reddit is some impartial representation of what teaching is or what the average person thinks of AI.

> 95+% of the time I see a response like this, it's from one particular side of the political aisle. You know the one. Politics has everything to do with this

I really don't, honestly you're being so vague and it's such a bipartisan issue I can't piece together who you're mad at. Godspeed.


Why? So you could discard it faster?

Read things from people that you disagree with.


Because I'm not going to play a game where the other side gets to ignore the rules.

I’d like to see a statistically sound source for that claim. Given how many non-nerds there are on Reddit these days, it’s unlikely that there’s any particular strong bias in any direction compared to any similar demographic.

Given recent studies, that does seem to reflect reality. Trust in AI has been waning for 2 years now.

By what relevant metric?

The userbase has grown by an order of magnitude over the past few years. Models have gotten noticeably smarter and see more use across a variety of fields and contexts.


> Models have gotten noticeably smarter and see more use across a variety of fields and contexts.

Is that really true? The papers I've read seem to indicate the hallucination rate is getting higher.


Models from a few years ago are comparatively dumb. Basically useless when it comes to performing tasks you'd give to o3 or Gemini 2.5 Pro. Even smaller reasoning models can do things that would've been impossible in 2023.

> > “most people agree that the output is trite and unpleasant to consume”

> That is such a wild claim.

I think when he said "consume" he meant in terms of content consumption. You know, media - the thing that makes Western society go round. Movies, TV, music, books.

Would I watch an AI generated movie? No. What about a TV show? Uh... no. What about AI music? I mean, Spotify is trying to be tricky with that one, but no. I'd rather listen to Remi Wolf's 2024 Album "Big Ideas", which I thought was, ironically, less inspired than "Juno" but easily one of the best albums of the year.

ChatGPT is a useful interface, sure, but it's not entertaining. It's not high-quality. It doesn't provoke thought or offer us some solace in times of sadness. It doesn't spark joy or make me want to get up and dance.


If that were the case, people would watch classic movies, read novels, etc.

No, I’m pretty sure social media has seriously hurt the average person’s attention span.

The idea of sitting down and watching a two hour movie is really quite daunting when you’re used to videos that are at most 30 min and often less than one.


> The idea of sitting down and watching a two hour movie is really quite daunting when you’re used to videos that are at most 30 min and often less than one.

Whenever I watch a modern Netflix/Hulu/etc show: I'm on my phone 2 minutes into the show. Half paying attention to both.

Whenever I watch a modern BBC-ish (anything British really) show: I literally can't look away for more than 10 seconds because I will miss something crucial. If someone distracts me, I rewind the show and rewatch the last few minutes.

What's different? The Brits (at least the stuff that makes it into syndication) focus on content you're going to watch. The Americans focus on filling air between commercials.

Product placement counts as commercials for the purpose of this comparison.


Observe somebody browsing Tiktok/Instagram/YouTube Shorts. People compulsively swipe on to the next reel if the one they're watching doesn't hook them in within the first second.

Right, because the much vaunted Tik-Tok algorithm starts a stopwatch when the clip begins in order to determine whether or not to serve you more content like it.

> attention span.

This gets repeated ad nauseum, but IMHO people are short on patience, not attention.

Parents probably understand this the most: try to find an 80s movie to show to your kids. You'll have a pass at it first to properly remember what it's about, and it will be painfully slow.

Not peaceful or measured, just slow. Scenes that don't need much explanation are drawn out for about 10 min; dialogues that you digest in 2s get 2 min of lingering.

Most movies were targeted at a public that would need a lot of time to process info, and we're not that public anymore (despite this very TFA about how writers make their dialogues dumber)


I noticed this recently when I decided to watch Hitchcock's 'The Birds.'

It was almost absurd to me not only how bland and drawn out most scenes were, but how absolutely poorly acted it was. If it were not famous (i.e., didn't exist), and were updated to today's vernacular and shot scene for scene, it would absolutely get reamed by critics.

Funny how much changes in just a generation or two.


Old movies are kind of slow but I'm much less frustrated because they are short: an hour, at most two. That's more than enough to tell a story. Modern movies are two hours at minimum with some crossing over three with absolutely nothing to tell (e.g Babylon 2022, completely pissed me off).

I don't think the reason is "public needed time to process info"; more likely both the length and the intensity (of changing sights, not of meaning) were ultimately determined by production costs. Filming two hours is more expensive than one hour. Filling an hour with 60 one-minute cuts is more expensive than 30 two-minute cuts because of all the setup and decorations.

Production is now cheaper thanks to CGI, box offices are larger thanks to higher prices and the global market. You no longer have to be frugal when filming, the protection against sloppy overextended movies is now taste and not money. And taste is scarce.


> If that were the case, people would watch classic movies, read novels, etc.

They literally do. Have you ever tried reaching out people NOT on social networks?

> The idea of sitting down and watching a two hour movie is really quite daunting when you’re used to videos that are at most 30 min and often less than one.

Average movie length is increasing every year.


I don't think people know about classic movies, or know that they have access to classic movies (hint: libraries).

This person, though, has been catching up on a century of classic films. There are plenty of lists around on the internet if you wanted to get started. The AFI Top 100 is a gentle introduction to the (American-only) classics. There are deeper cuts when you are ready to saddle up for "1001 Movies" instead. (Warning: you could be starting down a journey that will involve the next eight years of your life.)

