People are way too comfortable banning things these days. This is where the term 'nanny state' comes from. A subset of the population doesn't have self-control? Ban it for everyone. Even if it's a wildly popular form of entertainment with millions of creators sharing their lives, who cares, we know better.
Even the most liberal societies tend to restrict addictive things: alcohol, smoking, gambling, and drugs are regulated almost everywhere, in one form or another.
I think that algorithmic social media should likewise be regulated, with, at the very minimum, a ban for minors.
Note that my focus here is on the "algorithmic" part. I'm fine with little or no regulation for social media where your feed is just events in chronological order from contacts you are subscribed to, like an old bulletin board, or the original Facebook.
Also, I think we should hold companies that provide algorithmic social media responsible for what they publish in your feed. The content may be user generated, but what is pushed to the masses is decided by them.
It's way more complex than "no self control". Social media is addictive by design and is peddled at such scale that it is literally impossible to ignore. It's also backed by billions upon billions of dollars.
Pitting the average person against that, then blaming them for having "no self control" once they inevitably get sucked in, is not a remotely fair conclusion.
People keep saying this and yet, I have never used any of these short form video services or really any social media outside of desktop websites like hackernews and reddit. Even on reddit I just subscribe to a few niche and mostly technical subreddits. It seems extremely easy to ignore it all.
Considering the median amount of time people spend on social media daily, it sure does not seem to be so easy for the average person (as was implied in the comment you replied to). I've got pretty good self-control when it comes to the common vices, but I can't see why that would generalise to everyone else.
It's easy for you and me. At the same time, it doesn't seem right to make a business of intentionally going after the people who get addicted to this, like flavored cigs meant to appeal to teenagers. And these social media companies have a paper trail of internal research on user engagement.
But I'm still wary of the motives behind these bans because they seem to be about controlling information, not addiction.
> People are way too comfortable banning things these days. This is where the term 'nanny state' comes from. A subset of the population doesn't have self-control? Ban it for everyone. Even if it's a wildly popular form of entertainment with millions of creators sharing their lives, who cares, we know better.
Europe wants to ban algorithmic recommendation. You attack a straw man: banning all content from creators. If you have any valid arguments, you should bring them to the discussion instead of creating imaginary enemies.
Banning harmful design patterns is a must to protect citizens even if it ruffles the feathers of those profiting from their addiction.
> A subset of the population doesn't have self-control?
Please fix this to:
A subset of the population who has not yet reached the age of consent.
I think society broadly accepts that there are different expectations for children and adults; the line is currently officially drawn somewhere around 18-21 years old.
> But in Europe you can drink at 14. Age of consent is also 14.
That is hilariously general. You're conflating a lot of different nations there. In practice, it's different depending on the nation; consent is usually 16 and alcohol is ~18.
The thing is, people who live in Europe actually like that companies aren't allowed to take advantage of people in every way conceivable.
I have an idea: if you don't like regulation that protects people, why don't you fuck off to your own country and advocate for it in whatever dystopian hellhole you came from?
1. The reactions to banning drunk driving: "It's kind of getting communist when a fella can't put in a hard day's work, put in 11 to 12 hours a day, and then get in your truck and at least drink one or two beers."
2. Mandatory seatbelts: "This is Fascism"
You're going to balk at just about anything that comes down the line - I guarantee it.
The videos are the entertainment, not the endless recommendation algorithm.
Additionally, this is not about self control. The claim is that the algorithm is designed to exploit users. Insiders (including a designer of infinite scroll!) have admitted as much going back years: https://www.bbc.com/news/technology-44640959
We should be uncomfortable with companies spending huge amounts of money to research and implement exploitative algorithms. We did something about cigarette companies advertising to kids. This action is along those lines.
I would much rather people not break things down into false dichotomies. Also, we should strive to give our children at least "good" options, and not settle for "less bad".
When most of the market using it is abusive, and a source of abuse, it makes sense to prevent the abuse from continuing while it is being investigated, or better understood by the population and generations at large.
The "subset of the population" is not small, and there is no easy way to protect the most vulnerable.
> it's a wildly popular form of entertainment with millions of creators sharing their lives
I don't think we should be rewarding those who make a living by creating "content" that serves for nothing but a dopamine rush, and you can bet that those who put in the effort to create valuable content would prefer to have one less channel where they are forced to put out content just to satisfy the algorithm overlords.
It's not about the content, but the format and the economic pressure that corporations exert over everyone.
If you want to distribute short videos on a website that lets you choose what you want, after searching and deliberately clicking a button to play it, by all means feel free to do it. But the current TikTok mechanism removes all agency and is an extreme version of mind pollution.
I noticed YouTube Shorts also seems to update the feed based on how long you watched the last video. If you're scrolling quickly, then stop to watch a dog video long enough, the next one is likely to be another dog video.
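A minimal sketch of the kind of dwell-time feedback loop this would imply (purely illustrative Python; the function and field names are invented, and the real YouTube/TikTok rankers are of course far more complex and not public):

```python
from collections import defaultdict

# Hypothetical dwell-time feedback: topics you linger on get boosted,
# topics you flick past get demoted. Names and weights are invented.
topic_weights = defaultdict(lambda: 1.0)

def record_watch(topic, seconds_watched, video_length):
    """Update topic weights from one viewing event."""
    completion = min(seconds_watched / video_length, 1.0)
    if completion > 0.5:               # lingered: positive signal
        topic_weights[topic] *= 1.2
    else:                              # flicked past: negative signal
        topic_weights[topic] *= 0.9

def next_short(candidates):
    """Serve the candidate whose topic currently has the highest weight."""
    return max(candidates, key=lambda v: topic_weights[v["topic"]])

record_watch("dogs", seconds_watched=28, video_length=30)       # watched
record_watch("life-hacks", seconds_watched=2, video_length=45)  # skipped
# next_short() now favors another dog video, as described above.
```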
I’ve noticed the same thing and this creates such a negative user experience. Every short is a reaction test and if I fail, I get slop. Makes the whole experience very jarring (for better or for worse).
For better or worse with regards to my addiction, my subscriptions are all either science channels or high effort / high production comedy skits (e.g. DropoutTV). I still get slop, but I never subscribe and it mostly remains background noise
That’s the point though. It may seem as if you’re not in control when scrolling, but you can adjust your behavior to get the content you’re looking for almost intuitively. That’s actually something good in my honest opinion.
Why is it good that you need self-control to not get slop? It's much better if you can just turn that off and relax, rather than having to stay alert so it can't trick you into being served more slop.
Distancing yourself from temptations is an effective and proven way to get rid of addictions; a program that constantly tries to get you to relapse is not a good feature. Imagine a fridge that constantly restocks itself with beer: that would be very bad for alcoholics, and people would just say "just don't drink the beer?" even though this is a real problem with an easy fix.
Basically, I want to set boundaries in a healthy frame of mind, and have that default respected when my self control is lower because I’m tired, depressed, bored, etc.
It’s because human content curation can never reach the same level of relevance as direct feedback from user behavior. You mix in all kinds of biases, commercial interests, the ideology of the curator, etc., and you inevitably get irrelevant slop. The algorithm puts you in control a little bit more.
> The algorithm puts you in control a little bit more.
Why not let users choose a less addictive algorithm? Older algorithms were less addictive, so it's not at all impossible to do this, and many users would want it.
And that is why these algorithms need to be regulated. People don't want to pick the algorithm that makes them spend the most time possible on their phones; many would want an algorithm that optimizes for quality rather than quantity of time in the app, so they get more time to do other things. But corporations don't want to provide that because they don't earn anything from it.
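To make the quality-vs-quantity point concrete, here is a hedged sketch of how the same ranker could serve either objective (all names and weights are invented; this is no platform's actual code):

```python
# Two hypothetical ranking objectives over the same candidate videos.
def engagement_score(video):
    # Optimizes time on app: rewards whatever keeps you scrolling.
    return video["predicted_watch_seconds"]

def quality_score(video):
    # Optimizes explicit endorsement: ratings and shares, not dwell time.
    return 2.0 * video["avg_rating"] + video["share_rate"]

def rank(candidates, objective):
    return sorted(candidates, key=objective, reverse=True)

# A regulation like the one argued for above would simply require the
# platform to expose a user setting choosing the objective, e.g.:
#   feed = rank(candidates, quality_score)
```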
I just don’t think that the addiction is exclusively due to the algorithm. There’s really a lack of affordable, varied options for learning a trade and for entertainment. As we say in Portuguese: you shouldn’t throw the baby out with the bathwater.
I don't see any harm that could come from saying "a less addictive algorithm needs to be available to users"? For example, let's say there is an option to only recommend videos from channels you subscribe to; that would be much less addictive, so why isn't that an option? A regulation that forces these companies to add such a feature would only make the world a better place.
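For what it's worth, the option being asked for here is technically trivial. A sketch, again with hypothetical field names rather than any platform's real API:

```python
def subscriptions_only_feed(candidates, subscribed_channels):
    """Keep only recommendations from subscribed channels, then fall back
    to simple reverse-chronological order instead of engagement ranking."""
    allowed = [v for v in candidates if v["channel_id"] in subscribed_channels]
    return sorted(allowed, key=lambda v: v["published_at"], reverse=True)
```

The hard part is the incentives, not the engineering.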
>I don't see any harm that could come from saying "a less addictive algorithm needs to be available to users"?
Consider air travel in the present day. Ticketing at essentially all airlines breaks down as: premium tickets that are dramatically expensive but offer comfortable seats, and economy tickets that are cramped and seem to impose new indignities every season. What could be the harm from legislation that would change that menu?
The harm would be fewer people able to travel, fewer young people taking their first trip to experience the other side of the world, fewer families visiting grandma, etc.
As much as people hate the air travel experience, the tickets get snapped up, most of them strictly on the basis of price, and the next most taking nonstops into account. This gives us a gauge of how much people hate air travel: they don't.
This doesn't mean airlines should have no regulation; it doesn't mean monopoly practices are not harmful to happiness; it doesn't mean that addictions don't drive people to make bad choices; it doesn't mean a lot of things.
I'm just trying to get you to see that subtle but significant harm to human thriving can easily come from regulations.
I agree, but what would be the actual mechanism that would allow that? I believe we’re out of ideas. TikTok’s crime was just being firmly successful because of good engineering. There’s no evil sauce apart from promotional content and occasional manipulation, which has nothing to do with the algorithm per se.
And about whitelisting, I honestly don’t think you’re comparing apples to apples. The point of the algorithm is dynamically recommending new content. It’s about discovery.
> I agree, but what would be the actual mechanism that would allow that?
Governments saying "if you are a social content platform with more than XX million users you have to provide these options on recommendation algorithms: X Y Z". It is that easy.
> And about whitelisting, I honestly don’t think you’re comparing apples to apples. The point of the algorithm is dynamically recommending new content. It’s about discovery.
And some people want to turn off that pushed discovery and just get recommended videos from a set of channels that they subscribed to. They still want to watch some TikTok videos, they just don't want the algorithm to try to push bad content on them.
You are right that you can't avoid such an algorithm when searching for new content, but I don't see why it has to be there in content pushed onto you when you haven't asked for anything new.
I don’t agree tbh. This is part of how people wind up down extremist rabbit holes. If you’re just lazily scrolling it can easily trap you in its gravity well.
But you can get into extremist rabbit holes independently of the control surface. Remember 4chan? Dangerous content is a matter of moderation, regardless of the interface.
I try to react as “violently” as possible to any slop and low-quality crap (e.g. stupid “life hacks” purposely bad to ragebait the comments). On YouTube it’s called “Don’t recommend this channel” and on Facebook it’s multiple taps but you can “Hide All From…”
Basically, I don’t trust that thumbs down is sufficient. It is of course silly, since there are no doubt millions of bad channels and I probably can’t mute them all.
The right way to look at these networks is that people are being trained by the algorithm, not the other way around. The ultimate goal is to elicit behaviors in humans, normally to spend more time and more money on the platform, but also to serve other goals that may be designed by the owners of the network.
One of my gripes with youtube at the moment is that they break my adblock filters to remove shorts more often than they break the filters stopping the actual ads.
The *FCC* is a regulatory agency, and many regulations have criminal penalties for violating them. The SEC for example has sent many people to prison. Fines can also be criminal penalties, not just civil.
This is a tool that is basically vibecoded alpha software published on GitHub and uses API keys. It’s technical people taking risks on their own machines or VMs/servers using experimental software because the idea is interesting to them.
I remember when Android was new it was full of apps that were spam and malware. Then it went through a long period of maturity with a focus on security.
> However this does not excuse Apple to sit with their thumbs up their asses for all these years.
They've been wildly successful for all of those years. They've never been in the novel software business. One could argue Siri was neglected, but Alexa was also neglected at Amazon, and Google's home stuff still sucks too (mostly because none of them made any money and most of the big ideas for voice assistants never came true).
You could argue they haven’t been truly novel, e.g. that the Lisa was covering Xerox PARC ideas, but I think you’d have to ignore a lot of significant work to say they didn’t substantially innovate in GUIs, personal assistants and handwriting recognition (Newton), touchscreen behavior (iPhone), etc.
The key thing is that they tend not to ship things which aren’t mature enough to be useful (Vision Pro and Apple Intelligence being cautionary tales about why) and voice assistants just aren’t doing a whole lot for anyone. Google and Amazon have been struggling to find a market, too, and it’s rare to find someone who uses those dramatically more than Apple users do Siri. I think most of the big advances need something close to AGI: if you can’t describe something in a short command, it’s usually much faster to use a device with a screen and a lot of the useful tasks need a level of security and reliability which requires actual reasoning to deliver.
> What people are talking about doing with OpenClaw I find absolutely insane.
Based on their homepage the project is two months old and the guy described it as something he "hacked together over a weekend project" [1] and published it on github. So this is very much the Raspberry Pi crowd coming up with crazy ideas and most of them probably don't work well, but the potential excites them enough to dabble in risky areas.
In my feeds, I’ve seen activity among several an-LLM-is-my-tech-lead-level newly tech-ish people, who are just plugging their lives into it and seeing what happens.
If this really was primarily tech savvy people prodding at the ecosystem, the top skill available, as of a few days ago, probably wouldn’t be a malware installer:
iOS 26 is proof that many product managers at Apple need to find another calling. The usability enshittification in that release is severe and embarrassing.
Ouch. You could have taken a statistical approach: "Google is not known for high-quality product development and therefore likely does not select candidates for qualities in the product-development domain." I'm talking too much to Gemini, aren't I?
I'm not that surprised because of how pervasive the 'move fast and break things' culture is in Silicon Valley, and what is essentially AI accelerationism. You see this reflected all over HN as well, e.g. when Cloudflare goes down and it's a good thing because it gives you a break from the screen. Who cares that it broke? That's just how it is.
This is just not how software engineering goes in many other places, particularly where the stakes are much higher and can be life altering, if not threatening.
It is obvious if viewed through an Apple lens. It wouldn't be so obvious if viewed through a Google lens. Google doesn't hesitate to throw whatever it's got out there to see what sticks, quickly cancelling anything that doesn't work out, even if some users come to love the offering.
After subscribing for a decade, I canceled the NYT last year because I felt it was leaning more into social media bait (which I don't blame them for, for business reasons). That, plus they kept blocking Firefox with an unpassable captcha even though I was logged in.
I now read the WSJ largely for the same reasons: more focused, a little drier, easier to follow. I also find the WSJ is much better at writing good headlines that draw you in, on a broad range of topics, not just breaking Trump news 24/7, which is mostly what the NYTimes notifies you with. The WSJ also has an excellent YouTube channel, probably the best of the big 3. The only problem with the WSJ is it costs twice as much.
It is much easier to get by not paying for NYT by using stuff like https://periscope.corsfix.com/ for sure. But it is a big inconvenience. WSJ is much more aggressive with it.
>It is much easier to get by not paying for NYT by using stuff like https://periscope.corsfix.com/ for sure. But it is a big inconvenience. WSJ is much more aggressive with it.
While not made specifically for that purpose, this extension[0][1] does a nice job without a lot of hassle, and it makes such pages accessible to others with a URL you can share. The downside is that dynamic content isn't supported.