I want to protect my child from X type of content -- one of many jobs of a parent -- but I should trust all content to self-report as child-inappropriate? "Inappropriate" is entirely subjective and cannot be defined as some sort of universal bool -- and that's before you get to actively malicious actors like Meta and TikTok exploiting children for their content farms and ad-impression factories.
If the user owns and controls their computers -- as they should -- then that subjective content-filtering layer belongs there, in the owner's control. If it's a child's, then the parent owns the device, not the child.
The idea is that society should have some common standards for what's inappropriate for children. For example, parents don't want their kids to buy cigarettes, but also, stores don't want to sell them cigarettes. When there's consensus on this, cooperation is possible. Parents have an easier time when they get cooperation from the rest of society.
But there isn't going to be consensus on everything, so content filters are still needed.
So simple, just get various Christian, Muslim, atheist, traditionalist and progressive, sane and insane parents to all agree on a common set of what is appropriate and inappropriate. And then enforce that on all of their children. Why didn’t I think of that? That should go great.
It's the internet. There are no borders and there is no mandate to follow any consensus. Stores may not want to sell cigarettes to children, but e-stores safely hosted in some remote country do want to sell them nicotine pouches and vapes. With a protocol that makes age information always available to websites, they could hide their intentions from adults while actively targeting children.
IMO you could have some mechanism by which websites could have content certified as child-safe if they agree to adhere to certain standards. (And thereby make them accessible to child-safe devices, which would otherwise default to blocking content which doesn't bear such a certificate.) Adult devices would not implement those restrictions and would therefore be unaffected.
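A minimal sketch of how the device-side default-block might work, assuming a hypothetical Child-Safe-Cert response header and a trusted-certifier list (every name here is made up; a real scheme would verify signatures cryptographically):

    # Child-safe device: block by default, allow only certified content.
    # The header name, certifier IDs, and "signature" handling are hypothetical.
    import urllib.request

    TRUSTED_CERTIFIERS = {"example-cert-authority"}

    def is_allowed_for_child(url: str) -> bool:
        req = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                cert = resp.headers.get("Child-Safe-Cert")  # hypothetical header
        except OSError:
            return False  # unreachable/erroring content stays blocked
        if cert is None:
            return False  # no certificate -> blocked by default
        issuer, _, signature = cert.partition(";")
        # A real implementation would verify `signature` against the issuer's
        # published key; this sketch only checks the issuer is on the trust list.
        return issuer.strip() in TRUSTED_CERTIFIERS and bool(signature.strip())

    print(is_allowed_for_child("https://example.com/"))  # False: no such header

Adult devices simply never run this check, so nothing changes for them.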
Very much the puff piece of someone living in a social media bubble. The real problem is how the fediverse is going to survive the onslaught of laws related to social media age verification, data retention, data privacy, data not-privacy (breaking e2e encryption and retaining data for a really long time to spy on users), etc. There are a lot of problematic laws right now, but the velocity of new laws is alarming.
One can make an argument that compliance is possible -- but it isn't free. I don't see how small, independent websites will survive. Some operators will choose not to follow the laws (which sometimes conflict with each other), and as long as they don't scale too much, or the operators stay anonymous, they can probably get away with it.
I use Mastodon. I use Twitter. Twitter is still fine as long as you keep your follow list clean. That means unfollowing people who post noise, which somehow people haven't figured out 17 years later?? Only view the chronological feed. Could this all have just been RSS feeds? Probably.
This whole thing is silly; LLMs can automate reference validation.
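At best that means a first-pass filter, something like the sketch below: pull out citation-shaped strings and flag anything that can't be matched against a verified source. The regex and the verified set are stand-ins; a real pipeline would query a reporter/docket database and then have an LLM check that each cited text actually supports the claim it's attached to.

    # First-pass reference validation: extract citation-like strings and flag
    # anything not found in a verified corpus (here a stand-in set).
    import re

    VERIFIED_CITATIONS = {"15 U.S.C. § 1125(a)"}  # stand-in for a real database

    CITE_RE = re.compile(r"\d+\s+U\.S\.C\.\s+§\s*\d+\([a-z]\)")

    def flag_suspect_citations(brief_text: str) -> list[str]:
        return [c for c in CITE_RE.findall(brief_text)
                if c not in VERIFIED_CITATIONS]

    brief = "Compare 15 U.S.C. § 1125(a) with 99 U.S.C. § 123(z)."
    print(flag_suspect_citations(brief))  # ['99 U.S.C. § 123(z)'] -- hallucinated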
If someone is a lawyer, accountant, doctor, teacher, surgeon, engineer, etc., and is regurgitating answers that were pumped out with GPT-5-extra-low or whatever mediocre throttled model they are using, they should just be fired and de-credentialed. Right now this is easy.
The real problem is ahead: 99.999% of future content will be made using generative AI. For many people using Facebook, Instagram, TikTok, or some other non-sequential, engagement-weighted feed, 50%+ of the content they consume today is fake. As that stuff spreads into modern culture, it's going to be an endless battle to keep it out of outlets that should not be publishing fake content (e.g. the New York Times or Wall Street Journal; excluding scientific journals, which seem to have abandoned validation and basic statistics a long time ago).
Much of the future value and profit margins might just be in valid data?
Nope, and the article is about a judge. What's the point of incentivizing lawyers to carefully verify their references when they know the judge has no incentive to read them and can just make shit up anyway?
Generative AI raises a lot of questions as to the value of copyright to society.
There's a very dangerous direction I suspect things are tipping toward with generative AI: the big creative rights holders and their representatives are going to be paid big royalties, in perpetuity, for generative AI. The amount of money the RIAA could get from Google, for example, may exceed the enterprise values of all record labels combined.
Even more scary, deals written into national law could join copyright cartels and mega-corporations at the hip and effectively ban all but the largest multi-trillion-dollar companies from training and serving generative AI models. Local AI models you download and run today -- whether LLMs or image generators -- would be illegal.
These models were trained and tuned on the collective work of human civilization. If someone uses a generative model to assist them in creating something new, how much intellectual property protection does that individual deserve? How much do the dead, the dying, and their rights owners deserve?
What was black or white 5 years ago is now grey. What remains black or white today will all be grey in 5 years as generative AI proliferates through all forms of software and real-time rendering. (If my iPhone camera is using generative AI to make an optical zoom look more detailed, how much of the result is really my photo? How much of it is Disney's?)
Even without diving into the privacy and censorship aspects of these issues, I think there's a very good case for completely ending copyright in the long term (leaving exceptions for things such as a human's own likeness?). In the near term, a 5-year copyright term sounds ok.
A human's own likeness is not copyrightable. Hard to take posts about copyright doctrine seriously when they are premised on complete misunderstanding.
There is a legally protected right of publicity. You cannot take someone's likeness and use it for your advertising campaign/movie/endorsement without their permission.
> There is a legally protected right of publicity.
There is not a general right of publicity in federal law in the US; in certain states there is, with different parameters, including as to who is even protected.
There is a false endorsement provision in the Lanham Act, 15 USC § 1125(a), that provides a very narrow protection around misleading commercial endorsement, though.
You'll be arrested for some weird law that doesn't make sense, but it's ok because a pool of 12 people off the street won't consider whatever random thing you did a real crime!
When I was a kid, they said don't meet up with strangers you talked to online. That was it. Sometimes it turned out poorly, but so could meeting anyone, anywhere. Perhaps the internet was even less risky, because the person you were talking to had no idea you were a kid or anything else about you.
There were no apps that all of your friends at school used -- apps you weren't cool without, apps that pushed you into unwittingly sharing photos publicly, publishing your photos/videos globally to adults who for some reason use the app longer when they look at videos of kids.
Maybe I'm misunderstanding how things work here; I don't use Facebook or Instagram, I've never even seen TikTok, and I've never used LinkedIn. But when I read these stories about what is going on with Mark Zuckerberg and Meta, it sounds like they were doing a lot of things they shouldn't be doing in a commercial context, period. If you aren't 18 you should still be able to talk to your friends without being spied on, but you sure as hell shouldn't be getting connected to random people, adult or otherwise, from all over the world because it increases the usage time of those adults on some app.
I think protocols are the way to go and will be what dominates in the post-AI era. Fuck the ads, the constantly changing UIs, the bait and switch, and now just add photo verification to the list. No thanks.
OpenAI, Anthropic, Google, and Microsoft certainly desire path dependence, but the very nature of LLMs and intelligence itself might make that hard unless they can develop models which are truly differentiated from (and better than) the rest. The Chinese open-source models catching up make me suspect that won't happen. The models will just be a commodity. There is a countdown clock for when we can get Opus 4.6+ level models, and it's measured in months.
The reason these LLM tools are good is that they can "just do stuff." Anthropic bans third-party subscription auth? I'll just have my other tool use Claude Code in tmux. If third-party agents can be banned from doing stuff (via some advanced always-on spyware or whatever), then a large chunk of the promise of AI is dead.
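(The tmux trick is nothing exotic -- it's just programmatic keystrokes into a pane. A rough sketch; the tmux subcommands are real, but the session name, the `claude` invocation, and the prompt are my placeholders:)

    # Drive an interactive CLI agent inside tmux from another program.
    import subprocess, time

    SESSION = "agent"  # arbitrary session name

    def tmux(*args: str) -> str:
        return subprocess.run(["tmux", *args],
                              capture_output=True, text=True).stdout

    tmux("new-session", "-d", "-s", SESSION, "claude")  # start the agent CLI
    tmux("send-keys", "-t", SESSION, "summarize this repo", "Enter")
    time.sleep(15)                                      # crude wait for a reply
    print(tmux("capture-pane", "-t", SESSION, "-p"))    # read the pane contents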
Amp just announced today that they are dumping IDE integration. Models seem to run better on bare-bones software like Pi, and you can add or remove stuff on the fly because the whole thing is open source. The software writes itself. Is Microsoft just trying to cram a whole new paradigm into an old package? Kind of like a computer printer: it will be a big business, but it isn't the future.
At scale, the end provider ultimately has to serve the inference -- they need the hardware, the data centers, and the electricity to power those data centers. Someone like Microsoft can also provide an SLA and price it appropriately. I'll avoid a $200/month customer-acquisition-cost rant, but one user running a bunch of sub-agents can spend a ton of money. If you don't have a business or a funding source behind you, the way state-of-the-art LLMs are being used today is totally uneconomical (easily $200+ an hour at API prices).
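Back-of-envelope on that $200+/hour figure, with assumed throughput and prices (nothing here is a quoted rate):

    # Rough cost model for heavy agent use at API prices. The throughput and
    # the blended $/Mtok price are assumptions for illustration, not quotes.
    subagents = 4
    tokens_per_min_each = 60_000   # assumed input+output tokens per sub-agent
    price_per_mtok = 15.00         # assumed blended $ per million tokens

    tokens_per_hour = subagents * tokens_per_min_each * 60
    cost_per_hour = tokens_per_hour / 1_000_000 * price_per_mtok
    print(f"${cost_per_hour:.0f}/hour")  # -> $216/hour under these assumptions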
36+ months out, if they overbuild the data centers and the revenue doesn't come in like OpenAI & Anthropic are forecasting, there will be a glut of hardware. If that's the case, I'd expect local model usage to scale up too, and it will get more difficult for enterprise providers.
(Nothing is certain but some things have become a bit more obvious than they were 6 months ago.)
Thinking about this a little more -> "nature of LLMs and intelligence"
Bloated apps are a material disadvantage. If I'm in a competitive industry, that slowdown alone can mean failure. The only thing Claude Code has going for it now is the loss-making $200/month subsidy. Is there any conceivable GUI overlay that Anthropic or OpenAI could add to make their software better than the current terminal apps? Sure, for certain edge cases, but then why isn't the user building those themselves? 24 months ago we could have said that's too hard, but that isn't the case in 2026.
Microsoft added all of this stuff into Windows, and it's a 5-alarm fire. Stuff that used to be usable is a mess and really slow. Running Linux with Claude Code, Codex, or Pi is clearly superior to having a Windows device with none of them (if it weren't possible to run these on Windows; just a hypothetical).
From the business/enterprise perspective -- there is no single most important thing, but having an environment that is reliable and predictable is high up there. Monday morning, and the Anthropic API endpoint is down: uh oh! In the longer term, businesses will really want to control both the model and the software that interfaces with it.
If the end game is just the same as talking to the Star Trek computer, and competitors are narrowing gaps rather than widening them (e.g. Anthropic and OpenAI release models minutes from each other now, and Chinese frontier models are getting closer in capability, not further), then it is really hard to see how either company achieves a vertical lockdown.
We could actually move down the stack, and then the real problem for OpenAI and Anthropic is Nvidia. It's 2030, the data center expansion has gone bust, and Nvidia starts selling all of these cards directly to consumers, with a huge financial incentive to make sure performant local models exist. Everyone in the semiconductor supply chain below Nvidia only cares about keeping sales going, so it stops with them.
Maybe Nvidia is the real winner?
Also, is it just me, or does it now feel like HN comments are just talking to a future LLM?
Britain has strict libel laws, and stories that the American press publishes are absolutely not legal in Britain, yet the British state doesn't threaten the New York Times or Washington Post with reparations (unless I'm missing something).
If this is the end goal, then they should do the same thing China does: make back doors mandatory on all devices and ban any sensitive foreign platforms at the network level. If anyone is using VPNs, Tor, or whatever, the UK police can flag those individuals and investigate what they are doing. At minimum, they can push ad revenue for Google, X, Meta, etc. in the UK close to $0, which will disincentivize those platforms from having users there.
There is also a future here where the UK will not be able to monitor or see what its users are doing. SpaceX is already breaking foreign sovereignty with Starlink usage in Iran. If the UK, or the rest of the EU, fails to crack down at the scale China did, they may completely lose control of what is distributed within their borders. A combination of satellites and mesh networks could be much harder to monitor than the current telecom infrastructure.
The current approach is going to get the UK pressured at the nation-state level by the US. In that case the UK isn't answering to some foreign tech company, but to whatever party is in power in the US at the time.