Here's a good interview with the director of the Free Speech Coalition on the consequences of these "protect the kids" moral panic laws, which include widespread surveillance, banning VPNs and raising the cost of running an independent website to unsustainable levels.
Remember it's not just about pornography. It's anything deemed "harmful to minors" including platforms like Reddit, Bluesky or stuff conservative lawmakers think is harmful like discussion forums for LGBTQ people, sexual health information or dissident political opinions.
They also examine how these laws, which are often backed by the religious Right, are getting support more broadly from people who see it as a way to rein in Big Tech who are creating "social media addiction" and so forth.
And even within our industry there is a lot of money to be made by creating and selling compliance products, so even on forums like this you will find people advocating for them.
This is so much bigger than the “religious right” though, UK and Australia have far less of that and parties from both sides of politics here and in the UK seem to be competing to out-do each other with surveillance, censorship and control of adults online under the guise of ‘child safety’.
And all being pushed so, so much harder in just the last couple of years, all at the same time. I don’t know what’s the source…
Governments around the world have sought to control the internet and strip away anonymity for years; they've now found their foot-in-the-door moment, so they're all going for it in their own way.
Some of it is governments watching and copying each other, some of it is dialogue happening at international events, being driven by groups like the Global Coalition for Digital Safety.
It's probably not being driven by one single group; there are a number of private and government orgs whose interests in controlling information converge.
The religious right may be one faction in this push for digital surveillance, but I don't think they're the ones behind the EU push for chat control and device lockdown, or the insane 3D printer proposal in California.
The code they posted doesn't quite explain the root cause. This is a good case study for resilient API design and testing.
They said their /v1/prefixes endpoint has this snippet:
if v := req.URL.Query().Get("pending_delete"); v != "" {
    // ignore other behavior and fetch pending objects from the ip_prefixes_deleted table
    prefixes, err := c.RO().IPPrefixes().FetchPrefixesPendingDeletion(ctx)
    [..snip..]
}
What's implied but not shown here is that the endpoint normally returns all prefixes. They modified it to return just those pending deletion when a pending_delete query string parameter is passed.
The immediate problem, of course, is that this block will never execute if pending_delete has no value: Go defaults missing query params to empty strings, and the if statement skips this case. Which makes you wonder, what is the value supposed to be? This is not explained.
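A quick sketch of that behavior (the paths here are hypothetical, not their real routes): Go's url.Values.Get returns an empty string both when the parameter is absent and when it's passed bare with no value, so the v != "" guard skips both cases.

```go
package main

import (
	"fmt"
	"net/url"
)

// pendingDelete extracts the query parameter the same way the snippet does.
func pendingDelete(rawURL string) string {
	u, err := url.Parse(rawURL)
	if err != nil {
		panic(err)
	}
	return u.Query().Get("pending_delete")
}

func main() {
	fmt.Printf("%q\n", pendingDelete("/v1/prefixes"))                     // "" (param absent)
	fmt.Printf("%q\n", pendingDelete("/v1/prefixes?pending_delete"))      // "" (param present, no value)
	fmt.Printf("%q\n", pendingDelete("/v1/prefixes?pending_delete=true")) // "true" (branch executes)
}
```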
If it's supposed to be something like ?pending_delete=true, then this would work, but the implementation fails to validate the value. From this you can infer that no unit test was written to exercise it.
The post explains "initial testing and code review focused on the BYOIP self-service API journey." We can reasonably guess their tests were passing some kind of "true" value for the param, either explicitly or using a client that defaulted param values. What they didn't test was how their new service actually called it.
So, while there's plenty to criticize on the testing front, that's first and foremost a basic failure to clearly define an API contract and implement unit tests for it.
But there's a third problem, in my view the biggest one, at the design level. For a critical delete path they chose to overload an existing endpoint that defaults to returning everything. This was a dangerous move. When high-stakes data loss bugs are a potential outcome, it's worth considering a more restrictive API that is harder to use incorrectly. If they had implemented a dedicated endpoint for pending deletes, they would likely have omitted this default behavior, which is meant for non-destructive read paths.
In my experience, these sorts of decisions can stem from team ownership differences. If you owned the prefixes service and were writing an automated agent that could blow away everything, you might write a dedicated endpoint for it. But if you submitted a request to a separate team to enhance their service to return a subset of X, without explaining much of the context or use case, they may be more inclined to modify the existing endpoint for getting X. That lack of context and communication means the risks involved can go unexamined.
Final note: It's a little odd that the implementation uses Go's "if with short statement" syntax when v is only ever used once. This isn't wrong per se but it's strange and makes me wonder to what extent an LLM was involved.
> But there's a third problem, in my view the biggest one, at the design level. For a critical delete path they chose to overload an existing endpoint that defaults to returning everything. This was a dangerous move. When high-stakes data loss bugs are a potential outcome, it's worth considering a more restrictive API that is harder to use incorrectly. If they had implemented a dedicated endpoint for pending deletes, they would likely have omitted this default behavior, which is meant for non-destructive read paths.
Or a POST endpoint, with the client sending a serialized request object rather than relying on the developer remembering the magical query string.
I think this comment misses that OpenAI hired the guy, not the project.
"This guy was able to vibe code a major thing" is exactly the reason they hired him. Like it or not, so-called vibe coding is the new norm for productive software development and probably what got their attention is that this guy is more or less in the top tier of vibe coders. And laser focused on helpful agents.
The open source project, which will supposedly remain open source and able to be "easily done" by anyone else in any case, isn't the play here. The whole premise of the comment about "squashing" open source is misplaced and logically inconsistent. Per its own logic, anyone can pick up this project and continue to vibe out on it. If it falls into obscurity it's precisely because the guy doing the vibe coding was doing something personally unique.
Not only that, his output is insane: he has more active projects than I bother to count and more than 70k commits last year. He's probably one of the best vibe-coding evangelists, if not the best.
The original name of his AI assistant tool was 'clawdbot' until Anthropic C&D'ed him. All the examples and blog posts walking through new-user setup on a Mac mini or VPS assumed a Claude Code Max account.
I know he uses many LLMs for his actual software dev; right tool for the job. But the origins of openclaw seem to me more rooted in Claude Code than Codex.
Which does give the whole story an interesting angle when you consider the safety/alignment commitments Anthropic makes (publicly) and OpenAI pretty much ignores (publicly). Which is ironic, as configuring Codex CLI to 'full yolo mode' feels more burdensome and scary than in Claude Code. But I'm pretty sure that speaks more to eng/product decisions than to CEO and biz strategy choices.
It looks like most of Peter's projects are just simple API wrappers.
Peter's been running agents overnight 24/7 for almost a year using free tokens from his influencer payments to promote AI startups and multiple subscription accounts.
Hi, my name is Peter and I’m a Claudoholic. I’m addicted to agentic engineering. And sometimes I just vibe-code. ... I currently have 4 OpenAI subs and 1 Anthropic sub, so my overall costs are around 1k/month for basically unlimited tokens. If I’d use API calls, that’d cost my around 10x more. Don’t nail me on this math, I used some token counting tools like ccusage and it’s all somewhat imprecise, but even if it’s just 5x it’s a damn good deal.
... Sometimes [GPT-5-Codex] refactors for half an hour and then panics and reverts everything, and you need to re-run and soothen it like a child to tell it that it has enough time. Sometimes it forgets that it can do bash commands and it requires some encouragement. Sometimes it replies in russian or korean. Sometimes the monster slips and sends raw thinking to bash.
I’d bet good money that for at least 2/3 of all software ever made, the decision makers couldn’t care less about security beyond "let’s get that checkbox to show we care in case we get sued". Higher velocity >> tech debt and bugginess unless you work at NASA or you're writing software for a defibrillator, especially in the current "nothing matters more than next quarter results" climate.
I have worked for over two decades creating government software, and I can say that this is not new.
Security (and accessibility) are reluctant, minimum-effort checkboxes at best. However, my experience is focused on court management software, so maybe these aspects are taken more seriously in other areas of government software.
Taylor Lorenz has done excellent reporting on this. It's a right wing censorial moral panic that's forced some Democrats to go along with it by positioning it as "protecting kids". This legislation is moving at a fast clip and we have to fight back.
> using wireless communication means even less bandwidth between nodes, more noise as the number of nodes grows, and significantly higher power use
Space changes this. Laser-based optical links offer bandwidth of 100-1000 Gbps with much lower power consumption than radio-based links. They are more feasible in orbit due to the absence of interference and fog.
> Building data centres in the middle of the sahara desert is still much better in pretty much every metric
This is not true for the power generation aspect (which is the main motivation for orbital TPUs). Desert solar is a hard problem due to the need for a water supply to keep the panels clear of dust. Also the cooling problem is greatly exacerbated.
You don’t need to do anything to keep panels with a significant angle clear of dust in deserts. The Sahara is near the equator but you can stow panels at night and let the wind do its thing.
The lack of launch costs more than offsets the need for extra panels and batteries.
“The reason I concentrate my research on these urban environments is because the composition of soiling is completely different,” said Toth, a Ph.D. candidate in environmental engineering at the University of Colorado who has worked at NREL since 2017. “We have more fine particles that are these stickier particles that could contribute to much different surface chemistry on the module and different soiling. In the desert, you don’t have as much of the surface chemistry come into play.”
You’re not summarizing the article fairly. She is saying the soiling mechanisms are environmentally dependent, not that there is no soiling in the desert. Again, it cites an efficiency hit of 50% in the Middle East. The article later notes that they’ve experimented with autonomous robots for daily panel cleaning, but it’s not a generally solved problem, and it’s not true that “the wind takes care of it.”
And you still haven’t provided a source for your claim.
I’m saying the same thing she is: that soiling isn’t as severe in the desert, not that it doesn’t exist.
The article itself said the maximum was 50%, and that it was significantly less of a problem in the desert. Even 50% still beats space by miles: that only increases per-kWh cost by ~2c, and the need for batteries is still far more expensive.
So sure I could bring up other sources but I don’t want to get into a debate about the relative validity of sources etc because it just isn’t needed when the comparison point is solar on satellites.
You are again misquoting the article. She did not say soiling was "significantly less of a problem" in the desert. She in fact said it "requires you to clean them off every day or every other day or so" to prevent cement formation.
You claimed it was already a solved problem thanks to wind, which is false. You are unable to provide any source at all, not even a controversial one.
And that's just generation. Desert solar, energy storage and data center cooling at scale all remain massive engineering challenges that have not yet been generally solved. This is crucial to understand properly when comparing it to the engineering challenges of orbital computing.
Now you make me want to come up with a controversial source. The Martian rovers continued to operate at useful power level for decades without cleaning.
Thank you for providing a source. That’s an early stage research paper, not the proven solution you originally implied. There are tons of early stage research papers on all these problems on earth and in space. Often we encounter a bunch of complications in applying them at scale such as dew-related cementation[1], which is a key reason why they haven’t been deployed at sufficient scale.
That you point to the Mars rover, a mission with extremely budgeted power requirements, as proof of how soiling doesn’t pose an impediment to mega scale desert solar farms, only underscores the flaw in your reasoning.
“I don’t want to get into a debate about the relative validity of sources etc”
> Not the proven solution
Yet you quote a paper saying it can work. “This impact can have a positive or negative effect depending on the climatic conditions and the surface properties.”
I have no interest in debating with you because I don’t believe you are capable of an honest debate here. The physics doesn’t change, and the physics is what matters.
> doesn’t pose an impediment
Nope. I said it beats “space”, not that soiling doesn’t exist. That’s what you have to demonstrate here, and you have provided zero evidence whatsoever supporting that viewpoint. Hell, they could replace the entire array every 5 years and it would still beat space. Even if what you said was completely true, you would still lose the argument.
The argument here is simply over your false claim that "You don’t need to do anything to keep panels with a significant angle clear of dust in deserts." Your only source does not, in fact, establish that, and cementation is in fact a challenge with desert solar -- something that happens much faster than every five years.
Repeating unsupported claims and declaring yourself the winner does not, it turns out, actually help you win an argument.
Indeed, that seems unnecessarily complex for what is actually needed. I don't understand why the great grandparent comment seems to suggest it's an "unsolved" problem - as if grid-scale solar buildouts don't already have examples of things like motorized brushes on rails for exactly this already.
And it's always a numbers game - sure they're not /perfect/, but a few % efficiency loss is fine when it's competing against strapping every kilo of weight to tons of liquid hydrogen and oxygen and firing it into space. How much "extra" headroom to buffer those losses would that equivalent cost pay for?
And solar panels in space degrade over time too - between 1-5% per year depending on coatings/protections.
The same panel produces much more electricity in space than at the bottom of the atmosphere, because the atmosphere already reflects most of the light. Additionally, the panel needs less glass or no glass in space, which makes it lighter and cheaper.
Launch costs have shrunk significantly thanks to SpaceX, and they are projected to shrink further with the Super Heavy Booster and Starship.
Space doesn't really change it though because the effective bandwidth between nodes is reduced by the overall size of the network and how much data they need to relay between each other.
> It makes far more sense to build data centers in the arctic.
What (literally) on earth makes you say this? The arctic has excellent cooling and extremely poor sun exposure. Where would the energy come from?
A satellite in sun-synchronous orbit would have approximately 3-5X more energy generation than a terrestrial solar panel in the arctic. Additionally, anything terrestrial needs maintenance, e.g. clearing dust and snow off the panels (a major concern in deserts, which would otherwise seem to be ideal locations).
There are so many more considerations that go into terrestrial generation. This is not to deny the criticism of orbital panels, but rather to encourage a real and apolitical engineering discussion.
> A satellite in sun-synchronous orbit would have approximately 3-5X more energy generation than a terrestrial solar panel in the arctic.
Building 3-5x more solar plants in the Arctic would still be cheaper than travelling to space. And that's ignoring that there are other, more efficient plants possible. Even just building a long powerline around the globe to fetch it from warmer regions would be cheaper.
> Even just building a long powerline around the globe to fetch it from warmer regions would be cheaper.
Deserts have good sun exposure and land availability but extremely poor water resources, which are necessary for washing the sand off the panels. There are many challenges with scaling both terrestrial and orbital solar.
I wasn't thinking of going THAT far. Northern Canada/Alaska is in the arctic region, so build the line some thousand miles down to the sunny parts of Canada/USA and call it done. Not like this is particularly hard, probably not even that expensive, compared to a million satellites/future space-debris. Greenland would probably be also a good location.
There are plenty of legit concerns here about e.g. the launch externalities which are actually greater than the launch costs themselves, i.e. climate impact to future generations.
However, one flaw in this critique is that it only looks at the cost of ground-based solar panels and not their overall scalability. That is, manufacturing cost is far from the only factor. There is also the need for real estate in areas with good sun exposure that also have a sufficient fresh water supply for cleaning.
When we really consider the challenges of deploying orders of magnitude more terrestrial solar, the orbital vision deserves a more detailed and specific critique. Positives include near-continuous solar exposure (in certain orbits) and no water requirements.
Much has been said of cooling but remember, there is a lot of literal space between the satellites for radiative cooling fins. It is envisioned they would network via optical links, with each mini satellite roughly on the order of a desktop GPU tower (not a whole data center rack). The vision is predicated on leveraging a ton of space for lots of these mini satellites. The terrestrial areas that are really cold are also not that great for solar exposure.
Personally I don't know how it will play out but the core concern I have about making these kinds of absolutist predictions is they make weak assumptions about the sustainable scalability of terrestrial power. And that is definitely the case here in that it only looks at the manufacturing cost of solar.
> There is also the need for real estate in areas with good sun exposure that also have sufficient fresh water supply for cleaning.
Solar panels are 20x more efficient than growing corn for ethanol. Swap out some of those 30 million acres of ethanol corn fields (in the US) and you'll have more energy than you need.
Utility scale PV farms should be seen as literally harvesting solar power, not generating it, while still allowing other agriculture like sheep grazing to occur using the same fields.
You plant a PV panel and add its irrigation (power interconnect) and remote monitoring, then you harvest power for the next 25+ years.
Ethanol production excess is a specific US problem because of the misalignment of incentives and lobbying.
I’m all for it — converting just a third of that land to solar would be enough to power the grid in terms of raw output — but there is still a huge, unsolved problem of energy storage at that scale. Without that you’re only powering your data center for five hours a day.
At least in one case the authors claimed to use ChatGPT to "generate the citations after giving it author-year in-text citations, titles, or their paraphrases." They pasted the hallucinations in without checking. They've since responded with corrections to real papers that in most cases are very similar to the hallucination, lending credibility to their claim.[1]
Not great, but to be clear this is different from fabricating the whole paper or the authors inventing the citations. (In this case at least.)
I counted 15 hallucinated citations. The authors' explanation is plausible, but it is still 15 citations to works they clearly have not read. Any university teaches you that citing sources you have not personally verified to support your claims is fraudulent. Apologizing is not enough; they should retract the article.
What makes you say they "clearly have not read" their citations? Are you assuming that because they used ChatGPT to generate the citation section based on their description of the papers that they haven't read the papers? Are you suggesting that their clarifications of which real papers the ChatGPT citations were meant to map to are fake, and if so which ones?
I tried clicking on every number on this site and none of them linked to any primary sources.
I clicked through on the first news item I saw, "Security forces open fire on woman filming them."
This led to a post on X captioned 'Yasuj; "Firing a shotgun at a lady who was filming."'
The attached video[1] did not show a weapon. It appeared to show uniformed forces on motorbikes and some kind of muted firing sound.
A subsequent comment said: "Don't write the wrong text, it's marking with paintball so the operations team can arrest him. The sound of a shotgun is like this, don't give wrong information."
To be clear, this is not meant to defend security forces firing paintballs at or arresting people recording them, just calling into question the integrity of this particular claim suggesting lethal force, and the overall lack of support for the figures claimed.
"Another Internet Law That Punishes Everyone" - Power User Podcast 1/9/26: https://www.youtube.com/watch?v=8bnp3nmpK9g