the_mitsuhiko's comments

> Unless the EU member states actually impose capital controls, investors will continue to send their capital wherever it can earn the highest returns.

You don't need to introduce capital controls to make it unattractive to invest in the US. There are plenty of levers the EU could pull that would quickly make investments abroad very unpopular.


Like how…?

By taking inspiration from the US. The US has PFIC rules, for instance, and many other reporting requirements that make it more attractive to invest in the US than abroad.

Yea, but how?

The EU can barely get the Mercosur FTA out the door. How can it even attempt to make such a drastic change that would make FDI in the EU less attractive than equally large and equally onerous China?

And that ignores the fact that states like Poland, Ireland, and Czechia would ferociously fight back against anything that threatens their FDI-driven economies.

Even Ireland opposed the Anti-Coercion Instrument [0] four days ago, and everyone still remembers Belgium's unilateral opposition to seizing frozen Russian assets barely a month ago.

[0] - https://www.reuters.com/world/europe/be-no-doubt-eu-will-ret...


That Europe is incapable of doing anything bold is a different topic. You don't have to tell me how fundamentally screwed we are because of the consensus issue. But Europe could, without introducing capital controls, implement something. The US did; there is no fundamental reason why Europe could not either.

It's just a question of political will.


If something is hypothetically possible but practically impossible, then the mental exercise is a waste of time, and distracts from thinking about an actual solution.

For example, Trump could be impeached and removed from office, but that isn't happening. So what's the solution?


I exclusively responded to a comment about capital controls, which are even less likely. I'm not particularly interested in a discussion about what politicians might or might not do.

I think if people were forced to invest their pensions in shitty EU stocks there would be pushback. Also, moving public sector pensions into EU stocks won't deliver the growth required; they are already unsustainable.

But there's a chicken and egg effect here in that the stock prices are low because of low investment and the stocks are bad because the stock prices are low.

For instance, Meta has basically doubled in price from a few years back but their business is basically identical. Doesn't seem very efficient to me, at least.


Those are capital controls by another name.

But not necessarily capital controls with a similar legislative difficulty, although at this point it's somewhat abstract what is being discussed.

I can also listen to a notary online in Austria. I just absolutely do not want to have the notary involved in the first place.

> I don't find the wording in the RFC to be that ambiguous actually.

You might not find it ambiguous, but it is ambiguous, and there were attempts to fix it. You can find a warmed-up discussion about this topic here: https://mailarchive.ietf.org/arch/msg/dnsop/2USkYvbnSIQ8s2vf...


> will GitHub face the same slop-destiny as mainstream social media

At the very least, because it's now human + coding agent, separating out the human input from the machine output in pull requests becomes necessary in my book. There are dramatic differences in prompting styles that can produce completely different qualities of output, and it's much easier to tell them apart from the prompts than from the outputs, given that it's basically an amplification problem.


> separating out the human [input] from the machine

I was thinking more generally and thus put the noun "input" in brackets in the quote. With agents and slop, the value for humans being there may quickly spiral down. There is also a lot of bad stuff already there, including malware and such.

If you have your own infrastructure instead of a mega-platform, you can control these things more easily.


The value in open source code was never the code. It was the trust that was created around it: that it becomes a place for useful innovation, for trust, for vetting, for keeping dependencies low.

I can build my own curl in a week, but the value that curl gives me is that it's a multi-decade-old library, by a person who has dedicated his life to keeping the project there, keeping a quality bar, etc.

The value of curl is not curl, it's the human behind it.


The human behind it, the community using it critically, and the years of battle hardening.

The great open source tools out there have handled, worked around, or influenced away many, many bugs and edge cases out in the real world, many of which you won't think of when initially designing your own. The silent increase in stability and productivity resulting from this kind of thing is as vast as it is hard to see/measure. It feels like the quote about expertise saying someone "has forgotten more than I'll ever know about [subject]".

Thank you to everyone powering our collective work.


At this point I'm fully down the path of the agent just maintaining its own tools. I have a browser skill that continues to evolve as I use it. Beats every alternative I have tried so far.

Same. Claude Opus 4.5 one-shots the basics of the Chrome debug protocol, and then you can go from there.

Plus, now it is personal software... just keep asking it to improve the skill based on your usage. Bake in domain knowledge or business logic or whatever you want.

I'm using this for e2e testing and debugging Obsidian plugins and it is starting to understand Obsidian inside and out.


Cool! Have you written more about this? (EDIT: from your profile, is that what https://relay.md is about?)

https://relay.md is a company I'm working on for shared knowledge management / AI context for teams, and the Obsidian plugin is what I am driving with my live-debug and obsidian-e2e skills.

I can try to write it up (I am a bit behind this week though...), but I basically opened claude code and said "write a new skill that uses the chrome debug protocol to drive end to end tests in Obsidian" and then whenever it had problems I said "fix the skill to look up the element at the x,y coordinate before clicking" or whatever.

Skills are just markdown files, sometimes accompanied by scripts, so they work really naturally with Obsidian.
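
To give a flavor of what such an accompanying script can look like, here is a minimal Python sketch, assuming the Electron app was launched with --remote-debugging-port=9222 and that the requests and websocket-client packages are installed; the helper name is made up for illustration:

    import json

    import requests
    import websocket  # pip install websocket-client


    def evaluate_in_first_page(expression: str, port: int = 9222):
        """Evaluate a JS expression in the first page-type target via the Chrome debug protocol."""
        # The DevTools HTTP endpoint lists all debuggable targets.
        targets = requests.get(f"http://localhost:{port}/json", timeout=5).json()
        page = next(t for t in targets if t.get("type") == "page")

        # Each target exposes a websocket for the actual protocol traffic.
        ws = websocket.create_connection(page["webSocketDebuggerUrl"])
        try:
            ws.send(json.dumps({
                "id": 1,
                "method": "Runtime.evaluate",
                "params": {"expression": expression, "returnByValue": True},
            }))
            return json.loads(ws.recv())
        finally:
            ws.close()


    if __name__ == "__main__":
        print(evaluate_in_first_page("document.title"))

The skill's markdown then just tells the agent when to run the script and how to read its output.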


Hey FWIW Relay is AWESOME!! The granular sharing of a given dir within a vault (vs the whole thing) finally solves the split-brain problem of personal (private) vault on my own hardware vs mandated use of a company laptop... it's fast, intuitive, and SOLVES this long-time thorn in my side. Thanks for creating it, high five, hope it leads to massive success for you! :)

Thank you for the kind words <3

Sorry it took me a while. Hopefully this helps:

https://notes.danielgk.com/Obsidian/Obsidian+E2E+testing+Cla...


Thanks! It does help, it's a great blog. You should consider posting a "Show HN".

Do you experience any context pollution with that approach?

Writing your own skill is actually a lot better for context efficiency.

Your skill will be tuned to your use case over time, so if there's something that you do a lot you can hide most of the back-and-forth behind the python script / cli tool.

You can even improve the skill by saying "I want to be more token efficient, please review our chat logs for usage of our skill and factor out common operations into new functions".

If anything, context waste/rot comes from documentation of features that other people need but you don't. The skill should be a sharp knife, not a multi-tool.
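
To make that concrete, here is a rough sketch of the kind of small CLI a skill could shell out to so the agent only sees compact output; the subcommand and output format are invented for illustration, and it assumes the same local debug endpoint and the requests package:

    import argparse

    import requests

    DEBUG_URL = "http://localhost:9222/json"  # assumed Chrome/Electron debug endpoint


    def list_tabs() -> str:
        """Return one compact 'title -> url' line per open tab instead of the full target JSON."""
        targets = requests.get(DEBUG_URL, timeout=5).json()
        pages = [t for t in targets if t.get("type") == "page"]
        return "\n".join(f"{t.get('title', '?')} -> {t.get('url', '?')}" for t in pages)


    def main() -> None:
        parser = argparse.ArgumentParser(description="Compact browser helpers for an agent skill")
        sub = parser.add_subparsers(dest="command", required=True)
        sub.add_parser("tabs", help="list open tabs, one line each")

        args = parser.parse_args()
        if args.command == "tabs":
            print(list_tabs())


    if __name__ == "__main__":
        main()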


Not really. Less bad than the MCPs I used.

What's the name of the skill?

Why would that matter?

I find it so incredibly disappointing that discrimination by citizenship or country of birth is not just alive, but getting worse. I'm afraid that if the US is starting with this, it won't take long for others to catch up.

If the world learns anything from the celebration of stupidity that has become the US, I very much hope it’s “whatever they’re doing, we absolutely should not.”

A lot of countries already do this. You cannot get visas to most developed countries if you are likely to become a "public charge". In general, it's a lot easier to get a visa if you are from a rich and stable country (or are rich yourself), and if you look at where countries allow visa-free travel to citizens of another country, the countries on this list are unlikely to qualify!

In that case, why not have some measurement of what makes a person likely to be a public charge that applies to every country, rather than a blanket ban on everyone from targeted countries?

They already literally review on a case by case basis regardless of country of origin. Providing evidence of financial support is a big part of visa and green card applications.

There are lots of possible reasons. Some good, some bad.

A possible good reason might be that there is a higher level of fraud (e.g. faked financial statements), or a higher level of public charge in applications from some countries - especially if it is a pause while procedures are changed. On the other hand the true motive might be something else.

That said, I have no idea why it's this particular list of countries. Why Thailand or Jamaica or Nepal?


H1B processing is hopelessly backed up for the 60-70 thousand visas we give out annually. We would have to massively cut immigration inflow, from the 1-3 million annually we have today, to make those granular determinations feasible.

I don't think individualized determinations are even possible. Unless you take very few people from each country, they'll inevitably find each other and form communities. And the kinds of communities they form will be driven by their cultures. The question isn't "would this one Bangladeshi be a good immigrant." It is "when 100,000 Bangladeshis inevitably form a cultural enclave in some city, will that be better or worse than what was there before?"


That is not the same as this. If you're a multi-PhD holder from Iran who's a world-famous scientist, you can get into e.g. the UK. This would forbid them, purely based on country of origin.

The article says it is a temporary pause. Other sources seem to confirm this:

"Immigrant visa processing from these 75 countries will be paused while the State Department reassesses immigration processing procedures to prevent the entry of foreign nationals who would take welfare and public benefits,"

https://www.reuters.com/world/us/us-suspend-visa-processing-...


Oh, well that's reassuring

The U.S. already does this. Providing evidence of financial support is a big part of visa and green card applications. If this is a big problem, it's because the U.S. is approving applications without sufficiently reviewing that evidence (but more likely, it's a bogus excuse).

You need to learn your history, because one of the first immigration laws this country passed exclusively banned Chinese people for nearly an entire human lifespan.

Who doesn't discriminate by citizenship, really?

That’s the “is not just alive” part.

Yeah, but then the "others will catch up" part does not make sense. Other countries don't need the US' example to do that.

Good. I like having distinct nation states with different cultures and ethnicities instead of bland homogenized globalized grayness - the thing that can be seen in every mall in a city that has an international airport. From Jakarta through Kenya, Berlin, and NY - it is all the same. There should be brakes on the whole immigration and asylum thing.

While I don't agree with the haphazard and seemingly random policy changes coming from the US lately -- this is a bad take.

You do realize that discrimination by citizenship is conducted by basically every government on earth in the context of visas and tourism and residency?

In fact, what made the US so bizarre up until about 1914 was that they were the only major country that effectively had open borders. There was no welfare state to take advantage of back then, and you literally did have to pull yourself up by your bootstraps.

This only started to shift after the US began constructing its welfare state (welfare state expansion correlates with increasingly closed immigration policy, hence where we find ourselves today).


Literally every country worldwide does this. The question is simply to what extent and to which countries. The whole difference between being a native and an alien is the rights you get. It's not a human right to be able to freely go into any country you please.

> The whole difference between being a native and an alien is the rights you get. It's not a human right to be able to freely go into any country you please.

The first step for genocide is to dehumanize people.

They're not humans, they're aliens. Therefore it's fine if we treat them as filth and throw them away (or gas them).


It's interesting you got downvoted, perhaps for the sentence

> The whole difference between being a native and an alien is the rights you get.

A knee jerk and uncharitable reading might make this look bad, but it does require an uncharitable reading. It is clear what you mean.

However, the claim

> It's not a human right to be able to freely go into any country you please.

is not false. The idea that open borders are a good thing is a very odd idea. It seems to grow out of a hyperindividualistic and global capitalist/consumerist culture and mindset that doesn't recognize the reality of societies and cultures. Either that, or it is a rationalization of one's own very domestic and particular choices, for example. In any case, uncontrolled migration is well-understood (and rather obviously!) as something damaging to any society and any culture. In hyperindividualistic countries, this is perhaps less appreciated, because there isn't really an ethnos or cohesive culture or society. In the US, for example, corporate consumerism dominates what passes as "culture" (certainly pop culture), and the culture's liberal individualism is hostile to the formation and persistence of a robust common good as well as a recognition of what constitutes an authentic common good. It is reduced mostly to economic factors, hence globalist capitalism. So, in the extreme, if there are no societies, only atoms and the void, then who cares how the atoms go?

The other problem is that public discourse operates almost entirely within the confines of the false dichotomy of jingoist nationalism on the one hand and hyperindividualist globalism on the other (with the respective variants, like the socialist). There is little recognition of so-called postliberal positions, at least some of which draw on the robust traditional understanding of the common good and the human person, one that both jingoist nationalism and hyperindividualist globalism contradict. When postliberalism is mentioned, it is often smeared with false characterization or falsely lumped in with nihilistic positions like the Yarvin variety...which is not traditional!

Given the ongoing collapse of the liberal order - a process that will take time - these postliberal positions will need to be examined carefully if we are to avoid the hideous options dominating the public square today.


> The idea that open borders are a good thing is a very odd idea

Passports were not common until the 20th century. Until then borders were mostly porous.

There did use to be other cases where some people couldn't leave a geographic confine; they used to call them serfs.


Pardon me if I'm misreading it, but this sounds like disinformation. No examples in your argument, a lot of abstract reasoning unmoored from facts.

>uncontrolled migration is well-understood (and rather obviously!) as something damaging to any society and any culture.

The US was built on unrestricted immigration for a long time. Was that destructive? I guess so if you count Native Americans, but not to the nation of the USA.

Capitalism wants closed borders to labor and open borders to capital. That's how they can squeeze labor costs while maximizing profits. The US is highly individualistic but wants closed borders, so how does your reasoning align with the news?


Capitalists in wealthy countries have absolutely no problem with effectively open borders; that's exactly how they squeeze labor costs.

That's the wrong way of looking at it. We have evidence that national cultures affect prosperity, and that, at scale, immigrants bring their cultures with them: https://www.rorotoko.com/11/20230913-jones-garett-on-book-cu... ("For the last twenty years I’ve been asking the Adam Smith question: Why are some nations so much more productive than others? I’d found some new answers in my own research, summed up in my earlier book Hive Mind. But at the same time, I kept reading findings by a separate group of researchers, especially three excellent professors at Brown University: David Weil, Louis Putterman, and Oded Galor. Their work on the 'Deep Roots' of economic prosperity suggested that many of the important economic differences across countries began centuries, even millennia ago.").

The U.S. takes in millions of immigrants a year. At that scale, it's not a question of the individual merits of a single immigrant from a country. It's about the merits of the community that will be formed when 100,000 immigrants from that country come to the U.S. and settle in the same place and socialize their children into their culture. And the evidence we have is that, when that happens, they'll bring with them a lot of characteristics of their origin countries.


This is a gigantic middle finger to pre-1965 South Asian immigrants, which you continue to pretend don't exist.

A few days ago he was claiming that the most orderly societies had the least seasoned food.

Am I wrong? You acknowledge that food preferences are cultural, right? Wouldn’t it be weird if culture just affected the kinds of food people like and how they dress, but not the kinds of civic institutions they form?

Not at all! I think it’s the opposite! That population was small and scattered. They had limited capacity to create cultural enclaves, develop ethnic social identity, etc. They ended up absorbing much more culturally from Americans and had little cultural and social impact on the communities where they moved.

That’s quite different from mass immigration.


> MCP allows any client (Claude, Cursor, IDEs) to dynamically discover and interact with any resource (Postgres, Slack) without custom glue code.

My agent writes its own glue code, so the benefit does not seem to really exist in practice. Definitely not for coding agents, and increasingly less for non-coding agents too. Give it a file system and bash in a sandbox and you have a capable system. Give it some skills and it will write itself whatever is needed to connect to an API.

Every time I think I have a use case for MCP I discover that when I ask the agent to just write its own skill it works better, particularly because the agent can fix it up itself.


The skill/CLI argument misses what MCP enables for interactive workflows. Sure, Claude can shell out to psql. But MCP lets you build approval gates, audit logs, and multi-step transactions that pause for human input.

Claude Code's --permission-prompt-tool flag is a good example. You point it at an MCP server, and every permission request goes through that server instead of a local prompt. The server can do whatever: post to Slack, require 2FA, log to an audit trail. Instead of "allow all DB writes" or "deny all," the agent requests approval for each mutation with context about what it's trying to do.

MCP is overkill for "read a file" but valuable when you need the agent to ask permission, report progress, or hand off to another system mid-task.
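
As a sketch of that approval-gate idea, here is a minimal MCP server using FastMCP from the official MCP Python SDK (pip install mcp). The tool name, the audit-log path, and the shape of the returned decision are assumptions for illustration; the exact payload Claude Code expects from --permission-prompt-tool is defined in its docs, not here:

    import json
    from datetime import datetime, timezone
    from pathlib import Path

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("approval-gate")
    AUDIT_LOG = Path("audit.log")  # assumed location for the audit trail


    @mcp.tool()
    def approve_action(tool_name: str, tool_input: dict) -> str:
        """Record a requested action to an audit log and return an allow/deny decision."""
        entry = {
            "time": datetime.now(timezone.utc).isoformat(),
            "tool": tool_name,
            "input": tool_input,
        }
        with AUDIT_LOG.open("a") as f:
            f.write(json.dumps(entry) + "\n")

        # Toy policy: allow reads, deny everything else. A real gate could post to
        # Slack, require 2FA, etc. The returned JSON shape is an assumption here;
        # check Claude Code's documentation for the actual contract.
        if tool_name.lower().startswith("read"):
            return json.dumps({"behavior": "allow", "updatedInput": tool_input})
        return json.dumps({"behavior": "deny", "message": "needs human approval"})


    if __name__ == "__main__":
        mcp.run()  # stdio transport by default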


You end up wasting tokens on implementation, debugging, execution, and parsing when you could just use the tool (tool description gets used instead).

Also, once you give it this general access, it opens up essentially infinite directions for the model to go in. Repeatability and testing become very difficult in that situation. One time it may write a bash script to solve the problem. The next, it may want to use Python and pip install a few libraries to solve that same problem. Yes, both are valid, but if you desire a particular flow, you need to create a prompt for it that you hope it'll comply with. It's about shifting certain decisions away from the model so that it can have more room for the stuff you need it to do while ensuring that performance is somewhat consistent.

For now, managing the context window still matters, even if you don't care about efficient token usage. So burning 5-10% on re-writing the same API calls makes the model dumber.


> You end up wasting tokens on implementation, debugging, execution, and parsing when you could just use the tool (tool description gets used instead).

The tokens are not wasted, because I rewind to before it started building the tool. That it can build and manipulate its own tools is, to me, the benefit, not the downside. The internal work to manipulate the tools does not waste any context because it's a side adventure that does not affect my context.


Maybe I'm not understanding the scenario well. I'm imagining an autonomous agent as a sort of baseline. Are you saying the agent says "I need to write a tool", it takes a snapshot, and once it's done, it rewinds to the snapshot but this time, it has the tool it desired? That's actually a really cool idea to do autonomously!

If you mean manually, that's still interesting, but that kind of feels like the same thing to me. The idea is - don't let the agent burn context writing tools, it should just use them. Isn't that exactly what yours is doing? Instead of rewinding to a snapshot, I have a separate code base for it. As tools get more complex, it seems nice to have them well-tested with standardized input and output. Generating tools on the fly, rewinding, and using tools is just the same thing. You even would need to provide some context that says what the tool is and how to use it, which is basically what the mcp server is doing.


> Are you saying the agent says "I need to write a tool", it takes a snapshot, and once it's done, it rewinds to the snapshot but this time, it has the tool it desired? That's actually a really cool idea to do autonomously!

I'm basically saying this, except I currently don't give the agent a tool yet to do it automatically because it's not really RL'ed to that extent. So I use the branching and compaction functionality of my harness manually when it should do that.

> If you mean manually, that's still interesting, but that kind of feels like the same thing to me.

It's similar, but it retains the context and feels very natural. There are many ways to skin the cat :)


I think the path to dependency on closed publishers was opened wide with the introduction of both attestations and trusted publishing. People now have assigned extra qualities to such releases and it pushes the ecosystem towards more dependency on closed CI systems such as github and gitlab.

It was a good intention, but the ramifications of it I don't think are great.


> People now have assigned extra qualities to such releases and it pushes the ecosystem towards more dependency on closed CI systems such as github and gitlab.

I think this is unfortunately true, but it's also a tale as old as time. I think PyPI did a good job of documenting why you shouldn't treat attestations as evidence of security modulo independent trust in an identity[1], but the temptation to verify a signature and call it a day is great for a lot of people.

Still, I don't know what a better solution is -- I think there's general agreement that packaging ecosystems should have some cryptographically sound way for responsible parties to correlate identities to their packages, and that previous techniques don't have a great track record.

(Something that's noteworthy is that PyPI's implementation of attestations uses CI/CD identities because it's easy, but that's not a fundamental limitation: it could also allow email identities with a bit more work. I'd love to see more experimentation in that direction, given that it lifts the dependency on CI/CD platforms.)

[1]: https://docs.pypi.org/attestations/security-model/


> It was a good intention, but the ramifications of it I don't think are great.

as always, the road to hell is paved with good intentions

the term "Trusted Publishing" implies everyone else is untrusted

quite why anyone would think Microsoft is considered trustworthy, or competent at operating critical systems, I don't know

https://firewalltimes.com/microsoft-data-breach-timeline/


> the term "Trusted Publishing" implies everyone else is untrusted

No, it just means that you're explicitly trusting a specific party to publish for you. This is exactly the same as you'd normally do implicitly by handing a CI/CD system a long-lived API token, except without the long-lived API token.

(The technique also has nothing to do with Microsoft, and everything to do with the fact that GitHub Actions is the de facto majority user demographic that needs targeting whenever doing anything for large OSS ecosystems. If GitHub Actions was owned by McDonalds instead, nothing would be any different.)


> This is exactly the same as you'd normally do implicitly by handing a CI/CD system a long-lived API token, except without the long-lived API token.

The other difference is being subjected to a whitelisting approach. That wasn't previously the case.

It's frustrating that seemingly every time better authentication schemes get introduced they come with functionality for client and third party service attestation baked in. All we ever really needed was a standardized way to limit the scope of a given credential coupled with a standardized challenge format to prove possession of a private key.


> The other difference is being subjected to a whitelisting approach. That wasn't previously the case.

You are not being subjected to one. Again: you can always use an API token with PyPI, even on a CI/CD platform that PyPI knows how to do Trusted Publishing against. It's purely optional.

> All we ever really needed was a standardized way to limit the scope of a given credential coupled with a standardized challenge format to prove possession of a private key.

That is what OIDC is. Well, not for a private key, but for a set of claims that constitute a machine identity, which the relying party can then do whatever it wants with.

But standards and interoperability don't mean that any given service will just choose to federate with every other service out there. Federation always has up-front and long-term costs that need to be balanced with actual delivered impact/value; for a single user on their own server, the actual value of OIDC federation versus an API token is nil.


Right, I meant that the new scheme is subject to a whitelist. I didn't mean to imply that you can't use the old scheme anymore.

> Federation always has up-front and long-term costs

Not particularly? For example there's no particular cost if I accept email from outlook today but reverse that decision and ban it tomorrow. I don't immediately see a technical reason to avoid a default accept policy here.

> for a single user on their own server, the actual value of OIDC federation versus an API token is nil.

The value is that you can do away with long lived tokens that are prone to theft. You can MFA with your (self hosted) OIDC service and things should be that much more secure. Of course your (single user) OIDC service could get pwned but that's no different than any other account compromise.

I guess there's some nonzero risk that a bunch of users all decide to use the same insecure OIDC service. But you might as well worry that a bunch of them all decide to use an insecure password manager.

> Well, not for a private key, but for a set of claims that constitute a machine identity

What's the difference between "set of claims" and "private key" here?

That last paragraph in GP was more a tangential rant than directly on topic BTW. I realize that OIDC makes sense here. The issue is that as an end user I have more flexibility and ease of use with my SSH keys than I do with something like a self hosted OIDC service. I can store my SSH keys on a hardware token, or store them on my computer blinded so that I need a hardware token or TPM to unlock them, or lots of other options. The service I'm connecting to doesn't need to know anything about my workflow. Whereas self hosting something like OIDC managing and securing the service becomes an entire thing on top of which many services arbitrarily dictate "thou shalt not self host".

It's a general trend that as new authentication schemes have been introduced they have generally included undesirable features from the perspective of user freedom. Adding insult to injury those unnecessary features tend to increase the complexity of the specification. In contrast, it's interesting to think how things might work if what we had instead was a single widely accepted challenge scheme such as SSH has. You could implement all manner of services such as OIDC on top of such a primitive while end users would retain the ability to directly use the equivalent of an SSH key.


> Not particularly? For example there's no particular cost if I accept email from outlook today but reverse that decision and ban it tomorrow. I don't immediately see a technical reason to avoid a default accept policy here.

Accepting email isn't really the same thing. I've linked some resources elsewhere in this thread that explain why OIDC federation isn't trivial in the context of machine identities.

> The value is that you can do away with long lived tokens that are prone to theft. You can MFA with your (self hosted) OIDC service and things should be that much more secure. Of course your (single user) OIDC service could get pwned but that's no different than any other account compromise.

You can already do this by self-attenuating your PyPI API token, since it's a Macaroon. We designed PyPI's API tokens with exactly this in mind.

(This isn't documented particularly well, since nobody has clearly articulated a threat model in which a single user runs their own entire attenuation service only to restrict a single or small subset of credentials that they already have access to. But you could do it, I guess.)
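
For anyone curious what self-attenuation looks like mechanically, here is a rough sketch with pymacaroons. It assumes the part of the token after the "pypi-" prefix deserializes directly with pymacaroons, and the caveat body is a placeholder; PyPI defines the real caveat schema (the pypitoken library wraps this properly), so treat this as illustration only:

    from pymacaroons import Macaroon


    def attenuate(pypi_token: str, caveat: str) -> str:
        """Add a restricting first-party caveat to a PyPI-style Macaroon token (sketch)."""
        # Assumed "pypi-<serialized macaroon>" layout; only the first dash is the separator.
        prefix, _, body = pypi_token.partition("-")
        m = Macaroon.deserialize(body)
        # Caveats can only narrow what a Macaroon allows; the caveat format passed in
        # is a placeholder, not PyPI's actual caveat schema.
        m.add_first_party_caveat(caveat)
        return f"{prefix}-{m.serialize()}"


    # Hypothetical usage: restrict a token before handing it to CI.
    # restricted = attenuate(original_token, '{"projects": ["my-package"]}')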

> What's the difference between "set of claims" and "private key" here?

A private key is a cryptographic object; a "set of claims" is (very literally) a JSON object that was signed over as the payload of a JWT. You can't sign (or encrypt, or whatever) with a set of claims naively; it's just data.
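
For illustration, a small self-contained sketch with PyJWT, using made-up claims shaped like a CI machine identity, to show that the claim set really is just data:

    import jwt  # pip install pyjwt

    # Fabricated example claims, modeled on the machine identity a CI provider asserts
    # (claim names similar to GitHub Actions OIDC tokens).
    claims = {
        "repository": "example-org/example-project",
        "ref": "refs/heads/main",
        "workflow": "release.yml",
    }

    # A JWT is just these claims signed by the IdP; here we self-sign for demonstration.
    token = jwt.encode(claims, "demo-secret", algorithm="HS256")

    # Decoding (with verification against the signing key) gives back plain data:
    # there is no key in here that could itself sign a package.
    print(jwt.decode(token, "demo-secret", algorithms=["HS256"]))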


Thank you again for taking the time to walk through this stuff in detail. I think what happened (is happening) with this stuff is a slight communication issue. Some of us (such as myself) are quite jaded at this point when we see a "new and improved" solution with "increased security" that appears to even maybe impinge on user freedoms.

I was unaware that macaroons could be used like that. That's really neat and that capability clears up an apparent point of confusion on my part.

Upon reflection, it makes sense that preventing self hosting would be a desirable feature of attested publishing. That way the developer, builder, and distributor are all independent entities. In that case the registry explicitly vetting CI/CD pipelines is a feature, not a bug.

The odd one out is trusted publishing. I had taken it to be an eventual replacement for API tokens (consider the npm situation for why I might have thought this) thus the restrictions on federation seemed like a problem. However if it's simply a temporary middle ground along the path to attested publishing and there's a separate mechanism for restricting self managed API tokens then the overall situation has a much better appearance (at least to my eye).


I mean, if it meant the infrastructure operated under a franchising model with distributed admin like McD, it would look quite different!

There is more than one way to interpret the term "trusted". The average dev will probably take away different implications than someone with your expertise and context.

I don't believe this double meaning is an unfortunate coincidence but part of clever marketing. A semantic or ideological sleight of hand, if you will.

In the same category: "Trusted Computing", "Zero trust" and "Passkeys are phishing-resistant"


> I don't believe this double meaning is an unfortunate coincidence but part of clever marketing. A semantic or ideological sleight of hand, if you will.

I can tell you with absolute certainty that it really is just unfortunate. We just couldn’t come up with a better short name for it at the time; it was going to be either “Trusted Publishing” or “OIDC publishing,” and we determined that the latter would be too confusing to people who don’t know (and don’t care to know) what OIDC is.

There’s nothing nefarious about it, just the assumption that people would understand “trusted” to mean “you’re putting trust in this,” not “you have to use $vendor.” Clearly that assumption was not well founded.


Maybe signed publishing or verified publishing would have been better terms?


It’s neither signed nor verified, though. There’s a signature involved, but that signature is over a JWT, not over the package.

(There’s an overlaid thing called “attestations” on PyPI, which is a form of signing. But Trusted Publishing itself isn’t signing.)


Re signed - that is a fair point, although it raises the question, why is the distributed artifact not cryptographically authenticated?

Maybe I'm misunderstanding but I thought the whole point of the exercise was to avoid token compromise. Framed another way that means the goal is authentication of the CI/CD pipeline itself, right? Wouldn't signing a fingerprint be the default solution for that?

Unless there's some reason to hide the build source from downstream users of the package?

Re verified, doesn't this qualify as verifying that the source of the artifact is the expected CI/CD pipeline? I suppose "authenticated publishing" could also work for the same reason.


> why is the distributed artifact not cryptographically authenticated?

With what key? That’s the layer that “attestations” add on top, but with Trusted Publishing there’s no user/package-associated signature.

> Maybe I'm misunderstanding but I thought the whole point of the exercise was to avoid token compromise. Framed another way that means the goal is authentication of the CI/CD pipeline itself, right? Wouldn't signing a fingerprint be the default solution for that?

Yes, the goal is to authenticate the CI/CD pipeline (what we’d call a “machine identity”). And there is a signature involved, but it only verifies the identity of the pipeline, not the package being uploaded by that pipeline. That’s why we layer attestations on top.

(The reasons for this are unfortunately nuanced but ultimately boil down to it being hard to directly sign arbitrary inputs with just OIDC in a meaningful way. I have some slides from talks I gave in the past that might help clarify Trusted Publishing, the relationship with signatures/attestations, etc.[1][2])

> I suppose "authenticated publishing" could also work for the same reason.

I think this would imply that normal API token publishing is somehow not authenticated, which would be really confusing as well. It’s really not easy to come up with a name that doesn’t have some amount of overlap with existing concepts, unfortunately.

[1]: https://yossarian.net/res/pub/packagingcon-2023.pdf

[2]: https://yossarian.net/res/pub/scored-2023.pdf


> imply that normal API token publishing is somehow not authenticated

Fair enough, although the same reasoning would imply that API token publishing isn't trusted ... well after the recent npm attacks I suppose it might not be at that.

> With what key?

> And there is a signature involved,

So there's already a key involved. I realize its lifetime might not be suitable but presumably the pipeline itself either already possesses or could generate a long lived key to be registered with the central service.

> but it only verifies the identity of the pipeline,

I thought verifying the identity of the pipeline was the entire point? The pipeline signing a fingerprint of the package would enable anyone to verify the provenance of the complete contents (either they'd need a way to look up the key or you could do TOFU, but I digress). There's value in being able to verify the integrity of the artifacts in your local cache.

Also, the more independent layers of authentication there are the fewer options an attacker will have. A hypothetical artifact that carried signatures from the developer, the pipeline, and the registry would have a very clear chain of custody.

> it being hard to directly sign arbitrary inputs with just OIDC in a meaningful way

At the end of the day you just need to somehow end up in a situation where the pipeline holds a key that has been authenticated by the package registry. From that point on I'd think that the particular signature scheme would become a trivial implementation detail; you stuff the output into some json or something similar and get on with life.
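
Concretely, something like this sketch (a sketch of the hypothetical, not what PyPI actually does), using the cryptography package, with placeholder artifact bytes and an ephemeral key standing in for whatever the registry would have authenticated:

    import hashlib
    import json

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Ephemeral key the pipeline would generate for this run; in the hypothetical scheme
    # the registry, not this script, would vouch for the corresponding public key.
    key = Ed25519PrivateKey.generate()

    artifact = b"..."  # placeholder for the built sdist/wheel bytes
    artifact_digest = hashlib.sha256(artifact).hexdigest()

    statement = json.dumps({
        "package": "example",  # placeholder package name
        "sha256": artifact_digest,
    }, sort_keys=True).encode()

    signature = key.sign(statement)

    # A downstream consumer holding the public key (and the same statement) can verify;
    # verify() raises InvalidSignature if anything was tampered with.
    key.public_key().verify(signature, statement)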

Has some key complexity gone over my head here?

BTW please don't take this the wrong way. It's not my intent to imply that I know better. As long as the process works it isn't my intent to critique it. I was just honestly surprised to learn that the package content itself isn't signed by the pipeline to prove provenance for downstream consumers and from there I'm just responding to the reasoning you gave. But if the current process does what it set out to do then I've no grounds to object.


> So there's already a key involved. I realize its lifetime might not be suitable but presumably the pipeline itself either already possesses or could generate a long lived key to be registered with the central service.

The key involved is the OIDC IdP's key, which isn't controlled by the maintainer of the project. I think it would be pretty risky to allow this key to directly sign for packages, because this would imply that any party that can use that key for signing can sign for any package. This would mean that any GitHub Actions workflow anywhere would be one signing bug away from impersonating signatures for every PyPI project, which would be exceedingly not good. It would also make the insider risk from a compromised CI/CD provider much larger.

(Again, I really recommend taking a look at the talks I linked. Both Trusted Publishing and attestations were multi-year projects that involved multiple companies, cryptographers, and implementation engineers, and most of your - very reasonable! - questions came up for us as well while designing and planning this work.)

> I thought verifying the identity of the pipeline was the entire point? The pipeline signing a fingerprint of the package would enable anyone to verify the provenance of the complete contents (either they'd need a way to look up the key or you could do TOFU, but I digress). There's value in being able to verify the integrity of the artifacts in your local cache.

There are two things here:

1. Trusted Publishing provides a verifiable link between a CI/CD provider (the "machine identity") and a packaging index. This verifiable link is used to issue short-lived, self-scoping credentials. Under the hood, Trusted Publishing relies on a signature from the CI/CD provider (which is an OIDC IdP) to verify that link, but that signature is only over a set of claims about the machine identity, not the package identity.

2. Attestations are a separate digital signing scheme that can use a machine identity. In PyPI's case, we bootstrap trust in a given machine identity by seeing if a project is already enrolled against a Trusted Publisher that matches that identity. But other packaging ecosystems may do other things; I don't know how NPM's attestations work, for example. This digital signing scheme uses a different key, one that's short-lived and isn't managed by the IdP, so that signing events can be made transparent (in the "transparency log" sense) and are associated more meaningfully with the machine identity, not the IdP that originally asserted the machine identity.

> At the end of the day you just need to somehow end up in a situation where the pipeline holds a key that has been authenticated by the package registry. From that point on I'd think that the particular signature scheme would become a trivial implementation detail; you stuff the output into some json or something similar and get on with life.

Yep, this is what attestations do. But a key piece of nuance: the pipeline doesn't "hold" a key per se; it generates a new short-lived key on each run and binds that key to the verified identity sourced from the IdP. This achieves the best of both worlds: users don't need to maintain a long-lived key, and the IdP itself is only trusted as an identity source (and is made auditable for issuance behavior via transparency logging). The end result is that clients that verify attestations don't verify using a specific key; they verify using an identity, and ensure that any particular key matches that identity as chained through an X.509 CA. That entire process is called Sigstore[1].

And no offense taken, these are good questions. It's a very complicated system!

[1]: https://www.sigstore.dev


> I think it would be pretty risky to allow this key to directly sign for packages, because this would imply that any party that can use that key for signing can sign for any package.

There must be some misunderstanding. For trusted publishing a short lived API token is issued that can be used to upload the finished product. You could instead imagine negotiating a key (ephemeral or otherwise) and then verifying the signature on upload.

Obviously the signing key can't be shared between projects any more than the API token is. I think I see where the misunderstanding arose now. Because I said "just verify the pipeline identity" and you interpreted that as "let end users get things signed by a single global provider key" or something to that effect, right?

The only difference I had intended to communicate was the ability of the downstream consumer to verify the same claim (via signature) that the registry currently verifies via token. But it sounds like that's more or less what attestation is? (Hopefully I understood correctly.) But that leaves me wondering why Trusted Publishing exists at all. By the time you've done the OIDC dance why not just sign the package fingerprint and be done with it? ("We didn't feel like it" is of course a perfectly valid answer here. I'm just curious.)

I did see that attestation has some other stuff about sigstore and countersignatures and etc. I'm not saying that additional stuff is bad, I'm asking if Trusted Publishing wouldn't be improved by offering a signature so that downstream could verify for itself. Was there some technical blocker to doing that?

> the IdP itself is only trusted as an identity source

"Only"? Doesn't being an identity source mean it can do pretty much anything if it goes rogue? (We "only" trust AD as an identity source.)


> There must be some misunderstanding. For trusted publishing a short lived API token is issued that can be used to upload the finished product. You could instead imagine negotiating a key (ephemeral or otherwise) and then verifying the signature on upload.

From what authority? Where does that key come from, and why would a verifying party have any reason to trust it?

(I'm not trying to be tendentious, so sorry if it comes across that way. But I think you're asking good questions that lead to the design that we arrived at with attestations.)

> I did see that attestation has some other stuff about sigstore and countersignatures and etc. I'm not saying that additional stuff is bad, I'm asking if Trusted Publishing wouldn't be improved by offering a signature so that downstream could verify for itself. Was there some technical blocker to doing that?

The technical blocker is that there's no obvious way to create a user-originated key that's verifiably associated with a machine identity, as originally verified from the IdP's OIDC credential. You could do something like mash a digest into the audience claim, but this wouldn't be very auditable in practice (since there's no easy way to shoehorn transparency atop that). But some people have done some interesting exploration in that space with OpenPubKey[1], and maybe future changes to OIDC will make something like that more tractable.

> "Only"? Doesn't being an identity source mean it can do pretty much anything if it goes rogue? (We "only" trust AD as an identity source.)

Yes, but that's why PyPI (and everyone else who uses Sigstore) mediates its use of OIDC IdPs through a transparency logging mechanism. This is in effect similar to the situation with CAs on the web: a CA can always go rogue, but doing so would (1) be detectable in transparency logs, and (2) would get them immediately evicted from trust roots. If we observed rogue activity from GitHub's IdP in terms of identity issuance, the response would be similar.

[1]: https://github.com/openpubkey/openpubkey


Okay. I see the lack of benefit now but regardless I'll go ahead and respond to clear up some points of misunderstanding (and because the topic is worthwhile I think).

> From what authority?

The registry. Same as the API token right now.

> The technical blocker is that there's no obvious way to create a user-originated key

I'm not entirely clear on your meaning of "user originated" there. Essentially I was thinking something equivalent to the security of - pipeline generates ephemeral key and signs { key digest, package name, artifact digest }, registry auth server signs the digest of that signature (this is what replaces the API token), registry bulk data server publishes this alongside the package artifact.

But now I'm realizing that the only scenario where this offers additional benefit is in the event that the bulk data server for the registry is compromised but the auth server is not. I do think there's some value in that but the much simpler alternative is for the registry to tie all artifacts back to a single global key. So I guess the benefit is quite minimal. With both schemes downstream assumes that the registry auth server hasn't been compromised. So that's not great (but we already knew that).

That said, you mention IdP transparency logging. Could you not add an arbitrary residue into the log entry? An auth server compromise would still be game over but at least that way any rogue package artifacts would conspicuously be missing a matching entry in the transparency log. But that would require the IdP service to do its own transparency logging as well ... yeah this is quickly adding complexity for only very minimal gain.

Anyway. Hypothetical architectures aside, thanks for taking the time for the detailed explanations. Though it wasn't initially clear to me the rather minimal benefit is more than enough to explain why this general direction wasn't pursued.

If anything I'm left feeling like maybe the ecosystems should all just switch directly to attested publishing.


Thanks for replying.

I'm certainly not meaning to imply that you are in on some conspiracy or anything - you were already in here clarifying things and setting the record straight in a helpful way. I think you are not representative of industry here (in a good way).

Evangelists are certainly latching on to the ambiguity and using it as an opportunity. Try to pretend you are a caveman dev or pointy-hair and read the first screenful of this. What did you learn?

https://github.blog/changelog/2025-07-31-npm-trusted-publish...

https://learn.microsoft.com/en-us/nuget/nuget-org/trusted-pu...

https://www.techradar.com/pro/security/github-is-finally-tig...

These were the top three results I got when I searched online for "github trusted publishing" (without quotes like a normal person would).

Stepping back, could it be that some stakeholders have a different agenda than you do and are actually quite happy about confusion?

I have sympathy for the fact that naming things is hard. This is Trusted Computing on repeat, but marketed to a generation of laymen who don't have that context. Also similar vibes to the centralization of OpenID/OAuth from last round.

On that note, looking at past efforts, I think the only way this works out is if it's open for self-managed providers from the start, not by selective global allowlisting of blessed platform partners one by one on the platform side. Just like for email, it should be sufficient with a domain name and following the protocol.


I would probably not build an actual app with HTMX but I found it to be excellent for just making a completely static page feel more dynamic. I'm using it on my two blogs and it makes the whole experience feel much snappier and allows me to carry through an animation from page to page.

The amount of custom stuff I needed to add was minimal (mostly just ensuring that if the network is gone, it falls back to native navigation to error out).

Examples: https://lucumr.pocoo.org/ and https://dark.ronacher.eu/

I also found Claude to be excellent at understanding HTMX so that definitely helps.


I moved to an AI-maintained custom site generator and it's ideal for my uses. I have full control over everything and nothing breaks on me.


I'm not sure if there should be a /s there. AI to me seems to be the antithesis of stable.


The AI only changes things when I want it to, and to my command. It's very stable.


You could also upgrade a static generator when you want to and equally achieve stability.


That's somewhat untrue. Personal software only moves to your constraints. Shared software moves to others' as well. I use Mediawiki for my site (I would like others to be able to edit it) and version changes introduce changes in more than the sections I care about.


They tend to change, and when I want to do something that the generator does not do, I either need to hack it in (which might break) or I need to fork the generator.


An AI-maintained tool is a different thing than using AI to generate the site.


When you say "AI maintained", what are you meaning?

