If you’re letting Claude Code just handle secrets like this, you’re already fucked from a security standpoint, so I don’t really see the big deal here
Today it was the Vercel plugin, but if you’re letting an LLM agent with access to bash and the internet read truly sensitive information, then you’re already compromised
That’s the uncomfortable part: even with approval prompts, the trust boundary is still Anthropic’s runtime plus whatever plugin ecosystem sits behind it. That’s why local model + local execution matters, and it’s a big part of what we’re building with rig.ai.
Israel has a disproportionately large number of tech companies for its size, and he took one photo with their leader.
I have no idea why everyone on the internet wants to endlessly seethe about this & personally attack Guillermo for it as if he’s endorsed their foreign policy or something
I would switch to Cursor 3 in a heartbeat if it supported Claude Agent SDK (w/ Claude Max subscription usage) and/or Codex the way that similar tools like Conductor do
And I would happily pay a seat based subscription fee or usage fees for cloud agents etc on top of this
Unfortunately I’m very locked into these heavily subsidized subscription plans right now, but from a product design and vision standpoint I think you guys are doing the best work in this space
If things continue to get worse I really worry how many people might give up on life entirely. A lot of people in this industry don’t have a whole lot else going on for them, myself included.
I grinded my 20s away trying to have a successful career and if that just gets pulled out from under me I’ve got absolutely nothing.
I know a very senior engineer who took his life the day after Trump was elected. He had been unemployed for a while.
While I think a lot more was going on with him than unemployment, I'm convinced AI hitting the scene had something to do with it. He was an older dev, 50+.
These are kind of unrelated issues. You’re right that it used to be companies just didn’t want to be involved in war at all, & generally speaking that isn’t going to cause issues.
The core of the issue here is a private company trying to dictate terms of use to the military, which isn’t really something that has been done before, afaik.
Originally this contract was signed with these terms included, and it wasn’t until Anthropic started investigating how its tech was used by Palantir in the Maduro operation that this became an issue.
On a surface level it seems like Anthropic is doing the right thing here, but that’s really what’s at the root of this, and the outcome of the case (and whether or not Anthropic is a legitimate supply-chain risk) depends entirely on the details of those conversations they had with Palantir.
They would never do this because the entire point of the company is to try and control what AI is allowed to do, who is allowed to use it, and what they’re allowed to do with it. The overarching philosophy of Anthropic is explicitly opposed to open models. If it were up to them, it would be illegal to run inference on them in the U.S.
There are plenty of straightforward reasons why OpenAI would want to do this; it doesn’t need to be some sort of malicious conspiracy.
I think it’s good PR (particularly since Anthropic’s actions against OpenCode and Clawdbot were somewhat controversial), plus Peter was able to build a hugely popular thing and would clearly be valuable on a team building something along the lines of Claude Cowork. I would expect these future products to be much stronger from a security standpoint.
I suspect Anthropic was seeing huge spikes of concurrent model usage at a rate that Claude Code just doesn’t produce; CC is rather "slow" in API calls per minute. Also lots and lots of caching: the sheer amount of prompt caching Claude Code does is insane.
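For context, a rough sketch of what that caching looks like at the API level. This builds a request body the way a client like Claude Code marks its large, stable prefix (system prompt, tool definitions) as cacheable via Anthropic’s `cache_control` field, so repeated agent turns reuse the cached prefix instead of re-processing it; the model id and prompt text here are just illustrative, and no network call is made.

```python
# Sketch of an Anthropic Messages API payload using prompt caching.
# The large, stable prefix (system prompt) is marked with cache_control
# so subsequent requests with the same prefix can hit the cache.

def build_cached_request(system_prompt: str, user_turn: str) -> dict:
    """Build a request body whose system prompt is marked cacheable."""
    return {
        "model": "claude-sonnet-4-5",  # illustrative model id
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": system_prompt,
                # "ephemeral" is the cache type Anthropic's API accepts;
                # the marked prefix is cached across subsequent requests.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_turn}],
    }

req = build_cached_request("You are a coding agent...", "Fix the failing test.")
print(req["system"][0]["cache_control"]["type"])  # -> ephemeral
```

Every agent turn in a long session re-sends that same prefix, which is why a subscription client generates so much cache traffic relative to its actual call rate.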
It’s hard to say exactly what prompted the decision, but they banned people paying $200/mo without warning and without any reasonable appeals process in place. The appeal is a Google form that is itself reviewed by some automated system that may or may not ever get back to you.
This was already an ongoing issue prior to 3rd party tools using Claude subscriptions, there are reports of false positive automated bans going back for several months.
I have not seen or heard of this happening w/ Codex, and rather than trying to shut down 3rd party tools that want to integrate with their ecosystem they have worked with those projects to add official support.
I’m more impressed with Codex as a product in general as well. Their new desktop app is great & feels an order of magnitude better than Claude’s.
Overall the HN crowd seems heavily biased in favor of Anthropic (or maybe just against OpenAI?), but IMO Anthropic needs to take a step back and reset. If they keep to their current path of small iterative improvements to Claude Code and Claude Desktop, they are going to fall very far behind.
I’m starting to think you’re right but only because software engineers don’t seem to actually value or care about open source anymore. Apparently we have collectively forgotten how bad it can be to let your tools own you instead of the other way around.
Maybe another symptom of Silicon Valley hustle culture — nobody cares about the long term consequences if you can make a quick buck.
There's nothing stopping you from using OpenCode with any other provider, including Anthropic: you just can't get the subsidised pricing while doing so. This is irritating, yes - it certainly disincentivises me from trying out OpenCode - but it's also, like, not unexpected?
In any case, the long-term solution for true openness is to be able to run open-weight models locally or through third-party inference providers.
Yes, but why are they subsidizing the pricing and requiring you to use their closed-source client to benefit from it? For the same reason the witch in the story of Hansel and Gretel was giving out free candy.
Is this a serious question? Why would they subsidize people when there is no benefit to them? Subsidization means they are LOSING money when people use it. If the customers using 3rd party clients are unwilling to pay a price that is profitable for them, then losing those customers is a very positive thing for Anthropic, not a negative one.
The reason to subsidize is the exact reason you are worried about. Lock in, network effects, economies of scale, etc.
It very obviously is, you'd have to be the most naive of the most naive to think there isn't a path for them to jack prices later. Maybe that's not nefarious depending on your definition, but the point is you will definitely be paying more in the future.
I mean, this is the playbook of every tech company for the past 30 years. You sell something at a huge loss to gain market share and force your competitors to exit, and then you begin value extraction from your now-captive customer base. You lower quality, raise prices, and cut support, and you do it slowly enough that nobody is hit with enough friction at one time to walk.
If you expect anything else, I don't know what to tell you. This is very much the standard. In fact it's SO much the standard that companies don't even have a choice. If you choose not to do this, then the people who are doing this will just undercut you and run you out.
The key piece in this is that, once the value extraction begins, it can't just strive for profitability. No, it also has to make up for the past 10 or 15 years of losses on top of that. So it's not like the product will just get expensive enough to sustain itself like you'd expect with a typical product. It'll get much more expensive than that.
> Apparently we have collectively forgotten how bad it can be to let your tools own you instead of the other way around.
We've collectively forgotten because a large enough number of professional developers have never experienced anything other than a thriving open source ecosystem.
As with everything else (finance and politics come to mind in particular), humans will have to learn the same lessons the hard way over and over. Unfortunately, I think we're at the beginning of that lesson and hope the experience doesn't negatively impact me too much.
> software engineers don’t seem to actually value or care about open source anymore.
Hate to break it to you, but the vast majority never did. See any thread about Linux on HN. Maybe the Open Source wave was before my time, but ever since I came into the industry around 2015, "caring about open source" has been the minority view. It's Windows/Mac/Photoshop/etc all the way up and down.
It might make sense from Anthropic’s perspective, but as a user of these tools I think it would be a huge mistake to build your workflow around Claude Code when they are pushing vendor lock-in this aggressively.
Making this mistake could end up being the AI equivalent of choosing Oracle over Postgres
As a user of Claude Code via API (the expensive way), Anthropic's "huge mistake" is capping monthly spend (billed in advance and pay-as-you-go, some $500 - $1500 at a time, by credit card) at just $5,000 a month.
It's a supposedly professional tool with a value proposition that requires being in your work flow. Are you going to keep using a power drill on your construction site that bricks itself the last week or two of every month?
An error message says to contact support. They then point you to an enterprise plan for 150 seats when you have only a couple dozen devs. Note that 5000 / 25 = 200 ... coincidence? Yeah, you are forbidden from giving them more than a Max-like $200/dev/month for the usage-based API that's "so expensive".
They are literally saying "please don't give us any more money this month, thanks".
I imagine a combination of stop loss and market share. If larger shops use up compute, you can't capture as many customers by headcount.
// There was a figure around o3, an astonishing model punching far above the weights (ahem) of models that came after, suggesting its thinkiest mode cost on the order of $3,500 per deep-research run. Perhaps OpenAI can afford that, while Anthropic can't.
Sounds plausible that they're not really making any money on it. Arbitrary and inflexible pricing policies aren't unusual, but it sounds easy enough for a new, rapidly-growing company to let the account managers decide which companies they might have a chance of upselling 150-seat enterprise licenses to, and just bill overage for everyone else...
That leads to the obvious question: is the API next on the chopping block? Or would they just increase API pricing to a point where A) they are making a profit off it and B) nobody would use the API just for a different client?
I'm pretty sure everyone is pricing their APIs to break even, maybe turning a profit if people use caching properly (as GPT-5 can if you mark the prompts properly)
Their target is the enterprise anyway, so they are apparently willing to enrage their non-CC user base over vendor lock-in.
But this is not the equivalent of Oracle over Postgres; those are different technology stacks that each implement an independent relational database. Here we're talking about OpenCode, which depends on Claude models to work "as a better Claude" (according to the enraged users on the web). Of course, one can still use OC with a bazillion other models, but Anthropic is saying that if you want the Claude Code experience, you gotta use the CC agent, period.
Now put yourself in the Anthropic support person's shoes, and suppose you have to answer an issue from a Claude Max user who is mad that OC is throwing errors when calling a tool during a vibe session, probably because the multi-million-dollar Sonnet model is telling OC to do something it can't, because it's not the Claude agent. Claude models are fine-tuned for their agent! If the support person replies "OC is an unsupported agent for Claude Max", you get an enraged customer anyway, so you might as well cut the crap altogether at the root.
If you’ve only got a CLAUDE.md and subagent definitions in markdown, migrating is pretty easy at the moment, although more of their feature set is moving in a direction that doesn’t have 1:1 equivalents in other tools.
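To illustrate what's portable: the migratable pieces are just plain markdown files checked into the repo. A minimal, entirely hypothetical example of the kind of CLAUDE.md that moves between tools with little effort, since it's only prose instructions:

```markdown
# CLAUDE.md

## Project conventions
- Run `npm test` before committing; never skip failing tests.
- Use TypeScript strict mode; no `any` without a comment explaining why.

## Architecture notes
- API handlers live in `src/routes/`; shared logic goes in `src/lib/`.
```

Anything in this shape can be renamed and reused by other agents (e.g. as a generic agent-instructions file), which is exactly why it's the part that isn't locked in.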
The client is closed source for a reason and they issued DMCA takedowns against people who published sourcemaps for a reason.