The US single-handedly dominating AI at this point probably means a handful of tech overlords in charge of a surveillance society that depends on AI for everything, with some vague promises that everyone else will get some sort of allowance if the overlords feel benevolent enough. For all the existential risks discussed around ASI or whatever, having an oligarchy in complete control of this tech is maybe even worse.
So, I guess we all have to hope that more money does not necessarily lead to a "victory" here.
It seems every AI company eventually concludes that lobbying is required for them to operate. That suggests to me that they know they have no real moat.
So Anthropic is trying to save money on infrastructure; we all get it. However, it's not okay to degrade the performance your users have paid for. Last week the issue was that you reduced the default "effort" level; now the prompt cache is shortened. Several users have experienced far more restrictive usage limits lately.
There is only so much you can do through "UX improvements" or some smart routing on the backend. Your flagship product is actively getting worse, and if users need to fiddle with hidden settings and keep track of GitHub issues every week, they will start voting with their money.
For context, my company gives each developer a decent monthly allowance for Claude, and if push comes to shove, we are allowed to fall back to the AWS Bedrock-hosted Anthropic models.
When you pay for a Claude subscription, what exactly were you promised?
> they will start voting with their money.
And go where? Sooner or later the party is going to be over and Claude and its competitors are going to have to start charging enough to actually be profitable when the VC money dries up.
> When you pay for a Claude subscription, what exactly were you promised?
I was promised 5x or 20x the amount of resources that the free tier would offer. I implicitly expected the same quality too, not some watered-down version of the product they allowed me to sample before committing to a subscription.
Sooner or later Anthropic will run out of VC money, yes. That's their problem, not mine. When I took an Uber while it was subsidized by venture capital, the driver did not drop me off halfway to my destination because they were having cash flow issues.
It’s exhausting enough to deal with services that change pricing and expectations on an annual or semi-annual basis.
Now the expectation is that we should tolerate goalposts being shuffled around on a weekly/daily basis with the added requirement of digging into bug tickets because there’s no attempt at transparency? The tech is cool but this is absolutely insane.
If you’re an individual developer paying $100-200/mo for a service that keeps changing, there is a LOT of reason to keep an eye on other products.
I’m not saying that there isn’t a reason to keep an eye on other products. I’m saying that every other product in the space has the same unit economics and will eventually need to charge enough to be profitable - and to continue training and hardware expansion.
Honestly, a developer paying $200 a month is a nothingburger, especially if using the service to the fullest is losing them money.
For context, the company I work for gives each consultant a $2,000 a month allowance, and I think there are probably around 500-700 people with that allowance. I’m sure not everyone uses it all.
If they have limited hardware resources, where do you think they are going to focus?
Classic VC pump playbook: run it uneconomically until everyone is addicted, then 5x prices once you have enough critical mass. See the 2010s "Millennial Lifestyle Subsidy".
It seems pretty transparent that they are heavily resource-constrained (a training run for Claude 5.x, higher usage and growth than anticipated). I don’t disagree that their long play is monopolistic pricing, but what we’re observing seems better explained by a very tight compute budget that they are trying to stretch, putting as much as they can into next-gen experiments and training to make sure they stay competitive over the next six to twelve months.
This is partly why all this talk about AI "solving science" should be taken with a grain of salt. Here the authors intentionally poisoned the publication record, but there are millions of papers out there that are also garbage, and it would be very hard for either a human or an LLM to distinguish them from actual work.
I agree with the general insight here. Python is great for humans, but once they are out of the loop it's no longer as useful. Having a compiler is indeed more useful for LLMs.
However, we are moving one step closer to humans being completely unable to understand the code, as there are likely 100x more developers with Python experience than Rust experience. If humans are indeed going to be the bottleneck, then perhaps this is inevitable, and languages fitted specifically to LLMs will dominate.
I actually believe we need to rethink Git for modern needs. Saving prompts and sessions alongside commits could become the norm, for example, or I could imagine having different flags for whether a contribution was created by a human or not.
This doesn't seem to be the direction these guys are going though, it looks like they think Git should be more social or something.
What do people expect to do with these saved prompts/contexts? Nobody is going to read through them, right? I suppose the thinking is that LLMs will, but any decently active codebase will soon contain far too much context for any current LLM. Is this the same thinking behind cryonics, i.e. we may be able to use this stuff one day, so let's start saving it now? Hoarding has ruined many people and it will ruin us all if we're not careful...
For me the reason would be to preserve traces of intentionality (i.e., what was the user trying to achieve with this commit?). These days a 10k-LOC commit might be triggered by a 100-word user prompt; there is a lot more signal in reading the prompt itself than the code changes.
I mean, it's just text, so it shouldn't be too taxing to store it. I agree it's hoarder mentality though :)
Actually, it is. We're currently leading a conversation among several players in this space to agree on a metadata standard that makes attaching, collaborating on, and transmitting information like this simple, extensible, and scalable.
Keep an eye on our blog to see how we're doing this, hopefully in a way where the entire community joins us and we're not all reinventing the same wheels.
>Saving prompts and sessions alongside commits could become the norm, for example, or I could imagine having different flags for whether a contribution was created by a human or not.
and then the tooling could attach any metadata to it that is desired.
OH WAIT YOU CAN DO THAT ALREADY SINCE 2009
Seriously, 90% of the complaints about git not being able to do something are either RTFM or "well, it can, but it could use some better porcelain to present it to the user".
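For the record, the 2009 feature is git notes, which attaches arbitrary metadata to existing commits without rewriting history. A minimal sketch; the "prompts" ref name and the note text here are made-up examples:

    # attach the prompt to the latest commit under a dedicated notes ref
    git notes --ref=prompts add -m "Refactor the auth flow to use middleware" HEAD

    # read it back later
    git notes --ref=prompts show HEAD

    # notes refs are not pushed by default, so share them explicitly
    git push origin refs/notes/prompts
    git fetch origin refs/notes/prompts:refs/notes/prompts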
It's crazy to me that this is not considered fraud. You sign up for a yearly plan under a given assumption of functionality, then they just change the terms to give you less than what they agreed to without compensating you in any way. That's textbook fraud.
I wonder whether this was your first attempt at solving this issue with LLMs, and whether this was the time you finally felt they were good enough for the job. Did you try making this switch earlier, for example last year when Claude Code was released?
Honestly, I was very averse to agentic coding up until Opus came out. The hallucinations and the false confidence in objectively wrong answers just broke more things than they fixed.
However, after it came out, it suddenly behaved close to what they marketed it as. So this was my first real end-to-end project with AI in the front seat. Design-wise it is nowhere near perfect, though, and I was holding its hand the entire way through.