Hacker News | zormino's comments

I also think some of this stems from the default 1M context window. Performance starts to degrade as context size increases, and each token over (I think the level is) 400k counts more toward your usage limit. With a 1M default, people who aren't carefully managing context (which they shouldn't ever have to do in an ideal world) will notice somewhat degraded performance and increased token usage regardless.

Because most people work for someone else and don't decide their own salaries. It's not doubling productivity, but even a 10-20% boost to productivity for a team of engineers means that, as a business, even $1k per month per seat is perfectly acceptable. For consumers and hobbyists that basically kills access.

I think removing Claude Code from the $20 tier is a terrible idea, I never would've gone from nothing right into the $100/200 tier. The $20 plan let me get my feet wet and see how good it could be, and in less than a week I was on the $100 plan.

I think they need to at least have a 1 month introductory rate for the max plan at $20, or devs that decide to try out agentic coding just won't go to Anthropic.

That leads to downstream impacts, like when a company is deciding which AI coding tools to provide and the feedback management hears is that everyone is already used to (e.g.) Codex; then Anthropic starts losing the enterprise side of things.


They're not losing anything. They have far more demand than they could ever fulfill, so they no longer care about promotional or subsidized user groups.

Until magically all their demand vaporizes.

I suspect a lot of people are like me. They got into this at the $20/month level individually to check things out. I'm not stressing things out, so I haven't moved up, but the moment I bump into a limit, I'll pull the trigger by default. Until then, I'm the sleeping dog, and you should let me lie.

Well, Anthropic decided to kick me. Now, I'm investing the time to figure out how to use the "open" and "Chinese" models assuming that Anthropic is about to screw me. Once I switch, Anthropic is going to have to demonstrate significant improvements over what I'm now using to get me to even consider them again.


I don't think they need the $20/month users when there are some who use over $1000 in tokens per day.

I know of 4 companies that are already starting to clamp down on the AI whales spending that "$1000 per day". In each case, there was the cost of the company's entire AI usage, and then there were about a half-dozen people whose individual usage dwarfed it.

So, we've established a hard upper ceiling for what AI can extract per user at roughly $100K and more realistically at $10K per year. Basically, if using the AI costs the same as a human salary, it's going to get pushback. I mean, the whole point was to get rid of those pesky human salaries, after all.

So, there are about 2 million-ish software jobs in the US? It's more than 1 million but a far cry from 10 million. So that pencils out to $20 billion per year in the US, total? That means that even if an AI company literally won over every US software programmer, it would be worth at most $200 billion in a buyout (10x revenue).
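The back-of-envelope arithmetic above can be sketched out explicitly; the job count, per-seat price, and revenue multiple are all the rough assumptions from the comment, not verified figures:

```python
# Commenter's assumptions, not verified market data.
us_software_jobs = 2_000_000        # "about 2 million-ish" US software jobs
revenue_per_seat_per_year = 10_000  # the "more realistic" $10K/year ceiling
revenue_multiple = 10               # 10x revenue as a buyout valuation

total_revenue = us_software_jobs * revenue_per_seat_per_year
max_valuation = total_revenue * revenue_multiple

print(f"Total US revenue: ${total_revenue / 1e9:.0f}B/yr")  # $20B/yr
print(f"Max buyout value: ${max_valuation / 1e9:.0f}B")     # $200B
```

At the $100K-per-seat ceiling instead, the same math gives $200B/yr and a $2T valuation, which is why the pushback kicks in well before that point.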

Now how much investment have the AI companies taken? Yeah, roughly that. And investors are going to want quite a bit more than that back.

Even if they had zero delivery costs, the AI companies are cooked long term. The moment your number bumps into "All the X in the US/World", you've got a problem.

Short term? Greater fool theory applies. And there appear to be a lot of them.

And all this is before we start getting into people exploring the open models. Most people were like me; we started on something like Claude and just stayed put because it was straightforward. Now that we've been kicked, we'll start looking at the other options.


I agree. Why would they not keep the $20 plan as a gateway drug?

My biggest worry isn't that it will make me dumb (it won't), or that it will make me lazy (it will), but that people raised with it won't learn things in the first place. I'm split on whether this is a real issue or an "old man rants about slide rules and the decline of mental math" kind of situation.

Same for my S24: 80% battery limit and slow charging at night (when most of my charging happens). It's been over two years and the battery seems to last just as long as it did on day one.

You mean you aren't still using Google Duo and Allo? Google Reader? Playing games on your Stadia? I'd be worried about really locking into a specific Anthropic product at this point other than Claude Code

I never recovered from Inbox being killed.

I still fondly remember playing Cyberpunk 2077 on release day with no download time. What the future could have been. It probably would have become economically infeasible regardless, with GPU prices driven up by AI.

This is why I use Claude Code though, it pairs well with a regular old text editor (in my case Sublime). I've always had an editor and a terminal open, plugging an AI into my terminal has been a fantastic enhancement to my work without really anything else changing or giving up any control.


That's what you should be doing. Start from plain Claude, then add on to it for your specific use cases where needed. Skills are fantastic if used this way. The problem is people adding hundreds or thousands of downloaded skills that they will never use, which just bloat the entire system and drown out the useful ones.


Sure, it's basic use and nothing to flex about - was just responding specifically to the line that plan-review-implement is all you need.

Though, you get such a huge bang from customizing your config that I can easily see how you could go down that slippery slope.


Nothing prevents that, and the test lab wouldn't bat an eye at it. For all they know or care, you wanted to run different tests on different cells. The only thing the lab verifies is that the defined test was executed as stated, nothing else.


When I was in university, I had a math teacher who brought extra chalk to class every day, because if he saw you on your phone, he'd snap off a piece and throw it at you pretty damn hard. Maybe that wouldn't exactly work in a high school, but damn if Dr Murphy didn't have me paying attention in that class.


It's "literal violence" now and you'd get sued and lose your job.


That being called "literal violence" is one thing, but now a teacher telling a student "you should be ashamed of yourself" is also called "literal violence."

