Hacker News

I don't know why people expect unlimited usage for a limited cost. Copilot hasn't been good for a long time. They had the first-mover advantage but were too slow to improve the product. It still hasn't caught up to Cursor or Windsurf, and Cline leaves it so far in the dust it's like a decade behind in AI years. You get what you pay for.

Claude is still the gold standard for AI assisted coding. All your Geminis and o3s of the world still don’t match up to Claude.

I started using Claude code once it became a fixed price with my Claude max subscription. And it’s taken a little getting used to vs Cline, but I think it’s closer to Cline in performance rather than cursor (Cline being my personal gold standard). $100 is something most people on this forum could make back in 1 day of work.

$100 per month for the value is nothing and for what it’s worth I have tried to hit the usage limit and the only thing that got me close was using their deep research feature. I’ve maxed out Claude code without hitting limits.



> Claude is still the gold standard for AI assisted coding. All your Geminis and o3s of the world still don't match up to Claude.

I might be missing something, but you can use Claude 3.7 in Copilot Chat:

https://docs.github.com/en/copilot/using-github-copilot/ai-m...

VS Code with your favorite model in Copilot is rapidly catching up with Cursor, etc. It's not there yet, but the trajectory is good.

(Maybe you meant code completion? But even smaller, local models do pretty well in code completion.)


When I tried Claude in copilot it was so obviously crippled as to be useless. I deleted copilot and never went back.


Care to explain why? Isn't the Claude version in Copilot exactly the same as in Claude Code?


It was just obviously worse than using the Anthropic website; that was the only explanation for how bad it was. They could offer it free because it was crippled, even if it was the same model version (maybe given fewer resources). Or maybe I was just unlucky, but that's how it seemed to me.


Sonnet in Copilot is crippled, Copilot agent mode is also very basic and failed every time I tried it. It would have been amazing 2 years ago, but now it's very meh.

GitHub is losing money on the subs, but they are definitely trying to reduce the bleed. One way to do that is to cut corners on LLM usage: not sending as much context, trimming the context window, capping output token limits. These are all things Cursor does too, by the way, which is why Cline, with almost the same tech (in some ways even inferior tech), achieves better results. I have hit $20 in API usage within a single day with Cline, while Cursor lets you have "unlimited" usage for $20 a month. So it's optimised for saving costs, not for giving you the best experience. At $10 per month for Copilot, they need to save costs even more. So you get a bad experience and think it's the AI that isn't capable, but the problem is with the companies burning VC money to corner the market and setting unrealistic expectations on pricing.
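For what it's worth, the kind of context trimming described above can be sketched in a few lines (a simplified illustration, not GitHub's or Cursor's actual code; the whitespace word count stands in for a real tokenizer):

```python
def trim_context(messages, max_tokens):
    # Keep only the most recent messages that fit under the token
    # budget, dropping older history first. This is the cost-saving
    # trade-off: cheaper requests, but the model loses older context.
    def count(msg):
        return len(msg.split())  # crude stand-in for a real tokenizer
    kept, total = [], 0
    for msg in reversed(messages):
        t = count(msg)
        if total + t > max_tokens:
            break
        kept.append(msg)
        total += t
    return list(reversed(kept))
```

With a budget of 4 "tokens", older messages silently fall off the front of the conversation, which is exactly the degradation users notice.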


> $100 is something most people on this forum could make back in 1 day of work.

I expect so. The question is "How many days does the limit last for?"

Maybe they have a per-day limit, maybe it's per-month (I'm not sure), but paying $100/m and hitting the limit in the first day is not economical.


I wrote about this on my blog: https://www.asad.pw/llm-subscriptions-vs-apis-value-for-mone...

But basically you get ~300M input tokens and ~100M output tokens per month with Sonnet on the $100 plan. These are split across the 50 sessions you are allowed; each session is a 5-hour window that starts the first time you send a message. Within a session, you get ~6M input and ~2M output tokens for Sonnet. Claude Code seems to use a mix of Sonnet and Haiku, and Haiku has 2x the limits of Sonnet.

So if you absolutely maxed out your 50 sessions every month, that's $2400 worth of usage if you had instead used the API. So it's a great deal. You're not buying $100 worth of API credits, so they don't run out like that. You can exhaust the limits for a given session, which means at most a 5-hour wait for your next one, or you can run out of your 50 sessions. I don't know how strongly they enforce that limit, and I think that limit is BS, but all in all the value for money is great, way better than using the API.
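As a sanity check on that $2400 figure, assuming Sonnet's API pricing of $3 per million input tokens and $15 per million output tokens (prices may have changed since):

```python
# ~300M input and ~100M output tokens per month on the $100 Max plan,
# priced at Sonnet's API rates (assumed: $3/M input, $15/M output).
input_m, output_m = 300, 100      # millions of tokens per month
price_in, price_out = 3.0, 15.0   # dollars per million tokens
api_equivalent = input_m * price_in + output_m * price_out
print(api_equivalent)  # 2400.0
```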


Thanks for the link and explainer. My first experience with Claude Code left mixed feelings because of the pricing. I have the Pro subscription, but Claude Code could only be used in API mode. So I added $5 just to check it out, and exhausted $4.50 in the first 8-minute session. It left me wondering whether the Max plan would be exhausted at the same rate.


Further down in the announcement, they even explain how to handle the limits:

How Rate Limits Work: With the Max plan, your usage limits are shared across both Claude and Claude Code:

Shared rate limits: All activity in both Claude and Claude Code counts against the same usage limits.

Message variations: The number of messages you can send on Claude varies based on message length, conversation length, and file attachments.

Coding usage variations: Expected usage for Claude Code will vary based on project complexity, codebase size, and auto-accept settings.

On the Max plan (5x Pro/$100), average users:

- Send approximately 225 messages with Claude every 5 hours, OR

- Send approximately 50-200 prompts with Claude Code every 5 hours

On the Max plan (20x Pro/$200), average users:

- Send approximately 900 messages with Claude every 5 hours, OR

- Send approximately 200-800 prompts with Claude Code every 5 hours


How many prompts does Claude Code send per user prompt? Is it 1:1?


Nope, it can be a dozen or more (because it's agentic). Claude usage limits are actually based on token usage, and Claude Code uses a mix of Haiku and Sonnet, so your limits are split between those two models. I gave an estimate of how much usage you can expect in another comment on this thread, but you will find it hard to max out the $100 plan unless you use it very, very extensively.
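Roughly, the reason one prompt fans out into many is that the agent loops: it calls the model, feeds the result back in, and repeats until the task looks done. A minimal sketch (call_model and the "DONE" stop marker are hypothetical placeholders, not Claude Code's actual protocol):

```python
def agent_loop(user_prompt, call_model, max_steps=12):
    # One user prompt can trigger many model calls: the agent keeps
    # appending results to the history until the model signals it is
    # finished (or a step cap is hit).
    history = [user_prompt]
    for _ in range(max_steps):
        reply = call_model("\n".join(history))
        history.append(reply)
        if "DONE" in reply:  # hypothetical stop marker
            break
    return history
```

Each iteration is another billed model call, which is why token usage per user prompt is so variable.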


I didn’t realize they were tuning cost optimization by switching models contextually. That’s very clever. I bet the whole industry of consumer LLM apps moves that way.


I am using Cline; it plans with Haiku and executes with Sonnet. It works well.
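The plan-cheap/execute-strong split can be sketched in a few lines (call_model and the model names here are placeholders, not Cline's real API):

```python
def plan_then_execute(task, call_model):
    # Draft a plan with the cheap model, then hand it to the
    # stronger model for execution: the cost split described above.
    plan = call_model("haiku", f"Outline the steps to: {task}")
    return call_model("sonnet", f"Carry out this plan:\n{plan}")
```

Since planning prompts are short and frequent while execution consumes most of the tokens, routing the former to the cheaper model is where the savings come from.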


I agree with much of your post, but:

> Claude is still the gold standard for AI assisted coding. All your Geminis and o3s of the world still don't match up to Claude.

Out of date, I think, in this fast-moving space.

Sonnet has long been the gold-standard, but that position is looking very shaky at the moment; Gemini in particular has been working wonders for me and others when Sonnet has stumbled.

VS Code/Copilot has improved massively in Cursor's wake, but yes, still some way to go to catch up.

Absolutely though - the value we are getting is incredible.


In my experience, there are areas where Gemini did well and Claude didn't, and the same goes for o1 pro or o3, but for 90% of the work I find Claude way more trustworthy: better at following instructions, not making syntax mistakes, etc. Gemini 2.5 Pro is way better than all their prior models, but I don't get the hype about it being a coding superstar. It's not bad, but Sonnet is still the primary workhorse. Sonnet is more expensive, so if Gemini were at the same level I'd be happy to save the money, but I've tried it with various approaches and played with the temperature, and in the vast majority of cases Claude does a better job.


Exactly, $100 per month is nothing for professional usage. For hobby projects, it is a lot.

From the internet, we got used to getting everything for nothing, so people beg for a lower price even when it doesn't make sense.


It makes perfect sense if the market is cheaper.


Gemini 2.5 Pro is better at coding than Claude; it's just not as good at acting agentically, nor does Google have good tooling to support this use case. Given how quickly they've come from far behind, and their advantage in context size (Claude's biggest weakness), this could change just as fast, although I'm skeptical they can deliver a good end-user dev tool.


> Gemini 2.5 Pro is better at coding than Claude

I'd be careful stating things like this as fact. I spent half an hour asking Gemini to write code that draws a graph the way I want, and it never got it right. Then I asked Claude 3.7 and it got it almost right on the first try, to the point that I thought it was completely right, and it fixed the bug I discovered right after I pointed it out.


Yup, I have had a similar experience too. Not only for coding: just yesterday, I asked Gemini to compose an email with a list of attachments, which I had specified as a list of file paths in the prompt, and it wasn't able to count them correctly and report the number in the email text (the text went something like "there are <number_of_attachments> charts attached"). Claude 3.7 was able to do that correctly in one go.


How much do you pay for Gemini 2.5 Pro?


Something like $20/month, first 2 months $10. Depends on the country.


What about Sourcegraph? How do they compare?



