That's a nonsense take. How fast you burn through usage limits depends on your usage patterns, and if there's one thing that's true about LLMs, it's that you can practically always improve your results by spending more tokens. Pay-per-use API pricing just makes you hit diminishing returns quickly. With a Claude Code subscription, it's different.
The whole value proposition of a Max subscription is that it lets you stop worrying about tokens (Claude Code literally tells you so if you type `/cost` while authenticated with a subscription). So I'd turn this around and say that people who don't regularly hit usage limits aren't using Claude Code properly - they're not utilizing it to the fullest.
--
Myself, I'm only using Claude Code for a little R&D and some side projects, but I upgraded from Max x5 to Max x20 on the second day, as it's trivial to hit the Max x5 limit in a regular, single-instance chat. And that's without any smarts, just a more streamlined flavor of the good ol' basic chat experience.
But then I look around and see people experimenting with more complex approaches. They run 4+ instances in parallel to work on more things at a time. They run multiple instances in parallel to get multiple solutions to the same task, and then merge them into a final one, possibly with the help of yet another instance. They have the agent extensively research a thing before doing it, and then extensively validate it afterwards. And so on. Any single one of these tricks/strategies is going to make hitting the limits on Max x20 a regular occurrence again.
SEEKING WORK | CET | Remote / One meeting weekly (Europe)
I have spent the last two years replacing human processes with AI-automated processes, with good results: replacing financial analysts processing reports (financial notes, ESG reports), publishing editors selecting books for publication, etc. Before AI, I worked mostly on pricing risk, derivatives and the like. Degree in CS and Financial Math.
Golang (preferred), Python, JavaScript, Matlab, Mathematica and other technologies
I use this analogy. In the early '90s I was programming in assembler, and sometimes in raw hex codes. I was very good at it, writing really efficient code: tight, using as few resources as possible.
But then resources became cheap and it stopped mattering. Yeah, tight, well-designed machine code is still a sort of art form, but for practical purposes it makes sense to write a program in a higher-level language and waste a few MB...
I don't agree.
It may be true that most code is throwaway.
But you trust a C compiler, or a Python interpreter, to do their job in a deterministic way.
You will never be able to trust Copilot telling you that "this should be the code you are using".
It may suggest you use AWS, or Google, or Microsoft, or Tencent infrastructure.
An LLM can even push you toward a specific style, or a political agenda, without you even realizing it.
I hate the polarized, all-or-nothing thinking in discussions about LLMs.
See how perfectly and reliably they can translate text into whatever language.
See them fail at aligning a table with a monospace font.
I think you probably need a multimodal LLM to align a table with a monospace font. Blind human programmers using screenreaders will also have difficulty with that particular task. It doesn't mean they're bad at programming; it just means they can't see.
Blind human programmers can write programs to align their own tables as requested, autonomously. With LLMs, instead, I watch them get the table wrong, ask for a fix, and watch them get it wrong again in an endless cycle.
Also, when I tell them the solution and then ask them to add a feature - bam, the table comes out wrong again.
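(Aligning a monospace table is also exactly the kind of thing a tiny, deterministic script solves once and for all, instead of the re-prompting loop above. A minimal sketch, assuming the table is just a list of rows of cell strings; the sample data is made up:)

    # Pad every cell to its column's maximum width so the table lines up in a monospace font.
    def align_table(rows):
        widths = [max(len(cell) for cell in col) for col in zip(*rows)]
        return "\n".join(
            " | ".join(cell.ljust(width) for cell, width in zip(row, widths))
            for row in rows
        )

    print(align_table([["name", "count"], ["foo", "12"], ["barbaz", "3"]]))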
A C program is understandable even if it isn't efficient. It's also deterministic, or close enough that it shouldn't matter.
An LLM project that gets regenerated from scratch every time is maybe understandable if you use very good prompts and a lot of grounding text. But it is not deterministic unless you use zero temperature and stick with the same model forever, which is impossible right now. Six months ago the state-of-the-art model was DeepSeek R1.
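(To illustrate what pinning would even mean in practice: you'd have to fix a dated model snapshot and force the temperature to 0, and even then providers only offer best-effort reproducibility. A rough sketch using the OpenAI Python SDK; the model snapshot, seed and prompt are placeholders:)

    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-2024-08-06",  # a dated snapshot, not a moving alias like "gpt-4o"
        temperature=0,              # remove sampling randomness
        seed=42,                    # best-effort reproducibility, not a guarantee
        messages=[{"role": "user", "content": "Generate the project from the spec below..."}],
    )
    print(resp.choices[0].message.content)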
I pretty much one-shotted a scraper/migration from an old Joomla site with 200+ articles to a new WP site, including all users and assets, and converting all the PDFs to articles. It cost me like $3 in tokens.
I guess the question then is: can't VS Code Copilot do the same for a fixed $20/month? It even has access to all the SOTA models like Claude 3.7, Gemini 2.5 Pro and GPT o3.
VS Code's agent mode in Copilot (even in the Insiders nightly) is a bit rough in my experience: lots of 500 errors, stalls, and outright failures to follow tasks (as if there's a mismatch between what the UI says it will include in context vs. what actually gets fed to the LLM).
I would have thought so, but somehow no. I have a Cursor subscription with access to all of those models, and I still consistently get better results from Claude Code.
No, it's a few hundred lines of Python to parse weird and inconsistent HTML into JSON files and CSV files, plus a sync script that calls the WP API to create all the authors as needed, update the articles, and migrate the images.
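(For a rough idea of what the sync half looks like, here's a minimal sketch against the standard WP REST API. The /wp/v2/users and /wp/v2/posts endpoints are real; the site URL, auth details and the articles.json field names are made up for illustration, and image migration is left out:)

    import json
    import requests

    WP = "https://example.com/wp-json/wp/v2"
    AUTH = ("admin", "application-password-here")  # WP application password

    def ensure_author(username, email):
        # Reuse the author if it already exists, otherwise create it and return its id.
        existing = requests.get(f"{WP}/users", params={"search": username}, auth=AUTH).json()
        if existing:
            return existing[0]["id"]
        created = requests.post(f"{WP}/users", auth=AUTH, json={
            "username": username, "email": email, "password": "change-me-please",
        })
        created.raise_for_status()
        return created.json()["id"]

    def push_article(article):
        # Create and publish the article under the right author.
        resp = requests.post(f"{WP}/posts", auth=AUTH, json={
            "title": article["title"],
            "content": article["html"],
            "date": article["date"],
            "status": "publish",
            "author": ensure_author(article["author"], article["author_email"]),
        })
        resp.raise_for_status()

    for article in json.load(open("articles.json")):
        push_article(article)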
> If European leaders don't watch this and realise they need to take control of their own destiny they're idiots.
Several European leaders visited this week, bending the knee to try and stave off tariffs.
European leaders are desperate because they realize an obvious truth - the US will soon implement regime changes all across Europe... European elites are simply not up to the task of dealing with this new revolutionary era.
They either have a shitty codebase or can't narrow down the scope. Or both. Not the kind of folks you want on your team.