Everyone who successfully avoided social media for the last decade escaped with their mental health. Everyone who carefully moderates their AI intake (e.g., doesn't depend on Claude Code) will also escape with their skills over the next decade; others will become AI fiends, just like the social media fiends. Just knowing that tech like the internet and AI can fuck your whole brain up is enough to put you ahead of the curve. If you didn't learn the lesson from the uptake of video games, cellphones, TV, streaming (the list is endless), then you're not paying attention.
The destruction of spelling didn’t feel like doomsday to us. In fact, I think most people treated the utter annihilation of that skill as a joke. “No one knows how to spell anymore” - haha, funny, isn’t technology cute? Not really. We’ve gone up an order of magnitude, and not paying attention to how programming is on the chopping block is going to screw a lot of people out of that skill.
Very thoughtful comment, let me try to capture it more clearly.
Zuckerberg recently said that those not using AI glasses will be cognitively disadvantaged in the future.
Imagine an AI-native from the future and one of the fancy espresso machines we have today. They'll know right away how to operate it, via their AI assistant, but they won't be able to figure out how it works on their own.
That's the future that Zuckerberg wants. Naturally, fancy IT offices will likely be gone. The AI-native will have bought the coffee machine for its nostalgia value, for a large sum, trying to combat the existential dread and feelings of failure fueled by their behavior being ever more directly coerced into consumption.
Curious: maybe one could spin up a study on using calculators instead of calculating manually, and whether it leads to less of some type of thinking and affects our ability. But even if that's true (I'm not sure; maybe it only shows up in domains we don't feel we need to care much about), would people quitting calculators be a good thing for getting things done in the world?
For me the thing that atrophied basic math skills wasn't the calculator, which was invented decades before I was born, but the rise of the smart phone.
Sure, calculators are useful in professional life and in high school math and sciences. But you still had to do everyday math in all kinds of places and didn't always have a calculator at hand. The smartphone changed that.
I feel that's relevant in two ways: just like with math, a little bit of manual coding will make a huge difference compared to no manual coding; and any study like the one you propose would be hugely complicated by everything else that happened around the same time, both the rise of smartphones and the coinciding 2008 crash.
It’s only a hard dependency if you don’t know and never learn how to program.
For developers who read and understand the code being generated, the tool could go away and it would only slow you down, not block progress.
And even if you don’t, it really isn’t a hard dependency on a particular tool. There are multiple competing tools and models to choose from, so if you can’t make progress with one, switch to another. There isn’t much lock-in to any specific tool.
My experience has been that Claude can lay out in minutes a lot of things that would take me hours if not days. Often I can dictate the precise logic and Claude gets most of the way there; with a little prompting it can usually get even further. The amount of work I can get done is much more substantial than it used to be.
I think there is a lot of reluctance to adopt AI for coding, but I'm seeing it as a step change for coding the way powerful calculators/workstation computers were for traditional engineering disciplines. The volume of work engineers could do when limited to a slide rule was much lower than what they can do now with a computer.
You should actually read the paper. Sample size of 16, only 1 of whom had used Cursor for more than 40 hours before. All participants were working in existing codebases where they were the primary author.
Interestingly, devs felt that it sped them up even though it slowed them down in the study.
So even if it’s not an actual productivity booster on individual tasks, perhaps it still could reduce cognitive load and possibly burnout in the long term.
Either way, it’s a tool that devs should feel free to use or not use according to their preferences.
Hm, I am assuming that paid compilers were largely gone before the whole "must have this dongle attached to the computer" industry? Because for software that uses those, "I paid for it" absolutely does not guarantee "I can still run it". The only reason it's not more of a problem is the planned obsolescence that means you're forced to upgrade sooner or later (but, unlike purely subscription-based services, you have some control over how frequently you pay).
Sadly, paid compilers still exist, and paid compilers requiring a licensing dongle still exist. The embedded development world is filled with staggering amounts of user hostility.
My understanding is that much of the established embedded world has moved to some flavour of GCC or (more commonly) Clang, just because maintaining a proprietary optimising compiler is more effort than modifying (and eventually contributing to) Clang.
Tough for me to speak about embedded in general, but many companies are on vendor toolchains or paid compilers by choice, and it is the right choice to make given the tradeoffs involved.
IAR for example is simply a fantastic compiler. It produces more compact binaries that use less memory than GCC, with lots and lots of hardware support and noticeably better debugging. Many companies have systems-engineering deadlines which are much less amenable to beta quality software, fewer software engineering resources to deal with GCC or build-chain quirks (often, overworked EEs writing firmware), and also a strong desire due to BOM cost to use cheaper/less dense parts. And if there is a compiler bug or quirk, there is someone on the other end of the line who will actually pick up the phone when you call.
That said, some of those toolchain+IDE combos absolutely do suck in the embedded world, mostly the vendor-provided ones (makes sense, silicon manufacturers usually aren't very good at or care much about software, as it turns out).
> Tough for me to speak about embedded in general, but many companies are on vendor toolchains or paid compilers by choice, and it is the right choice to make given the tradeoffs involved.
That's true in general. With paid licenses and especially subscriptions, you're not just getting the service, you're also entering a relationship with the provider.
For companies, that often matters more than the service itself - especially when support is part of this relationship. That's one of many reasons companies like subscriptions.
For individuals, that sucks. They don't need or want another relationship with some random party that they now have to keep track of. The relationship has so much power imbalance that it doesn't benefit the individual at all - in fact, for most businesses, such a customer is nothing more than a row in an analytics database - or less, if GDPR applies.
8051s pretty much mean Keil - they used to do license dongles, but it's all online now. You really don't get much more established than the 8051. If you pick up any cheap electronic product and crack it open to find a low part count PCB with a black epoxy blob on it, chances are very good there's an 8051 core with a mask ROM under the blob.
(Also AVR/PIC compiler from Microchip had a dongle as recently as February this year, and it looks like it's still available for sale even though its license isn't in the new licensing model).
You need to run the cost/benefit analysis here: if I had avoided Claude Code, all that would have happened is I would have written much less code.
What's the difference between never using Claude, and using it and then getting these lower limits? In the end, I'm left with no Claude in both situations, which leaves me better off for having used it (I wrote more code when Claude worked).
Did you write more code with Claude? Isn’t the point that you have in fact written less (because Claude wrote it for you)?
As for the cost, you are ignoring the situation where someone has depended on the tool so much that when it goes away they are atrophied and unable to continue as before. So no, in your scenario you’re not always left better off.
The metric of more lines of code usually turns out not to be a very good one. Can it also help you do the same with fewer lines of code and reduced complexity?
Right, but these companies are selling their products on the basis that you can offload a good amount of the thinking. And it seems a good deal of investment in AI is also based on this premise. I don't disagree with you, but it's sorta fucked that so much money has been pumped into this and that markets seem to still be okay with it all.
Oh, you beautiful summer child. They are losing money on you. Do you think they are doing that out of the goodness of their heart? They are luring you in, making you dependent on them at a net loss while the VC money lasts.
When you are totally hooked and they are fully out of money, that's when you'll realize the long con you've been dragged into. At this very moment they are tightening the usage limits without telling you, and you still think the peanuts you are paying them now will be enough in the future? It's called https://en.wikipedia.org/wiki/Enshittification and you'd better know that you are in it.
I am buying from willing sellers at the current fair market price. The belief that there will be "one true king" in this race has been incepted by VCs and hype men, and is misguided at best and dangerous at worst.
Like many of the companies that have gone before them, if / when the value proposition is gone, and I get less than 10x the amount I spend, I will be gone.
Is this like saying a gym runs at 40%+ margin because 80% of users don't really use it heavily or forget they even had a subscription? Would be interested to see the breakdown of that number.
That's how nearly every subscription service works, yes. Some fraction has a subscription and doesn't use it, another large chunk only uses a fraction of their usage limits, and a tiny fraction uses the service to its full potential. Almost no subscription would be profitable if every customer used it to its full potential.
TL;DR: their subscriptions have an extra built-in margin closer to 70%, because the entry price, target audience and clever choice of rate limiting period, all keep utilization low.
----
In this case I'd imagine it's more of an assumption that almost all such subscriptions will have less than 33% utilization, and excepting few outliers, even the heaviest users won't exceed 60-70% utilization on average.
"33% or less" is, of course, people using a CC subscription only at work. Take an idealized case - using CC only during regular working hours, no overtime: then, even if you use it to the limit all the time, you only use it for 1⁄3 of the day (8h), and 5 days a week - the expected utilization in this scenario is 8/24 × 5/7 = 24%. So you're paying a $200 subscription, but actually getting at most $50 of usage out of it.
Now, throw in a rate limit that refreshes in periods of 5 hours - a value I believe was carefully chosen to achieve this very effect - and the maximum utilization possible (maxing out limit, waiting for refresh, and maxing out again, in under 8 hours wall-clock time), is still 10 hours equivalent, so 10/24 × 5/7 = 30%. If you just plan to use CC to the first limit and then take meetings for the rest of the day, your max utilization drops to 15%.
Of course people do overtime, use same subscription for personal stuff after work and on weekends, or just run a business, etc. -- but they also need to eat and sleep, so interactively, you'd still expect them to stay below 75% (83% if counting in 5-hour blocks) total utilization.
Sharing subscriptions doesn't affect these calculations much - two people maxing out a single subscription is, from the provider side, strictly not greater than two subscriptions with 50% utilization. The math will stop working once a significant fraction of users figure out non-interactive workflows, that run CC 24/7. We'll get there eventually, but we're not there yet. Until then, Anthropic is happy we're all paying $200/month but only getting $50 or less of service out of it.
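A minimal sketch of the same back-of-the-envelope math (the hours and the $200 Max price are the ones used above; the function and scenario labels are just for illustration):

    # Utilization = fraction of a 24/7 month of access actually consumed.
    # Scenarios and the $200 Max price are taken from the comment above.
    PLAN_PRICE_USD = 200

    def utilization(hours_per_day, days_per_week=5):
        return (hours_per_day / 24) * (days_per_week / 7)

    scenarios = {
        "work hours only (8h/day)": utilization(8),
        "two maxed 5h limit windows (~10h/day)": utilization(10),
        "one limit window, then meetings (~5h/day)": utilization(5),
    }

    for name, u in scenarios.items():
        print(f"{name}: {u:.0%} utilization, ~${PLAN_PRICE_USD * u:.0f} of ${PLAN_PRICE_USD}")

Which prints roughly 24% (~$48), 30% (~$60), and 15% (~$30) respectively - the same numbers as above.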
Is that for per-token costs, or for these bundled subscriptions companies are selling?
For example, when playing around with Claude Code using a per-token paid API key, it was going to cost ~$50 AUD a day with pretty general usage.
But their subscription plan is less than that per month. Them lowering their limits suggests that this wasn't proving profitable at the current limits.
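Rough numbers, just to make the gap concrete (the $50 AUD/day figure is from above; the ~20 working days per month and the plan price are my own illustrative assumptions):

    # Back-of-the-envelope comparison, assuming ~20 working days/month
    # and an entry-level subscription around $30 AUD/month (illustrative).
    api_cost_per_day_aud = 50
    working_days_per_month = 20
    subscription_per_month_aud = 30

    api_monthly = api_cost_per_day_aud * working_days_per_month  # ~1000 AUD
    print(f"API: ~{api_monthly} AUD/month vs subscription: ~{subscription_per_month_aud} AUD/month")
    print(f"Ratio: ~{api_monthly / subscription_per_month_aud:.0f}x")

Even with generous assumptions, per-token usage comes out more than an order of magnitude above the subscription price.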
Exactly. The enormous margin is why companies like OpenAI and Anthropic are known for being so immensely profitable. Just money printing machines compared to the amount of cash they burn
How long can AI be subsidized in the name of growth? They need to radically increase the price. If I replace a $150k/yr employee, should I pay $200 a month or $2,000 a month? $200 is too cheap.
$200 a month with Opus 4 and Sonnet 4 won't let you replace a $12.5k / month employee - but it's cheap enough that everyone, including you and even your employees, will want to see how much utility they can squeeze out of it.
This is a price to get people hooked, yes, but also to get them to explore all kinds of weird tricks and wacky workflows, especially ones that are prohibitively costly when billed per token. In some sense, this is crowdsourcing R&D - and then when Opus 7 or whatever comes along to take advantage of the best practices people worked out, and that turns out to be good enough to replace your $150k/yr / $12.5k/mo employee - then they'll jack up prices to $10k/month or whatever.
And makes your codebase completely unmaintainable, even for Claude. The more Claude runs you do, the more unmaintainable the codebase becomes, and the more tokens Claude spends on each subsequent run. It's a perfect ecosystem!
I honestly feel sorry for these vibe coders. I'm loving AI in a similar way that I loved Google or IDE magic. This seems like a far worse version of those developers who tried to build an entire app with Eclipse or Visual Studio GUI drag-and-drop in the late 90s.
I really don't like how religious the debate about AI has gotten here. "I feel sorry for these vibe coders" is something you tell yourself to feel superior to people who use AI.
Don't feel sorry for me, I've used vibe coding to create many things, and I still know how to program, so I'll live if it goes away.
Same, I really like the solutions one can build with LLMs and have a lot of fun working with them to improve use cases where it actually makes sense. It's the first time in years I really enjoy coding on side projects, and I take great care to give clear instructions and to review and understand what the LLMs build for me, except for some completely irrelevant/one-shot things I entirely "vibe code".
It's gotten so bad I'm actively trying to avoid talking about this in circles like Hacker News, because people get so heavily and aggressively discredited and ridiculed, as if they have no idea what they are doing or are a shill for big AI companies.
I know what I'm doing and actively try to help friends and co-workers use LLMs in a sustainable way, understanding their limitations and the dangers of letting them loose without staying in the loop. It's sad that I can't talk about this without fear of being attacked, especially in communities like Hacker News that I previously valued as being very professional and open compared to other modern social media.
Why isn't anyone talking about the bevy of drag-and-drop no-code solutions that have already been on the market? Surely the LLMs are competing with those tools, right?
Hundreds of billions of dollars have changed hands through shitty drag-and-drop UIs, WordPress ecommerce plugins, and Dreamweaver sites; let's not forget the code is there to serve a business purpose at the end of the day. Code quality is an implementation detail that may matter less over time as rewrites get easier. I love me some beautiful hand-written clean code, but clean code is not the true goal.
It's not, but it does matter. LLMs, being next-word guessers, perform differently with different inputs. It's not hard to imagine a feedback loop of bad code generating worse code and good code generating more good code.
My ability to get good responses from LLMs has been tied to writing better code and docstrings, and using autoformatters.
I don't think that feedback loop is really a loop because code that doesn't actually do its job doesn't grow in popularity for long. We already have a great source of selection pressure to take care of shitty products that don't function: users and their $.
There is nothing about LLMs that makes them biased towards "better" code. LLMs are every bit as good at making low-effort Reddit posts as at writing essays for Harper's Magazine. In fact, there are a lot more shit Reddit posts (and horrible student assignment GitHub repos) than there are Harper's Magazine articles.
The only thing standing between your LLM and bad code is the quality of the prompt (including context and the hidden OEM prompt).
I don't consider drag-and-drop UIs anywhere close to wordpress plugins. I'm not talking about writing bad code, I'm talking about being able to understand what you are creating.
There are many parts of computers I don't understand in detail, but I still get tremendous value using them and coding on top of abstractions whose internals I don't need to know.
That's a nonsense take. How fast you burn through usage limits depends on your use patterns, and if there's one thing that's true about LLMs, it's that you can practically always improve your results by spending more tokens. Pay-per-use API pricing just makes you hit diminishing returns quickly. With a Claude Code subscription, it's different.
The whole value proposition of a Max subscription is that it lets you stop worrying about tokens (Claude Code literally tells you so if you type `/cost` while authenticated with a subscription). So I'd turn around and say, people who don't regularly hit usage limits aren't using Claude Code properly - they're not utilizing it in full.
--
Myself, I'm only using Claude Code for a little R&D and some side projects, but I upgraded from Max x5 to Max x20 on the second day, as it's trivial to hit the Max x5 limit in a regular, single-instance chat. And that's without any smarts, just a more streamlined flavor of the good ol' basic chat experience.
But then I look around, and see people experiment with more complex approaches. They run 4+ instances in parallel to work on more things at a time. They run multiple instances in parallel to get multiple solutions to the same task, and then mix them into a final one, possibly with the help of yet another instance. They have the agent extensively research a thing before doing it, and then extensively validate it afterwards. And so on. Any single one of such tricks/strategies is going to make hitting limits on Max x20 a regular occurrence again.
Vibe limit reached. Gotta start doing some thinking.