Hacker News

$200/month isn't that much. Folks I'm hanging around with are spending $100 to $500 USD daily as the new norm, a cost of doing business and remaining competitive. That might seem expensive, but it's cheap... https://ghuntley.com/redlining



When should we expect to see the amazing products these super-competitive businesses are developing?


$100/day seems reasonable as an upper-percentile spend per programmer. $500/day sounds insane.

A 2.5 hour session with Claude Code costs me somewhere between $15 and $20. Taking $20/2.5 hours as the estimate, $100 would buy me 12.5 hours of programming.


Asking very specific questions to Sonnet 3.7 costs a couple of tenths of a cent every time, and even if you're doing that all day it will never amount to more than maybe a dollar at the end of the day.

On average, one line of, say, JavaScript represents around 7 tokens, which means there are around 140k lines of JS per million tokens.

On Openrouter, Sonnet 3.7 costs are currently:

- $3 / one million input tokens => $100 = 33.3 million input tokens = 420k lines of JS code

- $15 / one million output tokens => $100 = 3.6 million output tokens = 4.6 million lines of JS code

For one developer? In one day? It seems that one can only reach such amounts if the whole codebase is sent again as context with each and every interaction (maybe even with every keystroke for type completion?) -- and that seems incredibly wasteful?


I can't edit the above comment, but there's obviously an error in the math! ;-) Doesn't change the point I was trying to make, but putting this here for the record.

33.3 million input tokens / 7 tokens per loc = 4.8 million locs

3.6 million output tokens / 7 tokens per loc = 515k locs
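A quick sanity check of the arithmetic, assuming the OpenRouter pricing quoted above ($3/M input, $15/M output) and ~7 tokens per line of JS. Note that $100 at $15/M actually buys roughly 6.7 million output tokens, which works out to ~950k lines:

```python
# Sanity-checking lines-of-JS per $100 at the pricing quoted above.
# Assumptions: $3/M input tokens, $15/M output tokens, ~7 tokens per line.
TOKENS_PER_LOC = 7
INPUT_PRICE_PER_M = 3.0    # USD per million input tokens
OUTPUT_PRICE_PER_M = 15.0  # USD per million output tokens
budget = 100.0

input_tokens = budget / INPUT_PRICE_PER_M * 1_000_000    # ~33.3M tokens
output_tokens = budget / OUTPUT_PRICE_PER_M * 1_000_000  # ~6.7M tokens

input_locs = input_tokens / TOKENS_PER_LOC    # ~4.76M lines read per $100
output_locs = output_tokens / TOKENS_PER_LOC  # ~0.95M lines written per $100
```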


That's how it works: everything is recomputed with every additional prompt. But the provider can cache the state of things and restore it for a lower fee, and re-ingesting what was formerly output is cheaper than producing new output (a serial bottleneck), so sometimes there is a discount there.
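A sketch of why "everything is recomputed" adds up so fast: if each turn resends the full history, cumulative input tokens grow roughly quadratically with the number of turns. The numbers below are illustrative placeholders, not any provider's actual billing model:

```python
# Illustrative only: cumulative input cost when every prompt resends the
# full conversation history, with and without a (hypothetical) cache
# discount. Per-turn size and discount rate are assumed, not real rates.
PER_TURN_TOKENS = 2_000        # new tokens added each turn (assumed)
INPUT_PRICE = 3.0 / 1_000_000  # USD per input token ($3/M, as quoted above)
CACHED_DISCOUNT = 0.1          # assumed: cached tokens billed at 10%

def cost(turns: int, cached: bool) -> float:
    total = 0.0
    history = 0
    for _ in range(turns):
        # Previously-seen tokens: full price without caching, discounted with it.
        billed = history * (CACHED_DISCOUNT if cached else 1.0) + PER_TURN_TOKENS
        total += billed * INPUT_PRICE
        history += PER_TURN_TOKENS
    return total
```

With these numbers, 50 turns costs about $7.65 uncached versus about $1.04 with the cache discount, which is where the real-world discount the comment mentions comes from.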


I'm waiting for the day this AI bubble bursts since as far as we can tell almost all these AI "providers" are operating at a loss. I wonder if this billing model actually makes profit or if it's still just burning cash in hopes of AGI being around the corner. We have yet to see a product that is useful and affordable enough to justify the cost.



Great article, thanks. Mirrors exactly what the JP Morgan/Goldman report claimed but that was quite dated.


It sounds insane until you drive full agentic loops/evals. I'm currently making a self-compiling compiler; no doubt you'll hear/see about it soon. The other night, I fell asleep and woke up with interface dynamic dispatch using vtables with runtime type information and generic interface support implemented...
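For readers unfamiliar with the technique named above: a vtable is just a per-type table of function pointers that calls are routed through, optionally carrying runtime type information. A minimal illustration (in Python for readability; a compiler would emit this as raw pointer tables):

```python
# Minimal illustration of vtable-style dynamic dispatch: each "type"
# carries a table of function pointers plus runtime type info, and
# method calls go through the table rather than being resolved statically.
def speak_dog(obj): return "woof"
def speak_cat(obj): return "meow"

DOG_VTABLE = {"type_name": "Dog", "speak": speak_dog}
CAT_VTABLE = {"type_name": "Cat", "speak": speak_cat}

def make(vtable):
    # An "object" is just data plus a pointer to its type's vtable.
    return {"__vtable__": vtable}

def call(obj, method):
    # Dynamic dispatch: look the method up in the object's vtable at runtime.
    return obj["__vtable__"][method](obj)
```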


Do you actually understand the code Claude wrote?


Do you understand all of the code in the libraries that your applications depend on? Or your coworker for that matter?

All of the gatekeeping around LLM code tools is amusing. But whatever, I'm shipping 10x and making money doing it.


Up until recently I could be sure they were written by a human.

But if you are making money by using LLMs to write code then all power to you. I just despair at the idea of trillions of lines of LLM generated code.


Well, you can’t just vibe code something useful into existence despite all the marketing. You have to be very intentional about which libraries it can use, code style etc. Make sure it has the proper specifications and context. And review the code, of course.


Fair enough. That's pretty cool, I haven't gone that far in my own work with AI yet, but now I am inspired to try.

The point is to get a pipeline working; cost can be optimized down afterward.


Seriously? That’s wild. What kind of CS field could even handle that kind of daily spend for a bunch of people?


Consider an L5 at Google: outgoings of $377,797 USD per year just on salary/stock, before fixed overheads such as insurance and leave, plus issues like ramp-up time and the cost of their manager. In the hands of a Staff+ engineer, these tools replicate Staff+ engineers, and they don't sleep. My 2c: the funding for the new norm will come from compressing the manager layer, the engineering layer, or both.


LLMs absolutely don't replicate staff+ engineers.

If your staff engineers are mostly doing things AI can do, then you don't need staff. Probably don't even need senior


That's my point.

- L3 SWE II - $193,712 USD (before overheads)

- L4 SWE III - $297,124 USD (before overheads)

- L5 Senior SWE - $377,797 USD (before overheads)

These tools and foundational models get better every day, and right now, they enable Staff+ engineers and businesses to have less need for juniors. I suspect there will be [short-to-medium-term] compression. See extended thoughts at https://ghuntley.com/screwed


I wonder what will happen first: will companies move to LLMs, or to programmers from abroad? Because ultimately that will be cheaper than using LLMs - you've said ~$500 per day; in Poland ~$1500 will be a good monthly wage, and that still makes us expensive! How about moving to India, then? Nigeria? LATAM countries?


> in Poland ~$1500 will be a good monthly wage

The minimum wage in Poland is around USD 1240/month. The median wage in Poland is approximately USD 1648/month. Tech salaries are considerably higher than the median.

Idk, maybe for an intern software developer it's a good salary...


Minimum wage is ~$930 after taxes, though; people here rarely talk about salary pre-tax, tbh.

~$1200 is what I'd get paid here after a few years of experience; I have never seen an internship offer in my city that paid more than minimum wage (most commonly, it's unpaid).


The industry has tried that, and the problems are well known (timezones, unpredictable outcomes in terms of quality and delivery dates)...

Delivery via LLMs is predictable and fast, and any concerns about outcome quality can be programmed away by rejecting bad outcomes. This form of programming the LLMs has a one-time cost...
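A minimal sketch of the "reject bad outcomes" idea described above: wrap generation in a loop that only accepts output passing a programmatic check. Both `generate` and `validate` here are hypothetical placeholders, not a real API:

```python
# Hypothetical sketch: gate LLM output behind a programmatic check and
# retry until something passes (or give up). Neither callable is a real
# provider API; plug in your own generator and validator.
from typing import Callable, Optional

def accept_or_retry(
    generate: Callable[[], str],
    validate: Callable[[str], bool],
    max_attempts: int = 3,
) -> Optional[str]:
    for _ in range(max_attempts):
        candidate = generate()
        if validate(candidate):
            return candidate  # first output that passes the check
    return None  # every attempt was rejected
```

In practice `validate` would be something cheap and deterministic, like running the test suite or a linter over the generated code, which is the one-time setup cost the comment refers to.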


> These […] get better every day.

They do, but I’ve seen a huge slowdown in “getting better” in the last year. I wonder if it’s my perception, or reality. Each model does better on benchmarks but I’m still experiencing at least a 50% failure rate on _basic_ task completion, and that number hasn’t moved higher in many months.


Oh but they absolutely do. Have you not used any of this LLM tooling? It's insanely good once you learn how to employ it. I no longer need a front-end team, for example. It's that good at TypeScript and React. And the design is even better.


The kind of field where AI builds more in a day than a team or even contract dev does.


Correct; utilised correctly, these tools ship a team's worth of output in a single day.


Do you have a link to some of this output? A repo on Github of something you’ve done for fun?

I get a lot of value out of LLMs but when I see people make claims like this I know they aren’t “in the trenches” of software development, or care so little about quality that I can’t relate to their experience.

Usually they’re investors in some bullshit agentic coding tool though.


I will shortly; I'm building a serious self-compiling compiler rn in a brand-new esoteric language, meaning the LLM is able to program it without training data about the programming language...


I would hold on on making grand claims until you have something grand to show for it.


Honestly, I don't know what to make of it. Stage 2 is almost complete, and I'm (right now) conducting per-language benchmarks to compare it to the Titans.

Using the proper techniques, Sonnet 3.7 can generate code in the custom language/stdlib. So, in my eyes, the path to Stage 3 is unlocked, but it will chew lots and lots of tokens.


> a serious self-compiling compiler

Well, virtually every production-grade compiler is self-compiling. Since you bring it up explicitly, I'm wondering what implications of being self-compiling you have in mind?

> Meaning the LLM is able to program itself without training data about the programming language...

Could you clarify this sentence a bit? Does it mean the LLM will code in this new language without training on it beforehand? Or is it going to enable the LLM to program itself to gain some new capabilities?

Frankly, with the advent of coding agents, building a new compiler sounds about as relevant as introducing a new flavor of assembly language; and a new assembly may at least be justified by a new CPU architecture...


All can be true depending on the business/person:

1. My company cannot justify this cost at all.

2. My company can justify this cost but I don't find it useful.

3. My company can justify this cost, and I find it useful.

4. I find it useful, and I can justify the cost for personal use.

5. I find it useful, and I cannot justify the cost for personal use.

That aside -- $200/day/dev for a "nice-to-have service that sometimes makes my work slightly faster" is a lot of money in the majority of the world.



