
You greatly underestimate how much work it is to maintain old code, particularly to maintain it securely.

AFP and Time Capsules add attack vectors to the OS, which can be targeted even when few users are actively using them. One dev could keep both basically functional, but to what end? User counts are already small, and people who aren't using them are still exposed by their mere existence.

Shrinking or removing code, in my experience, is one of the biggest single wins you can have in software development. Less to test, less to update, less to secure.


Yes, writing and maintaining less code is great for a developer. We can follow this to the logical extreme and marvel at how easy it is to write and maintain a program whose only function is to print "hello, world" to the console. Never mind the users; what do they matter?

By the very nature of assigning development time to these antiquated features, you're assigning it away from other features, bug fixes, or requests that may have a larger user reach.

Development time is a finite resource; the argument here is to allocate it to hard-to-secure, outmoded, replaced technology instead of anything future-relevant. It doesn't make sense.


The person was specifically suggesting hiring extra developers for maintenance. While I'm familiar with the concept that "nine women can't birth a baby in a month", I don't think that applies so much to maintenance of old code paths. Apple makes over $100b in net profit per year, a truly unfathomable amount of money; they can afford it, and I think not only can they afford it but that it would benefit them. Even if only 1% of your users use X, for Apple that might translate to perhaps 10 million people using X, or at 0.1%, 1 million. Hiring a dev to improve the experience for that many people just makes sense at scale; software is write-once, reproduce-a-million-times-for-free.

I have no doubt the bean counters have drawn up every kind of spreadsheet they can imagine trying to quantify it as being not worth it, but I don't think these kinds of quality-of-life things can be easily quantified. Each small thing maintained might only impact a small number of users, but collectively, all of these small things add up to either a system with sharp corners that constantly papercuts the user (current Apple software), or one that is so seamless that it engenders customer loyalty for decades (old Apple software). This kind of shortsighted penny-pinching is how companies become a shell of their former selves, suffering a slow death-by-MBA.


> You greatly underestimate how much work it is to maintain old code, particularly to maintain it securely.

cf. Linux removing old network drivers this week for the same reason (without the hand-wringing that this Apple announcement is getting!)


Is the code that Apple is removing support for open source? The Linux drivers could at least plausibly be picked up and used by someone who really wants to, so it doesn't seem to be a fair comparison

Yep.

Or if you're a business with multiple seats, these plans may be less efficient than raw API usage billing. If anyone at your organization fails to use their full $19/$39 allotment each month, that's wasted money, whereas API credits are 100% utilized.

I don't think they've thought through the implications of this. Everyone should cancel and go usage-based billing with caps.
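If it helps, a back-of-the-envelope sketch of that comparison (the $39 seat price is from above; the seat count and 60% average utilization are made-up numbers purely for illustration):

    # Seats vs. pay-as-you-go, with assumed numbers
    SEATS = 10
    SEAT_PRICE = 39.0        # $/seat/month (from above)
    AVG_UTILIZATION = 0.60   # assumed fraction of each allotment actually consumed

    seat_cost = SEATS * SEAT_PRICE           # $390/mo regardless of usage
    consumed = seat_cost * AVG_UTILIZATION   # $234/mo of actual consumption
    api_cost = consumed                      # usage billing only pays for what's used

    print(f"Seat plan:  ${seat_cost:.0f}/mo")
    print(f"API budget: ${api_cost:.0f}/mo (${seat_cost - api_cost:.0f}/mo wasted on seats)")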


They do address this in the doc: orgs can now (although it was vague as to whether it's an option or just the new standard; probably an option, due to business contracts) 'pool' the usage billing across all users.

I'm guessing they did that (and the 'temporary bonus credits') to make the pill easier to swallow for that side of customers.


You're right, I missed that.

It still makes one wonder: why have seats at all, though? If everyone is just in one big API credit pool, what do the seats/users accomplish?


It forces you to pay at least $20 in tokens per user, even for people who use less (they probably have stats on how many people use just autocomplete, which doesn't count against the quota, or have a seat and don't use the service at all).

For orgs, each user was allotted their own quota. For messages beyond that quota, a pooled budget is available.

They mention in the announcement that it will be possible to pool usage across an organization.

If we take that literally, then just remove all destructive API endpoints, because then they'd serve no real purpose: you couldn't automate the removal of anything.

I think some other suggestions are saner (a cool-down period, finer-grained permissions, delete protection for certain high-value volumes). I don't think "don't allow destructive actions over the API" is the right boundary.
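For illustration of the finer-grained-permissions idea, a minimal sketch assuming an AWS-style setup (the tag name "protected" and the policy name are made up; ec2:DeleteVolume and the aws:ResourceTag condition key are real IAM constructs): deny volume deletion on anything tagged as protected, no matter what else is granted.

    import json
    import boto3

    # Deny deletion of any EBS volume tagged protected=true.
    # An explicit Deny overrides any Allow, giving a "delete protection" effect.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": ["ec2:DeleteVolume"],
            "Resource": "*",
            "Condition": {"StringEquals": {"aws:ResourceTag/protected": "true"}},
        }],
    }

    iam = boto3.client("iam")
    iam.create_policy(
        PolicyName="DenyDeleteProtectedVolumes",  # hypothetical name
        PolicyDocument=json.dumps(policy_document),
    )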


Does the free plan even have access to thinking models?

Technically yes, gpt-5.4-mini is available on the free plan

I'm not following this. PRs are the first time your reviewers have seen that change, so there is no opportunity downstream to do the things you're suggesting.

You're essentially suggesting pre-PRs, but it is circular, since those same pre-PRs would have the same criticism.

PRs are about isolated code changes, not feature or system design. They answer how, not why.


> You're essentially suggesting pre-PRs, but it is circular, since those same pre-PRs would have the same criticism.

Walking this road to the end you get pair programming.


You get design committees where everything has to be approved in advance.

Yep, where productivity goes to die and your developers feel no autonomy/trust.

Usually by the time a PR has been submitted it's too late to dig into aspects of the change that come from a poor understanding of the task at hand without throwing out the PR and creating rework.

So it's helpful to shift left on that and discuss how you intend to approach the solution. Especially for people who are new to the codebase or unfamiliar with the language and, thanks to AI, show little interest in learning.

Obviously not for every situation, but time can be saved by talking something through before YOLOing a bad PR.


Yes, it should be cheap to throw out any individual PR and rewrite it from scratch. Your first draft of a problem is almost never the one you want to submit anyway. The actual writing of the code should never be the most complicated step in any individual PR. It should always be the time spent thinking about the problem and the solution space. Sometimes you can do a lot of that work before the ticket, if you're very familiar with the codebase and the problem space, but for most novel problems, you're going to need to have your hands on the problem itself to get your most productive understanding of them.

I'm not saying it's not important to discuss how you intend to approach the solution ahead of time, but I am saying a lot about any non-trivial problem you're solving can only be discovered by attempting to solve it. Put another way: the best code I write is always my second draft at any given ticket.

More micromanaging of your team's tickets and plans is not going to save you from team members who "show little interest in learning". The fact that your team is "YOLOing a bad PR" is the fundamental culture issue, and that's not one you can solve by adding more process.


I don't disagree that a practical spike is a good way to grasp a novel problem (or work with a lack of internal knowledge because it's legacy code) but there is still something to be said for attempting to work things out in the abstract too, and not necessarily by adding process, but by redeveloping that internal knowledge and getting familiar with the business domain.

In a greenfield project I will have a lot of patience for a team that doesn't grasp the problem space too well yet, and needs to feel around it by experimenting and prototyping. You have to encourage that or you might not even be building anything innovative.

For the longer term legacy project then the team can't really afford to have people going down rabbit holes and it's more beneficial to approach things in the abstract and reduce the problem as much as possible. Especially with junior or mid-level engineers who can see an old codebase as a goldmine for refactoring if left unattended.

As for the fundamental culture issue... maybe. AI increases the frequency of low quality PRs and puts a bigger burden on the reviewer. I can live with this in the short term if people take lessons from it and keep building up their own skillset. I feel this issue is not unique to my team and LLM-driven development is still novel enough that we're all figuring out the best way to tackle it.


I'm not sure what approach you're suggesting?

Asking a more junior developer, or someone who "show[s] little interest in learning", to discuss their approach with you before they've spent too much time on the problem, especially if you expect them to take the wrong approach, seems like the right way to do things.

Throwing out a PR of someone who doesn't expect it would be quite unpleasant, especially coming from someone more senior.


This is how I try to approach it. I don't think it's a new thing for a new hire to come in hot and try to figure things out themselves rather than spending time with the team. Or getting lost down rabbit holes.

> PRs are the first time your reviewers have seen that change, so there is no opportunity downstream to do the things you're suggesting.

Yes, but I'm arguing that it shouldn't be the first time they hear that this change is planned and happening, and that their input should have been taken into account before the PR is even opened, either upfront or early in development. This eliminates so many of the typical PR reviews/comments.

Figure out where you are going before you start going there, instead of trying to course-correct after people have already walked somewhere.


Great site, thanks for the link. But holy heck, that "Also Known As" column is complete chaos. What the heck is wrong with the USB Consortium? Do they have brain damage?

Also, according to that table, "USB4 Gen 2×2" is a downgrade on "USB 3.2 Gen 2x2", since the cable length is 0.8m instead of 1m for the same speeds. Which is uhh unexpected.


Yeah, what I would give to have been a fly on the wall in the room where they decided to roll with such an obviously terrible and stupid naming scheme. Did anyone protest? Did anyone boldly dissent? Or did they all really just sit around and pat themselves on the back?

It allows manufacturers to clear old stocks of cables by rebranding them as latest products.

USB 1+2/3/4 are basically unrelated standards under the same USB umbrella. USB4 especially is just Thunderbolt/PCIe x4 with features. If Betamax had been branded as "VHS 2.0" instead of being a separate standard, it would have felt similar to the USB4 situation.


The cable length is only for the spec. You can get longer cables that achieve the higher bandwidth, they're just not certified for that.

Right, so per spec it is a downgrade.

And? The question stands: why is the USB4 spec a downgrade?

Probably because with USB 3.2 2x2 they were reviewing too many longer cables that didn't meet the requirements, so they lowered the length so companies didn't submit them only to fail to get certified. It's worth noting that 1.2m is now in the USB4 spec.

I really, really wish somebody would explain to me what the USB consortium was smoking, yeah. I cannot explain it.

It feels more and more like OpenAI/Anthropic aren't the future but Qwen, Kimi, or DeepSeek are. You can run them locally, but that isn't really the point; it's about democratization of service providers. You can run any of them on a dozen providers with different trade-offs/offerings, OR locally.

They won't ever be SOTA due to money, but "last year's SOTA" at 1/4 the cost or less may be good enough. More quantity, more flexibility, at lower edge quality. It can make sense: a 7% dumber agent TEAM vs. a single objectively superior super-agent.

That's the most exciting thing going on in that space. New workflows opening up not due to intelligence improvements but cost improvements for "good enough" intelligence.


You can run local models on junker laptops for specific tasks that are about as good as last year's SOTA. If the compute hardware shortage weren't happening, a lot more people would be running two-months-ago SOTA locally right now. Funny thoughts...

Open Source isn't even within 50% of what the SOTA models are. Benchmarks are toys, real world use is vastly different, and that's where they seriously lag.

Why should anyone waste time on poorer results? I'd rather pay my $200/mo because my time matters. I'm not a poor college student anymore, and I need more return on my time.

I'm not shitting on open weights here - I want open source to win. I just don't see how that's possible.

It's like Photoshop vs. Gimp. Not only is the Gimp UX awful, but it didn't even offer (maybe still doesn't?) full bit depth support. For a hacker with free time, that's fine. But if my primary job function is to transform graphics in exchange for money, I'm paying for the better tool. Gimp is entirely a no-go in a professional setting.

Or it's like Google Docs / Microsoft Office vs. LibreOffice. LibreOffice is still pretty trash compared to the big tools. It's not just that Google and Microsoft have more money, but their products are involved in larger scale feedback loops that refine the product much more quickly.

But with weights it's even worse than bad UX. These open weights models just aren't as smart. They're not getting RLHF'd on real world data. The developers of these open weights models can game benchmarks, but the actual intelligence for real world problems is lacking. And that's unfortunately the part that actually matters.

Again, to be clear: I hate this. I want open. I just don't see how it will ever be able to catch up to full-featured products.


Unless you are getting outside of your comfort zone and taking a month off from your $200 subscription every other month, I can't see how you can make the universal claim that the open weights models are all 50% as good. Just today, DeepSeek released a new model, so nobody knows how that will compare; a week ago it was Gemma 4, etc. I'm okay with you making a comparison, but state the model, and the timeframe in which it was tested, that you are basing your conclusions on.

I think that there will come a point when open source models are "good enough" for many tasks (they probably already are for some tasks; or at least, some small number of people seem happy with them), but, as you suggest, it will likely always (for the foreseeable future at least) be the case that closed SOTA models are significantly ahead of open models, and any task which can still benefit from a smarter model (which will probably always remain some large subset of tasks) will be better done on a closed model.

The trick is going to be recognizing tasks which have some ceiling on what they need and which will therefore eventually be doable by open models, and those which can always be done better if you add a bit more intelligence.


> Benchmarks are toys, real world use is vastly different...Why should anyone waste time on poorer results? I'd rather pay my $200/mo because my time matters.

This kind of rhetoric is not helpful. If you want to make a point, then make one, but this adds nothing to the conversation. Maybe open source models don't work for you. They work very well for me.


> Open Source isn't even within 50% of what the SOTA models are

Who said so? GLM 5.1 is 90% of Opus, at least. Some people are quite happy with Kimi 2.6 too. I have not tried Deepseek 4 yet, but I'm also hearing it is as good as Opus. You might be confusing open source models with local models. It is not easy to run a 1.6T model locally, but they are not 50% of SOTA models.


> Benchmarks are toys, real world use is vastly different, and that's where they seriously lag.

I'm not disagreeing per se, but if you think the benchmarks are flawed and "my real world usage" is more reflective of model capabilities, why not write some benchmarks of your own?

You stand to make a lot of money and gain a lot of clout in the industry if you've figured out a better way to measure model capability, maybe the frontier labs would hire you.


> Why should anyone waste time on poorer results?

Because in almost no real-world project is "programming time" the limiting factor?


Amazing how often this is repeated on here as some sort of gospel SWEs pass down to one another to continue this charade. I have worked in this industry for 30+ years on countless projects, the last decade+ as a consultant. At every single project (every single one), programming time was the limiting factor. There is a whole industry inside our industry dealing with "processes" and "how to estimate" (apparently we are incapable of doing that) and whatnot, all because the actual programming time is always the limiting factor, and there isn't even a close 2nd.

Agreed, it's very strange. I'm sure there are many projects that are like they describe, but it's certainly not all of them. I have worked as a game dev for over 20 years, and probably 75% of that time my team and I have been coding. AI has been an incredible game changer for me over the past 6 months or so (I was using it quite a bit before then, but the capability became much higher lately). I actually have some free time in my days now while still hitting milestone dates, instead of endless crunching.

What counts as programming time? Writing? Reviewing? Compiling? Debugging? It also depends on the industry. From idea to production, the limiting factor is not always writing the code, and in my experience (15 years in fintech) it almost never has been. Discussion, alignment, compilation, heavy testing pipelines, shipping, all of this on a 30-million-line monorepo. On a greenfield 10k-line repo, yes, AI really shines. In other cases, it's currently just a helper on very specific narrow tasks, and that is not always programming.

That's just not my experience. Making the software in the first place is never even the cost center.

No, it's rate at which you can solve problems, and weaker models waste your time because they don't solve problems at the same speed.

No, it's the number of debug cycles you need to solve said problems. That's the major attribute that controls dev time. And models require far more than I need. You are paying money to take longer and produce worse code. If it's different for you, that's a you problem.

> Open Source isn't even within 50% of what the SOTA models are.

When was the last time you used any of them? Because a lot of people are actively using them for 9-5 work today; I count myself in that group. That opinion feels outdated, like it was formed a year-plus ago and held onto, or based on highly quantized versions and/or small non-Thinking models.

Do you really think Qwen3.6, for a specific example, is "50%" as good as Opus4.7? Opus4.7 is clearly and objectively better, no debate on that, but the gap isn't anywhere near that wide. I'd call "20%" hyperbole; the true difference is difficult to measure exactly, but sub-10% for their top-tier Thinking models is likely.


Their opinion is also behind on LibreOffice, too. I won't defend GIMP's monstrosity, but I finished a whole dissertation, do all my regular spreadsheet work (that isn't done via R), and have created plenty of visual mockups with LibreOffice. Plus, I don't have to deal with a spammy Windows environment.

Sure, we use Google Drive, too, but that's just for sharing documents across offices, not for everyday use. For that, the open source model is a clear winner in my book.


Qwen3.6 at which model size and quantization? I already think Opus 4.6 is usable but still dumb as bricks. A 20% cut off that feels like it would still be unusable. And that's not even getting into the annoyance of setting everything up to run locally and getting HW that can run it locally, which basically means a MacBook M4 these days, as the x86 side is ridiculously pricey for decent model performance.

At their highest model size and quant. We are discussing price and quality at the top, not what you can run on the lower end.

So the starting point is Opus 4.7 pricing and we're contrasting alternatives near the top end (offered across multiple providers).

Also I said 20% was hyperbole, meaning far too high.


That makes no sense because the largest Qwen models are not even open weight so I’m not sure how that’s any different.

Right, which isn't what we're discussing, since I mentioned "across multiple providers" in every comment about this topic.

Those closed weight models aren't available like we're discussing. They're only available from the vendor that created them.


The largest qwen model is similar so I’m not sure what point you’re trying to make. The only ones available are the open weight ones which are the smaller variants and nowhere near within 20% of the closed frontier models.

The largest open models are within 20%; they're likely within 10%. Go actually try them and stop making outdated assumptions. You don't need to invest a lot of money either, just pick your favorite vendor, and send out a few prompts.

> Open Source isn't even within 50% of what the SOTA models are.

The gap has been shrinking with each release, and the SOTA has already run into diminishing returns for each extra unit of data+computation it uses.

Do you really want to bet that the gap will not eventually be a hair's breadth?


IMO it's a different and new model. We're engineers, and we're rich. It's not going to be good enough for us. But the much larger market by far is all the people who used to HAVE to work with engineers. They now have optionality; the pendulum is going to swing.

Also, this space will be (and perhaps already is, for some of us) an arms race. Sure, you can go local, but hosted will always be able to offer more, and if you want to be competitive, you'll need to be using the most capable.

People pirate Photoshop and Office if they don't want to pay for them, making them as "free" as GIMP. If there is a free option, people will use it. Never underestimate the cheapskates.

There's going to be a day when we look back at $200/mo price tags and say "wow that was cheap".

The breakeven at this price is 6 minutes of productivity per work day for an engineer making $200k.
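For anyone checking the arithmetic, a quick sketch (assuming ~21 workdays/month and an 8-hour day; the salary and subscription figures are the ones in this thread):

    # How many saved minutes/day pay for a $200/mo subscription?
    def breakeven_minutes(annual_salary, monthly_sub=200.0,
                          workdays_per_month=21.0, hours_per_day=8.0):
        daily_labor_cost = annual_salary / (workdays_per_month * 12)
        daily_sub_cost = monthly_sub / workdays_per_month
        return daily_sub_cost / daily_labor_cost * hours_per_day * 60

    print(breakeven_minutes(200_000))  # ~5.8 min/day at $200k
    print(breakeven_minutes(20_000))   # ~58 min/day at $20k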


Okay, but then by that logic a person making only $20k would break even at about an hour.

Are you suggesting that someone making $20k should be spending $200/mo on Claude?


I'm talking about the cost of labor.

If you pay someone $20,000 for labor, and they save 65 minutes worth of labor per day using a $200/mo Claude subscription, you are better off buying the Claude subscription.


I think if you (a company) pay someone for labor, your workers cannot use a personal subscription, and you have to pay considerably higher API prices.

Most companies don't provide a corporate cell phone and have no problems with answering emails from a personal account. Can't have it both ways.

You could; it's just against the ToS.

But the specific numbers in my prior comments aren’t really relevant to my point. Adjust for whatever numbers you want.


But I think they are relevant because you compare two numbers and one is much lower.

I've done some napkin math, and Claude Code makes me more efficient when I pay $200/month, but it wouldn't if I had to pay API prices.


Really? Are you using opus and letting it run for long periods? Curious as to what your workflow is.

The math is highly in favor of us using it at our company and we are paying API pricing. I don’t imagine there’s a lot of people using Claude without getting their money’s worth…?


Yes, recently I've been working on some research/optimization problem.

I would start Claude in YOLO mode and tell it to keep trying new ideas until it runs out of 1M context. (Every day I give it a hint to explore different directions than the sessions before.)

Twice a day for a month, fits well into CC max plan.

I guess if I had to pay per token I would still use it but only for tasks where the value is clearer and immediate.


Who's gonna pay $20,000 for labor that can be done by anyone with a $200/mo subscription?

Nobody, but that doesn't exist yet. Currently these solutions enhance the productivity of workers, but they can't quite replace them.

Everyone is arguing why I'm wrong or that I should have presented more data.

You've got the real insight with this claim.

This is the way the world is moving. Open source isn't even going where the ball is being tossed. There is no leadership here.

You're spot on.

If the cost to deliver a unit of business automation is:

    A. $1M with human labor

    B. $700k human labor + open source models

    C. $500k human labor + $10,000 in claude code max (duration of project)

    D. $250k with humans + $200k claude code "mythos ultra"
The one that will get picked is option "D".

Your poor college students and hobbyists will be on option "B". But this won't be as productive as evidenced by the human labor input costs.

Option "C" will begin to disappear as models/compute get more expensive and capable.

Option "A" will be nonviable. Humans just won't be able to keep up.

Open source strictly depends on models decreasing their capability gap. But I'm not seeing it.

Targeting home hardware is the biggest smell. It shows that this is non-serious hobby tinkering with no real role in business.

For open source to work and not to turn into a toy, the models need to target data center deployment.


You are assuming (imagining) a cost relationship which doesn't exist, and which, when researched, turned out to be the opposite of what you claim.

This is you playing with imaginary numbers, like Sam Altman is doing for a long time. It won't end well.

I'm willing to bet that this is the shape of the future.

Wanna bet on it?


It is not. Yeah, I'm betting already. AI is changing the software landscape, but it won't be captured by OpenAI and Anthropic.

Yeah, I don't wanna shit on open source, there will certainly be uses for all different kinds of models.

The real money in this market, though, is going to be made in the C suite, and they don't really care about the model. They don't care if it's open source, closed source, or what it is. They don't want to buy a model. They're interested in buying a solution to their problems. They're not going to be afraid of a software price tag -- any number they spend on labor is far more.

Labor is something like 50%+ of the Fortune 500's operating expenses -- capturing any chunk of this is a ridiculous sum of money.


If sharing all of your code with the closed providers is OK then it works. If that is a blocker, open weights becomes much more compelling...

What will you do when they stop burning cash and the $200 plan becomes $2000?

I think the problem is that we're all waiting for the patented Silicon Valley Rug Pull and ensuing enshittification, where there are a dozen tiers of products, you need 4 of them, and they now cost $2000/month. I want to hedge against that.

From my understanding, that isn't how drivers in Linux work. Nearly no kernels will have that code compiled into them, because kconfig won't call for it. It is "opt-in", and it is so niche few distros would have enabled it.

Linux distros only ship with a tiny subset of the drivers in the source tree.
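If you want to check your own machine, a minimal sketch (CONFIG_SOME_OLD_DRIVER is a placeholder for whichever driver option you care about; /boot/config-<release> is the common distro location for the build config):

    import os
    from pathlib import Path

    OPTION = "CONFIG_SOME_OLD_DRIVER"  # placeholder: substitute the real option name

    config = Path(f"/boot/config-{os.uname().release}")
    for line in config.read_text().splitlines():
        if OPTION in line:
            # "=y" means built in, "=m" means module, "is not set" means opted out
            print(line)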


Part of why you're hitting your limit is that Claude's Pro subscription is completely unusable with the current usage limits. I legitimately mean it when I say: you should cancel.

But to the actual question: A lot of people's gut instinct on how to solve this doesn't work. They start going down the road of "well, if I teach the AI about my legacy codebase, it will be smarter, and therefore more efficient." But all you wind up doing is consuming all of your available context with irrelevancies, and your agent gets dumber and costs more.

What you actually need to do is tackle it the same way a human would: Break it down into smaller problems, where the agent is able to keep the "entire problem" within context at once. Meaning 256K or less (file lengths + prompt + outputs). Then of course use a scratchpad file that holds notes, file references, constraints, and line numbers. That's your compaction protection. Restart the chat with the same scratchpad when you move between minor areas.

Context is your primary-limited resource. Fill it only with what should absolutely need to be there, and nothing else at all.
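A minimal sketch of what I mean by a scratchpad, in case it helps (the file name and headings are just one way to lay it out, not a standard):

    from pathlib import Path

    SCRATCHPAD = Path("scratchpad.md")  # gitignored; delete it when you change areas

    TEMPLATE = """# Scratchpad (disposable)
    ## Goal
    <one-sentence statement of the current sub-problem>

    ## Constraints
    - <invariants the change must not break>

    ## File references
    - path/to/file.py:120-180  <what lives there, why it matters>

    ## Notes
    - <decisions so far, dead ends, open questions>
    """

    if not SCRATCHPAD.exists():
        SCRATCHPAD.write_text(TEMPLATE)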


Are you managing your scratchpad file or letting the AI do it? Or both?

I have the agent automatically manage its own scratchpad file. But it is meant to be fully disposable; it isn't committed, and is destroyed if you shift areas.

I'd like to draw people's attention to this section of this page:

https://developers.openai.com/codex/pricing?codex-usage-limi...

Note the Local Messages between 5.3, 5.4, and 5.5. And, yes, I did read the linked article and know they're claiming that 5.5's new efficiency should make it break even with 5.4, but the point stands: tighter limits/higher prices.


For API usage, GPT-5.5 is 2x the price of GPT-5.4, ~4x the price of GPT-5.1, and ~10x the price of Kimi-2.6.

Unfortunately I think the lesson they took from Anthropic is that devs get really reliant on and even addicted to coding agents, and they'll happily pay any amount for even small benefits.


I feel like devs generally spend someone else's money on tokens. Either their employers or OpenAIs when they use a codex subscription.

If I put on my schizo hat: something they might be doing is increasing the losses on their monthly Codex subscriptions, to show that the API has a higher margin than before (the Codex account massively in the negative, but the API account now having huge margins).

I've never seen an OpenAI investor pitch deck. But my guess is that API margins are one of the big things they try to sell people on, since Sama talks about it on Twitter.

I would be interested in hearing the insider stuff. Like if this model is genuinely like twice as expensive to serve or something.


You can't build a business on per-seat subscriptions when you advertise making workers obsolete. API pricing with sustainable margins is the only way forward if you genuinely think you're going to cause (or accelerate) a reduction in clients' headcount.

Additionally, the value generated by the best models with high-thinking and lots of context window is way higher than the cheap and tiny models, so you need to provide a "gateway drug" that lets people experience the best you offer.


> You can't build a business on per-seat subscriptions when you advertise making workers obsolete.

On the other hand I would argue that most workers' salaries are more like subscriptions than API type pricing (which would be more like an hourly contractor)


Yeah, and the increase in operating expenses is going to make managers start asking hard questions; this is good. It means eventually there will be budgets put in place, which will force OAI and Anthropic to innovate harder. Then we will see how things pan out. Ultimately a firm is not going to pay rent to these firms if the benefits don't exceed the costs.

> Ultimately a firm is not going to pay rent to these firms if the benefits don't exceed the costs.

This is also true for the humans. They will need to provide more benefits than the coding agents cost.


Humans are needed to use agents, and these agents are not proving to be fully autonomous; they require constant human review. In fact, all you are getting is a splurge of stuff, people not thinking deeply anymore, and the creation of more bottlenecks, exacerbating the ones that already exist in an org.

You sound like Elon with "FSD will be here next year." Many cars have a self-driving feature; most drivers don't use it. Oh, why is that, I wonder.


Meaning that you believe they're not trying their "hardest" to innovate? They must be slacking then.

Budgets are already happening

The difference between subscription and API prices makes it hard to create competitive solutions at the app level.

This was something I worried about after OpenAI started building apps as well as models. Now all of the labs make no secret of the fact that they are going after the whole software industry. It's going to be hard to maintain functioning, fair markets unless governments step in.

Price increases now aim to demonstrate market power for eventual IPO.

If they can show that people will pay a lot for somewhat better performance, it raises the value of any performance lead they can maintain.

If they demonstrate that and high switching costs, their franchise is worth scary amounts of money.


Sometimes I wonder if innovation in the AI space has stalled and recent progress is just a product of increased compute. Capability is increasing exponentially [1], but I guess that doesn't rule it out completely. I would postulate that a radical architecture shift is needed for the singularity, though.

[1] https://arxiv.org/html/2503.14499v1 (source is from March 2025, so make of it what you will)


> that devs get really reliant on and even addicted to coding agents

An alternative perspective is, devs highly value coding agents, and are willing to pay more because they're so useful. In other words, the market value of this limited resource is being adjusted to be closer to reality.


It's not limited, though; there are alternative providers even now, much less when the price goes up: Chinese providers, European ones, local models.

> It's not limited, though

Inference is not free, so all providers have a financial limit, and all providers have limited GPU/memory, so there's a physical material limit.

I suggest looking at the profits of these companies (while they scramble to stay competitive).


We are constantly getting smaller and faster models that are close in performance to the state of the art from a few months prior. And that's due to architectural inventions. I'm sure it takes some time for these inventions to proliferate to the frontier, and some might not be applicable there, but we are definitely moving faster than compute increases alone would explain.

It will get faster, but there are no singularities in the real world. Except possibly black holes, but we can't even be sure of that.


Maybe that's true. But I think part of the issue is that for a lot of things developers want to do with them now— certainly for most of the things I want to do with them— they're either barely good enough, or not consistently good enough. And the value difference across that quality threshold is immense, even if the quality difference itself isn't.

> devs get really reliant on and even addicted to coding agents

That's more about managers who hope AI will gradually replace stubborn and lazy devs. That will shift the balance toward business ideas, connections, and investment, and away from the technical side.

Anyway, before the singularity there's going to be a huge change.


On top of that, I noticed just now, after updating the macOS desktop Codex app, that the speed was again set to 'fast' by default ('about 1.5x faster with increased plan usage'). They really want you to burn more tokens.

wow wait so it wasn't just me leaving it on from an old session?

sounds like criminal fraud to me tbh


A fool and his money are soon parted

what's the source on that?

In the announcement webpage:

>For API developers, gpt-5.5 will soon be available in the Responses and Chat Completions APIs at $5 per 1M input tokens and $30 per 1M output tokens, with a 1M context window.
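Taking those rates at face value, a rough sketch of what one agentic session might cost (the token counts below are pure assumptions for illustration):

    # Cost of one session at the quoted gpt-5.5 API rates
    INPUT_RATE = 5.0 / 1_000_000    # $/input token (from the quote above)
    OUTPUT_RATE = 30.0 / 1_000_000  # $/output token (from the quote above)

    input_tokens = 2_000_000   # assumed: repeated context re-reads dominate input
    output_tokens = 150_000    # assumed: outputs are comparatively small

    cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
    print(f"~${cost:.2f} per session")  # ~$14.50 with these assumed numbers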


oops, thanks. i had just been looking at their api docs

I did one review job that sent off three subagents and I blew the second half of my daily limit in 10 mins 13 seconds. Fun times.

It's such a vague table for pricing information. 30-150 messages...? What?

Search: