Hacker News | brokencode's comments

How long do you figure it’d take to solve the problem yourself?

I don’t see anywhere that it’s something they specifically decided not to support. Probably they just haven’t gotten around to it yet? Multithreading is notoriously difficult to get right.

It says right in the readme that it isn't supported; it just isn't clear on the "why" yet. My hope is that they simply haven't gotten to it. For context, I maintain 14+ highly threaded Ruby services at the moment.

Are you suggesting that governments shouldn’t require safety features because car manufacturers might implement them badly?

The EPA push for fuel efficiency made it easier to hit targets by selling huge trucks instead of small cars.

There is value in safety regulation, but the incentives as legislated have led to negative results. It needs to be fixed or repealed. I'm not sure there's a clean solution here.


Not only huge trucks, but all vehicles got larger.

Can we have one thread about Claude without people trying to shovel Caveman?

Much of the token usage is in reasoning, exploring, and code generation rather than outputs to the user.

Does making Claude sound like a caveman actually move the needle on costs? I am not sure anymore whether people are serious about this.

To me, caveman-speak sounds bad and is harder to understand than normal English.


Would it be possible to increase the cache duration if misses are a frequent source of problems?

Maybe use a heartbeat to detect live sessions, and cache those longer than sessions the user has already closed. And only do it for long sessions, where a cache miss would be very expensive.
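As a rough illustration of that idea, here is a minimal sketch of heartbeat-extended cache lifetimes. All the names and thresholds (`SessionCacheEntry`, `BASE_TTL`, the 30-minute cutoff) are hypothetical, not from any real system:

```python
import time

BASE_TTL = 5 * 60        # default cache lifetime: 5 minutes
EXTENDED_TTL = 60 * 60   # live, long-running sessions: 1 hour
LONG_SESSION = 30 * 60   # only extend sessions older than 30 minutes

class SessionCacheEntry:
    def __init__(self):
        self.created_at = time.time()
        self.last_heartbeat = time.time()

    def heartbeat(self):
        """Called whenever the client pings to signal the session is still open."""
        self.last_heartbeat = time.time()

    def ttl(self):
        now = time.time()
        age = now - self.created_at
        alive = (now - self.last_heartbeat) < 60  # heartbeat within the last minute
        # Only long-lived, still-open sessions get the extended lifetime,
        # since a miss there is the most expensive to rebuild.
        if alive and age > LONG_SESSION:
            return EXTENDED_TTL
        return BASE_TTL

    def expired(self):
        return (time.time() - self.last_heartbeat) > self.ttl()
```

The point of the `alive and age > LONG_SESSION` guard is exactly the trade-off above: closed or short sessions fall back to the cheap default TTL, so the extra retention only pays for the sessions where it matters.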


Yes, we're trying a couple of experiments along these lines. Good intuition.


Running businesses and dealing with customers can be a major pain. There’s a lot of soft work in any business on top of the technical work.

Why bother with all that when you can simply charge an extortionate rate and customers will pay it anyway because it’s still profitable?


Public APIs get distilled, this is why Deepseek and Qwen are so competitive.

I am very confident that frontier models won’t be public at strong AGI levels, and certainly not at superhuman levels.


Because other than SWEs, very few segments extract significant value from cutting-edge AI at present. I suspect that for the average Joe conversing with their chat, GPT-4o was more than adequate (and really, when OpenAI tried to phase that out, the public revolted and they brought it back).

So companies might pay good money for these models for programming, but elsewhere I don't see where they've captured particular interest yet.


New companies can enter this space. Google's competing, though behind. Maybe Microsoft, Meta, Amazon, or Apple will come out with top-notch models at some point.

There is no real barrier to a customer of Anthropic adopting a competing model in the future. All it takes is a big tech company deciding it’s worth it to train one.

On the other hand, Visa/Mastercard have a lot of lock-in due to consumers only wanting to get a card that’s accepted everywhere, and merchants not bothering to support a new type of card that no consumer has. There’s a major chicken and egg problem to overcome there.


Gigawatts seems like more of a statement about the power supply and heat dissipation of the actual facility.

I'm assuming that if you have more efficient chips, you can cram more of them in to make use of the spare capacity?

Trying to measure the actual compute is a moving target since you’d be upgrading things over time, whereas the power aspects are probably more fixed by fire code, building size, and utilities.


Measuring data centers in watts is like measuring cars in horsepower. Power isn't a direct measure of performance, but of the primary constraint on performance. When in doubt choose the thermodynamic perspective.


Gigawatts are units of power; gigawatt-hours are units of energy.

The equivalent of cars would be pricing by how much gas you burned, not horsepower.


1 horsepower = 745.7 watts


Yes, and those are both units of power, not energy.


This conversation is confusing because OP didn't use the same units as the person in the quote.


I mean a single nuclear reactor delivers around 1GW, so if a single datacenter consumes multiple of those, it gives a reasonably accurate idea of the scale.
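To put rough numbers on that scale (a back-of-the-envelope sketch; the 2 GW data-center figure is hypothetical, and the ~1 GW reactor output is a typical round number):

```python
# Power (GW) is a rate; multiply by time to get energy (GWh).
reactor_output_gw = 1.0    # a typical large nuclear reactor: ~1 GW electric
datacenter_draw_gw = 2.0   # a hypothetical multi-gigawatt data center

reactors_needed = datacenter_draw_gw / reactor_output_gw   # 2 reactors' worth

hours_per_year = 24 * 365
annual_energy_gwh = datacenter_draw_gw * hours_per_year    # 17,520 GWh per year

# The horsepower comparison from upthread, for reference:
watts_per_hp = 745.7                                       # 1 hp = 745.7 W
print(reactors_needed, annual_energy_gwh, 1000 * watts_per_hp)
```

So "a data center measured in gigawatts" really does mean "whole power plants' worth of continuous draw," which is why the power figure, not a compute figure, is the headline number.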


I mean yeah, you pay for the internet. But many sites are free to use only due to ads.

Such as news and magazine sites, many of which are actively dying due to a lack of revenue.

I personally wish these sites could all switch to paid models, because I also don’t like ads.

But absent that, I’d like to support the sites I use so that they don’t go out of business.


If their business depends on psychologically manipulating me into acting against my own best interests then I hope they go out of business.


I have expensive online subscriptions to the New York Times, Wall Street Journal, and Washington Post. Nevertheless, they are FILLED with ads, popups, autoplaying videos, and dark patterns. Just saying: there's no refuge.


True, but that doesn’t invalidate what I said about the vast majority of sites that aren’t globally known, prestigious news companies that people are willing to pay an expensive subscription for.

Most publishers of content online are ad supported and struggling, and I want to make sure I’m contributing to their revenue somehow.

I don’t feel bad about blocking ads on sites I pay for though.


here's an idea: don't use those sites.


That’s a starting spot, but how about some testing and benchmarks?

Where’s the value added if the person just tells Claude to do it and then submits a PR?

The maintainers may as well vibe code it themselves if that’s all the work the would-be contributor is going to put into it.


if it works it works

We live in a wholly unoptimized world because available resources have been so abundant while the benefits of optimizing have been so low. That has flipped now, and there is tons of low-hanging fruit to optimize.

I agree that benchmarks would be great, but that's only relevant to this one topic, not to the overall concept of agentic-coded pull requests itself.


It's relevant in that it's an example of people doing the easy part (the coding) and skipping the hard part (benchmarking it, proving it works, and proving it provides value).

A PR without evidence that it works, and without any expectation of the benefits the new feature would bring, is kind of worthless.
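The kind of evidence being asked for here doesn't have to be elaborate. A minimal before/after micro-benchmark sketch, where `old_impl` and `new_impl` stand in for the real code paths being changed:

```python
import timeit

def old_impl(data):
    # Placeholder for the code path before the PR.
    out = []
    for x in data:
        out.append(x * x)
    return out

def new_impl(data):
    # Placeholder for the code path the PR introduces.
    return [x * x for x in data]

data = list(range(10_000))

# First, show the change is behavior-preserving.
assert old_impl(data) == new_impl(data)

# Then measure both under the same conditions; take the best of several runs.
t_old = min(timeit.repeat(lambda: old_impl(data), number=100, repeat=5))
t_new = min(timeit.repeat(lambda: new_impl(data), number=100, repeat=5))
print(f"old: {t_old:.4f}s  new: {t_new:.4f}s  ratio: {t_old / t_new:.2f}x")
```

Even something this small forces the contributor to demonstrate both correctness and the claimed benefit, which is exactly the work that otherwise lands on the maintainer.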


It might work, but what's the point in sharing it if anyone can do the same in those 30 minutes with minimal effort?


> if it works it works

If it works in one case that doesn't mean it works consistently or well in the general case

I've made lots of things with Claude Code that just work... until I do things in a slightly different order and the whole thing explodes


Who says it works if the “author” isn’t thoroughly testing and reviewing it?

People who do this want the fun part of pretending they’re implementing a feature without actually putting in the hard work it takes to make something for real.

They want the repo maintainers to do all the hard, boring parts while they have fun. As if maintainers of open source projects don’t have enough thankless work on their plates. Good luck with that!

