Hacker News | nikcub's comments

a) llms are good at writing typescript

b) typescript fixed a lot about javascript and is somewhat decent

c) multiple fast and performant runtime engines

d) deployment story is php levels of easy

that's it.


> d) deployment story is php levels of easy

Yeah just invalidate the SSR cluster for your tiny footer update and let it prewarm until coffee break. Easy.


interesting that you only need ~150 stars on a project for it to be in the top 1%

Let's establish a roving band of ~150 GitHub users that go around 1% things.

the fact that somebody was able to fork it and remove behaviour they didn't want suggests that it is very open

that PR #12446 hasn't even been resolved as won't-merge, and the last change was a week ago (in a repo with 1.8k+ open PRs)


I think there’s a conflict between “open” as in “open source” and “open” as in “open about the practice”, paired with the fact that we usually don’t review software’s source scrupulously enough to spot unwanted behaviors.

Must be a karmic response from “Free” /s


This is analogous to when Google launched Gmail with 1GB of storage and then a bunch of third-party apps cropped up that took advantage of it to use it as a generic online file storage drive.

There was GMailFS[0] and Gmail Drive[1] - this was before S3 and Dropbox, at a time when web hosting would give you ~10MB or so of space.

Google updated their ToS and shut down accounts that used the service in unintended ways via these apps - because obviously the 1GB of storage was a loss leader into Google's ecosystem (and it worked).

Same thing today - "unauthorized" third parties taking advantage of a loss-leading[2] deal - complete with similar trademark violations to boot[3].

Google have more cash to burn in the AI race so can be more forgiving today in how their codex plans are used. Anthropic are still a private company and can't.

[0] https://handwiki.org/wiki/GmailFS

[1] https://techcrunch.com/2005/07/31/profile-gmail-drive/

[2] it's a big question just how large a loss leader the Max plans are, considering the fixed harness, prompt caching, etc., but the point still stands: you're getting up to $5k of RRP tokens for $200

[3] Clawd Bot -> OpenClaw
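Footnote [2]'s point can be sanity-checked with trivial arithmetic, using only the figures from the footnote (whether a given subscriber actually consumes that many tokens is an assumption):

```python
# Back-of-envelope using the figures from footnote [2]:
# up to $5,000 of tokens at list (RRP) API prices on a $200/month plan.
rrp_token_value = 5000  # dollars of API-priced tokens a heavy user can consume
plan_price = 200        # dollars per month for the subscription

discount_multiple = rrp_token_value / plan_price
print(f"up to {discount_multiple:.0f}x list price")  # → up to 25x list price
```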


This is more like if Google took action against Thunderbird and open-source email clients


No, because in those cases you're still a user of gmail. When you tell people your email address, or send people email, and it contains "@gmail.com", you're still implicitly advertising for Google. From Google's perspective that's still worth the few KB per day of bandwidth and 1GB storage (which the vast majority of people never use the entirety of, anyway) they're giving away.

But when you use gmail accounts as file storage, you're both a higher-cost user and also doing nothing to further Google's ecosystem (since the email address itself is probably not being used for genuine messaging at all).


And here, you're still using Claude Opus, and when people ask you what you used, you'd say OpenCode client with Claude (Thunderbird client with Gmail).

As analogies go it's pretty close.


there is nothing about claude code that prevents you from using it for non-coding use cases. nothing that happens in opencode, or any harness for that matter, is hidden from anthropic. nor does opencode allow access to some nefarious use case that claude code does not.

the difference is not like the difference between gmail and gmailfs, as you seem to be misunderstanding. a more accurate comparison would be the difference between curl or httpie vs postman.


It's not analogous at all because Google intentionally provided interfaces for those clients and even instructions for using them.

An analogous situation would be if someone reverse engineered the Google Maps API and provided their own app that showed maps using the Google Maps data.


And if Google Maps charged per tile viewed, so the user pays the same amount regardless of which maps client they used, would your opinion hold?

I get that it’s a ToS violation, but I’m saying it shouldn’t be. They’re trying to make the harness the moat because they all have no moat.


> And if Google Maps charged per tile viewed, so the user pays the same amount regardless of which maps client they used, would your opinion hold?

Yes. Why wouldn't it hold?

Anthropic has a pay-per-token API. You can use OpenCode with it.

Maybe my consistency comes from having worked with contracts and agreements in the real world, where the end user doesn't get to pick and choose which terms they want to abide by.

When you sign up to use a service, you're not signing up to use it however you would like, on your own terms. You're paying for a service that they offer. They are not obligated to continue offering it to you if you try to use it a different way.


Anthropic has no issue with OpenCode using Anthropic's API, which does charge per token.


Google explicitly allows third party email clients to work with Gmail, so no that hypothetical does not apply to this situation at all.


My point is that model providers are just a compute service, and should have no say in what sends the data, or displays the data. Especially when they only bill based on the quantity of data.

They are basically a utility.


They have an API for exactly that. You can use it.

They offer a separate plan with discounts for use with their tools. You can also choose to take advantage of those discounts with the monthly fee, within the domain where that applies. You cannot, however, demand that the discount apply to anything you want.

You can argue about what you want it to be all day long, but when you go to the subscription page and choose what to purchase it's very clear what you're getting.

> They are basically a utility

Utilities like my electric company also have different plans for different uses. I cannot, for example, sign up for a residential plan and then try to connect it to my commercial business, even though I'm consuming power from them either way.

Utilities do not work like that. They do have contractual agreements about how you can use the resources provided.


> Google have more cash to burn in the AI race so can be more forgiving today in how their codex plans are used.

Even with the larger cash pile to burn, Google is in the middle of its own controversy over what many feel is a rug-pull in how Gemini "AI credits" work and are priced.

See:

https://www.theregister.com/2026/03/12/users_protest_as_goog...

https://old.reddit.com/r/google_antigravity/comments/1rv4cec...

etc


This argument is predicated on Anthropic losing money on the subs, but I'm not sure that's a cut and dried argument. OpenAI have said publicly that they're (very) profitable on inference, and they're much cheaper than Anthropic. I suspect this is just artificially trying to create a moat. The problem is their moat is not as sticky as they think it is - I completely ditched Claude for Codex a while ago, my money now goes to OpenAI, and I'm very happy with it. For a while Claude was noticeably better, but that's not the case any more - in my case I prefer Codex.


They aren't public companies (yet). They are allowed to just lie about these things. It's also not really reasonable to count only inference compute as a cost, since none of these companies could stop doing R&D without being abandoned for having worse models within a year or two.


> They are allowed to just lie about these things.

That would turn into investment fraud the moment they IPO.


So what does OpenAI do differently from Anthropic that allows usage everywhere via a ChatGPT subscription?

Hemorrhaging money more than Anthropic?


If anyone has a better theory I'd love to hear it, but going by Occam's Razor that's the most likely explanation that I would pick.


Here's one possibility: Anthropic understands the value of the brand and the harness and that those two things are connected, specifically because they came from behind. OpenAI almost accidentally launched a global brand overnight. ChatGPT went from nothing to the kind of English word you hear in non-anglophone countries in about a month. Millions and millions have used it (at least once) and more people associate it with AI than use it. OpenAI's problem is managing the big industry links so that by the time the hype cools down, they're already plugged into tools. Their "moat" is that the number of companies that matter is actually small, and all those companies like predictable, enterprise-shaped solutions with contracts and stuff. Unlike developers who might switch their subscriptions quickly and absorb the productivity cost of switching (or minimize that cost), these big companies don't want to be constantly optimizing compute vs rental rate. They want to convert an unruly value (programmer productivity) to something easy, not replace it with a scheduling or optimization problem.

That was working ok until Claude, specifically Claude Code, showed up. This was a really useful code-writing harness (that also signed your commits, advertising itself to everyone) that took what are essentially very similar models and made Opus feel like the future of software while GPT 5.2 and friends are just code agents. The performance, ability to handle long-term tasks, all of that was basically similar, but the harness oriented the model to reason, shell out sub-agents, write scratch code, add console logs - all the sorts of things that 1. seem like science fiction, and 2. improve output a little. Then from fall of last year to now you don't have developers saying "look what I made with LLMs" or "Look what I made with AI" but "Look what I did with Claude". There are not very many blog posts out there about the future of software being re-written due to GPT 5.2 getting autocompaction, but that same feature spawned thousands of "oh shit!" posts about Claude.

That's not a more defensible moat than name recognition + small N for customers. It's a scarier position because if someone else figures out how to deliver the same result (Opus + Sonnet + Haiku in a managed ensemble) in a way that is sharp and viral, the same thing they did to OpenAI could happen to them. They still supply the compute, but the reason anyone gives a shit about them is their harness, which makes it look like more and better code is being written. If that's your situation, you gently write to the OpenClaw guy, and you threaten to cut off and sue OpenCode for using subscription sign-in. You don't do those things because of a numerator/denominator problem with token cost and monthly fees. You do it because someone using your models in a better harness is a clear brand threat.


Some have claimed that Codex has better token efficiency in their harness than Claude Code.


Ok, bear with me here.

Theory 1: the internet has been fully strip mined for all content and is now dead. See that graph of StackOverflow questions dropping off a cliff to zero. Nothing much worthwhile is being added.

Theory 2: they are all unethical as fuck and definitely learning off your data. You would be insane not to - theory 1 means all your free training data is gone, but all that corporate data is fresh, ripe and covers many domains that the amateurs on the internet never filled. You have to launder it some way of course, but it's definitely happening.

Theory 3: winner takes all. I don't care for "Claude" and your wishy-washy ethics performance. ChudAI has a better model and harness? I'm gone this evening.

Having all the users, even if they are exploiting you for cheap compute with their own harness, is essential.


Good theory and insight. Seems like that’s setting us up for some epic big co vs ai co legal battles for covertly training off sensitive and internal big co data


the $20 Pro plan would also have double off-peak limits - just set it to Sonnet and you'll get a reasonable level of output


I can't find a GitHub or email for Hannah - if you're reading this, I'd like to add Australian energy price data via Open Electricity[0] to the dataset (reach out via my profile)

[0] https://explore.openelectricity.org.au/



thank you!


chatgpt use should be in the default set since energy use of ai is so often in the news now - and more often in social media


so we're all going to hold onto sequoia like we did snow leopard. only reason i'm not buying a new mac at the moment is because it would force me to upgrade.

the situation is absurd.

fwiw switching to the sequoia beta channel in system settings killed the nag notifications for me (I believe the profile as defined in OP will stop all updates - which you probably don't want)


My historic "sticking points" have been macOS 9.1, 10.8, 10.13, & presently 13.2

I'm getting old enough that it's rational thinking: pre-AI/pre-Tahoe operating systems will accompany me into death.


But you probably don't want to receive all and only beta updates either, which is what the beta channel will give you.


> Claude Code this morning was about to create an account with NeonDB

I had the same thing happen. I use PlanetScale everywhere across projects and it recommended Neon. It's definitely a bug.


from my understanding, Anthropic are now hiring a lot of experts in different fields who write content used to post-train models to make these decisions, and they're constantly adjusted by the Anthropic team themselves

this is why the stacks in the report and what Claude Code suggests closely match the latest developer "consensus"

your suggestion would degrade user experience and be noticed very quickly


I guess that’s why I’m not seeing anyone trying to build a skills marketplace for agent skills files. The llm api will read in any skills you want to add to context in plain text, and then use your content to help populate their own skills files.


So I wonder about sharable skills? Like if it's a problem that lots of people have, I find the base model knows about it already.

But how to do things in your environment? The conventions your team follow? Super useful but not very shareable.

What's left over between those extremes does not seem to be big enough to build an ecosystem around.

Final problem, it seems difficult to monetise what is effectively a repo of llm generated text files.



That sounds too expensive to be viable when the giveaway phase ends.


That's how Google search worked back when it was at its most useful. They had a large "editorial team" that manually tweaked page ranks on a site-by-site basis.

The core graph reputation based page ranking algorithm lasted for a hot second before people started gaming it. No idea what they do these days.
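The graph-reputation algorithm being referred to is PageRank. A minimal power-iteration sketch on a toy three-page graph (illustrative only, nothing like Google's production system):

```python
# Minimal PageRank power iteration: a page's reputation is the
# damped sum of reputation flowing in from pages that link to it.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}  # node -> outbound links
damping = 0.85
rank = {n: 1 / len(links) for n in links}  # start uniform

for _ in range(50):  # iterate until (approximately) converged
    new = {n: (1 - damping) / len(links) for n in links}
    for src, outs in links.items():
        for dst in outs:
            new[dst] += damping * rank[src] / len(outs)
    rank = new

# Ranks sum to ~1; "c" ends up highest since it has two inbound links.
print(rank)
```

The gaming problem the comment mentions follows directly: anything that manufactures inbound links (link farms, comment spam) inflates the damped sum, which is why the pure algorithm didn't survive contact with the open web.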


Yeah but you can farm that out very cheap, and I don’t think they were even manually reviewing more than a small fraction of sites.

If you’re hiring experts to manually rank programming libraries, that’s a much more expensive position.

