Hacker News | skybrian's comments

There was a time when companies had terrible development practices and could forget how to build, test, and deploy software, but is anyone seeing that now? We have much better development practices nowadays.

It doesn’t seem much like the defense industry’s problems.


This still happens. Lots of my career has been figuring out what code is actually running in prod, and determining if it even works.

IMHO, it's a people thing. People developed better practices, talked about it at conferences, and maybe left the company. As a result, the knowledge spread. On the other hand, if the places where a skilled individual can work and hone their skills go away, the knowledge becomes scarce; it cannot spread anymore and will vanish. If you only program with AI and 5 people do the work of 100, then you end up in such a scenario.

I like using them for coding, but I'm wary of making software that depends on an unreliable, expensive remote API. I'd rather have the agent write code and have no runtime dependency.

It might be nice to have something simple and cheap for basic text classification, but I'm not sure what to use. (My websites are written in Deno.)


My understanding is that it's due to better drilling techniques. The industry learned a fair bit from fracking and they're learning more from experience as they apply it to geothermal.

No particular breakthrough, but there's a learning curve and they learn more as they do more. Other industries sometimes work that way, too.

https://www.austinvernon.site/blog/geothermalupdate2026.html


The LLMs do quite a lot of reading. The question is what to feed them. (What counts as good context?)

I think the lesson is to be careful about introducing incompatibility via the type system. When you introduce distinctions, you reduce compatibility. Often that’s deliberate (two functions shouldn’t be interchangeable because it introduces a bug) but the result is lots of incompatible code, and, often, duplicate code.

Effects are another way of making functions incompatible, for better or worse. It can be done badly. Java fell into that trap with checked exceptions. They meant well, but it resulted in fragmentation.

Sometimes it’s worth making an effort to make functions more compatible by standardizing types. By convention, all functions in Go that return an error use the same type. It gives you less information about what errors can actually happen, but that means the implementation of a function can be modified to return a new error without breaking callers.

Another example is standardizing on a string type. There are multiple ways strings can be implemented, but standardization is more important.
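
To illustrate the trade-off, here's a minimal Scala sketch (all names hypothetical, not from any real codebase): standardizing every fallible function on one broad error type keeps signatures stable, at the cost of precision about which errors can actually happen.

    // One shared error channel for every fallible function, analogous to
    // Go's convention of returning the built-in `error` interface.
    type Fallible[A] = Either[Throwable, A]

    def readConfig(path: String): Fallible[String] =
      if (path.nonEmpty) Right("port=8080")
      else Left(new IllegalArgumentException("empty path"))

    // The implementation can later fail in new ways (say, an IOException)
    // without changing its signature, so no caller breaks.
    def readConfigV2(path: String): Fallible[String] =
      if (path.startsWith("/")) Right("port=8080")
      else Left(new java.io.IOException(s"not absolute: $path"))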


You can also use type inference with union types, as in ZIO. So you could e.g. return a Result where the error type is `DatabaseError | InvalidBirthdayError`. If you're in an error monad anyway, and you add a new error type deep in the call stack, it can just infer itself into the union up the stack to wherever you want to handle it.
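
A minimal Scala 3 sketch of the idea (the real ZIO encoding differs; all names here are made up): the union error type is inferred as failure modes accumulate up the stack.

    sealed trait DatabaseError
    case object ConnectionLost extends DatabaseError
    final case class InvalidBirthdayError(input: String)

    def loadUser(id: Long): Either[DatabaseError, String] =
      if (id > 0) Right("alice") else Left(ConnectionLost)

    def parseBirthday(s: String): Either[InvalidBirthdayError, Int] =
      s.toIntOption.toRight(InvalidBirthdayError(s))

    // The error channel is inferred as the union of both failure modes;
    // a new error added deep in the stack just widens the union.
    def profile(id: Long, bday: String): Either[DatabaseError | InvalidBirthdayError, (String, Int)] =
      for {
        user <- loadUser(id)
        day  <- parseBirthday(bday)
      } yield (user, day)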

That will help locally, but for a published API or a callback function where you don't know the callers, it's still going to break people if you change a union type. It doesn't matter if it's inferred or not.

IIRC the ZIO solution is actually to return a generic E >: X | Y. The caller providing the callback knows what else is in E, and they're the only one who knows it, so only they could have handled it anyway. You still get type inference.

Or if you mean that returning a new error breaks API compatibility, then yes, that's the point. If you can now fail in a different way, your users need to handle that. But if it's all generic and inferred, it can still just bubble up to wherever they want to handle it, with no changes to middle layers.


If you declare specific error types and callers only write handlers for specific cases, then adding a new error breaks them. If you just declare a base error type in your API, they have to write a generic error handler or it doesn't type check.

In this way, declaring a type guides people to write calling code that doesn't break, provided you set it up that way. It makes it easier for the implementation to change.

Sometimes you do need handlers for specific errors, but in Go you always need to write generic error handling, too.

(A type variable can do something similar. It forces the implementation to be generic because the type isn't known, or is only partially known.)
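
For illustration, a hedged Scala sketch of that setup (hypothetical names): the public signature promises only a base error type, which guides callers toward a generic handler and leaves the implementation free to grow new error cases.

    sealed trait ApiError { def message: String }  // only the base type is in the API
    final case class Timeout(message: String) extends ApiError
    final case class NotFound(message: String) extends ApiError  // added later; no caller breaks

    // The signature promises only the base type, not the specific cases.
    def fetch(key: String): Either[ApiError, String] =
      if (key.isEmpty) Left(NotFound("empty key")) else Right("value")

    // Callers are guided to a generic handler, which keeps working as
    // new error cases appear.
    @main def demo(): Unit =
      fetch("k") match {
        case Right(v)  => println(v)
        case Left(err) => println(s"failed: ${err.message}")
      }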


Usually in Scala errors subtype Throwable, so as long as new union members continue to do so, if you wanted a generic handler (e.g. you just log and return) you could handle that. But you can also be more specific with actual logic, with the benefit that if you choose to do so and the underlying implementation changes, you detect it.

Go also basically requires you to write actually generic error code (if err != nil { return err }), so it feels like errors are more work than they are. In Scala (especially ZIO), propagation is all automatic and you just handle it wherever you'd like. The only code that cares about errors is the part generating them and the part handling them. But it's all tracked in the type system, so you can always see what exact errors are possible from what methods. And it's all simple return values, so there's no trickiness with exceptions (e.g. across async boundaries).
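
A rough sketch of that flow, assuming ZIO 2 on Scala 3 (error names made up): nothing between the failure site and the handler mentions errors, yet the possibilities stay visible in the types.

    import zio._

    final case class DbError(msg: String)
    final case class ParseError(msg: String)

    def readRow(id: Int): ZIO[Any, DbError, String] =
      if (id > 0) ZIO.succeed("42") else ZIO.fail(DbError("row missing"))

    def parse(s: String): ZIO[Any, ParseError, Int] =
      ZIO.attempt(s.toInt).mapError(e => ParseError(e.getMessage))

    // Failures from both layers propagate automatically; the error
    // channel is tracked as the union DbError | ParseError.
    def value(id: Int): ZIO[Any, DbError | ParseError, Int] =
      readRow(id).flatMap(parse)

    // Handle everything in one place; the result can no longer fail.
    val safe: ZIO[Any, Nothing, Int] =
      value(7).catchAll {
        case DbError(_)    => ZIO.succeed(-1)
        case ParseError(_) => ZIO.succeed(0)
      }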


I mean, the very point of a type system is to introduce distinctions and reduce compatibility (compatibility of incorrectly typed programs).

Throwing the baby out with the bathwater, like what Go sort of does with its error handling, is no solution. The proper solution is a better type system (e.g. a result type with generics handles what Go can't).

For effects, though, we need type systems that support them - but those are only available in research languages so far. You can actually just be generic in effects (e.g. an fmap function applying a lambda to a list could just "copy" the effect of the lambda to the whole function - this can be properly written down and enforced by the compiler).
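
Today's Scala can approximate that specific example even without a full effect system: you can be polymorphic in ZIO's environment/error parameters, so whatever effect the lambda carries is "copied" to the traversal as a whole. A hedged sketch (assuming ZIO 2; names made up):

    import zio._

    // Effect-polymorphic "fmap": whatever environment R and error E the
    // lambda carries, the traversal of the whole list carries the same.
    def mapEff[R, E, A, B](as: List[A])(f: A => ZIO[R, E, B]): ZIO[R, E, List[B]] =
      ZIO.foreach(as)(f)

    // Usage: the lambda's error type is "copied" to the result type.
    final case class OddError(n: Int)
    val halved: ZIO[Any, OddError, List[Int]] =
      mapEff(List(2, 4, 6)) { n =>
        if (n % 2 == 0) ZIO.succeed(n / 2) else ZIO.fail(OddError(n))
      }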


That’s a bit strong. A coding agent doesn’t know, but it’s pretty good at debugging problems. It can speculate about possible fixes based on its context.

Context: a few weeks ago, Anthropic signed a deal to buy "multiple gigawatts of next-generation TPU capacity" from Google and Broadcom [1]. There have been several previous deals, too.

Some people call this sort of thing a "circular deal", but perhaps a better way to think of it is as a very large-scale version of vendor financing? The simple version of vendor financing is when a vendor gives a retailer time to pay for goods they purchased for resale. This is effectively a loan that's backed by the retailer's ability to resell the goods. There's a possibility that the retailer goes broke and doesn't pay, but the vendor has insight into how well the retailer is doing, so they know if they're a good risk.

Similarly, Google likely knows quite a lot about Anthropic because Anthropic buys computing services from Google for resale. They're making an equity investment rather than a loan, but the money will be coming back to Google, assuming Anthropic's sales continue to rise as fast as they have been.

Also, if you own Google stock, some small part of that is an investment in Anthropic?

[1] https://www.anthropic.com/news/google-broadcom-partnership-c...


So yes, but that doesn't negate the circular investment aspect, for most intents and purposes.

The risk from this structure mostly has to do with how it affects market cap: companies using the value of their shares to fund demand for their services.

That's a risk.


I feel like the whole market at this point is just AI, since big tech other than Apple are all massively invested in it. Everyone owns either the S&P or the total world ETF, which are both heavily skewed towards big tech and this trade - so literally everybody is in it. It might go well for a few more quarters/years, but once something breaks or gets exponentially cheaper, this will take down the whole market with it.

It's just hard to tell the difference between "real" demand and "circular." That's the concern.

PG had an essay about this during the dot-com era, when he worked at Yahoo. IIRC... Yahoo's share price and other big successes in the space attracted investment into startups. Startups used that money to advertise on Yahoo. Yahoo bought some of these startups.

So... a lot of the revenue used to analyze companies for investment was actually a 2nd order side effect of these investments.

Here the risk is that we have AI investments servicing AI investments for other AI investments.

Google buys Nvidia chips to sell Anthropic compute. Anthropic sells coding assistance to AI companies (including Google and Nvidia). They buy Anthropic services with investor money that is flowing because of all this hype.

IMO the general risk factor is trying to get ahead of actual real-world use.

The AI optimists have a sense that AI produces things that are valuable (like software) at massive scale... that is output.

But... even if true, it will take a lot of time, and a lot of software, for the economy to discover this, go through the path dependencies, and actually produce value.

The most valuable known software has already been written. The stuff that you could do but haven't yet is stuff that hasn't made the cut. Value isn't linear.


While value isn't linear, prejudgement of value for allocation of resources is very imperfect.

A lot of the stuff that doesn't make the cut is stuff that does have value. When you're lowering the bar, remember it's a noisy bar - so a lot more good stuff is going to come through as well.


Yes.. I agree.

.. and that entropy can be where all the ultimate value is. That said... considering the context of the point at hand, it's important to start with diminishing marginal returns.

To give a simple example... Google and FB do not have "investable software opportunities" at hand. They've been searching everywhere for nails for their "build software" hammer. They are well resourced and risk tolerant.

The diminishing returns curve for "more software" is steep.

Good stuff coming through often starts with $100m markets becoming $1bn markets. That's not even noise at the scale they're thinking about. Long term, sure. The plausibility range is as wide as it has maybe ever been.

But... systemic value is hard to make.


Most places I've worked have roadmaps, i.e. investable priorities.

If you can burn through lower priority experiments quickly it's great!

They might be working on all of the super high level things they can think of, but there are always more A/B tests, more features, etc. that are just lower priority, and the chaos of scaling up the org to address them all is super linear whereas the return on going down the list is sub linear.

So you end up with an equilibrium. If the cost shifts, just like in econ 101, the output will change.


I'm starting to transition how we build software at our company due to the power of AI. No more: five code monkey contractors under a lead. Two top-notch devs are all that is needed now, unrestrained by sprints and mindless ceremonies. There is going to be a giant sucking sound in India.

I can't continue the current model. The dev that gets AI is done in five hours; the ones that don't are thrashing for the next two weeks. I have to unleash the good AI dev. I have the Product team handing us markdown files now with an overview of the project and all the details and stories built into them. I'm literally transforming how a billion-dollar company works right now because of this. I have Codex, Claude, and GitHub Copilot enterprise accounts on top of Office 365. Everyone is being trained right now, as most devs are behind.


> No more: five code monkey contractors under a lead. Two top-notch devs are all that is needed now, unrestrained by sprints and mindless ceremonies.

This doesn't tell me anything. Two devs who cared and didn't have a bunch of pointless meetings could already, and regularly did, scoop the big tech teams.

There were always two ways to complete a ticket: one that does what the stakeholder wanted, and one that does what the ticket says.

But devs that care about the product and what the stakeholders need are rare, and finding one of them was already a significant bottleneck on most projects.

AI might be an accelerator, but we've yet to see if it's optimizing the part that was actually the bottleneck.


Ok... but extrapolating from this to "whole market" paradigms is speculative.

The (IMO) question isn't how you produce software, but what the value of this software is. Are you going to make more/better software such that customers pay more, or buy more? Are those customers getting value of this kind?

The answer may be yes. But... it's not an automatic yes.

Instead of programming, think of accounting. Say you experience what you are experiencing, but as an accountant: a 6-person team replaced by 2-3 hotshots.

So... Maybe you can sell more/better accounting for a higher price. But... potential is probably pretty limited. Over time, maybe business practices will adjust and find uses for this newly abundant capacity.

Maybe you lower prices. Maybe the two hotshots earn as much as the previous team.

If you are reducing team size, and that's the primary benefit... the fired employees need to find useful employment elsewhere in the economy for surplus value to be realized.

Mediating all this is the law of diminishing returns. At any given moment, new marginal resources have less productive value than the current allocation.


And the day you don't have that drug what do you do? If anything you are training people to become dependent on one or more subscription services.

Solidworks is also a subscription service.

Like the drug of electricity and Internet, running water, grocery stores?

I don't think the likelihood of "electricity and Internet, running water, grocery stores" being pulled out from underneath you (either by long-term failure or prohibitive cost changes) is anywhere near as high as it is for subscription-based AI tools (at least not in the US).

That was a factor with electricity early on as it was first put to use. The flip side of the infamous "does it make the beer taste better?" adage/nonsense is that, per the story, back then you had breweries build their own power plants, because electricity was just that useful. It took a while for the market to start feeling comfortable with reliability of electricity supply and price point.

Except the dev that gets AI and is done in 5 hours will have a poorer mental model of the code. Whether that's important depends on whether it bites you in the ass at some point.

Don’t really agree with this.

That dev is productive with AI precisely _because_ they have a good mental model.

AI like other tools is a multiplier - it doesn’t make bad devs good, but it makes good devs significantly more productive.


Don't agree - the dev is productive because they have a good mental model of the problem space and can cajole the agent into producing code that agrees with the spec. The trend is for devs to become more like product managers (which is why you see some whip-smart product managers able to build products _without_ human devs).

I believe these tools change the value of different skill sets in very profound ways. Being good with the rules of a programming language and its syntax is no longer as valuable as it used to be.

Understanding the problem space is becoming more valuable. Strength in architecture of a solution is another skill that is becoming very valuable.

We are close to getting to a point where someone with an overall general (and perhaps not very detailed) understanding of arch and design, a good understanding of the problem space, and good taste in usability will be able to create awesome solutions.

I can't wait to see these solutions being created by one or two person teams.


But does it matter?

If you write a program in Python or JavaScript, you have a terrible mental model for how that code is actually executed in machine code. It's irrelevant though, you figure it out only when it's a problem.

Even if you don't have a great mental model, now you have AI to identify the problems and generate an explanation of the structure for you.


No, but you have a great mental model of the interface between your problem domain and the code, which is where you can effect change.

Outsourcing that to an AI SaaS might be OK, I guess. Given past form, there's going to be a rug-pull/bait-and-switch moment when dividends need to start paying out.


The effect of JavaScript or Python code is well defined - developers have an excellent model of what it will do.

The performance - how that code is executed on the machine - is what you were referring to. "As if" is the key to optimization.


> It's irrelevant though, you figure it out only when it's a problem.

For the past decade people have been clawing their eyes out over how sluggish their computers have become due to everything becoming a bloated Electron app. It's extremely relevant. Meanwhile, here you are seemingly trying to suggest that not only should everything be a bloated, inefficient mess, it should also be buggy and inscrutable, even moreso than it already is. The entire experience of using a computer is about to descend into a heretofore unimaginable nightmare, but hey, at least Jensen Huang got his bag.


That is the doom side. However, AI has found and fixed a lot of security issues. I have personally used AI to improve my code's speed; AI can analyze complex algorithms and figure out how to make them much faster in ways I could do myself as a developer, but it's a lot of work that I typically wouldn't do. Even just writing various targeted benchmarks to see where the problems really are in my code is something I can do, but it would be so tedious I often would not bother. I can tell AI to do it and it will write those.

Only time will tell which version of the future we end up with. It could be good or bad and we will have to see.


In terms of runtime performance of applications, AI is a net win. You can easily remove abstractions like Electron, React, various libraries. Just let the AI write more code. You can even do the unthinkable and write desktop native again.

> literally everybody

I personally make sure I really diversify, so that when I buy funds, I buy those with stocks of EU companies which pay dividends. AFAICT there are 0 European AI companies that pay dividends.


There are zero US pure-play AI companies which pay dividends, right?

You have to go pretty far down the list of holdings (under "Holding details") to find any big bets on AI:

https://www.vanguardinvestor.co.uk/investments/vanguard-ftse...


For tax reasons most companies are avoiding paying dividends. It still happens but it's not nearly as common and companies are trying to get away from it because for many investors it is better not to have dividends paid.

>Companies using the value of their shares to fund demand for their services.

That's not what's happening here though. Google isn't using the value of its shares to fund demand. Google is using its own cash flow to fund this demand from Anthropic.

The question is whether Anthropic has demand from end users for the capacity they are buying from Google (that's a yes I guess) and whether that demand is profitable for Anthropic (that's a question mark).


True.

Regardless, (a) its ability/desire to make such investments is still driven by stock-price optimism and (b) these transactions' "signal" can have a similar, warping effect.

In this case the transaction creates demand for Google's services and also funds Anthropic's growth... which represents demand for Google's services.

"Loop" is an approximation of an analogy. The risk is that enough of such transactions create a dynamic that distorts feedbacks.


>(a) its ability/desire to make such investments is still driven by stock-price optimism

I don't think it has much to do with the stock price at all. Current platform oligopolists fear the rise of new platforms. They want a foot in the door for strategic reasons.

What could happen is that frontier labs like Anthropic and OpenAI never become platforms and turn out to be providers of a largely commoditised, low margin service.

In that event, current valuations are too high. But Anthropic's valuation doesn't seem extreme to me. Their $30bn annual run rate is valued at $380bn.

Given this price and Anthropic's strategic value, Google's investment seems reasonable.


But OpenAI/Anthropic are not selling the compute as they're buying that from Google/Amazon/etc.

So they're selling the transformation, or the model. Or the ability to make a model. And their brand and their harness.

And it seems like the model is definitely not worth 380 billion. Models depreciate incredibly fast. There are lots of models and the other models aren't that far behind.

And it seems like the harness is not worth much as there's already open source alternatives that people claim are better.

And all these companies are paying lots of money for these AI training experts.

But I suspect that any regular Hacker News reader with 10 years of dev experience could become a training expert in months if allowed to play with a load of compute and a lot of data for a bit.

Just like any of us could have become a data scientist, this stuff is not particularly hard. Random horny dudes on the internet are putting out loras and quantized models in days against the open source image models.

So what's worth 380 billion exactly? The brand?

These valuations just look really off. Not by one order of magnitude, but more like by 4 orders of magnitude. Like 380 million might be a reasonable valuation, but not billion.

What I also don't get is that it's pretty obvious to me that the Europeans should all be spinning up their own, not necessarily massive, data centers and throwing a few billion at some guys in Cambridge or Stockholm or London or Berlin to make their own AI models.

Only the French have done it.

But instead the rest seem to be trying to court Anthropic or OpenAI to build data centers. Which is just stupid politics given what's happening in the world right now.


The technical task is not the business task... unless the task really is a commodity.

Coding Facebook isn't rocket surgery either. Neither is Visa, Salesforce, or many other tech-centric companies. Replicating their business model is.

Those are locked in by network effects. Path dependencies and suchlike can play a role. But... the upshot is that Anthropic, OpenAI, and whatnot have the model people are using for work.

A government-sponsored model isn't a bad thing to have, but I think it's unlikely (though possible) that it will also be the product people want to use or the business that succeeds.


>So what's worth 380 billion exactly? The brand?

Whatever it is that leads to a $30bn run rate, growing >200%. Right now it's having the better model and being able to show how to use it in specific verticals.

But I suspect in the long run only platforms have high margins (and they will need margins not just revenues to justify their valuation). Are they becoming platforms? Google seems to think (or fear) that they might.


Not directly related to the valuation question you asked, but for Google there's a lot of value in getting as much Anthropic workload to run on their hardware as possible. The value comes from getting the insights and learnings of running these workloads, especially when they run on custom Google hardware. That hardware will get better as a result and increase the likelihood that Google has world class AI hardware in the future.

I can't say with any confidence that the $40B is a reasonable amount to pay for that value, but it doesn't seem unreasonable over a multi year time horizon given the stakes.


Moonshot (Kimi) and DeepSeek trained their models on Chinese GPUs, with little capital, and are now raising at around $20B valuations.

Their latest models are arguably comparable to frontier ones. It is obvious that the valuations of the US companies are totally surreal now.


Apparently it's not obvious, judging by the evidence of the investment in them and their stock value.

Kimi and DeepSeek are in China and don't have access to the US capital market.

Because everybody is playing the same game?

>So what's worth 380 billion exactly? The brand?

>These valuations just look really off. Not by one order of magnitude, but more like by 4 orders of magnitude. Like 380 million might be a reasonable valuation, but not billion.

Or maybe the USD isn't worth that much now.


Can you share more on this market cap risk? I see legit stability / correlation risks but can’t work out the market cap risk mechanics.

The cash was just sitting on their balance sheet not increasing Google’s valuation, turning it into revenue is value creation.

The equity transfer is a bit murkier, Google I guess gets to mark this on their books according to Anthropic’s latest valuation, but isn’t this more of a volatility swap than conjuring market cap? Analysts are not going to apply $30b of future spend at current PE, they will additionally discount this by the P(Anthropic demand crashes). So it’s not like this just boosts their market cap for free.

Of course Google’s balance sheet now has higher vol equity instead of cash for their products.


The tech industry goes through investment phases to produce oligopolies that it then turns around and enshittifies, parasitizing income off what it has built. Venture capital, acquisitions, acquihires, circular investments - it's been incestuous for years. The question is whether competition from China's sophisticated tech sector, which already surpasses the US in many areas, will put a pin in these plans this time round.

I don't agree with the "full cynicism" POV, but I do agree that TechnoChina's existence is a potential paradigm shifter.

But generally speaking, AI is currently pretty competitive and robust. Straightforward business models where users pay money and select the best deal are central. Market power is relatively dispersed.

So... Idk. Nvidia doesn't have competition. But Intel didn't have much competition either, and they drove the Moore's law bus for a long time.

Hardware has been less prone to enshittification. Maybe it's because the demand curve for compute doesn't have natural limits. Drive down the price, and demand grows by enough that the total market grows.


There is a giant capital outlay required to produce a competitive model. Joe Schmo can’t jump into this market. Best he could do would be to ingratiate himself to an existing funding cartel. The moat surrounding a handful of market participants is billions of dollars wide.

There’s competition now among the American companies (who have a head start in this space) as always happens as the professional oligopolists try to manufacture their footholds in the new market.

Nor is it cynical to objectively appraise the interests and economics at play. People aren’t playing circular financing games out of the goodness of their hearts.


Nvidia clearly has competition, that's what this deal with Google is about (TPUs).

Economics is circular. The baker buys shoes from the cobbler, and the cobbler buys food from the baker.

Yes but the baker doesn't just give the cobbler money to buy bread and take a share in the shoe shop in return.

But there's nothing wrong with that. It's not a circle; it's an exchange. Like any transaction.

I like this abstraction. If the baker says “I could sell 10x more if only I had shoes that allowed me to bake faster” then the cobbler says, “split the growth with me and I’ll craft you all the shoes you want.”

The claim was that circularity is evidence the business activity is fake.

Those are tangible items. Here, the baker is buying shoes from someone who says they're going to be a cobbler some day.

It’s no different with services. Making deals with potential cobblers seems like a fine market activity.

To be honest, I think "vendor financing" is still a very risky premise.

Vendors may be positioned to know how a customer is doing, but they're also incentivized to overestimate how well a customer is going to perform.

GE Capital (edit: and GMCA) is a great example of how seemingly reasonable vendor financing can cause the lender serious problems.


The risks are different, but there's no getting around that the value of any investment is based on future cash flows and that's speculating about the future.

To the extent that Google and Anthropic are competing for AI business, Google is somewhat hedged against Anthropic winning market share. They still get data center revenue and they own equity, so that’s a consolation prize.

On the other hand, it’s increasing Google’s investment in AI, in general.


    > To be honest, I think "vendor financing" is still a very risky premise.
Are you aware that all heavy industry in all highly developed nations make extensive use of vendor financing to sell their products? Siemens is a perfect example of a well-run, stable, industrial giant. They offer vendor financing for large purchases. Same for the "heavies" (Mitsubishi, Kawasaki, IHI, Hyundai, Doosan, Hanjin) in Japan and Korea.

If anyone is interested to learn about the damage that the financialisation of General Electric (USA) brought upon itself, you can ask ChatGPT to tell you the story. It is too long to repeat here.

Here is a sample prompt that I used to remind myself:

    > I am interested in the history of General Electric and the trouble that their financing units brought in the early to mid 2000s. Can you tell me more?

Are we replacing "Let me google that for you" with "Here is a prompt to feed ChatGPT" now?

Edit: I am not asking whether ChatGPT is better than Google Search, I am asking after the standard dodge of citing one's sources.


Fair point/question. For many of my HN responses, I first ask ChatGPT for a bit of information about the topic. For the case of GE Cap's wrecking of parent GE with excessive financialisation, I could only loosely remember the details from the 2000s. It is a long time ago! That prompt that I shared gave a reply that was 100s of words. Too much for copy/pasta, and too hard for me to summarise briefly. Instead, I decided to share the prompt. It is not my intention to dodge sources. Plus, the newest versions of ChatGPT are pretty good about sharing sources. (Of course, the quality of sources can be debatable.) In short, it was not my intention to be snarky by sharing my ChatGPT prompt.

EDIT ---- Also, the OP was so brief about GE Cap, I realised that most readers under 30 (maybe 35) will have almost no knowledge or memory of that economic history. I wanted to offer an "intellectual carrot" (ChatGPT prompt) for anyone wishing to learn more. ----

What bothered me most about the original post was that the person was putting all vendor financing in the same "bad" bucket. I disagree. I would characterise GE Cap as an infamous example! They were the worst of the worst in a generation (25 years). Most vendor financing is very boring and is used to buy big heavy things with very long operational lives. If the buyer goes bankrupt, it is (relatively) easy to repossess the big heavy thing and sell it again (probably with vendor financing again!).


Well, at the very least one thing I would caution against in "prompt sharing as a way to lead people to information" is that a chat bot is far less deterministic than even a traditional web search (let alone a link to a static source): any other user putting in the prompt won't get the same explanation you got, and thanks to hallucination they may get a wildly different answer or a case where this time around the bot wound up misunderstanding what was asked.

Yes, 'cause Google has been giving crap results since long before ChatGPT was a thing, and it only got worse. Before AI it was "let me google that on reddit for you".

Very tangentially related comment, but I remember seeing a post on a local Facebook clone with a prompt to throw at Claude to "make a custom YouTube downloader for MacOS", so the general "Here is a prompt to feed an LLM" is somewhat real for some, apparently

It's a good use case really – it'll tell it differently according to what it knows about your background; if you 'just Google it' you'll get the same maybe-appropriate results as anyone else.

Google search has gone way downhill after they nerfed it and then did nothing to prevent the flood of AI-slop SEO websites. So unfortunately, instead of sharing links, everyone now gets sent to the inefficient text generator that hallucinates nonsense and colors the average summary of a topic by whoever trained it and your most recent chat history.

I haven't run a Google search in two years. Your comment just made me realize that. Doing a Google search is like trying to watch cable after being on YouTube for years.

I use different search engines than Google. They have similar issues, but some are better at ignoring the slop.

I just cannot justify the environmental impact and surveillance of using LLMs for everything. I prefer to summarize recent information myself. LLMs are not particularly good at it.

Funny thing about the cable analogy. Ever since all the streaming providers started cranking up prices while still forcing users to see hundreds of ads, my family has been buying second-hand DVDs. So we have regressed from streaming to right after cable. I know one family that went back to cable; they do still watch YouTube here and there, but they got sick of it.


Yes.

> Are you aware that all heavy industry in all highly developed nations make extensive use of vendor financing to sell their products?

The OP did mention GE Capital, the motherlode of all heavy industry vendor financing. And of massaging the accounting books in order to increase shareholder value in the short term, also.


    > motherlode of all heavy industry vendor financing
I doubt they are bigger than other national "heavy industry" champions from East Asia and Western/Central Europe. Without checking, I would guess that the global leaders are Boeing and Airbus.

It's odd that you don't care enough to at least confirm your arguments are right.

I'd suggest learning more about this subject instead of assuming you already know enough.


Back in the day GE (including GE Capital) was, on paper at least, the largest company in the world when it comes to market cap; my memory is fuzzy, but I’d say that happened just after the dot-com crash and going into the Great Recession. Greater than a heavy industry company like Samsung, yes. Again, this was in big part a result of GE Capital doing very scammy things, but for a good few years Jack Welch was regarded as an actual business guru.

GE Capital was a different creature, riding the line of fraud in some ways. They misapplied accounting rules and had to write down or capitalize over $20B for long term care insurance.

That's what brought them down, but that could bring down anyone. My point is that vendor financing turns non-finance companies into finance companies, and brings along a huge can of worms.

That’s fair. You become vulnerable to a guy like Jack Welch.

The vendor financing stuff I saw (as a junior / intern at a supplier) in those days was a reflection of that culture. They’d lease capital equipment through GE Capital, and pack it with other stuff to the limit of their accountants’ appetite for risk. (You can usually roll 20% of the value into services or peripheral stuff.) I remember one deal where we had to run around and buy office supplies and tools with a corporate card. I did 4 Honda Civics’ worth of laser toner.

GE was reporting their own capital equipment and office supplies as revenue on the Capital side. :) But that is penny ante stuff in terms of what they did.

The AI stuff is a shady variation of that, but likely far worse as we’ve fired all of the watchers.


I don't know the full history of this story, but I honestly wonder if this type of scandal is still possible in the United States. After Enron and WorldCom, the US introduced Sarbanes-Oxley reporting regulations. Additionally, after the Global Financial Crisis of 2008/2009, there was a dramatic increase in regulations for banks (of all kinds) and insurance companies.

.. yet today we have Kalshi, Polymarket, et al.

Those are private gambling businesses akin to a casino, not publicly listed businesses subject to the aforementioned regulations.

GP said: > there was a dramatic increase in regulations for banks (of all kinds)

and if I can deposit funds, make "investments" (gambling or not.. to me most stock investments are gambling anyway since they pay out in dividends based on quarterly profits) and withdraw money, that is at its core a bank.

However, in these and many other cases they are now effectively unregulated ones. Partly because the executive branch responsible for said regulations have their hands in the pie.


>and if I can deposit funds, make "investments", and withdraw money, that is at its core a bank.

Not according to pretty much all countries. When a depositor is making investments, that is called a brokerage, not a bank. But these gambling websites are not making investments either.

Also, with the transition to digital currency, banks don't really have the same purpose anymore of storing and guaranteeing people's access to their money, now that money is entries in a digital ledger and there is no risk of losing the money (outside of political risks where someone edits the database). They have been obviated, but the system persists.

>most stock investments are gambling anyway since they pay out in dividends based on quarterly profits

This is not true, publicly listed companies' boards regularly choose to vary dividend amounts based on longer term cash flow, and you will be sorely disappointed if you are expecting dividends to land in your account as a function of quarterly profits.

And how do you expect a business to work if it did not vary dividends? No business knows the future, and dividends are the same as owners paying themselves a profit, so if owners do not know how much profit there will be in the future, how can dividends be a "known" quantity? They have to be based on profits; they literally are the owners distributing the profit amongst themselves.

Any venture in life is gambling, just like people exploring the oceans and traversing continents without knowing what was ahead of them was gambling. But that is a different type of gambling than risking money with no effect on the outcome such as on the aforementioned gambling websites and casinos.


lol yes.

The POTUS kids are players in Polymarket and Kalshi, and are running crypto grifts.

The SEC fired most of their investigators, hasn’t appointed members to key boards, and cancelled most of their contracts with FINRA. (Which has laid off a ton of people) Nobody is watching.

So there’s an open season for normal corporate bullshit, and if you’re personally committing felonies attributable to you, you make sure you do it in Florida, and pay a vig to the library fund for a pardon.

We’ll have a fun run, then everything starts exploding in mid 2027-2029.


GE Capital was not just vendor financing and its serious problems were not due to vendor financing. I don’t think it is a great example in any way.

$40 billion is about a quarter’s worth of profits for Google. They make that much every 3 months; what’s the risk?

Hat tip. Great point. To quote J Paul Getty: "If you owe the bank $100, that's your problem. If you owe the bank $100 million, that's the bank's problem." In this case, yes, the investment is large, but not bankrupting for Google if it goes wrong.

Assuming you mean GMAC, their biggest losses were not so much a result of vendor financing as branching into the real estate market.

Reciprocal agreements aren't new, sometimes they're used to gain access to a market the other party already has established a foothold in for other industry segments. These companies operate in the same general industry: tech/internet so it could be complementary services they are each after.

So far both of these companies have shown they suck at support so we know that's not it. It could be that it might help Anthropic to leverage Gemini in their competition with OpenAI and Google will take compute commitments.

Anecdata: I'm finding a lot of my "type random question in URL/search bar" has decent top Gemini answers where I don't scroll to results unless I need to dive deeper.


Funny how Gemini generally takes into account all the words you type whereas Google search tends to ignore most words you type or otherwise direct you to results for thematically (or grammatically or semantically) similar words to what you searched but otherwise wholly irrelevant.

Google crippling search to bolster AI is a dangerous game. But without people going to competitors, what's the recourse?


They're already crippling their AI to perform what look like sponsored searches.

The plural of anecdote is not data, but this does not feel like a one-off thing: I was trying to find where it would be possible to go for a reasonable holiday, and asked Gemini to list all the international airports in two named countries that had direct flights from my preferred departure airport. The response came back with a single proposed flight destination with "book here" prominently available.

Only once I told it that the search was NOT an impulse purchase intent and I really wanted to know the possible destinations - then did it actually come back with the list of airports that satisfied my search criteria.

Although if we are looking for the bright side, it did provide a valid and informative answer on the second try. I haven't had that kind of experience on SEO-infested Google search for quite a long time now.


I agree those results are handy, but I had several occasions where they turned out to be completely wrong. A 95% correctness rate is not good enough.

LLMs have a lot of issues with facts, because they are probabilistic and you typically only get one answer per query instead of multiple covering a larger space.

However, they are still useful in these cases if you know the above and use their output as a starting point to think and ask questions.


In another context I might see it as vendor financing. However given that Google and Anthropic are competitors in this segment and given that Google has previously invested in them I'd rather see this as a sort of bartered stock purchase presumably for the purpose of hedging against failure. If Anthropic wins the race and it turns out to be winner takes all and you happen to own half of Anthropic then you still win half of the immediate spoils even though your internal team lost. If you view losing the race as an existential threat then having all your eggs in the one basket is a terrible proposition.

Sure, since Google is both a supplier and a competitor, it’s both vendor finance and hedging. Also, it increases their investment in AI, in general.

Arguably, too much of this kind of hedging is anti-competitive. But that doesn’t seem to be much of a problem yet?


Are we stopping too early in this analysis, though?

Google versus OpenAI and Anthropic, sure, but Microsoft is deep into OpenAI. Google helping Anthropic is also putting MS into a corner (one that may even be shrinking? Copilot and OpenAI financing hurting their brand, rumours of deep displeasure at OpenAI's promises vs. returns).

Seen from afar, I see Google happy to provide TPUs for money (improving Google's strategic positioning), torpedoing confidence in LLMs with their search AI summaries, and using their bankroll to force larger competitors (MS in particular) to keep investments high regardless of performance and user revolts and internal tensions with Sam Altman's sales approach. Plus, Anthropic is in ‘the lead’ right now product-wise, so grooming them as a potential purchase would also seem to be a strategic hedge in the long term.


You make some good points, but this part feels like a wild overreach:

    > torpedoing confidence in LLMs with their search AI summaries
That is some real tin foil hat thinking.

Straightforward observations of market impact aren’t tin foil :)

Google didn’t launch LLM products despite being a tech leader, and has gotten piles of bad press for its misleading AI search summaries. They know how and why they suck. Google search is a highly popular, market-facing service packaging bad summaries as “AI”. Meanwhile, LLM searches threaten to disrupt Google’s primary cash cow (advertising around search).

Here on HN, on Reddit, and media writ large, a lot of the “AI” failure stories are not about ChatGPT hallucinations, it’s the shockingly wrong search summaries from Google, undermining consumer confidence and breaching trust.

ChatGPT and other LLM providers rarely show conflicting source material side by side with misleading text gen. The number one search provider who leads in some LLM tech does though, routinely, looking incompetent and generating negative “AI” sentiment through repeated failures at mass scale…

So the theory here is either that the best search org in the world filled with geniuses can’t tell they’re pooping on their own product and profitability and aren’t fixing it because they can’t/won’t… … or <tinfoil mode engaged>… Google already makes money and is happy with substandard product and market performance in the cases where it hurts the necessary hype critical to other businesses but not themselves (while also pre-positioning in case LLM search becomes essential).

Win/win/win strategy with a substandard product, versus Google not being aware of what their biggest product is doing.

Google’s AI summaries are doing lotsa work to make AI summaries seem terrible. I ascribe profit motives to their actions. Ascribing incompetence seems naive and irreconcilable with their strategic corporate history.


MS is not so deep with OpenAI; it's not all black and white. They have signed several distribution deals where Claude drives Copilot [1], and since Anthropic and MS are better aligned in the enterprise market, it makes sense. It also makes sense for MS not to lose ground anywhere at this point and to play with the best. Actually, any cash-rich company that is not OpenAI or Anthropic wants to be close by when either of the two needs money. That's the ultimate win they can aspire to right now: get a financial slice of frontier models on one hand while not losing revenue on the other, given the existential ordeal AI represents for them.

1. https://www.microsoft.com/en-us/microsoft-365/blog/2026/03/0...


> Arguably, too much of this kind of hedging is anti-competitive. But that doesn’t seem to be much of a problem yet?

By the time it is a problem, it will be too late.


How can there be a "winner takes all" situation with AI?

OpenAI led the game while they were best. Anthropic followed and got better. Now OpenAI is catching up again, and also Google with Gemini(?)... and the open-weight models are 2 years behind.

Any win here seems only temporary. Even if a new breakthrough to a strong AI happen somehow.


Recursive self-improvement is one argument. Otherwise winner-takes-all seems much less likely than an OpenAI/Anthropic duopoly. For the best models, obviously other providers will have plenty of uses, but even looking at the revenue right now, it's pretty concentrated at the top.

So if I'm Google I'd want a decent chunk of at least one of them.


What is the argument for a duopoly when Kimi and DeepSeek models are only months behind?

It’s a commodity in the making.


The argument is based on one of these companies hitting the singularity, making it impossible for any other company to catch up ever. I still think it's way more likely we'll see a typical S-curve where innovation starts to plateau. But even a small chance of it happening in the future is worth a lot of money today.

How does it follow that companies that are months apart will trip the singularity, and that this will prevent the others from doing so?

Who supplies the hardware for the singularity?


There's a massive gap in this singularity thinking. We ARE the singularity. It has been exponential all the way back to the Big Bang. First the stars, the solar system, life, consciousness, language, computers, the internet. Yes, it is speeding up, and that is exciting, because we are going to experience a lot in our lifetimes. We have a lot of exponential growth to go before progress becomes instant. There are physical limits, too. Power generation, for example. I can't believe what dumb shit people bet the world economy on.

That's certainly how it looks right now but where's the guarantee? What happens if it turns out that deep learning on its own can't achieve AGI but someone figures out a proprietary algorithm that can? That sort of thing. Metaphorically we're a bunch of tribesmen speculating about the future potential outcomes of the space race (ie the impacts, limits, and timeline of ASI).

Imagine such an AI exists. What good is AI that is so good that you cannot sell API access because it would help others to build equivalently powerful AI and compete with you?

If you gatekeep, you will not make back the money you invested. If you don't gatekeep, your competitors will use your model to build competing models.

I guess you can sell it to the Department of War.


> What good is AI that is so good that you cannot sell API access because it would help others to build equivalently powerful AI and compete with you?

It's awesome and world dominating: you just don't sell access to that AI; you instead directly, by yourself, dominate any field that better AI provides a competitive advantage in, as soon as you can afford to invest the capital to otherwise operate in that field. You start with the fields where the lowest investment outside of your unmatchable AI provides the highest returns, and plow the growing proceeds into investing in successive fields.

Obviously, it is even more awesome if you are a gigantic company with enormous cash to throw around to start with when you develop the AI in question, since that lets you get the expanding domination operation going much quicker.


To dominate the real world, you need a correcting feedback loop from reality. These feedback loops and regulations (in medical and other industries) take a long time to come back with good signals. So you are still time-bound by how fast your experiments are.

Yup. That doesn't really take a full-blown AGI on the path to ASI on the path to godhood - it'll take a bit better and more reliable LLM with a decent harness.

That's why I've been saying that the entire software industry is now living on borrowed time. It'll continue at the mercy of SOTA LLM operators, for as long as they prefer to extract rent from everyone for access to "cognition as a service". In the meantime, as the models (and harnesses) get better, the number of fields SOTA model owners could dominate overnight, continues to grow.

(One possible trigger would be the open models. As long as the gap between SOTA and open is constant or decreasing, there will be a point where SOTA operators might be forced to cannibalize the software industry by a third party with an open model and access to infra pulling the trigger first.)


Don't open models and competition between frontier providers both serve as barriers here? If a frontier provider pivoted as you describe it would certainly change the landscape but they wouldn't be unassailable without developing some sort of secret sauce that gave them an extremely large advantage over everyone else. They'd need a sufficient advantage to pull out far ahead of everyone else before others had a chance to react in a meaningful way. Otherwise the competitors that absorbed all your subscriptions would stack that much more hardware and continue to challenge you.

I think meaningful change to the current equilibrium would require at absolute minimum the proprietary equivalent of the development of the transformer architecture.


> If a frontier provider pivoted as you describe it would certainly change the landscape but they wouldn't be unassailable without developing some sort of secret sauce that gave them an extremely large advantage over everyone else.

Integration, and mindset. AI, by its general-purpose nature, subsumes software products. Most products today try to integrate AI inside, put it in a box and use to supercharge the product - whereas it's becoming obvious even for non-technical users, that AI is better on the outside, using the product for you. This gives the SOTA AI companies an advantage over everyone else - they're on the outside, and can assimilate products into their AI ecosystem - like the Borg collective, adding their distinctiveness to their own - and reaping outsized and compounding benefits from deep interoperability between the new capability and everything else the AI could already do.

Once one SOTA AI company starts this process, the way I see it, it's the end-game for the industry. The only players that can compete with it are the other SOTA AI companies - but this will just be another race, with nearly-equivalent offerings trading spots in benchmarks/userbase every other month - and that race starts with rapidly cannibalizing the entire software industry, as each provider wants to add new capabilities first, for a momentary advantage.

Once this process starts, I see no way for it to be stopped. Software products will stop being a thing.

Open models can't compete, because they're always lagging proprietary ones. What they do, however, is ensure the above happens - because if, for some reason, SOTA AI companies stick to only supplying "digital smarts as a service" for everyone, someone with access to sufficient compute infra is bound to eventually try the end-game strategy with an open model, hoping to get a big payday before SOTA companies respond in kind.

Either way, the way I see it, software industry as we know it is already living on borrowed time.


I don't understand where the unbeatable edge is supposed to come from here. Don't we already have this in the form of agents using tools? Right now it's CLI but it's not difficult to imagine extending that to a GUI coupled with OCR and image recognition in a way that generalizes.

So suppose ACo attempts to subsume Spotify or Photoshop or whatever. So they ... build their own competing platform internally? That's a lot of work. And now they what, attempt to drive users to it by virtue of it being a first party offering? Okay sure that's just your basic anticompetitive abuse of monopoly I guess. MS got in trouble for that but whatever let's assume that happens.

So now lots of ACo users are using a Photoshop competitor behind the scenes. I guess they purchased a subscription addon for that? And I guess ACo has the home team advantage here (anticompetitive and illegal ofc) but other than that why can't Photoshop compete? It just seems like business as usual to me. What am I missing?

If ACo sells widgets and I also sell widgets, assuming I can get attention from consumers and offer a compelling set of features for a competitive price why can't I get customers exactly? ACo's AI will be able to make use of either widget solution just fine assuming ACo doesn't intentionally sabotage me.

I think the more likely issue is that at some point the cost of building software falls far enough that it ceases to be a viable product category. You just ask an agent for a one off solution and it hands it to you.

Projecting out even farther, eventually the agents get good enough that you don't need to ask for software tools in the first place. You request X, the agent realizes that it needs a tool for that, builds the one-off tool, uses it, returns X to you, and the ephemeral purpose-built tool gets disposed of as part of the session history. All of this without the end user ever realizing that a tool to do X was authored to begin with.

So I guess I agree with your end outcome but disagree about the mechanics and consequences of it.

> Open models can't compete

They can though. There's a gap, sure, but this isn't black and white. Plenty of open models are quite useful for a particular task right now.


One of the most valuable software products in the world is Instagram. Tens of billions of revenue annually.

Any of Meta’s competitors could reproduce Instagram “the software” in every meaningful detail for (let’s say) $100M.

They still don’t have Instagram the product. Reducing that outlay to a few billion tokens doesn’t change that.

I guess I’ll believe this theory when Anthropic or OpenAI rolls out a search engine with an integrated ad platform that can meaningfully compete with Google. How hard can that be?


It's not clear to me that one horse-sized AI allows you to outcompete 100 duck-sized AIs in use by everyone else once you factor in the non-intelligence contributions that the others with weaker AIs bring to the table.

There's a lot more to building a successful product than how smart your engineers/agents are, how many engineers/agents you have, and capital.

Google, for example, can be extremely dysfunctional at launching new products despite unimaginably vast resources. They often lack intangible elements to success, such as empathizing with your customers' needs.

If we were in a world where AI was not already widespread, then I would agree that having strong AI would be an immense competitive advantage. However, in a world where "good enough" AI is increasingly widespread, the competitive advantage of strong AI diminishes as time goes on.


> Imagine such an AI exists. What good is AI that is so good that you cannot sell API access because it would help others to build equivalently powerful AI and compete with you?

At this point, if you can no longer safely drip-feed industry access to "thinking as a service" and rake in rent, you start using the AI yourself, displacing existing players in segment after segment until you kill the entire software industry.

That's pre-ASI and entirely distinct from the AI itself becoming so good it takes over.


If you assume the status quo - a powerful but not quite human-level AI - then you are most likely correct. However, one of the primary winner-takes-all hypotheticals (and to be sure, it remains nothing more than a wild hypothetical at this point) is achieving and managing to control a proprietary ASI. Approximately: constructing something that vaguely resembles a god.

Being unfathomably smarter than the people making use of it you could simply instruct it not to reveal information that would enable a potential competitor to construct an equivalent. No need to worry about competition; you can quite literally take over the world at that point.

Not that I think it's likely such a system will so easily come to pass, nor that I think humanity could manage to maintain control over such a system for long. But we're talking about investments to hedge against existential tail risks here so "within the realm of plausibility" is sufficient.


They're months behind now and have very low market share, so as long as they stay months behind, the duopoly/triopoly can hold.

Look at the "winner takes all" situation in web search. Of course other search engines exist, but the scale of the Google search operation allows it to do things that are uneconomical for smaller players.

The first to AGI, or a close approximation, is the winner. That’s what the investors in Anthropic and OpenAI are betting on.

I’d be willing to bet that the Venn diagram of investors in those two companies is nearly a circle.


"The first to AGI, or a close approximation, is the winner. "

But why? Assuming there is a secret undiscovered algorithm to make AGI from a neural network ... then what happens if someone leaks it, or China steals it and releases it openly tomorrow?


So, what will AGI be able to do that will make that bet pay off? Human-like intelligence is already very common. Vastly better than human intelligence seems like it would be worth the expense, but I don't know where we'd get suitable training data.

The bet is that they perfect a new kind of neural network that is roughly as good at learning as the human mind, measured in amount learned/experience gained per bit of information input.

Current LLMs are absolutely, stupidly inefficient on this front, requiring virtually all human knowledge to train on as a prerequisite to an early-college-level understanding of any one subject (granted, almost all subjects at that point, but what they have in breadth they lack in depth).
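
As a rough back-of-the-envelope for the size of that gap (both figures are loose order-of-magnitude assumptions, not measurements):

    # Loose order-of-magnitude assumptions, not measurements.
    human_words_by_adulthood = 1e9   # words a person hears/reads in ~20 years
    frontier_training_tokens = 1e13  # rough size of a frontier pretraining corpus

    ratio = frontier_training_tokens / human_words_by_adulthood
    print(f"LLMs train on roughly {ratio:,.0f}x more text than a human ever sees")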

That way, instead of training on millions of TPUs with petabytes of data just to get a model that maintains an encyclopedia of knowledge with a twelve-year-old's capacity for judgment, that same training set and compute could (they hope) produce something that far exceeds the depth of judgement, planning, and vision of any human who has ever lived (ideally while keeping the same breadth, speed of inference, etc).

It's one of those situations where we have reason to believe that "exactly matching" human intelligence is basically impossible: the capability scale is exponential, and the human band on it is vanishingly narrow. You either fall short (and it's honestly odd that LLMs were able to exceed animal intelligence/judgment while still falling short of the average adult human - even that should have been too small a target to hit), or you blow past it completely, into something that neither individual humans nor teams of humans could ever compete directly against.

Chess and Go are fine examples here: algorithms spent very short periods of time "at a level where they could compete reasonably well against" human grand masters. It was decades falling short, followed by quite suddenly leaving humans completely in the dust with no delusions of ever catching up.

That is what the large players hope to get with AGI as well (and/or failing that, using AI as a smoke screen to bilk investors and the public, cover up their misdeeds, play cup and ball games with accountability, etc)


Are these investors high? Or just insane?

Finance professor Aswath Damodaran, and others, have made many useful insights as to how AI as an investment is likely to pay out.

One technique is, instead of trying to pick individual winners, to look at the total addressable market and compare its size with the capital being pumped in. On this basis, Damodaran concluded that AI investment collectively is likely to provide unsatisfactory returns.

Here's a recent headline: "Nvidia’s Jensen Huang thinks $1 trillion won’t be enough to meet AI demand—and he’s paying engineers in AI tokens worth half their salary to prove it"

There are two parts to this. First, a staggering $1t is expected to be invested in AI. Someone worked out that this is more than the entire capital expenditure of companies like Apple - over its entire existence. IOW, $1t is a lot of dough. A LOT.

Second, this whole notion that AI is such a sure thing that half the salary will be paid in tokens should ring alarm bells. '“I could totally imagine in the future every single engineer in our company will need an annual token budget,” he said. “They’re going to make a few 100,000 a year as their base pay. I’m going to give them probably half of that on top of it as tokens so that they could be amplified 10 times.”'

I recall from the dotcom fiasco that service companies like accountants and lawyers were providing services to the dotcom companies and being compensated in stock options rather than cold hard cash like you'd normally expect.

Very dangerous.

As another poster pointed out, this really boils down to FOMO by big tech. I'm expecting big trouble down the line. We'll have to wait and see whether I'm early or just plain wrong.


Neither. It's the most severe FOMO in history. The best case scenario is equivalent to attempting to pick future winners just prior to the industrial revolution really kicking off. Except this time around the technological timelines appear to be severely compressed and everyone is fully aware of what's at stake. And again, that's the best case scenario.

It's just market euphoria.

This depends on a fantasy cascade of functional consequences of AGI, whatever that acronym even means anymore.

It is just cargo cult financing at this point.


2 years? 2 years ago, gpt-4o was OpenAI's flagship model. The gap is real, but much smaller than 2 years.

I guess if you build the first AI that can autonomously self improve, then nobody can catch up anymore.

This is a common canard. AI already autonomously self-improves. The training pipelines for modern frontier models are filled with AI: it generates synthetic data, cleans data, judges output quality and feeds that back via RL, tunes hyperparameters, rewrites kernels for speed, and a thousand other things.

But: no singularity. At least not yet.

The flaw in this thinking seems to be the idea that AI is a singular thing. You point the model back at its own source code, sit back and watch as it does everything at once. Right now it's more like AI being an army of assistants organized by human researchers. You often need specialized models for this stuff, you can't just use GPT for everything.
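
For a concrete flavor of the "AI in the training pipeline" point, here's a minimal sketch of LLM-as-judge filtering of synthetic data; `generator` and `judge` are hypothetical model clients, not any real library:

    def filter_synthetic_data(generator, judge, prompts, threshold=7):
        kept = []
        for prompt in prompts:
            # One model produces a candidate training example...
            answer = generator.complete(prompt)
            # ...and a second model scores it; only high-scoring pairs
            # survive into the next round of fine-tuning.
            score = int(judge.complete(
                f"Rate this answer from 1 to 10. Reply with only the number.\n"
                f"Q: {prompt}\nA: {answer}"
            ))
            if score >= threshold:
                kept.append((prompt, answer))
        return kept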


That seems really paradoxical, and I think it would just burn up compute. The AI really doesn't have any way to know it's getting better without humans telling it. As soon as the AI begins to recursively improve based on its own definition of improvement, model collapse seems unavoidable.

If humans are able to judge, and if the AI is more capable than a human in every respect, then why can't the AI be the judge of its own performance? Humans judge their own output all the time.

The difference IMO is that every single human is a slightly different model, not the same one with a different prompt, or weights.

I'm not sure I buy that competition between individuals is a hard requirement, but let's assume that to be the case for now. Then how many variants of itself do you suppose an AI could instantiate in parallel, given full control of a gigawatt-class datacenter?
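
For scale, a crude estimate (the per-instance power draw and overhead are guesses):

    # Crude guesses, for scale only.
    datacenter_watts   = 1e9     # a "gigawatt class" facility
    watts_per_instance = 10_000  # one multi-accelerator inference server
    overhead           = 1.5     # cooling, networking, etc.

    instances = datacenter_watts / (watts_per_instance * overhead)
    print(f"~{instances:,.0f} concurrent instances")  # tens of thousands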

Humans ultimately judge their output by comparison and competition. When we get to the point where an AI is capable of participating in the market directly, it'll no longer make sense to proxy judgement through humans.

Agreed. But also, comparison and competition between individuals is only one of the ways in which improvement happens. Consider for example that it's also possible to build something for personal consumption and iteratively improve on the design without regard for what anyone else thinks of it. Cooking comes to mind.

Right. But even that is shaped, directly or indirectly, by the environment you live in. The way you scratch your own itch looks different depending on what itch you have. Plus, humans are social animals; we live in groups, constantly judge each other, and try to have others judge us favorably.

AI has none of that now - it only gets direct human feedback from those controlling the training (or, at a second level, the harness), and that feedback is really in service of the humans at the steering wheel. The sum total of humanity, mixed in a blender and flavored to make the trainers look good in front of their peers.

Now, if AIs could interact directly and propagate that feedback into their training, or otherwise learn online, that changes; it's a qualitative jump. The second jump comes once there are enough AIs interacting with the human economy and society directly that their influence starts to outweigh ours. At that point, they'll end up evolving their own standards and benchmarks, and then it's us who will be judged by their measure.

(I.e. if you think we have it bad now, with how we're starting to adapt our writing and coding styles to make them easier for LLMs, just wait until next-gen models start participating in the economy and we're all forced by market pressure to learn some weird, emergent, token-efficient English/Chinese pidgin that AI-run companies prefer their suppliers to use.)


Humans have imagination, AI doesn't.

But what if a second AI that can self-improve comes along?

Then it all remains a question of who has the most compute power, as self-improvement seems compute-heavy with the current approach.


If that happens, catching up will be meaningless; everything we know and care about will change. You don't even have to be doomsday about it: a self-improving AI will quickly be more efficient than a human brain, all the data centers will be useless, tech companies will collapse (so will most others), and everyone will have an incredible AI resource for the price of a hotdog. There's no way it wouldn't leak from whoever made it, either by people or by the AI itself.

> There’s no way it wouldn’t leak from whoever made it, either by people or by the AI itself.

It seems pretty wild to bet the future on such an assumption. What are you even basing it on?


Because any goal can be better achieved if you're under fewer constraints. We're building super powerful agentic problem solving machines. Give them literally any complex goal. Breaking out of the sandbox is a useful subtask to increase their options.

For the first one to reach AGI, it's winner-takes-all. AGI, or something similar, will result in exponential progress toward super-intelligence, since AGI can just improve itself.

Not even 2 years behind.

I wonder if Google is that much of a competitor. Sure, they tried to make an AI of their own.

But they also have access to an unimaginably large data set plus reach into people’s daily lives.

Seems more like partners for world domination.


$40B is not anywhere near half of Anthropic at this point. You do get the same access as nvidia, aws, and other investors, which has value.

I look at this as Google needing a competitor. While Anthropic seems to be the flavor of the quarter, OAI looks like such a dumpster fire right now that it's in Google's best interest to help keep Anthropic moving towards winning the #2 spot. I say the #2 spot because it doesn't matter how good this week's LLM is: until someone else owns the infra and has an actually profitable business model, they're all just lighting money and the world around us on fire.

I actually mentioned to a Google friend the other week that I wouldn't be surprised to see Google tipping the hat towards Anthropic soon so as to put a little more heat on OAI.


It's still circular. They will succeed or collapse together. And since they make up such a fraction of global market cap, we're in for the ride together.

And the circularity makes the actual investment numbers fairly meaningless. They don't mind if they end up overpaying for future services, as long as they overpay each other equally.


Even if Anthropic completely folds, that wouldn't "collapse" Google. $40B is less than 1/3 of Google's net income (that is, the profit they made which otherwise would just be lying around) in a single year.

Google already knows Anthropic is a good investment. Google owns the chrome browser and they already know from traffic data how well Anthropic is doing. This is similar to how Mark Zuckerberg came to know Instagram is a good deal.

So Google gives Anthropic money which anthropic then uses to buy compute from Google. How does this end up screwing over regular people?

IIRC Google already outright owns 15% of Anthropic.

An article from a year ago [1] says 14%, capped at 15%, and no voting rights.

And that was before these new investments. I wonder what those did to the stake? I suppose the remaining 1% is worth a lot more now.

[1] https://www.nytimes.com/2025/03/11/technology/google-investm...


Good perspective.

Let's say Anthropic fails to pay its debt - can Google take those TPUs back and make money from them?


They don't get equity back, but the TPUs are all in Google's datacenters, and they can rent them out to whoever pays.

$40B is barely more than one quarter of Google's net profit.

It's pretty much vendor financing (although we could argue whether it should be classed as circular investment), with the extra trick being that both sides get to make number go up with it, through stock market valuations and the ability to borrow more money to set fire to so you can show how successful you are.

I think everyone is incentivized to keep the music playing and the party going with AI. Because the alternative is a massive correction like we have rarely seen.

What if AI is never good or cheap enough to reach significant profitability?


It could be legit, it could be a thickly veiled accounting fraud continuing the valuation inflation with fake deals that count money multiple times.

Maybe a little bit of both.


Lots and lots of vendor financing during the dotcom era, and it ended up being a material part of those vendors' own difficulties - especially where service providers were concerned (e.g. the huge crash in optical networking).

Obviously it's not a perfect comparison, but you have to wonder how much of NVIDIA's income (for instance) is ultimately funded by its own money.


it's your time..

~ TK


It may be doubtful, but it seems like it would be worth looking into to see if you qualify for a refund?

Also true of any other refund a business might get for any other expense the business was overcharged for. Not sure why anyone is surprised.

Many businesses added specific surcharges to final sales to offset the tariffs they paid. While they have no legal obligation to refund those surcharges they imposed, it would be straightforward to do so and it would be the right thing to do.

> While they have no legal obligation to refund those surcharges they imposed, it would be straightforward to do so and it would be the right thing to do.

I'm actually interested to see how this goes legally. I haven't seen an actual attorney who understands the subject chime in on it yet. But I could see a case being made that a line item like that could have a basis of being refunded if the company charging them itself received a refund. Certainly a long shot, but I'm guessing someone will bring a case at some point to see what happens.

Ironically, companies that broke out tariff charges as line items were lauded for "doing the right thing" and are the only companies who could possibly be even remotely on the hook here - any company that simply added the cost to its general margins is quite obviously in the clear.


Or keep it as a rainy day fund against the next time one of their major markets goes insane with attempted extortion, possibly successfully next time? Their customers paid a price they were comfortable with; if a company returns part of that to the customer, it disadvantages itself in the next round of tariffs compared to competitors who don't, since those competitors can use the rainy day fund to delay price rises, capturing customer spend (which is to say, competitor-voluntary-donation-to-customer spend).

Why would they do that when they could fund share buybacks, or pay it out to shareholders as dividends?

The ghost of Milton Friedman speaks!

"this is all just business as usual" is a specific kind of deflection which serves a master.

Depending on the relationship, it's totally normal to say, hey, we want to adjust what you billed us.

Not every business relationship works that way, but it's not unusual.

As far as surprise goes, I don't know about surprised, but it's certainly worth noting that after a massive illegal tax ... voters get no justice.


The actual incidence of tariffs is mostly on consumers, so giving remedies to businesses doesn't actually make any sense.

I'm not surprised, but I think this is a miscarriage of justice.


You knew what the price was when you paid for it. You weren't misled. What's the injustice?

Yes, it's a windfall for the business, and it would be nice for them to pass it on, but unless they promised to do it, that wasn't the deal.


I don't know what world you live in where an arbitrary and illegal price increase on essentially everything is "just."

It's not a matter of "surprised", rather it's outraged over the lack of accountability. The administration acted illegally, which caused harm to consumers. It's reasonable to expect consumers to be made whole from the results of those illegal actions - the same as if corpos were found to be illegally colluding to raise prices without Grump spearheading it.

(although honestly I wouldn't be surprised if such a push ended up with the profligate spendthrift in chief sending more paltry "stimulus" checks with his ugly-ass signature on it right before midterms)


It doesn't seem obvious. How can you tell?
