adam_arthur's comments | Hacker News

I don't understand the position that learning through inference/example is somehow inferior to top-down, rules-based learning.

Humans learn many things, perhaps even the majority, through observed examples and inference of the "rules", not from primers and top-down explanation.

E.g. Observing language as a baby. Suddenly you can speak grammatically correctly even if you can't explain the grammar rules.

Or: Observing a game being played to form an understanding of the rules, rather than reading the rulebook

Further: the majority of "novel" insights are simply the combination of existing ideas.

Look at any new invention, music, art etc and you can almost always reasonably explain how the creator reached that endpoint. Even if it is a particularly novel combination of existing concepts.


Which is exactly how humans learn many things too.

E.g. observing a game being played to form an understanding of the rules, rather than reading the rulebook

Or: Observing language as a baby. Suddenly you can speak grammatically correctly even if you can't explain the grammar rules.


The 9% of borrowers defaulting stat cited in the title is not the same as 9% of the loanbook defaulting.

As stated in the article, 9% is the number of borrowers that defaulted, which was concentrated in smaller borrowers (thus smaller loans).

And then, again, probably half of the dollar amount of those defaults is recoverable.

Bond defaults spiked to around 6% in aggregate in 2008, to use a worst case example.
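A toy example (all numbers hypothetical, chosen only to illustrate the mechanics) of how 9% of borrowers defaulting can be far less than 9% of the loan book:

```python
# Hypothetical book: defaults concentrated in the smaller borrowers.
defaulted_loans = [1.0] * 9     # 9 small borrowers default
performing_loans = [3.0] * 91   # 91 larger borrowers keep paying

book = sum(defaulted_loans) + sum(performing_loans)
borrower_default_rate = 9 / 100                     # 9% of borrowers...
dollar_default_rate = sum(defaulted_loans) / book   # ...but ~3.2% of dollars

recovery = 0.5  # assume half the defaulted dollars are recovered
dollar_loss_rate = dollar_default_rate * (1 - recovery)  # ~1.6% realized loss

print(f"borrowers: {borrower_default_rate:.1%}, "
      f"dollars defaulted: {dollar_default_rate:.1%}, "
      f"dollars lost: {dollar_loss_rate:.1%}")
```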


There is so much misinformed fear-mongering about private credit right now.

Important Facts:

1) The majority of private credit funds are classed as "permanent capital". When you put money into these vehicles, you give the Asset Manager discretion over when to give the money back. Redemptions are often gated at ~5% per quarter.

(So there cannot, by definition, be a run on the bank)

2) Credit is senior to equity, so if you expect mass defaults in private credit, it means the majority of private equity is effectively wiped out. Private equity has to be effectively a 0 before private credit takes any losses.

3) The average "recovery rate" for senior secured loans is 80%. Even if private equity gets wiped to 0, the loss that private credit incurs is cushioned significantly by the collateral backing the loan. These are not unsecured loans the borrower can just walk away from.

(The price of senior secured loans dropped by ~30% in 2008, as a worst case datapoint)

4) Default rates at many of the major private credit managers are under ~1% in recent years. Other estimates put defaults higher, but those often classify PIK income as a default. A loan modified and extended with added PIK that ultimately gets repaid is not a "true" default.

5) Finally, it's true that NAVs are likely overstated, but generally it's by a modest amount. Every Asset Manager today could go out tomorrow, mark NAVs down by 20% and suddenly there is no crisis.

(The stocks of Asset Managers have already traded down such that this seems expected and priced in anyway)
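A quick back-of-the-envelope sketch of points 1 and 3 together, naively applying the worst-case figures cited above (illustrative only, not a model of any actual fund):

```python
# Point 3: loss to the book if defaults spike, cushioned by collateral.
default_rate = 0.06    # 2008-style aggregate default spike (worst case above)
recovery_rate = 0.80   # cited average senior secured recovery
loss_rate = default_rate * (1 - recovery_rate)  # 1.2% of the book

# Point 1: redemptions gated at ~5%/quarter -- how many quarters before
# even half the fund could be withdrawn, if the gate binds every quarter?
remaining, quarters = 1.0, 0
while remaining > 0.5:
    remaining *= 1 - 0.05
    quarters += 1

print(f"loss ~{loss_rate:.1%}; half redeemable only after {quarters} quarters")
```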


> Private equity has to be effectively a 0 before private credit takes any losses

Technically yes. But the overlap between private equity as it's commonly described and private credit is slim.

> average "recovery rate" for senior secured loans is 80%

Oooh, source? (I'm curious for when this was measured.)

> A loan modified and extended with added PIK that ultimately gets repaid is not a "true" default

True. It's a red flag, nonetheless.

> Every Asset Manager today could go out tomorrow, mark NAVs down by 20% and suddenly there is no crisis

Correct. The question is if 20% is enough, and if a 20% markdown creates a vicious cycle as funding for e.g. re- or follow-on financing dries up.

You seem knowledgeable about this. I'm coming in as an equities man. Would you have some good sources you'd recommend that make the dovish case for private credit today?


> Oooh, source? (I'm curious for when this was measured.)

It depends when you measure, but you can Google around and find figures in the 60-80% range. 80% may have been a bit on the optimistic end of the range. But it's important to note that a "default" doesn't imply a 0.

Of course this will depend on the covenants, underwriting standards, type of collateral.

I would guess software equity collateral recovery rates are lower than for hard assets like a building. (Which is why I personally don't like software loans; nothing to do with AI)

> Correct. The question is if 20% is enough, and if a 20% markdown creates a vicious cycle as funding for e.g. re- or follow-on financing dries up.

I think it's almost certain that new fundraising for private credit will be materially hindered going forward. But this just limits the growth rate of these firms, does not introduce any "collapse" risk.

They may move from net inflows to net outflows and bleed AUM over a period of some years.

If NAVs were inflated previously, they may be forced to mark down the NAV to meet redemptions rather than using inflows to pay off older investors.

In the world of credit, 20% is an enormous haircut. Again, senior secured loans fell by around 30% peak to trough in 2008.

We have the public BDC market as a comparison point where the average price/book is around 0.80x. So the public market is willing to buy credit strategies at a 20% discount to stated NAV.

The real systemic risk here, if we were to reach for one, is that these fears become self-fulfilling.

If investors pull funds out of credit strategies en masse, there is no first-order systemic issue, but borrowers of many outstanding loans may not be able to secure refinancing as money dries up.

This could lead to a self-fulfilling default cycle. But it would be a fear-driven default cycle; there is no fundamental issue with borrower cash flows or otherwise (in aggregate, currently).

Finally, in regards to the asset managers themselves, many are quite diversified.

Yes, they have private credit funds, but many have real estate funds, buyout funds etc. OWL is one of the biggest managers of data center funds, for example (which they also got hammered for on AI bubble fears)

Given how depressed pricing is in public REITs, for example, I expect a lot of asset managers to pivot towards more real asset funds.


So, if I hold a bunch of Private Equity, and my holdings need a continuity of business loan, would I:

(a) have the holding take out the debt, exposing 100% of my stake

or,

(b) have the holding divest a piece of itself, giving me control of the existing and new entities, then have that piece take out the debt, exposing 0% of my stake?

I imagine any PE firm worth its salt would go with option (b).

Presumably regulators would sometimes try to block such deals, but I cannot imagine that happening during the current administration. (Do the regulators even still work for the US government? I thought they were mostly fired.)

Similarly, I can imagine the banks refusing to lend in scenario (b), but I cannot imagine bank leadership being allowed to make such a decision if the PE firm is politically connected to the current administration.


It sounds like you're effectively describing some fraud scheme.

A smart lender will not issue loans without real collateral. If you create a subsidiary, that subsidiary has to have sufficient collateral and cashflow to secure a loan.


The current governor is proposing cutting property taxes in ~half by eliminating the school district portion and instead funding schools directly via the state's budget surplus.

Remains to be seen, as the next legislative session isn't until 2027.


I mean, most property developers in Texas are playing shell games to avoid the requirement of having to build school districts anyway, from my experience living there. Build small developments up to just short of the line where it's required, then continue development as a different legal fiction with what ultimately turns out to be the same beneficial owners. The Texas education system leaves much to be desired.


I'm not familiar with that specific example, but I do know that independent players in any economic system will follow the incentives.

Expecting companies, people etc to do the "right thing" when it's financially disincentivized usually doesn't work out.

Same will happen in regards to all these new taxes reinforcing existing population migration trends.


The system is simple. Your development hits a certain size, you have to build and fund a school for the community through fees if you're renting. So they go just short of the line, and crap out two developments and no schools, and leave the populace to figure out the rest. That isn't following incentives. That's being an asshat.


No, it's bad policy.

Cliffs in policies will always lead to players working around the cliffs.

E.g. in NYC there is an additional 1% transfer tax (the "mansion tax") on home sales above 1 million dollars.

So nobody in the market would ever sell a home between $1m and $1.01m, as the added tax exceeds the added sales price.

These are failed policy implementations (in the above example the tax should be marginal, not thresholded)
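The cliff-versus-marginal point can be sketched directly. Threshold and rate follow the NYC example; the exact tax rules don't matter here, only the mechanics:

```python
def net_under_cliff(price, threshold=1_000_000, rate=0.01):
    """Tax applies to the FULL price once it crosses the threshold."""
    tax = price * rate if price > threshold else 0.0
    return price - tax

def net_under_marginal(price, threshold=1_000_000, rate=0.01):
    """Tax applies only to the amount above the threshold."""
    return price - max(price - threshold, 0.0) * rate

# Cliff: pricing at $1,005,000 nets LESS than pricing at $999,999,
# so no rational seller prices inside the dead zone (~$1.00m-$1.01m).
assert net_under_cliff(1_005_000) < net_under_cliff(999_999)

# Marginal: net proceeds rise with price everywhere; no dead zone.
assert net_under_marginal(1_005_000) > net_under_marginal(999_999)
```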

Any policy which does not account for individual actors optimizing financially is a badly designed policy.

There are numerous similar examples re: CRE when requiring subsidized housing units for certain sizes of development. Often it's more lucrative to build smaller and get around subsidized unit requirements.

You can call them "asshats", but I'd rather live and discuss policy in reality.

Many of these new, clearly strictly punitively intended, taxes aimed at the wealthy will have the same logical outcome.

Show me the incentive and I'll show you the result


>Show me the incentive and I'll show you the result

Ah, you're one of those.

See, this clever little aphorism of yours is the constantly reached for salve of the "wiseguy". "Everyone would do it if they were in my position; so I'm not going to bother myself about it. Let's work around it."

Problem is, in reality, that isn't the case. Most people will sit there, look at the regulation, realize the development is likely going to attract families or soon-to-be-families, and conclude: yeah, okay, need to accommodate that. They approach it in good faith. Then you come along and start acting in bad faith. Your bad-faith implementation for maximized extraction creates knock-on problems, which create knock-on problems, which are now everyone else's problem to solve. Eventually, with a high enough concentration or frequency of such agents, we enter game theory territory, and escalation tends to happen quickly from there.

Historically, this comes with a brand of solutions for people like that. It'd stew to a point, then generally involved an entire community not seeing a damn thing while someone came to physical harm in a tragic accident. Or just straight up Wildcat demonstrations.

Communities/planners don't want that. So they make regulations that are a good-faith attempt at curtailing spirals of reasonably foreseeable problems. A wiseguy comes along and creates reasonably foreseen problems through non-compliance.

Are you noticing a pattern yet? You being a bad faith asshat isn't the policy's fault.

That's your fault for being a garbage human being, and maybe just a bit our collective fault for making the world such a comfortable and safe place for humans with garbage mindsets drawn to bad faith in all things business. Nevertheless, the gradient is clear. Do good faith business. Everyone wins. Do bad faith, and you win til it's worth someone's time to ensure you lose.

Too damn smart to learn the virtue of self-restraint, too damn stupid to recognize the threat too many of you pose to everyone else. Or how quickly things go bad once people start catching onto the games you seem to delight in playing.


Oil futures (months out) are priced lower than spot (i.e., the curve is in backwardation), presumably in anticipation that Iran-driven disruption to the market will be short-lived.

(Remains to be seen whether that's true)


ty!


Yeah, but a large monorepo can consist of many small subprojects. And arguably this is becoming a best practice.

Just spawn the agent in one of the subprojects


LLMs have clearly accelerated development for the most skilled developers.

Particularly when the human acts as the router/architect.

However, I've found Claude Code and Co only really work well for bootstrapping projects.

If you largely accept their edits unchanged, your codebase will accrue massive technical debt over time and ultimately slow you down vs semi-automatic LLM use.

It will probably change once the approach to large-scale design gets more formalized and structured.

We ultimately need optimized DSLs and aggressive use of stateless sub-modules/abstractions that can be implemented in isolation to minimize the amount of context required for any one LLM invocation.
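As a sketch of that kind of stateless, isolated sub-module (all names hypothetical): the entire contract lives in the signature and docstring, so it can be implemented or rewritten without loading any other context.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Invoice:
    subtotal_cents: int
    tax_rate: float  # e.g. 0.08 for 8%

def total_due_cents(invoice: Invoice) -> int:
    """Pure function: same input, same output; no I/O, no shared state."""
    return round(invoice.subtotal_cents * (1 + invoice.tax_rate))

assert total_due_cents(Invoice(subtotal_cents=10_000, tax_rate=0.08)) == 10_800
```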

Yes, AI will one-shot crappy static sites. And you can vibe code up to some level of complexity before it falls apart or slows dramatically.


>If you largely accept their edits unchanged, your codebase will accrue massive technical debt over time and ultimately slow you down vs semi-automatic LLM use.

Worse, as it's planning the next change, it's reading all this bad code that it wrote before, but now that bad code is blessed input. It writes more of it, and instructions to use a better approach are outweighed by the "evidence".

Also, it's not tech debt: https://news.ycombinator.com/item?id=27990979#28010192


People can take on debt for all sorts of things. To go on vacation, to gamble.

Debt doesn't imply it's productively borrowed or intelligently used. Or even knowingly accrued.

So given that the term technical debt has historically been used, it seems the most appropriate descriptor.

If you write a large amount of terrible code and end up with a money producing product, you owe that debt back. It will hinder your business or even lead to its collapse. If it were quantified in accounting terms, it would be a liability (though the sum of the parts could still be net positive)

Most "technical debt" is not buying the code author anything; it's accrued through negligence rather than by intelligently accepting a tradeoff.


All those examples were borrowing money. What you're describing as "technical debt" doesn't involve borrowing anything. The equivalent for a vacation would be to take your kids to a motel with a pool and dress up as Mickey Mouse and tell them it's "Disney World debt". You didn't go into debt. You didn't go to Disney World. You just spent what money you do have on a shit solution. Your kids quite possibly had fun, even.

> term technical debt has historically been used

There are plenty of terms that we no longer use because they cause harm.


Agreed.

What I've found is that AI can be alright at creating a Proof of Concept for an app idea, and it's great as a Super Auto-complete, but anything with a modicum of complexity, it simply can't handle.

When your code is hundreds of thousands of lines, asking an agent to fix a bug or implement a feature based on a description of the behavior just doesn't work. The AI doesn't work on call graphs; it basically just greps for strings it thinks might be relevant to find things. If you know exactly where the bug lies, it can usually find it with the context given to it, but at that point you're just as well off fixing the bug yourself rather than having the AI do it.

The problem is that you have non-coders creating a PoC, then screaming from the rooftops how amazing AI is and showing off what it's done, but then they go quiet as the realization sets in that they can't get the AI to flesh it out into a viable product. Alternatively, they DO create a product that people start paying to use, and then they get hacked because the code is horribly insecure and hard-codes API keys.


> We ultimately need optimized DSLs and aggressive use of stateless sub-modules/abstractions that can be implemented in isolation to minimize the amount of context required for any one LLM invocation.

Containment of state happens to benefit human developers too, and keeps complexity from exploding.


Yes!

I've found the same principles that apply to humans apply to LLMs as well.

Just that the agentic loops in these tools aren't (currently) structured and specific enough in their approach to optimally bound abstractions.

At the highest level, most applications can be written in simple, plain English (expressed via function names). Both humans and LLMs will understand programs much better when represented this way.
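A sketch of what that top level could look like (all function names hypothetical): the entry point reads as plain English, and each step is a named abstraction one level down.

```python
def publish_article(draft: str) -> str:
    # The top level reads like a sentence; details live in the named steps.
    article = remove_trailing_whitespace(draft)
    article = resolve_internal_links(article)
    return render_to_html(article)

def remove_trailing_whitespace(text: str) -> str:
    return "\n".join(line.rstrip() for line in text.splitlines())

def resolve_internal_links(text: str) -> str:
    return text  # placeholder; the real logic would live one level down

def render_to_html(text: str) -> str:
    return "<p>" + text.replace("\n\n", "</p><p>") + "</p>"
```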


The most interesting thing for me is that I am not sure it does.

I have been coding for 20+ years and I have used AI agents for coding a lot, especially over the last month and a half. I can't say for sure they make me faster. They definitely do for some tasks, but overall? I can solve some tasks really quickly, but at the same time my understanding of the code is not as good as it was before. I am much less confident that it is correct.

LLMs clearly make junior and mid level engineers faster, but it is much harder to say for Senior.


Valknut is pretty good at forcing agents to build more maintainable codebases. It helps them DRY out code, separate concerns cohesively, and organize complexity. https://github.com/sibyllinesoft/valknut


> LLMs have clearly accelerated development for the most skilled developers.

Have they so clearly? What's the evidence?


Most people's "truth" nowadays is what they've heard enough people say is true. Not objective data/measures. What people believe is true, and say is true, IS truth, to them.


> accrue massive technical debt

The primary difference between a programmer and an engineer.


> We ultimately need optimized DSLs and aggressive use of stateless sub-modules/abstractions that can be implemented in isolation to minimize the amount of context required for any one LLM invocation

Wait till you find out about programming languages and libraries!

> It will probably change once the approach to large scale design gets more formalized and structured

This idea has played out many times over the course of programming history. Unfortunately, reality doesn’t mesh with our attempts to generalize.


Relying on the model for security is not security at all.

No amount of hardening or fine-tuning will make them immune to takeover via untrusted context
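One way to act on that: treat the model as untrusted and put the enforcement in ordinary code it cannot talk its way around. A minimal sketch with hypothetical tool names:

```python
# The allowlist check runs outside the model, so a prompt-injected
# request for a dangerous tool is refused regardless of model output.
ALLOWED_TOOLS = {"search_docs", "read_file"}

def dispatch(tool_name: str, args: dict) -> str:
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool_name!r} not permitted")
    return f"ran {tool_name}"  # placeholder for the real tool call

assert dispatch("read_file", {"path": "README.md"}) == "ran read_file"
```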


Comment definitely reads like AI

