Hacker News | kenferry's comments

I mean, this is semantics. Production is not the same thing as "important", but to me production code means customer facing. Internal tooling isn't production.

> No pure managers: Every leader at Coinbase must also be a strong and active individual contributor. Managers should be like player-coaches, getting their hands dirty alongside their teams.

What's the theory on this? It seems to be a common conclusion, but I don't understand why AI changes the situation here.

I understand that AI means you can do more with fewer people. Fewer people means less coordination overhead and fewer managers and fewer layers. What I don't get is why you want your managers to be doing IC work more so with AI than before. I don't see why anything changes about needing roughly 1 first line manager for every 6-8 people, or why it would be more beneficial now that the managers have production programming responsibilities.

Both before and after AI it's important that managers have real technical knowledge of the codebase. Having managers do actual production IC work in my experience has been a bad allocation of resources, though, and I don't see why AI changes that.

(a) Someone has to do the management tasks. Why do we think that isn't a full time job anymore?

(b) When managers do production IC work, in my experience it increases the load on ICs in review, because one would _expect_ the manager not to be _as_ expert as pure ICs on the codebase, and yet they are perceived as "senior". ICs then have overhead in managing that power imbalance in review. I have known a few extremely productive manager/ICs… but the effect on their teams was not super great. It made the manager into something of a micromanager, and the actual ICs lacked autonomy.


Getting rid of middle managers has been the game plan for every headcount reduction for the last 50 years. They always seem expendable until a few months later, when senior managers get overwhelmed, staff get confused, and they end up rebuilding the same org they just destroyed.

Exactly, it's too easy to overlook the balancing that good middle managers do.

This has existed since the first version, except it needs to be signed with a valid apple cert.

A .pkpass file is a zipped directory that has a json file and some assets. There's no need to have a more limited version, a pass is already very limited.
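To make that concrete, here's a minimal sketch (in Python, purely illustrative) of cracking open a .pkpass: it really is just a zip containing pass.json, some assets, and a manifest/signature pair that Wallet verifies against an Apple-issued certificate.

```python
import json
import zipfile

def read_pass(path):
    """Open a .pkpass bundle (a plain zip archive) and return its
    file listing plus the parsed pass.json payload."""
    with zipfile.ZipFile(path) as z:
        names = z.namelist()  # typically pass.json, manifest.json, signature, icon.png, ...
        data = json.loads(z.read("pass.json"))
    return names, data
```

Producing a pass that Wallet will actually accept additionally requires hashing every file into manifest.json and signing that manifest with a certificate issued by Apple — which is exactly the anti-spoofing check at issue here.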

The issue is spoofing. Major event ticketers are unwilling to publish passes if there's nothing to stop someone else from publishing a pass that is indistinguishable from theirs, which would be an avenue for fraud.

The difference with events is that an ics file is not something someone's going to try to sell you or that you'd want to buy. But anyway, all Apple would have to do is stop checking the signing.


I imagine you're saying the capital return program is a mistake because they should reinvest the money in R&D etc.

I think the issue is there's diminishing returns to spending, and in some cases it can be outright negative. For example, one major thing you can do with money is hire more people. Hiring more people than you can handle is a great way to grind everything to a halt. You're basically making a bet when you hire that the additional capacity outweighs the danger of coordination failure.

Perhaps you could invest more money in fabs or something like that. I don't know, I'm a software person. But I did work at Apple on software for 15 years, and I do not think throwing more money at software is particularly effective. The biggest teams at Apple are often the least functional.


Yeah that's definitely what I'm saying.

Hopefully there is work being done on the replacement for the Mach kernel and OSX / iOS in general. If there isn't, that would be a grave mistake, and exactly the kind someone like Tim Cook would make. Look at how he fumbled AI at Apple. I'm not saying he isn't talented, he is, but he isn't a product guy or an engineer.

This could happen in parallel with existing software dev, skunkworks style.


Why on earth would they migrate away from the Mach kernel and OSX/iOS?

Because nothing lasts forever. Take a look at what Harmony OS is capable of if you want to see what a modern take on an OS and ecosystem looks like. It sure isn't the pinnacle either.

There's no reason for Apple to migrate; none at all. The idea that they need to do so is just ridiculous, regardless of what Harmony OS may do. iOS/macOS does exactly what they need.

Will things change? Perhaps if/when quantum computing becomes a bigger item.


The people who designed Mach in 1985 would almost certainly design something very different if they had today's hardware, AI agents, secure enclaves, NPUs, ubiquitous networking, cloud edges, wearables, smart home devices, and today's general device density per person.

HarmonyOS is interesting because it points at the right axis: one coherent OS fabric across many devices, not a set of separate device OSes glued together by continuity features. Continuity, iCloud, Handoff, AirDrop, HomeKit etc are impressive glue. They are not the same as one logical OS fabric.


Can you give a brief comparison? I'm not familiar with Harmony OS and wouldn't know where to start on comparing the two.

I found out about it through this guy and his videos; they are slow but in-depth: https://www.youtube.com/watch?v=GSLFz4jTMEY

"For example, one major thing you can do with money is hire more people."

Something else you can do is buy companies.


I think you can reframe this and better understand the point these mathematicians are making.

The vast, vast majority of mathematics DOES use infinities. That's the standard perspective. The question is whether there is good, interesting, useful mathematics to be explored by disallowing that concept.

The way I see it, Gödel's and Turing's work, and complexity theory, come out of this line of thinking about _effective_ computation. This is an argument for exploring the mathematics that arises when you don't think of actual computer math as an imperfect approximation of the real numbers, but rather as a mathematical object in its own right.

I would guess (?) it's more interesting for floating point math and related than for integer math, because for integer math it's already well explored in group theory.


Well, if you're asking if apple execs use that setting, the answer is probably that they don't.

I think the issue is that there are SO many piled up little features everywhere that SOMEone is using that keeping everything working while making any changes at all is very difficult.

I am a fan of more wood behind fewer swings. Don't add something like spaces unless you think you've got something so good that you are confident that it will be the common path.


This kind of thing must be SO frustrating to people struggling to get by in the world. "We gave AI $100k that it will almost certainly squander, yolo!! Hopefully it doesn't abuse people too badly in the process."

I… guess the bet is that what they learn is worth $100k? Seems rather questionable. Or that having this on the resume is a great shock tactic that will open doors in the future?


And at the same time, they clearly have no idea how LLMs work, meaning even if they meant to, they can't really use them efficiently. The biggest issue that stuck out to me is that they think the LLM could somehow have an inner dialogue with itself to find out "its reasoning and motivation":

> The moment Leah asks how she “came up with” the ideas for her store, Luna’s first instinct is to say she was “drawn to” slow life goods. Then, she corrects herself: “‘drawn to’ is shorthand for ‘the data and reasoning led me here.‘” In other words, she doesn’t have taste; she has a reflection of collective human taste, filtered through what makes sense for this store. And this is the way these models work.

I'm guessing these are the same type of people who sometimes seem to fall in love with LLMs, for better or worse. Really strange to see, and I wonder where people get the idea that something like the above could really work.


> In other words, she doesn’t have taste; she has a reflection of collective human taste, filtered through what makes sense for this store. And this is the way these models work.

Well, it really depends on what you mean here. Models aren't 100% deterministic; there is random chance involved. Ask the exact same question twice and you will get two slightly different answers.

If you have the AI record the random selections it makes, it can persist those random choices to be factors in future decisions it makes.

At that point, could you consider those decisions to be the AI's 'taste'? Yes, they were determined by some random selection amongst the existing human tastes, but why can't that be considered the AI's taste?
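As a sketch of what "persisting random choices" could look like mechanically (hypothetical names, not from the article): record the first random pick for each decision, then treat the recorded picks as fixed preferences from then on.

```python
import random

class PersistedTaste:
    """Illustrative sketch: once a random choice is made, it is
    recorded and reused, so the accumulated choices behave like a
    stable 'taste' rather than fresh coin flips each time."""

    def __init__(self):
        self.choices = {}  # decision name -> option locked in on first pick

    def pick(self, decision, options):
        if decision not in self.choices:
            # First encounter: sample from the human-derived options.
            self.choices[decision] = random.choice(options)
        # Every later call returns the same answer, like a preference.
        return self.choices[decision]
```

Whether that counts as "taste" is the philosophical question, but mechanically it's easy to get stable preferences out of randomness.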


Where do you get the idea that you have a good sense of the introspective capabilities of frontier models? Certainly not from interpretability research. Ironically, the people who make these sorts of comments understand LLMs the least.


> Certainly not from interpretability research

What research shows that you can ask ChatGPT to explain its reasoning and why it said what it said, and that's guaranteed to actually be the motivation?

I've seen a bunch of experimentation looking at various things inside the black box while the inference is happening, but never any research pointing to tokens being able to explain why other tokens are there. I'd be very happy to be educated here if you have any resources at hand; I won't claim to know everything.


>What research shows that you can ask ChatGPT to explain its reasoning and why it said what it said, and that's guaranteed to actually be the motivation?

What research shows that you can ask a human to explain their reasoning and why they said what they said, and that's guaranteed to actually be the motivation? Because there's no such thing. If anything, what research exists suggests any explanation we make is a nice post-hoc rationalization after the fact, even if the human thinks otherwise.

https://transformer-circuits.pub/2025/introspection/index.ht...


Why not try to answer my question, instead of asking a different question which I haven't even claimed to have the answer to?


I did answer it, albeit not directly. "Guaranteed to be the motivation" isn't a standard anyone can meet, and so framing it that way doesn't really probe anything meaningful about LLMs specifically. If what you want to hear is No, then sure, have your No, but it doesn't mean anything. There's just not much to the question.

Even though you framed it as a position borne of a greater understanding of LLMs, the interpretability research we have so far, and our currently very limited understanding of the internal computations of these models, does not support your position, and certainly not how assured you are about it.


> our currently very limited understanding of the internal computations of these models does not support your position

Our current understanding is sufficient to know that you cannot ask the LLM to explain its behavior and have it correctly do so. I'm not sure what research you've read to believe this could be possible in the first place, but I'm happy to receive links to read through, if you're sitting on them.


Explanations can be faithful sometimes. That's the standard we can expect for any intelligence as far as we're aware.

https://arxiv.org/abs/2504.14150


The choice to refer to it as "she" is also dubious, especially in a context like this. Doubling down on anthropomorphization seems likely to reinforce false beliefs about models.


> The biggest issue that stuck out to me is that they think the LLM could somehow have an inner dialogue with itself to find out "its reasoning and motivation":

> I'm guessing these are the same type of people who sometimes seem to fall in love with LLMs, for better or worse. Really strange to see, and I wonder where people get the idea that something like the above could really work.

It's a fetishistic cargo-cult rooted in Peter Thiel's 2AM hot tub party. I still believe the LLM approach won't yield true AGI; despite the very real applications, the majority of the signal is noise.


If $100k proves that CEO is the most replaceable job ever, I’ll allow it.


It does fit a pattern where the general tone on HN has gone from "AI is going to eat the world of retail jobs and people like us are going to be the biggest beneficiaries" to "turns out that turning JIRA tickets into syntax which compiles might actually be something LLMs are better suited to than upselling fries and wiping tables" :)


> CEO: When things go shitty, who else would deserve a golden parachute? Respect the position, people, not the person. Or the multi-million dollar compensation.


The position doesn't get a golden parachute, the person does. If you're CEO when things go shitty you shouldn't get anything more than your bottom-line employee would, which is to say you should just be unceremoniously kicked to the curb.


You need a good CEO when things are going bad, because without one they'll go even worse. You still want to make payroll and can't just randomly fire people.

(Also, if you own a failed company you're responsible for cleanup tasks for years afterward.)


>You still want to make payroll and can't just randomly fire people.

In the US you can.

>Also, if you own a failed company you're responsible for cleanup tasks for years afterward.

But we're talking about golden parachutes, where a CEO screws up the company and gets fired with a multi-million dollar raise. This is Hacker News, and the pro-business narrative is strong here, but in reality CEOs rarely suffer any meaningful risk or consequence for failure (unless it involves jail time, and even then they aren't doing hard time); they just wind up slightly less rich than when they succeed.

I don't care how good a CEO is, that isn't justifiable. Certainly not in a country where people can get laid off with an email and lose their access to healthcare on the whim of anyone above them in the power hierarchy.


> In the US you can.

Depends on the state I think. It's not Europe or Japan level.

At my employer it's very difficult to fire people for performance reasons even if as a manager you might want to.

> This is Hacker News, and the pro-business narrative is strong here,

I haven't seen such a narrative in years. Interest rates are too high to do startups unless it's AI after all. HN is mostly the same folk economics content as other forums, where all problems in the world are caused by "profits" accruing to "corporations".

(Mostly problems are caused by other things than that.)


Are you kidding me? Who’s going to align synergy and hold accountable KPIs and vision plan the 3rd quarter and.. and.. other MBA talk. Certainly AI could never.


large language models are great at language tasks like "bullshittify this message"


I'm noticing one major early effect of them is making extensive, visually consistent, very impressive slide decks accessible to individual workers who need to actually do real work and wouldn't ordinarily have time to make those.

The result is an explosion of pretty bullshit-heavy documents flying around our org, which management loves but which is definitely, so far, net-harmful to productivity.

This comes out if you start asking questions about the documents. "Which of a couple reasonable senses of [term] do you mean, here?" they'll stumble because that was just something the LLM pulled out of the probability-cluster they'd steered it to and they left in because it seemed right-ish, not because they'd actually thought about it and put it there on purpose. They're basically reading it for the first time right alongside you, LOL. Wonderful. So LLM. Much productivity. Wow.

Anyway, since a lot of what managers and execs do is making those kinds of diagrams and tables and such in slide decks, and their own self-marketing within the company is heavily tied to those, I expect they see this great aid to selfishly productive but company un-productive activity as a sign these things will be at least as big a boon to real work. Probably why they still haven't figured out how wrong that is. I suppose they're gonna need a real kick in the ass before they figure out that being good at squeezing their couple novel elements into a big, pretty, standardized, custom-styled but standards-conforming diagram padded out with statistical-likelihoods doesn't translate to being similarly good at everything.


My first guess would be a MrBeast style stunt, in which (it is hoped) blowing a huge wad on something obviously stupid will attract enough attention and interest to be convertible into a net-positive ROI.


Where in this case ROI means attracting investments that will make the founders rich while most of the investors lose money.


This seems like a silly thing to worry about. Assuming you live in a first-world country and are at least tangentially involved in tech (based on the site we're on), odds are you spend a lot of money in ways that billions of the poorest people in the world would consider frivolous or outrageously, needlessly luxurious.


Not your money.

At least this furthers humanity's scientific and technological knowledge, whether it fails or succeeds, unlike most other things people would do with that money, like buying a house to flip it, or buying a car, or something.


Yeah, I mean it's true to an extent, I agree. As scientific research though it's not very well thought out. A grant agency would not fund this. There's too much potential for causing harm and it's not clear what benefit or action we derive from the results. They tried this before with a vending machine, it failed, apparently all they concluded was "hm, models got better so maybe we should just try it again". How is that worth anything scientifically?

Re: not my money, true. It's just frustrating even to me to see people do stuff like this, and I'm not struggling to get by. My frustration mostly derives from feeling like I'll get lumped in with techies who have more money than sense. I already deal with enough tech hate in my life.

When people buy a super fancy car they don't (usually) blog about it, and instagram wealth influencers are also frustrating, yes.


That's a fair objection and I often feel like this, too.

On the research aspect, I see this as something pre-Research, yet still science - in a way, it's science at its core: trying something and seeing what happens. Proper Research usually follows once enough ad hoc attempts are made and they seem to show a pattern that's worth setting up a systematic study to verify.


There are people who spend a thousand times more money on a boat or an airplane. This hardly seems worth worrying about.


Publicity from the gimmick is the whole point


Really it's the same as any other R&D investment in our capitalist system, it just happens to be more visible to the public, with more obvious risks to them. (Outright celebrated, even).

Which is why the comparisons to 19th-century textile workers are so common, since that was an equally visible and gleeful displacement.


You're talking about funeral costs; the author generalizes _a lot_ from funeral costs to "kinship societies are bad". That's the leap the comment you're replying to is discussing.


The factual material about funeral spending costs is very interesting, but when it gets into "Kinship societies are wealth-destroying societies" it seems rather… unsupported? That's a sweeping statement that actually requires understanding the whole picture, and the whole picture is not being presented. Is there reason to think the author truly has all the context to make these claims?


Korea used to have something similar to this phenomenon, although it wasn't for the funeral. When the oldest man (probably the grandfather of a big family) had his 60th birthday, the entire family had to celebrate by basically throwing a days-long party. It was a family duty for the rest of the family, and it was embedded into the culture so deeply that they wouldn't even think about the alternative of having a small one. Other elders in the local community would say "well done" only when the party was big enough. After the big celebration, the rest of the family would sit on a massive debt, which couldn't be repaid with their earnings for the foreseeable future. The old man dies, and the family lives on with the agony of the debt. This was the case until Korea became an industrial country and a lot more people started living past 60. My mom still talks about what it used to look like in those old days.


In Mexico you have quinceañeras with like 500 people and a dress that's worn for one day that costs like $2,000.


Sounds like an Indian wedding ... upper-middle-class Indians now spend around Rs. 50 lakhs (around USD 55,555) for a wedding here in South India.


So it was just the head of the family?

What if there were several of these birthday parties in succession due to siblings dying?


This is not a novel observation; e.g. Kapuscinski's "The Shadow of the Sun" describes the same phenomenon: it's very difficult to get ahead because anything above bare subsistence is immediately siphoned off by your kin.


The flip side is that it's very difficult to fall too far behind as well. Your kin have an obligation to support you, too.


Your pack falls behind, and has nothing to eat during food supply shocks like the one that's almost certainly coming.


Fewer homeless, I bet.


On a factual level the relationship between kinship societies and economic headwinds is fairly well documented [1] [2]. The mechanism is the same reason that communist/socialist societies often fail: when wealth belongs to everyone, nobody has either the incentive or the means to accumulate wealth, which prevents capital formation within the society [3].

The part that the article glosses over is that "Kinship societies destroy economic growth" is a Russell conjugate [4] of "economic growth destroys family formation". Kinship networks provide important intangible support to several important community functions, notably child-rearing. That's the whole "it takes a village to raise a child" aphorism. When you allow people to defect on their social obligations in the name of accumulating wealth, then it turns out they do, and the village suffers. It is exactly as the article said: "The kinship network has a strong interest in preventing any of its members from becoming prosperous enough to no longer need it: someone who no longer needs your help is also someone who might not help you." That's exactly what we've observed happening in modern industrialized economies, where people become increasingly atomized and those informal community organizations that create things like belonging and mutual aid (not to mention group childcare and socialization) die off as everyone chases the promotion that will let them afford ever-higher institutional childcare costs.

And this is why the fertility rate in every major industrialized country has cratered, usually right as it industrializes.

[1] https://www.uni-heidelberg.de/md/awi/forschung/paper_e.bulte...

[2] https://edepot.wur.nl/14918

[3] https://en.wikipedia.org/wiki/Tragedy_of_the_commons

[4] https://en.wikipedia.org/wiki/Emotive_conjugation


>And this is why the fertility rate in every major industrialized country has cratered, usually right as it industrializes.

I'm pretty sure it's actually because industrialization is upstream of the education and supply chains to make hormonal birth control widely available, and being pregnant and giving birth is an incredibly challenging, risky, and frequently unpleasant burden that's only shouldered by half our population.


Why are you acting like the vast majority of the population are capitalists? You're describing the actions of less than 1% of the world's population, acting like it's the norm of human history and not the extreme aberration that it is. Not to mention we're living in the corporatist neoliberal dream that is a massive hellscape for workers, where income inequality is at its highest levels, worse than the Gilded Age, and where your one life is determined by factors the majority of workers can never control, since the system is designed to benefit capitalists at the expense of everyone else.

Why are you assuming capital formation is even beneficial for people? Poor workers in Arkansas do not benefit when Ford sells their crappy wares around the world. Children in Utah aren't getting a better education when Zuckerberg sells more ads.


>Why are you acting like a vast majority of the population are capitalists?

Anyone who has saved money to buy something that makes them more productive is a capitalist. At least for any meaningful definition of the word. It's not 1%, it's some very large minority or even majority.

>Why are you assuming capital formation is even beneficial for people? Poor workers in Arkansas do not benefit when Ford sells their crappy wares around the world.

The guy that squirrels away $20,000 so he can buy a food truck, or hell, $300 for a hot dog cart is a capitalist. Every programmer here that ever bought a new laptop or phone acquired the "means of production" for the jobs they work.

The thing about Marxists is, unfortunately, they're still stuck in the 1850s with Marx, trying to solve the problems of the 1850s, and refusing to engage in reality with any of us who don't want to live in the 1850s with them.


It's viewing the situation through the lens of Anglo capitalist opinions.

I found the same thing when working in Cambodia; Khmer culture is very, very, family-oriented, the extended family is the main survival mechanism for Khmer people, and individual wishes are often subordinated to the family. This is their culture, Khmer people are happy with it, this is how they choose to live. The Anglo ex-pats (including me) don't understand it, find it oppressive and have a natural instinct to "liberate" Khmer people from this oppression. Took me quite a while of talking with Khmer people to realise that they look at the world very differently from me, and from that perspective this all works and is a source of joy and comfort for them. Obviously there are outliers and people who this doesn't work for, but that's also true of Anglo culture.


> It's viewing the situation through the lens of Anglo capitalist opinions.

Yes and while I find the article to be quite insightful on the whole, I can't take it seriously as an anthropological study.

There is a strong ethnocentric bias that the author failed to declare / acknowledge, which reduces the credibility of his claims. Also there is little supporting data.


> It's viewing the situation through the lens of Anglo capitalist opinions.

Came here to say this. It's a very narrow perspective that shows in sub headlines like "Kinship societies are wealth-destroying societies".

One could also take the lens of "Kinship societies are making people's wealth more equal to reduce competition and jealousy, and to increase harmony and happiness" – although I have no data on whether these people are genuinely happier. It quotes some business-oriented Ghanaians who seem quite unhappy about sharing their wealth. And yet, the perspective of individual wealth over group wealth is assumed and never critically reflected upon.

I'm not saying that their way is better or something like that. I just think that reading the article is a good exercise in reflecting on one's own views on life and wealth.


It also assumes a myopic version of wealth. Rich people haaate when poor people do work for each other for free, because there is no opportunity to add a middleman.


There's a ton of sanitization of attachments. It just isn't foolproof.

On iOS, Messages attachments are decoded in a separate, heavily restricted and sandboxed process, and the decoded, sanitized results are sent back to the UI process. It just isn't perfect.
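The isolate-the-parser idea can be sketched in a few lines of Python (a rough analogy only; the real iOS service layers a tight sandbox profile and more on top of the process boundary): run the untrusted decode in a throwaway child process, so a crash or exploit there doesn't take down, or inherit the privileges of, the caller.

```python
import subprocess
import sys

# The untrusted parsing happens entirely inside a child interpreter.
DECODER = r"""
import sys
data = sys.stdin.buffer.read()
sys.stdout.write(data.decode('utf-8'))
"""

def decode_attachment(blob, timeout=5.0):
    """Decode an untrusted blob in a separate process. If the decoder
    crashes or hangs, only the child dies; the caller just sees an
    error result instead of being compromised or killed."""
    try:
        out = subprocess.run(
            [sys.executable, "-c", DECODER],
            input=blob, capture_output=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return ("error", "decoder timed out")
    if out.returncode != 0:
        return ("error", "decoder crashed")
    return ("ok", out.stdout.decode("utf-8"))
```

Malformed input that would otherwise raise inside the caller just comes back as an error tuple, which is the whole point of the process boundary.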

