norseboar's comments

I think the argument is interesting, but the specific example of Prop 65 doesn't really work on a few levels. The argument in the post is that Prop 65's warnings are legitimate in some sense, but only apply in specific contexts.

However, Prop 65 is much broader than that. To qualify, a chemical just needs to show up on one of maybe half a dozen lists that show the chemical has some association w/ cancer, but all those lists show is that in some study, at some quantity, the association existed. The amount that was linked to cancer could be far beyond what is ever present in a consumer good, and the links could have been shown only in non-humans.

The lists aren't the ones gov't agencies like the FDA use to regulate product safety; they're lists far upstream of that, the kind research institutions use to inform further study. The typical starting point is a mouse study with a huge dosage. It's not a useless study, but it's not meant to inform what a human should or should not consume; it's just the start of an investigation.

I don't think this actually has any bearing on the substance of the broader argument, but Prop 65 is not the best example.


Prop 65 has faced the same level of coordinated opposition and information corruption as the food pyramid or the harms of cigarettes did for most of their history.

Industry colluded to make it seem useless, and industry spoon-fed you the narrative you repeated. The list is very informative and meant to force the "invisible hand of the market" (it's a pun, relax) to pay for better studies if they truly believe something is not harmful but the studies are inconclusive. Industry just decided to band together and spend on making the signs useless.


> The list is very informative and meant to force the "invisible hand of the market" (it's a pun, relax) to pay for better studies if they truly believe something is not harmful but the studies are inconclusive

To make sure I understand right: you're saying a good way to run things is to publish a list of a bunch of things that could be true or false, and then, if industry cares enough, let them spend the time/money to debunk it?

I think that would be an extremely slow/conservative way to run just about anything, and it's not the way we handle basically any claim. I can see an argument for "don't do something until you prove it's safe", useful in some very high-risk situations, but "warn that all kinds of commonplace things could cause cancer until somebody proves they don't" is misleading, not just conservative.

And it doesn't even work -- lots of places have spent time/money debunking e.g. negative claims about aspartame, but claims about how unsafe it is persist. And it all comes back to dosage. There is no good evidence that aspartame, at the levels found in a normal soda, causes any issues for humans, but this gets drowned out by studies either showing effects from massive doses in rodents, or indirect effects (e.g. it makes you hungrier, so if you eat more refined sugar as a result of that hunger, then yes it's bad for you, just like more refined sugar is almost always bad for you).


You are still misguided in thinking the list is utterly useless. I cannot open your eyes for you.

Go for first-hand experience. You are still repeating others you don't know (and have been told are authorities).


Is there actually an epidemic of firing programmers for AI? Based on the companies/people I know, I wouldn't have thought so.

I've heard of many companies encouraging their engineers to use LLM-backed tools like Cursor or just Copilot, a (small!) number that have made these kinds of tools mandatory (what "mandatory" means is unclear), and many companies laying people off because money is tight.

But I haven't heard of anybody who was laid off b/c the other engineers were so much more productive w/ AI that they decided to downsize the team, let alone replace a team entirely.

Is this just my bubble? Mostly Bay Area companies, mostly in the small-to-mid range w/ a couple FAANG.


There are definitely a fair few companies laying off programmers at the moment, though few of the ones I've seen have blamed it on AI (usually it's overhiring, the pandemic ending and usage going down, or someone thinking they can outsource everything to save money). Wouldn't be surprised if a few tried to say it was because of AI when it was really for some other reason, though.


The "era of good enough" here really resonates with me, I've been in product and people mgmt and there's a lot of tension between "optimal amount of quality for the business" vs "optimal amount of quality for the user", esp in B2B or other contexts where the user isn't necessarily the buyer. The author sort of blows off "something something bad incentives" but IMO that is the majority of it.

On top of that, people have genuinely different preferences so what seems "better" for a user to one person might not to another.

And then on top of that, yeah, some people don't care. But in my experience w/ software engineers at least, the engineers cared a lot and wanted to take a lot of pride in what they built, and often the people pushing against that are the mgmt. Sometimes for good reason, sometimes not; that whole thing can get very debatable.


I think this confuses the responsibilities a CEO may have (write memos, etc) with the responsibilities they must have (ultimate authority/responsibility for company decisions/direction). If a CEO hired somebody to do ~all company comms, and maybe financial modeling, and even make important decisions about company strategy, the CEO did not hire another CEO. The CEO delegated. All managers do this to some extent, that's the point.

There still needs to be some entity who says "here is when we'll listen to the AI, here are the roles the AI will fill, etc", and that entity IMO is effectively the CEO.

I suppose you could say that entity is the board, and the AI is the CEO, but in practice I think you'd want a person who's involved day-to-day.

The article quotes:

> "...But I thought more deeply and would say 80 percent of the work that a C.E.O. does can be replaced by A.I.”...That includes writing, synthesizing, exhorting the employees.

If AI replaces those things, it has not replaced the CEO. It has just provided the CEO leverage.


> It has just provided the CEO leverage.

Exactly. And you could say this about a lot of other roles as well. AI certainly has its flaws, but at this stage it does rather frustrate me when people actively resist using that leverage in their own roles. In many ways I couldn't go back to a world without it.

My days are now littered with examples where it's taken me a minute or two to figure out how to do something that was important but not particularly interesting (to me), something that might otherwise have involved an hour or two of wading through documentation, so that I can move on to other more valuable matters.

Why wouldn't you want this?


Same reason people try to compete in sports without using PEDs, or prepare for standardized tests without a good nootropic stack.

There's some large contingent of the population who believes being "natural" places them on a moral high horse.


I think part of the idea here is that we’re not talking about putting GPT-4o in charge of a company, we’re talking about GPT-7a (for “agent”). By the time we get to that turn of the game, we may not have as many issues with hallucinations and context size will be immense. At a certain point the AI will be able to consume and work with far more information than the human CEO who “employs” it, to the point that the human CEO essentially becomes a rubber stamp, as interactions like the following play out over and over again:

AI: I am proposing an organizational restructuring of the company to improve efficiency.

CEO: What sort of broad philosophy are you using to guide this reorg?

AI: None. This week I interviewed every employee and manager for thirty minutes to assemble a detailed picture of the company’s workings. I have the names of the 5272 employees who have been overpromoted relative to their skill, the 3652 who are underpromoted or are on the wrong teams, and the 2351 who need to be fired in the next year. Would you like me to execute on this plan or read you all the rationales?

CEO (presumably after the AI has been right about many things before): Yeah OK just go ahead and execute.

Like, we’re talking about a world where CEOs are no longer making high level “the ship turns slowly” decisions based on heuristics, but a world where CEO AIs can make millions of highly informed micro-decisions that it would normally be irresponsible for a CEO to focus on. All while maintaining a focus on a handful of core tenets.


This is just saying “if an imaginary thing was good at being a CEO, it could replace a CEO.” Which is a tautology on top of a fantasy.


> I think this confuses the responsibilities a CEO may have (write memos, etc) with the responsibilities they must have (ultimate authority/responsibility for company decisions/direction).

The vast majority of articles about CEOs are populist rage-bait. The goal isn't to portray the actual duties and responsibilities of a CEO. The goal is to feed anti-CEO sentiment which is popular on social media. It's to get clicks.


Then why did the CEOs in the survey agree with the premise?


Do CEOs have much responsibility, though? I've only ever seen them punished when they've done something intensely illegal, and even then they usually get off for lighter crimes.

Golden handshakes mean if they move on they win. They can practically suck a company dry & move on to another one thru MBA social circles.


I think there's a bad assumption in here, which is that pay should keep pace with productivity gains in the first place.

I'd argue the whole point of productivity gains is that they do outpace pay. The idea is the same work generates more value. Some of that extra value can be passed back to the employee, but if all of it is passed back to the employee, then the goods produced don't actually get cheaper. If nothing gets cheaper, there's no incentive for a business to invest in tech that makes employees more productive.

The data in the piece has a lot of problems with it too, but I think the core assumption is fundamentally off-base.


These are relative terms. Let's say in 1980 I was flipping burgers for $3 an hour and making $10 for the company. Let's say (just for the sake of argument, ignoring inflation) that in 2020 I'm making $20 for the company. For my pay to keep pace with productivity gains I'd be making $6 an hour, leaving my employer $7 better off (they were making $7, now $14). My pay would've kept pace with productivity - it doubled, my pay doubled.

When people say that "wages kept pace with productivity growth" that's what they're saying - that a 10% increase in productivity resulted in a 10% increase in pay, not that a $10 increase in productivity resulted in a $10 increase in pay.
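
To make the two readings concrete, here's a minimal sketch in Python using the hypothetical burger numbers above (purely illustrative):

  # "Relative" vs. "absolute" readings of pay keeping pace with productivity,
  # using the hypothetical 1980/2020 burger numbers from the comment above.
  wage_1980, output_1980 = 3.0, 10.0   # $/hour paid vs. $/hour generated for the company
  output_2020 = 20.0                   # productivity doubled

  growth = output_2020 / output_1980                        # 2.0x
  wage_relative = wage_1980 * growth                        # $6/hr: pay rises by the same percentage
  wage_absolute = wage_1980 + (output_2020 - output_1980)   # $13/hr: pay captures every extra dollar

  print(wage_relative, wage_absolute)  # 6.0 13.0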


This flies in the face of so many HN readers that claim that salaries are based on the value created by the employee.


> so many HN readers that claim that salaries are based on the value created by the employee.

That's because those people are sadly wrong. Salaries have never been based on value created. They have always been based on the minimum a company could pay for the talent they desire.


Value created is the ceiling, the floor is min(cost of employee's alternative, cost of employer's alternative).


> if all of it is passed back to the employee, then the goods produced don't actually get cheaper.

There's a spectrum between none and all.

Also, why wouldn't goods get cheaper? Scaling up production is easy so it's not a supply issue. There would be two outcomes: people would buy more goods overall (as has happened over time) or they would work less and enjoy more leisure time.


No bad assumption here: people say that productivity growth and wage growth ought to be the same. If this were not the case, then in the long run 100% of value added would go to capital! Of course this does not mean that the productivity gains should directly translate into wages.


I think you have to look at how the value added by changes in productivity gets divided. Were it something closer to 50-50, that might be reasonable, but so far as I can tell from changes in wages over time, that's not the case at all.


Totally. I don't think there's anything wrong with saying "Gee, this store is making way more money but its employees are being paid the same, that sucks". I think indexing the minimum wage to inflation makes sense. I think a lot of low-skill jobs should be higher-paid, and I think raising the minimum wage is a good tool in some cases (although I think the people pushing for national increases often overlook the effect that a doubled wage will have in a rural area where the cost of labor really does impact the ability of a smaller store to stay open).

My point is just that I don't think "wages should keep pace with productivity" is true. If wages always rose with productivity, we'd be focusing all the gains on the people in the sectors where productivity is growing, and not lowering the cost of goods for everybody else.


The article argues that “the minimum wage should keep pace with productivity growth,” not “pay should keep pace with productivity gains.”

Suppose a business with a profit margin of 20% increases its revenue by 10% without increasing labour input. If the owner captures that 10%, the profit margin is now 27%. If the revenue is paid out as higher wages (“pay keeps pace with productivity gains”), the profit margin falls to 18%. If wages increase by 10% (“pay keeps pace with productivity growth”), the profit margin remains constant and profit can also increase by 10%.
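
As a quick check of those figures, here's a minimal sketch assuming a hypothetical baseline of 100 in revenue and 80 in costs (all labour), i.e. a 20% margin:

  # Hypothetical baseline: revenue 100, costs 80 (assumed all labour), 20% margin.
  revenue, costs = 100.0, 80.0
  new_revenue = revenue * 1.10   # revenue up 10%, labour input unchanged

  owner_keeps_gain = (new_revenue - costs) / new_revenue              # ~0.27
  wages_absorb_gain = (new_revenue - (costs + 10.0)) / new_revenue    # ~0.18
  wages_grow_10pct = (new_revenue - costs * 1.10) / new_revenue       # 0.20, and profit is also up 10%

  print(round(owner_keeps_gain, 2), round(wages_absorb_gain, 2), round(wages_grow_10pct, 2))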

What happens to the profit margin of individual businesses will vary. But across the entire economy, it’s reasonable to expect that the wage share will remain pretty constant, and until the 1980s, it did. https://en.wikipedia.org/wiki/Wage_share


> If nothing gets cheaper
Would anything need to get cheaper if everyone was making more money?


It depends on why productivity is increasing. Generally speaking, the employer is pushing productivity increases (implementing better processes like assembly lines, buying equipment that lets an employee do more, etc). If the employer is pushing the increases, they need some incentive to do that.

Often that incentive is making goods cheaper, so they're more competitive. That's a huge generalization, but it makes the point that there's nothing wrong with productivity gains outpacing wages.


If productivity doubles, and worker pay doubles, then the cut going to capital also doubles. Don’t see the problem with that. Your math is wrong.


But isn't this the cause of inequality? If you can't pay commensurate with productivity, a larger share of capital flows to capital owners and you stretch out the exponential curve. There are humans on the other side of the equation and there are other costs besides labor involved with operations.

There's a reason why common good capitalism is an increasingly attractive model for a lot of folks. A society cannot maintain the maximize returns to capital model indefinitely. I think many people are realizing that Friedman's economics either cannot be sustained or lead to a place where many would prefer not to go. People, and therefore the invisible hand, are inherently flawed.


The problem is all that extra value that workers have been generating the past 50 years is going into the pockets of the rich.


Presumably, it is possible for a product to also get better.


Yes, and product improvement is largely attributable to R&D, rather than assembly-line-level production. Those gains disproportionately flow to high skill white collar workers.


(Benchling PM here)

As far as sweeping generalizations go, I think that's a pretty reasonable one :). I'd imagine that almost all of our users (including most lab admins who assign permissions) don't want to keep a complex permission system in their head.

What we've seen is that this system ends up leading to a small number of well-designed and well-named roles. Most users see the roles themselves ("DNA Designer"), but don't need to worry about exactly what the configuration behind it is.

Somebody needs to be aware of the powerful (although not quite Turing-complete) configuration system, but what we've seen in practice is that it's usually one or two technical admins whose job it is to gather requirements from the different teams and figure out how to translate those into a few digestible policies that everybody else can assign.

We certainly didn't invent this model (it's basically RBAC), but we've found it's a good way to address the often-complex demands of a big pharma (where IP is crazy regulated from like, 3-4 angles) without taxing the individual scientists too much.
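
For readers unfamiliar with the pattern, here's a minimal RBAC sketch of the general idea -- the role and permission names are made up for illustration, not Benchling's actual configuration:

  # Minimal RBAC sketch: admins define a handful of named roles once,
  # everyone else just gets assigned a role. Names here are hypothetical.
  ROLES = {
      "DNA Designer": {"sequence:read", "sequence:edit"},
      "Lab Viewer":   {"sequence:read", "notebook:read"},
  }

  user_roles = {"alice": "DNA Designer", "bob": "Lab Viewer"}

  def can(user: str, permission: str) -> bool:
      return permission in ROLES.get(user_roles.get(user, ""), set())

  print(can("alice", "sequence:edit"))  # True
  print(can("bob", "sequence:edit"))    # False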


I think the author's complaint has merit, but I don't think tying this complaint to wanting Zelda to be "saved" makes sense.

The issue is genre. To take an extreme example, I might like the brutal open-world genre a la Dark Souls and complain that the Forza games are all garbage because they aren't that. Most would recognize this complaint as silly, because if I want a brutal open world with no hints I should find a game that purports to contain that, not go bashing racing games because I don't like that genre as much.

What the author's doing to Zelda obviously isn't as extreme, and it is a bit more grounded (Zelda games did used to be more like what the author wanted). But it's the same type of argument: back when there were no open world games, the author played one called Zelda and really liked it. Since then (starting with the third), almost /every single title in the series/ has been an extremely dungeon-focused, puzzle-focused game. The overworld has always been a big part, and there have always been some secrets, but the defining aspect of the genre has become these dungeons and puzzles.

This isn't to say the complaints are invalid; there's nothing wrong with wanting a game that is more hidden, less hand-holding, more focused on the action and less on the gimmicks that let you solve a puzzle. But that's not asking for a better version of Zelda, that's asking for a different genre altogether. Focus on asking for new titles in that genre, and leave other genres in peace.


Where is the evidence that present hiring methodologies don't predict successful business outcomes? I recall seeing evidence re: resumes, but not on the overall process itself [1].

If this data is present (data showing that present hiring methodologies don't predict successful business outcomes), do you have data showing that hiring people who need jobs is any better? We /are/ different from each other -- as an example so obvious it borders on the ridiculous, people who have been programming for ten years will be much faster at it than people who haven't. At what point do you draw the line and say that people stop being different? And if you do draw such a line, what data did you use to draw it?

[1] http://blog.alinelerner.com/resumes-suck-heres-the-data/


I understand the paradigm paralysis - we have been led to believe that having the best CS zombies in the world is what makes success, but it's just not true. Meanwhile, jobs aren't filled and the people who need them are suffering.


Just one instance (I've seen hundreds like this): https://twitter.com/mxcl/status/608682016205344768


Honestly, if you weren't in the room you can't tell what happened. It's just as likely that he wasn't a good culture fit.


"Cultural fit" - I forgot this term existed outside of the HBO television show "Silicon Valley".


It's a nice catchall phrase for all kinds of discrimination that would be illegal if stated explicitly. If we call it a gut feeling or culture fit, it's suddenly legal.


Cannot ++ you enough. In my experience, people prefer to surround themselves with those similar to them. The same holds true for any software organization.


It is important to work with people who you are comfortable working with.


Just as it was important to people in the pre-1960s southern US that their children only go to school with children whose families they were "comfortable with." Somehow, they've since (somewhat) figured out how to adjust to their discomfort.


Rational self-interest might be a myth when applied to the extent that some economic models apply it, but at a basic level it holds up. I know /tons/ of people that have taken one job over another because it pays more, even if it's one that they're less interested in.

Not everybody has the luxury of being passionate about something lucrative -- that's not something you really have control over (maybe some, but I don't think much. This is another issue entirely, though). I get that it might be lamentable that an industry isn't full of the most passionate people anymore, but the tradeoff is that it became a large-scale industry. If the original author wants the kind of deep, nerdy devotion he used to have, he can find it! There are plenty of research institutes and startups investing in long-term big bets that need this kind of thing. But being mad that /everybody/ isn't that way anymore seems childish.


I think there are two different problems here: building a good solution, and building a solution for a problem that people have to begin with.

If you /don't know/ if there will be any customers for your product (no matter how well-built it might be), spending time and money on building a very solid v1 is a waste. Sure you will have experience that you can leverage in your next venture, but that's an extremely expensive way to get it. And if you're bootstrapping, or have hired other people, you're potentially spending the financial stability of you or your employees to get this experience.

If you /do know/ you will have customers (either because you're sure it's a problem people have, or you've got people giving you money for basic R&D without any guarantee of returns), then I completely agree with you and the author -- build the product right and it will pay dividends later on.

In the prototype-then-throw-away model, you might not get the engineers' best development work, but you will get the best brainstorming and design work, because everybody's comfortable adjusting the product until they're confident they have something that people want. If you marry yourself to it beforehand, if you commit people's livelihoods to it, people will naturally try to rationalize what they're doing because they're committing so much to it, even if it's wrong. And if you've got the smartest people working with you, they'll be incredibly good at it. This creates a much bigger problem years down the line if it fails.

