Hacker News

Thanks for the detailed critique.

I think we might be talking past each other on the "super-manager" term. I defined it as a hybrid of EM + IC roles, not pure management, though I can see how that term invited misinterpretation.

On the false dichotomy: fair point that I painted two archetypes without acknowledging the complexity between them or the many other archetypes. What I was trying to capture was a pattern I've observed: some skills from managing and reviewing others' work (feedback, delegation, synthesizing approaches) seem to transfer well to working with AI agents, especially in parallel.

One thing I'm curious about: you said my framing overlooks "the real benefit of going through the process of authoring a change." But when you delegate work to a junior developer, you still need to understand the problem deeply to communicate it properly, and to recognize when their solution is wrong or incomplete. You still debug, iterate, and think through edge cases, just through descriptions and review rather than typing every line yourself. And nothing stops you from typing lines when you need to fix things, implement ideas, or provide examples.

AI tools work similarly. You still hit edit-compile-test cycles when output doesn't compile or tests fail. You still get stuck when the AI goes down the wrong path. And you still write code directly when needed.

I'm genuinely interested in understanding your perspective better. What do you see as the key difference between these modes of working? Is there something about the AI workflow that fundamentally changes the learning process in a way that delegation to humans doesn't?



> But when you delegate work to a junior developer, you still need to understand the problem deeply to communicate it properly, and to recognize when their solution is wrong or incomplete

You really don't. Most work delegated to a junior falls under a training guideline: something trivial for you to execute, but something that will push the junior's boundaries. There are also a lot of assumptions you can make, especially if you're familiar with the junior's knowledge and thought process. And because the task is trivial for you, you're already refraining from describing the actual solution.

> AI tools work similarly. You still hit edit-compile-test cycles when output doesn't compile or tests fail.

That's not what edit-compile-test means, at least IMO. You edit by formulating a hypothesis using a formal notation, you compile to check that you've followed the formal structure (and to get a faster artifact), and you test to verify the hypothesis.

The core thing here is the hypothesis, and Naur's theory of programming generally describes the mental model you build once all the hypotheses hold. Most LLM prompts describe the end result and/or the process. Forming the hypothesis requires domain knowledge, and writing the code requires knowledge of the programming environment. Failures in the latter steps (the compile and the test) point out the remaining gaps not exposed by the first one.
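To make that loop concrete, here is a minimal sketch in Python (my own hypothetical example, not from either comment): the edit step encodes a small hypothesis about the environment as code, the "compile" step is just the interpreter parsing it, and the test step tries to falsify the hypothesis with cases.

```python
# Hypothesis (hypothetical example): stripping surrounding whitespace before
# splitting yields the same tokens as splitting directly, because str.split()
# with no arguments already ignores leading/trailing whitespace.

def tokens(s: str) -> list[str]:
    # Edit step: the hypothesis expressed as a concrete implementation choice.
    return s.strip().split()

# Test step: cases chosen to falsify the hypothesis if it were wrong.
assert tokens("  a b  c ") == "  a b  c ".split() == ["a", "b", "c"]
assert tokens("") == "".split() == []              # edge case: empty input
assert tokens("\n a\t b ") == ["a", "b"]           # mixed whitespace
```

If one of the asserts fired, the failure would point at a gap in the mental model of the environment (here, how `str.split()` handles whitespace), which is exactly the feedback the parent comment argues a prompt describing only the end result never produces.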


Well put, and I concur with your points (for what that's worth :-)).

And thanks for referencing "Naur's theory of programming". For those like myself previously unaware of this paper, it can be found below and is well worth a read:

https://pablo.rauzy.name/dev/naur1985programming.pdf


@skydhash posted a great response here[0], which is why I am focusing on the question below.

> Is there something about the AI workflow that fundamentally changes the learning process in a way that delegation to humans doesn't?

Yes.

Using LLM document generators to produce source artifacts "short-circuits" the learning process people must undertake in order to formulate a working mental model (as expounded upon by the referenced @skydhash comment). An implication of this is that engineers using this approach primarily learn the LLM tool and only secondarily the system being modified.

While this may be acceptable for senior engineers steeped in a system's design and implementation choices, such as those involved from its inception, it does not transfer to others regardless of skill level and can easily result in a "pull the ladder up behind you" type of situation.

0 - https://news.ycombinator.com/item?id=45537628



