Just anecdotally - I think your reason for disagreeing is a valid statement, but not a valid counterpoint to the argument being made.
So
> Reason: you cannot evaluate the work accurately if you have no mental model. If there's a bug given the system's unwritten assumptions you may not catch it.
This is completely correct. It's a very fair statement. The problem is that a developer coming into a large legacy project is in this spot regardless of the existence of AI.
I've found that asking AI tools to generate a changeset in this case is actually a pretty solid way of starting to learn the mental model.
I want to see where it tries to make changes, what files it wants to touch, what libraries and patterns it uses, etc.
It's a poor man's proxy for having a subject matter expert in the code give you pointers. But it doesn't take up anyone else's time, and as long as you're not just trying to dump the output into a PR, it can actually be a pretty good resource.
The key is not letting it dump out a lot of code, and instead using it for directional signaling.
ex: Prompts like "Which files should I edit to implement a feature which does [detailed description of feature]?" or "Where is [specific functionality] implemented in this codebase?" have been real timesavers for me.
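For what it's worth, here's a rough sketch of scripting that kind of "directional" question, in case you want it outside a chat UI. This assumes the openai Python package and an API key in the environment; the model name, the feature description, and using `git ls-files` as a cheap repo map are all just placeholders for illustration:

    # Ask which files are relevant instead of asking for code.
    import subprocess
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Give the model a cheap map of the repo: just the tracked file paths.
    file_list = subprocess.run(
        ["git", "ls-files"], capture_output=True, text=True, check=True
    ).stdout

    feature = "add rate limiting to the public API endpoints"  # hypothetical feature

    response = client.chat.completions.create(
        model="gpt-4o",  # any capable model works; this name is an assumption
        messages=[
            {"role": "system",
             "content": "You are helping a developer learn an unfamiliar codebase."},
            {"role": "user",
             "content": (
                 f"Here are the files in the repository:\n{file_list}\n\n"
                 f"Which files should I edit to implement a feature which does: {feature}? "
                 "List the files and briefly explain why, but do not write any code."
             )},
        ],
    )
    print(response.choices[0].message.content)

Dumping the whole file list obviously only scales to smaller repos; for a big legacy codebase you'd feed it a directory tree or a subsystem at a time, but the idea is the same.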
The actual code generation has probably been a net time loss.
> I've found that asking AI tools to generate a changeset in this case is actually a pretty solid way of starting to learn the mental model.
This. Leveraging the AI to start to develop the mental model is an advantage. But using the AI is a non-trivial skill set that needs to be learned. Skepticism of what it's saying is important. AI can be really useful, just like a 747 can be useful, but you don't want someone picked off the street at random flying it.