Awesome article, I feel a lot of people have also forgotten that good projects take iteration, not 100 new features.
Getting a few features to an excellent state requires multiple iterations at multiple stages.
1) The developer who does a task validates that their thinking was correct: they see how their changes impact the system. Is it scalable? Does it need to be scalable? While you are working on it and thinking about it, you gather more and more context that simply wasn't there at the beginning.
2) A feature done once (even after my perfect ClaudeCode plan) is not done forever; people will want to make it better/faster/smoother/etc. But instead of taking the time to analyze and perfect it, we move on to the next feature, and if we have to revisit the current one, we don't iterate, we redo...
Really like the article, I think it is awesome, and I strongly believe AI for coding is here to stay, but I also believe we still need a strong understanding of why we are building things and what they should look like.
Really cool blog. I have been thinking about this recently, as I was a centerpiece of our startup, and to me the hardest part of giving away responsibilities has been twofold: first, that suddenly I feel like I am not in control; and second, that people are spending time on things they don't like, and because of that we are not leveraging their true potential/strong sides (Moneyball).
Curious to hear what others think about that: should you let people focus only on their strong sides, or should they still help out with e2e things?
I understand, so you would count one active issue as more valuable than a few stars. I say active, as I have already seen issues being opened and then, as soon as a comment from my side lands, the person disappears. I suppose people are busy and don't have much time for open source anyway, so if the project doesn't run on the first try, they give up.
So, in order to keep the diagram up to date with commits, we use the git diff of the Python files. An agent is first tasked with evaluating whether the change is big enough to trigger a full clean analysis.
If the change is not big enough, we do the same thing component by component, recursively, updating only the components affected by the new change.
But comparing control-flow graphs probably makes more sense for big refactor commits, as those might blow up the context. However, so far we haven't seen this be an issue.
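Roughly, the flow can be sketched like this (a simplified sketch: the `Component` hierarchy, the line-count threshold, and all names here are stand-ins I made up for illustration, not our actual implementation):

```python
from dataclasses import dataclass, field
import subprocess

# Hypothetical threshold: diffs above this many changed lines trigger a full clean analysis.
FULL_REANALYSIS_THRESHOLD = 200

def changed_python_lines(base: str = "HEAD~1", head: str = "HEAD") -> int:
    """Count added + removed lines in .py files between two commits via `git diff --numstat`."""
    out = subprocess.run(
        ["git", "diff", "--numstat", base, head, "--", "*.py"],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, removed, _path = line.split("\t", 2)
        if added != "-":  # binary files are reported as "-"
            total += int(added) + int(removed)
    return total

def plan(changed_lines: int) -> str:
    """The agent's first decision: full clean analysis vs. incremental update."""
    return "full" if changed_lines >= FULL_REANALYSIS_THRESHOLD else "incremental"

@dataclass
class Component:
    """A node in the (hypothetical) component hierarchy backing the diagram."""
    name: str
    files: set
    children: list = field(default_factory=list)

def affected(component: Component, changed: set) -> bool:
    """True if the component or any descendant owns a changed file."""
    return bool(component.files & changed) or any(
        affected(child, changed) for child in component.children
    )

def incremental_update(component: Component, changed: set, updated: list) -> None:
    """Recursively re-analyze only the subtrees touched by the diff."""
    if not affected(component, changed):
        return  # untouched subtree: keep the cached analysis
    updated.append(component.name)
    for child in component.children:
        incremental_update(child, changed, updated)
```

So a small diff to `api.py` would only re-analyze the root and the component owning that file, leaving sibling subtrees cached.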
Curious to hear what your approach was when building the diagram representation!