Fundamentally, it's the act of using your brain to simulate the computer that actually teaches your brain how the computer works so that you can reason about it later. In most domains, I find it best to start with doing things yourself, and only move on to the tool-assisted version once you thoroughly understand what the tool is doing for you. That way, you are still reasoning about the underlying system when working with the tool, and can figure out what happened when things go wrong.
Similarly, in my domain of computer networking, the best advice I received was 'be the packet' - i.e. visualize the steps/hops the packet would take through the network, where the decisions about forwarding would happen, and what evaluations would be made to make that determination.
Troubleshooting became much easier once I took that advice.
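To make the exercise concrete, here's a toy sketch of the decision you replay at every hop (the routers and routes are invented for illustration; real FIBs are far more involved than this):

    # Toy longest-prefix-match forwarding: the decision 'be the packet'
    # asks you to replay at each hop. Routers and routes are made up.
    import ipaddress

    ROUTES = {  # router name -> list of (prefix, next hop)
        "edge": [("10.0.0.0/8", "core"), ("0.0.0.0/0", "isp")],
        "core": [("10.1.0.0/16", "deliver"), ("10.0.0.0/8", "edge")],
    }

    def next_hop(router, dst):
        """Pick the most specific matching prefix, like a FIB lookup."""
        addr = ipaddress.ip_address(dst)
        matches = [(ipaddress.ip_network(p), nh)
                   for p, nh in ROUTES[router]
                   if addr in ipaddress.ip_network(p)]
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    hop = "edge"
    while hop in ROUTES:  # walk the path hop by hop
        nxt = next_hop(hop, "10.1.2.3")
        print(hop, "->", nxt)
        hop = nxt

Being the packet means running exactly that loop in your head: at each router, which prefixes match, and which one is most specific.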
I've written up three different replies now and I don't like any of them. I'm not sure how to respond to this statement. Thank you for writing it.
I fundamentally disagree with the two assertions "it's the act of using your brain to simulate the computer that teaches your brain how the computer works so you can reason about it later" and "In most domains, I find it best to start with doing things yourself, and only move on to the tool-assisted version once you thoroughly understand what the tool is doing for you".
I agree entirely with the notion that the ability to reason about the underlying system is incredibly important, but I disagree about the methods to get there.
I disagree with those two ideas because (and maybe we have different perspectives here) the choice of whatever level counts as the "base level" or "bottom of the stack" seems entirely arbitrary every time. Is assembly the bottom? Or C? No, it's machine code. No, it's the physical wires.
I think I should come back to my original comment here and reiterate. I'll clarify what I mean about those "computing environments that do that thinking for us" because I expressed myself poorly.
There are computing environments and workflows that people have built which expose to the end user (the programmer) deep information about the state in which they are working. As a field and as a culture, we have not embraced this thinking, and instead stick to the simplistic notion of working alone to build understanding for ourselves alone.
Sharing is discouraged (outright banned at school under penalty of expulsion), and difficult requirements are kept in place primarily for hazing purposes rather than pedagogical ones (I have this from a one-on-one discussion with the course designer at my school). There's this thinking that "I had to go through it, and the system produced me, so it must be good", and to think otherwise would be a recognition of being failed by the system, of missing something. A recognition that you could be smarter now than you currently are.
So to give a concrete example, as my sibling comment talks about "being the packet": why are 'the network' and 'understanding the network' not realized as exactly the same idea? Why should I ever have to simulate the network in my head? The network is a man-made artifact; my understanding of the network should come from the network, the literal code defining it, not from text-file RFCs, however well they are written and whatever brilliant ascii art they have (because they ARE well written, and well explained by their diagrams). Programmers should have inspection tools that are borne out of the definition of the artifact they are inspecting.
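Here's a tiny illustration of what I mean, with entirely hypothetical names (a sketch of the idea, not any real stack): when the artifact is defined as data, an 'explain this decision' tool falls out of the same definition for free, and nobody has to simulate it in their head:

    # Sketch: the artifact (a toy rule table) and its inspection tool
    # share one definition. All names and rules are hypothetical.
    RULES = [  # (description, predicate, action), evaluated in order
        ("block telnet", lambda pkt: pkt["dport"] == 23, "drop"),
        ("local subnet", lambda pkt: pkt["dst"].startswith("10."), "forward lan0"),
        ("default",      lambda pkt: True, "forward wan0"),
    ]

    def decide(pkt, explain=False):
        """Run the rules; with explain=True, show every evaluation made."""
        for desc, pred, action in RULES:
            hit = pred(pkt)
            if explain:
                print(f"rule '{desc}': {'HIT' if hit else 'miss'}")
            if hit:
                return action

    print(decide({"dst": "10.4.4.4", "dport": 80}, explain=True))

The explain=True path is not a separate document that can drift out of date; it is the forwarding logic itself, made inspectable.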
This is of course being done in other disciplines first. An architect working in Revit is orders of magnitude more powerful than an architect working with pen and paper. In every case where an architect prefers pen and paper, it is due to a failure of technology to realize practical or targeted workflow affordances, not due to the superiority of paper.
Working at a more concrete level is generally more effort and gives you greater flexibility than working in the abstract. There’s a tradeoff here, and the right answer will be different for every person, domain, and stage of development. The heuristic I use personally is that I shift to a more concrete perspective when I’m having trouble understanding something and to a more abstract perspective when things feel tedious.
I used "tool-assisted" incorrectly as a proxy for this scale. There are tools that help provide a more detailed, concrete view, in addition to tools that speed up repetitive tasks.
There is some evidence that assistive tools may impede learning, particularly regarding navigation with GPS vs. a paper map. If you just want to get the job done, tools are great. If you’re going to be doing similar jobs frequently, it’s probably worth investing the effort to try the lower-level/older method a few times; it’ll make you better at using the more modern methods. It seems that going more than about two levels of abstraction away from my original problem gives too little return for the effort, though: diving into the debouncing circuitry of my keyboard is unlikely to help me diagnose a syntax error.
For learning, I’ll stick to paper and pencil, personally. I could never get my head around double-entry bookkeeping until I started keeping my personal accounts in a paper ledger book. I wouldn’t hire a professional accountant doing things that way, though: I now understand what the tools are doing well enough to appreciate both the benefits and drawbacks.
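The invariant that finally clicked for me on paper is small enough to sketch (the accounts and amounts are invented examples): every transaction posts debits and credits that net to zero, so the whole ledger always balances:

    # Minimal double-entry ledger: every transaction must net to zero.
    # Account names and amounts are invented for illustration.
    from collections import defaultdict

    ledger = defaultdict(int)  # account -> balance in cents

    def post(*postings):
        """Apply (account, amount) postings; debits positive, credits negative."""
        assert sum(amt for _, amt in postings) == 0, "unbalanced transaction"
        for account, amt in postings:
            ledger[account] += amt

    # Buy $20 of office supplies with cash:
    post(("expenses:supplies", 2000), ("assets:cash", -2000))
    assert sum(ledger.values()) == 0  # the books always balance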
Similarly, spaced repetition systems like Anki never really clicked for me. It only started to work when I made a physical Leitner box and wrote out cards by hand. I eventually moved to a hybrid system where I computer-generate most of the cards but still print them out and review with physical cards. The extra flexibility of the concrete system let me try things and figure out what I needed to make a system that works for me; I doubt it’s right for anyone else, but that doesn’t matter.
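The physical Leitner system is simple enough to sketch, too (the box count and intervals here are just my own setup, not a prescription): a correct answer promotes a card one box, a miss sends it back to box 1, and higher boxes come up for review less often:

    # Toy Leitner scheduler; the intervals are my arbitrary choice.
    INTERVAL_DAYS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}

    def review(card, correct):
        """Move the card between boxes and return days until next review."""
        if correct:
            card["box"] = min(card["box"] + 1, 5)  # promote, cap at box 5
        else:
            card["box"] = 1                        # demote to daily review
        return INTERVAL_DAYS[card["box"]]

    card = {"front": "amortize", "back": "spread a cost over time", "box": 2}
    print(review(card, correct=True))   # -> 7 (now in box 3)
    print(review(card, correct=False))  # -> 1 (back to box 1)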