It's things like these that really get you into understanding the underlying "truth" of things.
Four other examples I've encountered come to mind:
- Void pointers and structs in C (what actually are objects?)
- "Curiously recurring template pattern" in C++ (how does template metaprogramming work?)
- Ruby's declarative syntax in class definitions (e.g. Rails's `has_many`) (what are Ruby classes?)
- Monkey-patching instances of Ruby objects (what are Ruby objects?) (both Ruby items are sketched in code after this list)
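To make the two Ruby items concrete, here's a minimal sketch. The `Declarable` module and this toy `has_many` are made up for illustration (Rails's real `has_many` does far more): a "declarative" macro is just a class method that runs while the class body executes, and a monkey-patch on one instance just adds a method to that object's singleton class.

```ruby
# Toy version of a Rails-style declarative macro. `Declarable` is a
# made-up name; Rails's real `has_many` is far more involved.
module Declarable
  def has_many(name)
    # `has_many :books` is just a method call executed while the class
    # body runs; define_method adds an ordinary instance method.
    define_method(name) do
      @collections ||= Hash.new { |hash, key| hash[key] = [] }
      @collections[name]
    end
  end
end

class Author
  extend Declarable  # makes has_many available as a class method
  has_many :books    # runs right now, at class-definition time
end

author = Author.new
author.books << "Refactoring"
p author.books                   # => ["Refactoring"]

# Monkey-patching one instance: the method lands on this object's
# singleton class, so no other Author is affected.
def author.shout
  "I have #{books.size} book(s)!"
end
p author.shout                   # => "I have 1 book(s)!"
p Author.new.respond_to?(:shout) # => false
```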
It's a similar feeling to those moments when you realize almost everything you've ever interacted with is artificial: other people made the decisions that made that thing this way... and different decisions are possible.
This is basically about peeling away abstractions. A professor of mine calls this "mechanical sympathy", a term borrowed, I think, from racing. You don't need to understand how the (n-1)th abstraction level (the engine, git internals) works to get things done (racing, using git), but man, is it useful to know anyway.
If you are working at level n, then in practice you usually do need to know about n-1 and n-2. E.g. web programmers write HTML and JavaScript but that stuff strongly depends on the basic properties of HTTP and indeed TCP connections.
There is also the phenomenon that the lower levels are often more stable and universal than the higher levels. Your languages and APIs might make all kinds of arbitrary decisions, but they must still somehow tackle the same basic issues of resource management and synchronisation -- concerns which bubble up through all the layers.
Argument against considering them the same level of abstraction: HTTP and especially TCP stand on their own to do useful things (e.g. gRPC + every video game ever), so it's not necessary to understand HTML to understand and use HTTP. HTML and JavaScript can just be static files served over file://, even if most modern HTML/JS is delivered over HTTP. If you wrote perfect modern HTML/JS, you might never hit certain aspects of HTTP like 404s or 500s. You might be a DOM wizard but not have a deep understanding of 301s vs. 302s. You might understand nearly every HTTP header detail but get tripped up on ES6.
None of this is to argue that a deep understanding of HTTP/TCP doesn't help you write better HTML/JS, but they are separate things that can be considered separately.
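As a minimal sketch of that separation, here's Ruby's standard-library Net::HTTP doing useful HTTP work with no HTML in sight (the URL is just a placeholder): status codes and headers are HTTP concerns regardless of what the body contains.

```ruby
require "net/http"
require "uri"

# Plain HTTP, no HTML required: the status line and headers are the
# whole conversation. (example.com/missing is a placeholder URL.)
response = Net::HTTP.get_response(URI("http://example.com/missing"))

puts response.code       # e.g. "404" -- an HTTP concern, not an HTML one
puts response["Server"]  # headers arrive whether or not the body is HTML

case response
when Net::HTTPRedirection
  # The 301 vs. 302 distinction (permanent vs. temporary move) lives
  # entirely at this level; no HTML knowledge is needed to handle it.
  puts "Redirected to #{response['Location']}"
when Net::HTTPClientError, Net::HTTPServerError
  puts "Error #{response.code}: still meaningful without rendering a page"
end
```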
I didn't say I considered HTTP and TCP to be part of the same level; I consider parts of them to be part of the same level (e.g. the basic nature of requests, like the GP was talking about).
Full use of XHR needs some (but not complete) understanding of HTTP and TCP, because the abstraction isn't a perfect one, nor is it supposed to be. (For instance, a request that fails at the TCP level surfaces as an XHR with status 0, which no HTTP status code explains.)
However, full understanding of HTTP/TCP can be very helpful, even if not necessary.
To be precise, the main difference is that git filter-branch operates primarily on the underlying "snapshot" model (rewriting each commit's tree directly), while rebase essentially looks at everything as diffs (replaying each commit as a patch onto a new base).