1) The debugger and the program being debugged are effectively running in the same execution environment and the same memory space: both programs occupy different ranges of the same memory, and each can access the other's memory without restrictions. Only one of them can be running at any one time -- single stepping swaps control to the debuggee for an instant and then swaps it back again, as does running under the debugger until a breakpoint is hit -- because there are absolutely NO threads (concurrent code execution paths) in this environment!
If the debugged program crashes for any reason -- for example, it enters an infinite loop, or never yields control back to the debugger by executing a RET instruction -- then the entire PC (if DOS is running directly on hardware, as opposed to under Windows or a VM) will crash, AKA "lock up"! Powering the entire machine off and on is now necessary to reboot it! (Dave Letterman, many years ago, in response to the then-feared upcoming Y2K disaster: "Just Reboot!") A sketch of that shared-memory breakpoint hand-off follows below.
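To make that concrete, here is a minimal sketch (NASM-style real-mode assembly, built as a .COM program) of how a DOS debugger plants a software breakpoint in the program it shares memory with. This is not DEBUG.COM's actual code -- TARGET_SEG/TARGET_OFF and the handler are hypothetical, purely to illustrate the mechanism: hook INT 3, patch one byte of the debuggee to 0CCh, and when the CPU reaches it, control swaps back to the debugger's handler in the same address space:

            org 100h

    %define TARGET_SEG 2000h        ; hypothetical segment of the debuggee
    %define TARGET_OFF 0105h        ; hypothetical offset to break at

    start:
            mov ax, 2503h           ; DOS INT 21h, AH=25h: set the INT 3 vector
            mov dx, int3_handler    ; DS:DX -> our handler (DS = CS in a .COM)
            int 21h

            ; Plant the breakpoint: save the original byte, overwrite it with
            ; 0CCh (INT 3). Nothing stops us -- the debuggee's code is just
            ; another range of the same unprotected real-mode memory.
            mov ax, TARGET_SEG
            mov es, ax
            mov bx, TARGET_OFF
            mov al, [es:bx]
            mov [saved_byte], al
            mov byte [es:bx], 0CCh

            ; A real debugger would now jump into the debuggee; when the CPU
            ; reaches TARGET_SEG:TARGET_OFF it executes INT 3 and control
            ; swaps straight back to int3_handler below -- one execution
            ; path, handed back and forth, never two at once.
            ret                     ; back to DOS (PSP offset 0 holds INT 20h)

    int3_handler:
            ; "The debugger" again: restore the patched byte. A real debugger
            ; would also decrement the saved IP on the stack so the restored
            ; instruction re-executes, dump registers, wait for commands, etc.
            push ax
            push bx
            push es
            mov ax, TARGET_SEG
            mov es, ax
            mov bx, TARGET_OFF
            mov al, [cs:saved_byte]
            mov [es:bx], al
            pop es
            pop bx
            pop ax
            iret

    saved_byte: db 0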
2) While this mode of programming is hard, laborious, counter-intuitive, slow, and error-prone(!) -- it should also be lauded and praised -- because, equal-and-oppositely, whoever is programming at this level has effectively gotten rid of 99.99% of the "tech stack" dependencies: the code written by other programmers in compilers, operating systems, programming languages, programming environments, libraries, frameworks, other software components, etc., etc.
Oh sure, there's still DOS in the background... but the DOS code/API fits into what, something like 32KB? (That's kilobytes with a 'K', not megabytes or gigabytes... several orders of magnitude smaller...)
And a pure assembly low-level programmer does not even need to depend on DOS or BIOS calls -- they can effectively get rid of those too by writing their own hardware drivers; OS developers who write in assembly typically do this, or something like it. (A sketch of that, talking to the video hardware with no DOS or BIOS call, follows below.)
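For example, here is a minimal sketch (NASM-style real-mode assembly, built as a .COM program) of printing a message with no DOS or BIOS call at all, by writing character/attribute pairs straight into the VGA color text buffer at segment B800h. It assumes the usual 80x25 color text mode; the message and the white-on-blue attribute are just illustrative choices:

            org 100h

    start:
            mov ax, 0B800h          ; VGA color text buffer segment
            mov es, ax
            xor di, di              ; ES:DI -> row 0, column 0
            mov si, msg
            mov ah, 1Fh             ; attribute byte: white on blue

    .next:
            lodsb                   ; AL = next character from DS:SI
            or al, al
            jz .done
            stosw                   ; store char (AL) + attribute (AH) at ES:DI
            jmp .next

    .done:
            ret                     ; back to DOS (PSP offset 0 holds INT 20h)

    msg:    db 'Written with no DOS or BIOS call', 0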
DOS makes a nice runtime for low level and embedded applications.
Some implementations are 64 bits, such as this one: https://github.com/dosemu2/fdpp
I wish there were an ARM-compatible version of DOS, if possible stateless. It would often be more suitable for an ARM board than a full-fledged Linux, given its almost non-existent attack surface, low resource consumption, and simplicity. Heck, I'd even like to see DOS microservices on stateless nano-VMs.
> whoever is programming at this level has effectively gotten rid of 99.99% of the "tech stack" dependencies
Not necessarily: I once worked with a team whose reference manual was the S/360 "Principles of Operation" — but they weren't exactly working on the metal; everyone's model of the 360 architecture was virtual, all running on top of a (1960's!) supervisor multiplexing the actual box.
>"The slowest System/360 model announced in 1964, the Model 30, could perform up to 34,500 instructions per second, with memory from 8 to 64 KB.[3] High-performance models came later. The 1967 IBM System/360 Model 91 could execute up to 16.6 million instructions per second.[4] The larger 360 models could have up to 8 MB of main memory,[5] though that much memory was unusual; a large installation might have as little as 256 KB of main storage, but 512 KB, 768 KB or 1024 KB was more common. Up to 8 megabytes of slower (8 microsecond) Large Capacity Storage (LCS) was also available for some models."
But -- Even if someone had the full 8 Megabytes (8MB, not 8GB or 8TB!) back in the 1960's (which would have been the exception, rather than the norm!) -- that still would not be enough to shoehorn a modern-day Linux kernel into it, much less also gcc, glibc, libraries, Node.js, npm, Python, and whatever other programs, libraries or software components are being used for someone's tech stack...
Whether a virtual machine was running in the background -- or not...
In the 1980's and 1990's, while corporate Mainframes may have had hardware support for virtualization -- consumer x86 PC's most certainly did not.
Thus, if someone was running MS-DOS directly on an x86 PC of that era, without Windows, they were effectively running on "bare metal" -- because MS-DOS didn't trap, proxy, or reroute x86 IN / OUT instructions (or DMA, for that matter) -- assembler programmers of that era, using an assembler under DOS, had full access to ALL of the underlying hardware in their PC's... (A sketch of that kind of direct port I/O follows below.)
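As an illustration of that kind of direct port access, here is a minimal sketch (NASM-style real-mode assembly, built as a .COM program) that beeps the PC speaker by programming PIT channel 2 and the port-61h speaker gate with IN/OUT instructions -- no DOS call anywhere in the sound path. The ~1 kHz divisor is just an illustrative choice, and the INT 16h keypress wait is only there to hold the tone for a moment:

            org 100h

    start:
            mov al, 0B6h            ; PIT command: channel 2, lobyte/hibyte, mode 3
            out 43h, al
            mov ax, 1193            ; 1193182 Hz / 1193 ~= 1 kHz tone
            out 42h, al             ; low byte of the divisor
            mov al, ah
            out 42h, al             ; high byte of the divisor

            in  al, 61h
            or  al, 03h             ; set speaker gate + data bits -> tone on
            out 61h, al

            xor ah, ah              ; BIOS INT 16h, AH=0: wait for a keypress,
            int 16h                 ;   just so the beep is audible for a while

            in  al, 61h
            and al, 0FCh            ; clear those two bits -> tone off
            out 61h, al
            ret                     ; back to DOS (PSP offset 0 holds INT 20h)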
Still, you make an excellent point that modern VM's, as programs, have historic ancestors in the mainframe world as far back as the 1960's -- most notably CP/CMS on the IBM S/360 family, where CP was the hypervisor and CMS was the single-user system that ran as a guest under it.
Anyway, an excellent article!