One thing from back then that I really miss is how easy it was to do some complex things.
This might be my biggest disappointment with "modern" programming. I want direct access to the hardware with stuff like $100 1GHz 100+ core CPUs with local memories and true multithreaded languages that use immutability and copy-on-write to implement higher-order methods and scatter-gather arrays. Instead we got proprietary DSP/SIMD GPUs with esoteric types like tensors that require the use of display lists and shaders to achieve high performance.
It comes down to the easy vs simple debate.
Most paradigms today go the "easy" route, providing syntactic sugar and similar shortcuts to work within artificial constraints created by market inefficiencies like monopoly. So we're told that the latency between CPU and GPU is too long for old-fashioned C-style programming. Then we have to manage pixel buffers ourselves. We're limited in the number of layers we can draw or the number of memory locations we can read/write simultaneously (like how old arcade boxes only had so many sprites). The graphics driver we're using may not provide such basic types as GL_LINES. Etc etc etc. This path inevitably leads to cookie-cutter programming and copypasta, causing software to have a canned feel like the old CGI-BIN and Flash Player days.
Whereas the "simple" route would solve actual problems within the runtime so that we can work at a level of abstraction of our choosing. For example, intrinsics and manual management of memory layout under SSE/AltiVec would be replaced by generalized (size-independent) vector operations on any type, with the offsets of variables within classes/structs decided internally. GPUs, FPUs and even hyperthreading would go away in favor of microcode-defined types and operations on arbitrary bitfields, more akin to something like VHDL/Verilog running on reprogrammable hardware.
The idea being that computers should do whatever it takes to execute users' instructions, rather than forcing users to adapt their mental models to the hardware/software. Cross-platform compilation, emulation, forced hardware upgrades that ignore Turing completeness, vendor/platform lock-in and planned obsolescence are all symptoms of today's "easy" status quo. Whereas we could have the "simple" MIMD transputer I've discussed endlessly in previous comments that just reconfigures itself to run anything we want at the maximum possible speed. More like how a Star Trek computer might run.
In practice that would mean that a naive for-loop on individual bytes written in C would run at the same speed as a highly accelerated shader, because the compiler would optimize the intermediate code (i-code) into its dependent operations and distribute computation across a potentially unlimited number of cores, integrating the results to exactly match a single-threaded runtime.
The hoops we have to jump through between conception and implementation represent how far we've diverged from what computing could be. Modern web development, enterprise software, a la carte microservice hordes like AWS that eventually require nearly every service just to work, etc etc etc, often create workloads which are 90% friction and 10% results.
Just give me the good old days where the runtime gave us everything, no include paths or even compiler flags to worry about, and the compiler stripped out everything we didn't use. Think C for the Macintosh mostly worked that way, and even Metrowerks CodeWarrior tried to have sane defaults. Before that, the first fast language I used, called Visual Interactive Programming (VIP), gave the programmer everything and the kitchen sink. And HyperCard practically made it its mission in life to free the user of as much programming jargon as possible.
I feel like I got more done between the ages of 12 and 18 than all the years since. And it's not a fleeting feeling; it's every single day. And forgetting how good things were in order to focus on the task at hand now takes up so much of my psyche that I'm probably less than 10% as productive as I once was.
Microcode? I don't think that's how modern μarch works. You can definitely make modern compute accelerators more like a plain CPU and less bespoke, and this is what folks like Tenstorrent and Esperanto Technologies are working on (building on RISC-V, an outstanding example of "simple" yet effective tech), but a lot of the distinctive feature sets of existing CPUs, GPUs, FPUs, NPUs etc. are directly wired into the hardware, in a way that can't really be changed.