The thing I love most about older computing technology is that it wasn't ridiculously overcomplicated yet. In-order processors with fixed instruction sets, no MMUs, no caches, no 500 layers of abstraction between you and what you're actually trying to accomplish. You know, stuff you could form a solid mental model of and reason about. Part of me really wants to go back to that world, even if it does mean a huge loss of performance.
There are still some developers who work on microcontrollers and the like.
Also, a lot of the embedded world still has MMUs and caches, but there's not a whole lot of software between you and the hardware.
I have a few problems with that, though. For one, I don't really care about the vast majority of projects that kind of stuff is used for, and for another, they often have closed, proprietary toolchains.
Yeah, see, that's a hold-up for me. If I'm going to go through the effort of building something completely non-commercially viable, then I want it to at least be completely open. And before you say anything, no, I'm not at all interested in making anything commercially viable.
Retro platforms were the current technology of their day, though. If I'm going to use technology with several orders of magnitude less power, then I'd at least like it to be open, is all I'm saying.
I agree, and that's much of what I'm aiming for with my own computer design. There are ways to improve the performance, just different from what is common now; some of it is strange and may have some unusual (but still simple) features. There are other improvements over the old designs too, while still keeping a lot of their character. It doesn't use superscalar execution, out-of-order execution, automatic caching (there is a cache, but it must be programmed explicitly; otherwise it does nothing), or any of that mess.
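To make "programmed explicitly" concrete, here's a minimal C sketch of the software-managed-cache idea: nothing sits in fast memory unless the program copies it there. Everything in it is hypothetical (the scratchpad size, the names scratch_load and sum_via_scratchpad, memcpy standing in for whatever fill mechanism the real hardware would have); it's just illustrating the concept, not describing the actual design above.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for a small, fast on-chip scratchpad. On real hardware this
 * would be a fixed region of fast memory; here it's a static array so
 * the sketch runs anywhere. */
#define SCRATCH_WORDS 1024
static uint32_t scratch[SCRATCH_WORDS];

/* The explicit "cache fill". On a real design this might be a DMA
 * transfer or a block-move instruction; memcpy stands in for it. */
static void scratch_load(const uint32_t *src, size_t nwords) {
    memcpy(scratch, src, nwords * sizeof(uint32_t));
}

/* Sum an array by staging fixed-size chunks through the scratchpad.
 * Nothing gets cached unless the program says so -- the programmer,
 * not the hardware, decides what lives in fast memory and when. */
static uint32_t sum_via_scratchpad(const uint32_t *data, size_t count) {
    uint32_t total = 0;
    for (size_t i = 0; i < count; i += SCRATCH_WORDS) {
        size_t n = (count - i < SCRATCH_WORDS) ? count - i : SCRATCH_WORDS;
        scratch_load(data + i, n);      /* explicit fill */
        for (size_t j = 0; j < n; j++)  /* compute out of fast memory */
            total += scratch[j];
    }
    return total;
}

int main(void) {
    uint32_t data[3000];
    for (size_t i = 0; i < 3000; i++) data[i] = (uint32_t)i;
    printf("%u\n", sum_via_scratchpad(data, 3000)); /* 0+1+...+2999 = 4498500 */
    return 0;
}
```

The appeal of doing it this way is determinism: no surprise misses or evictions, so memory behavior is something you can actually reason about, which fits the "solid mental model" point upthread.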