MIPS is a fun architecture (other than the delay slots that plagued early RISC ISAs) and implementing a subset of it on an FPGA is still a pretty common undergraduate university course project. I was kind of amazed just how simple it is to get a basic CPU working, though even the 1988 version was quite a lot more sophisticated than the class project version (multiple cache levels, an MMU, probably a much better branch predictor, etc).
It makes the hardware implementation more complicated. The delay slot was a perfect fit for the original five-stage pipeline design. Once you try to push this to superscalar or out-of-order execution (issuing more than one instruction per cycle), the delay slot just doesn't make any sense.
That's my understanding as well. Software-wise, I, for one, have not had issues with reading or writing code with branch delay slots -- automatic nops, at worst. I guess it all depends on how early in one's development one was introduced to the concept of delay slots.
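For anyone who hasn't bumped into them: on MIPS, the instruction immediately after a branch (the delay slot) executes whether or not the branch is taken, so the toolchain either hoists useful work into it or pads it with a nop. A minimal sketch (register choices arbitrary):

    # Filled slot: the increment is needed on both paths, so it
    # rides in the delay slot for free.
            beq   $t0, $zero, done
            addiu $t1, $t1, 1        # delay slot: executes either way

    # Unfilled slot: nothing safe to hoist, so the assembler pads it.
            beq   $t0, $zero, done
            nop                      # delay slot: wasted issue slot
    done: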
There was one nifty thing that fell out from having delay slots - you could write a threading library without having to burn a register in the ABI. When you changed context to a different thread, you'd load all the registers for the new thread except for one, which held the new thread's resume address. Then, in that jump's delay slot, you'd load the thread's value for that register and, presto, zero-overhead threading!
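Concretely, the tail end of such a context switch might look like this; a sketch only, with a hypothetical save-area layout and arbitrary register choices:

    # $a0 = pointer to the new thread's saved-register block (layout hypothetical)
            lw    $t0, 60($a0)       # $t0 = new thread's resume address
            lw    $ra, 0($a0)        # restore return address (also fills the load delay)
            lw    $sp, 4($a0)        # restore stack pointer
            # ... restore the rest of the register file ...
            jr    $t0                # jump to the new thread
            lw    $t0, 64($a0)       # delay slot: reload $t0's own saved value

The jump target is latched when jr issues, so the delay-slot load can safely clobber $t0 before the new thread resumes, and no register stays permanently reserved for the switch.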
In addition to the complexities they add to every layer of the stack that ajross and alain94040 brought up, they're not all that useful in practice. I seem to recall that delay slots were rarely more than 50% usefully filled, and the majority of instructions in the slot were nops.
Because it's interesting! I think it is, at least.
From the guidelines:
> What to Submit
> On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity.
Yeah, but it wasn't that big of a deal in practice. I sort of like that it was such a weak memory model that you had to explicitly synchronize any sort of cross-thread communication.
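For reference, the explicit synchronization on MIPS is the SYNC barrier instruction (MIPS II and later). A minimal message-passing sketch, with hypothetical addresses and arbitrary register choices:

    # Producer: write the payload, fence, then raise the flag.
            sw    $t0, 0($a0)        # store the payload
            sync                     # payload must be visible before the flag
            li    $t1, 1
            sw    $t1, 4($a0)        # publish the "ready" flag

    # Consumer: spin on the flag, fence, then read the payload.
    wait:   lw    $t1, 4($a0)
            beq   $t1, $zero, wait
            nop                      # branch delay slot
            sync                     # flag read must precede payload read
            lw    $t0, 0($a0)        # payload is now safe to read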
That may have been what they wanted to write. Unfortunately, they wrote something wrong and misleading instead. I understood what they wrote perfectly well, and the implication therein isn't, somehow, my fault.
The "always" qualifier does not apply to anything after the "and". Why do you assume it does? You are being very combative and pedantic so I assume you have a diehard rule for this that you can reference? Like some english grammar handbook?
It was an interesting moment, though there was a higher chance we would have been running SPARC or PowerPC. The architecture was top notch, but at the time the company was too focused on being "MIPS Computers" rather than "MIPS Technology".
Do you mean the MIPS line vs the x86 line of processors, or the R4000 vs the Pentium specifically? I don't think the R4000 was "killed" by any competitor; the R4400 and VR4300 variants in particular were really successful, powering many SGI machines and the Nintendo 64.
Even more fun alternative history: What if Motorola didn't end 68000 with the 68060?
The 68060 came out at around the same time as the original Pentium and, clock for clock, had about twice the performance. But then Motorola decided to abandon 68k to focus on PowerPC.
Going further back: What if IBM had picked the 68000 over the 8088 for the PC?
I'm hoping RISC-V will put an end to x86. It's about time.
I'm somewhat familiar with the Apollo Core, as I own a V500v2+.
It's amazing; too bad it's not open source.
There are some open source 68000 implementations, but they're all still slow. The best-known is perhaps tg68k, used in MiSTer cores and in the FPGA Arcade.
Yeah, I've got a pipe dream of taking BOOM and strapping a 68k decoder frontend to it too. Like the PowerPC 615, but RISC-V/68k instead of PowerPC/x86.
> I'm hoping RISC-V will put an end to x86. It's about time.
Not only is this wishful thinking that ignores the success so far of ARM, it doesn't make much sense any more: x86 is already "dead"; almost all modern PCs run amd64, with x86 as a kind of short-instruction emulation mode.
To confirm: Yes, amd64, as a descendant of x86, was included in the condemnation.
As for the bitmanip extension, it is well underway; the latest RISC-V workshop had a presentation on it.
The keys, in my opinion, are the vector extension and the DSP extension. They do seem to still be one to two years away.
Other than that, there's some work on lowering interrupt and context-switch latency. This is important for RTOSes, and even more so for multiserver microkernel operating systems such as Fuchsia, which will matter a lot in the near future.
> Even more fun alternative history: What if Motorola didn't end 68000 with the 68060?
Well, then we would all be doing our computing on Amigas, or at least on Atari Falcons. People blame C= for the downfall of the Amiga, but Motorola abandoning the 68k line was a much bigger factor, with no immediate way forward being evident at the time.
That's right. It took Commodore seven years to ship a major revision to the Amiga graphics chips, which had four times the display bandwidth and twice the blitter performance. They should have had a 16-fold performance increase in all categories by that time.
Several would-have-been-awesome ready or near-ready new chipset designs were discarded by CBM management. Years upon years of effort wasted, at a critical time in the history of personal computers.
"This is too expensive."
As if releasing what would have been the absolute best microcomputer for several years (judging by what actually happened, and assuming the competition didn't pull off a miracle after seeing it) would have no value. As if economies of scale weren't a thing. As if making it cheaper later wasn't an option.
Morons.
For detail on the insanity, refer to the book The Insider Story[0].
I'm happy my computing platform does not depend on a single corporation. The shock vibrations of those people who developed for beloved Amigas and other amazing but now dead platforms must still be echoing throughout the universe.
I remember there was a 68000-series part with a 56000 DSP on chip. I would love to have seen a 68000 make it into the 64-bit era with a built-in DSP. I did love programming those things. I would imagine some ColdFire-style pruning of the instruction set would have happened for the 64-bit era.
They did, for a short while: the 88k, which was positioned as the successor to the 68k line (so the correct 'what if' is 'what if Motorola had continued the CISC 68k line instead of jumping on the RISC bandwagon?'). But in 1990 or so, Motorola joined with IBM and Apple to do PowerPC and dumped the 88k, banking on (among other things) IBM and Apple being solid customers in a crowded CPU market.
R10k killed itself, IMO. It had so many weird bugs that sort of look like today's Spectre vulnerabilities, but were so bad normal code had problems running. Like it would speculatively write into page tables and then be unable to unwind itself.
Nah, we would be wondering about one of the others. What if Power hadn't? What if Alpha hadn't?
No one would wonder at x86 passing; it would have seemed totally obvious and overdue. Pentium surviving was astonishing. Still is. Probably another crime to lay at Microsoft's door.
I remember reading a computer architecture book that used a fictional MIPS instruction set and used washing and drying clothes as an example of how to pipeline / do parallel instructions. Does anyone remember the name of that book?
That description probably applies to several computer architecture books, but the one I read that matches is called "Computer Organization and Design: The Hardware/Software Interface".