MIPS R3000 (wikipedia.org)
37 points by tosh on June 20, 2019 | 59 comments



MIPS is a fun architecture (other than the delay slots that plagued early RISC ISAs), and implementing a subset of it on an FPGA is still a pretty common undergraduate course project. I was kind of amazed just how simple it is to get a basic CPU working, though even the 1988 version was quite a lot more sophisticated than the class-project version (multiple cache levels, an MMU, probably a much better branch predictor, etc.).


> I was kind of amazed just how simple it is to get a basic CPU working

You might be interested in this:

https://en.wikipedia.org/wiki/One_instruction_set_computer


Why does everyone seem to hate delay slots? I understand that it makes writing assembly more annoying but most people use a compiler anyway.


They make writing assembly more annoying. They make writing compilers more annoying.

But the big reason is that except for the case of simple, short pipeline designs like the early MIPS parts they make designing CPUs annoying too.

The second you introduced parallel decode or a branch predictor with more than a cycle of latency these things hurt and don't help.


It makes the hardware implementation more complicated. The delay slot was perfect for the original five-stage pipeline design. Once you try to push this to out-of-order execution (executing more than one instruction per cycle), the delay slot just doesn't make any sense.


That's my understanding as well. Software-wise, I, for one, have not had issues with reading or writing code with branch delay slots -- automatic nops, at worst. I guess it all depends on how early in one's development one was introduced to the concept of delay slots.


There was one nifty thing that fell out from having delay slots - you could write a threading library without having to burn a register on the ABI. When you changed context to a different thread, you'd load in all the registers for the new thread except for one, which held the jump address to the new thread's IP. Then, in that jump's delay slot, you'd load in the thread's value for that register and, presto, zero-overhead threading!
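Roughly, the trick looks like this (a minimal sketch in MIPS I assembly; the label, register choices, and save-area layout are invented here for illustration, not taken from any particular threading library):

    resume_thread:                 # $a0 -> new thread's register save area
            lw      $t9, 12($a0)   # new thread's resume address, loaded early
            lw      $sp,  0($a0)   # restore its stack pointer
            lw      $s0,  4($a0)   # restore callee-saved registers...
            lw      $s1,  8($a0)   # ...(the rest restored the same way)
            jr      $t9            # jump to the new thread...
            lw      $t9, 16($a0)   # ...and reload $t9 itself in the delay slot

Because the delay-slot load completes before control reaches the new thread's code, no register has to stay permanently reserved for the jump target.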


In addition to the complexities they add to every layer of the stack that ajross and alain94040 brought up, they're not all that useful in practice. I seem to recall that they'd rarely be over 50% utilized, and that the majority of instructions in the delay slots were nops.


I'm honestly confused - why is a wikipedia page about an old computer architecture on the front page of HN?


Because it's interesting! I think it is, at least.

From the guidelines:

> What to Submit

> On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity.


Why wouldn't it be on the front of HN?


Because it has legs.

Hundreds of thousands -- maybe millions? -- of MIPS-derived chips are still made every day.


It's still my favorite RISC processor and the only one that I actually liked coding in assembler (SPARC wasn't fun at all).


Alpha is bae. PowerPC is great as long as you don't look at the fever dream nightmare that is its supervisor state.


Want one? I've got a 600MHz one sitting in the closet. Totally works. I'm in Los Angeles.


Haha, my partner would be unhappy if I came home with another stray, but I appreciate the offer. : )


Hrmm. I could do a login if you really want. I have an OpenVMS and a NetBSD disk.

I've been trying not to take this thing to e-waste for a decade.


Alpha had a weakly-ordered memory model though.


Yeah, but it wasn't that big of a deal in practice. I sort of like that it was such a weak memory model that you just had to explicitly synchronize any sort of cross-thread comms.


MIPS never had any memory model at all! They might have one by now, most likely cribbed from PPC, ARM, or Itanic. Or maybe more than one!


How annoying were the branch delay slots?


You could always put NOPs into delay slots by default and replace them later with something more useful.
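For anyone who hasn't looked at MIPS assembly: the instruction right after a branch always executes, so the two versions look something like this (a hypothetical snippet; the labels and register roles are invented for illustration):

    # Default: waste the delay slot on a nop
    clear:  sw      $zero, 0($a0)     # clear one word
            addiu   $a0, $a0, 4       # advance the pointer
            addiu   $t0, $t0, -1      # decrement the word count
            bne     $t0, $zero, clear
            nop                       # branch delay slot, wasted

    # Later: the pointer bump doesn't feed the branch condition,
    # so it can be moved into the delay slot instead
    clear2: sw      $zero, 0($a0)
            addiu   $t0, $t0, -1
            bne     $t0, $zero, clear2
            addiu   $a0, $a0, 4       # delay slot, executes either way

The second form saves one instruction per iteration, which is exactly the kind of rewrite a compiler's (or a patient human's) instruction scheduler does.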


No, you could not "always" find something useful to put there later. Gross exaggeration.


You misunderstood parent. They meant you can always nop the branch delay, and you might find something more useful for it later


That may have been what they wanted to write. Unfortunately they wrote something wrong and misleading instead. I understood what they wrote perfectly well, and the implication therein isn't, somehow, my fault.


The "always" qualifier does not apply to anything after the "and". Why do you assume it does? You are being very combative and pedantic so I assume you have a diehard rule for this that you can reference? Like some english grammar handbook?


Less so than register windows :)


From having read lots of disassembled MIPS code I can tell you that they become second nature very quickly.


Alternative history time: what if the MIPS R4000 hadn't been killed by the Pentium? Would we be running R10000-compatibles now?


It was an interesting moment, though there was a higher chance we would have been running SPARC or PowerPC. The architecture was top notch, but the company was too focused on being "MIPS Computers" rather than "MIPS Technologies" at the time.


Do you mean the MIPS line vs the x86 line of processors, or the R4000 vs the Pentium specifically? I don't think the R4000 was "killed" by any competitor; the R4400 and VR4300 variants in particular were really successful, powering many SGI machines and the Nintendo 64.


Even more fun alternative history: What if Motorola didn't end 68000 with the 68060?

The 68060 came out around the same time as the Pentium, and regardless it had about twice the performance per clock. But then Motorola decided to abandon 68k to focus on PowerPC.

Going further back: What if IBM picked 68000 over the 8086 for the PC?

I'm hoping RISC-V will put an end to x86. It's about time.


Check out the Apollo Core, an FPGA soft core for an out-of-order 68k that they call the 68080. Even in an FPGA it's faster than any released 68k.

http://www.apollo-core.com/


I'm somewhat familiar with the Apollo Core, as I own a V500v2+.

It's amazing; too bad it's not open source.

There are some open-source 68000 implementations, but they're all still slow. The best known is perhaps tg68k, used in MiSTer cores and in the FPGA Arcade.


Yeah, I've got a pipe dream of taking BOOM and strapping a 68k decoder frontend to it too. Like the PowerPC 615, but RISC-V/68k instead of PowerPC/x86.


> I'm hoping RISC-V will put an end to x86. It's about time.

Not only is this wishful thinking that ignores the success of ARM so far, it doesn't make much sense any more - x86 is already "dead"; almost all modern PCs run amd64, with x86 as a kind of short-instruction emulation mode.


Most likely amd64 was included in the condemnation.

If RISC-V gets its Bitmanip extension, it might even deserve to displace ARM and x86.

But I might be biased.

Still, the barrel shifter on the ARM register bank output was inspired. For its time.


To confirm: Yes, amd64, as a descendant of x86, was included in the condemnation.

As for the Bitmanip extension, it is well underway; the latest RISC-V workshop had a presentation on it.

The keys, in my opinion, are the vector extension and the DSP extension. They do seem to still be 1-2 years away.

Other than that, there's some work on lowering interrupt and context-switch latency. This is important for RTOSes, and even more so for microkernel, multiserver operating systems such as Fuchsia, which will matter a lot in the near future.


> Even more fun alternative history: What if Motorola didn't end 68000 with the 68060?

Well, then we would all be doing our computing on Amigas, or at least on Atari Falcons. People blame C= for the downfall of the Amiga, but Motorola abandoning the 68k line was a much bigger factor, with no immediate way forward being evident at the time.


>People blame C= for the downfall of the Amiga

>but Motorola abandoning the 68k line was a much bigger factor

The engineers had plans going forward. C= just didn't allow it to happen. They scrapped everything.


That's right. It took Commodore seven years to ship a major revision to the Amiga graphics chips, which had four times the display bandwidth and twice the blitter performance. They should have had a 16-fold performance increase in all categories by that time.


Several would-have-been-awesome, ready or near-ready new chipset designs were discarded by CBM management. Years upon years of effort wasted, at a critical time in the history of personal computers.

"This is too expensive."

As if releasing what would have been the absolute best microcomputer for several years (as per what actually happened, that is, assuming the competition didn't pull a miracle after seeing it) would have no value. As if economies of scale weren't a thing. As if making it cheaper later wasn't an option.

Morons.

For detail on the insanity, refer to the book Commodore: The Inside Story[0].

[0] https://blog.amigaguru.com/book-review-commodore-the-inside-...


I'm happy my computing platform does not depend on a single corporation. The shock vibrations of those people who developed for beloved Amigas and other amazing but now dead platforms must still be echoing throughout the universe.


I remember there was a 68000-series part with a 56000 DSP on chip. I would love to have seen a 68000 make it into the 64-bit era with a built-in DSP. I did love programming those things. I would imagine some ColdFire-style pruning of the instruction set would have happened for the 64-bit era.


Are you sure you're not thinking of the Atari Falcon? It was a 68030-based machine that had a 56k as well. It's not on the same chip though.


No, there was one member of the 68K family (68456) that had the DSP on the chip.


I thought Motorola did their own RISC chips, the 88000.


They did two that I know of, the 88000 and M.Core. The 88000 had some issues (floating point, I believe) and was really expensive.


They did for a short while, and it was the successor to the 68060 (the correct 'what if' is 'what if Motorola had continued the CISC 68k line instead of jumping on the RISC bandwagon?'). But in 1990 or so, Motorola joined with IBM and Apple to do PowerPC and dumped the 88k, banking on (among other things) IBM and Apple to be solid customers in a crowded CPU market.


R10k killed itself, IMO. It had so many weird bugs that sort of look like today's Spectre vulnerabilities, but were so bad normal code had problems running. Like it would speculatively write into page tables and then be unable to unwind itself.


We would all now be asking ourselves: what if the Pentium hadn't been killed by the MIPS R4000?


Nah, we would be wondering about one of the others. What if Power hadn't? What if Alpha hadn't?

No one would wonder at x86 passing; it would have seemed totally obvious and overdue. Pentium surviving was astonishing. Still is. Probably another crime to lay at Microsoft's door.


Indeed...


I remember reading a computer architecture book that used a fictional MIPS instruction set and used washing and drying clothes as an example of how to pipeline / do parallel instructions. Does anyone remember the name of that book?


That description probably applies to several computer architecture books, but the one I read was like that and called "Computer Organization and Design: The Hardware/Software Interface".


This is the book. Thanks for your help.


Maybe Computation Structures by Ward? Lecture notes for the class from which it was born show a two-stage Laundromat model (with snarky MIT/Harvard humor): https://ocw.mit.edu/courses/electrical-engineering-and-compu...


That might be the DLX architecture used in some of the earlier Patterson and Hennessy texts.


I loved this CPU, programming it using spim, then actually getting to write some PSX code.



