I'd imagine it would, since they limit the number of possible instruction lengths. The hard thing about x86 is that instructions can be anywhere from 8 bits to 256 bits long (in 8-bit increments). So there's really no way to know whether you are loading up a whole x86 instruction or splitting one in the middle.
The fact that RISC-V only supports 32-, 64-, or 128-bit instructions means that you don't have to go through a lot of extra effort to determine where instructions start or end.
Note that RISC-V can have 32-, 64-, or 128-bit wide registers. The instruction length can be any multiple of 16 bits, though the base instructions are all 32 bits, the C extension adds 16-bit instructions, and other lengths have not yet been used. The spec includes a diagram showing how instructions up to 192 bits long can be encoded.
A very simple circuit can take the bottom 16 bits of any RISC-V instruction and tell you exactly how long it is, unlike x86, where you need a sequence of steps.
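For the curious, here is a rough C sketch of that rule, following the reserved length-encoding scheme from older drafts of the spec; the widths beyond 32 bits aren't used by any ratified extension yet, and the function name is just made up for illustration:

    #include <stdint.h>

    /* Return the instruction length in bytes, given only the low 16 bits of
     * the instruction word. Each test relies on the earlier ones failing.
     * Returns 0 for the reserved >=192-bit case. */
    static int riscv_insn_len(uint16_t low)
    {
        if ((low & 0x0003) != 0x0003) return 2;   /* 16-bit (compressed) */
        if ((low & 0x001c) != 0x001c) return 4;   /* 32-bit */
        if ((low & 0x0020) == 0x0000) return 6;   /* 48-bit */
        if ((low & 0x0040) == 0x0000) return 8;   /* 64-bit */
        /* bits [6:0] are all ones: (80 + 16*nnn)-bit, nnn = bits [14:12] */
        if (((low >> 12) & 0x7) != 0x7) return 10 + 2 * ((low >> 12) & 0x7);
        return 0;                                 /* reserved for 192 bits or longer */
    }

Every test only looks at a handful of bits, so it maps to a shallow piece of combinational logic rather than the byte-by-byte chain you need for x86.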
Nitpicking, but the documented RISC-V encoding supports anything from 16-bit through 176-bit instruction words in 16-bit increments, with instruction length being determined by the first 16 bits of the instruction word. Some encoding space has been reserved for 192 bits or longer. Recent versions of the RISC-V spec have actually de-emphasized this length encoding, so implementations may well be allowed to use the encoding space in different ways as long as they don't conflict with existing 32-bit (or 16-bit if the C extension is specified) instructions.
It's more than just limiting the number of possible instruction lengths; it's also that you only need the first few bits of the instruction to determine its length. With x86, you have to decode the first byte to know if the instruction has more bytes, decode those bytes to know if the instruction has even more bytes, and so on.
But since I'm not a hardware designer, I don't know if the RISC-V design is enough to make a high-performance wide decoder. With 64-bit ARM it seems very easy; once you've loaded an n-byte line from the cache, the first decoder gets the first four bytes, the second decoder gets the next four bytes, and so on. With compressed RISC-V, the first decoder gets the first four bytes (0-3); the second decoder gets either bytes 4-7 or bytes 2-5; the third decoder can get bytes 8-11, 6-9, or 4-7, depending on how many of the preceding instructions were compressed; and so on. Determining which bytes each decoder gets seems very easy (it's a simple boolean formula of the first two bits of each pair of bytes), but I don't know enough about hardware design to know whether the propagation delay from this slows things down enough to need an extra pipeline step once the decode gets too wide (for instance, the eighth decoder would have 8 possible choices for its input), or whether there are tricks to avoid this delay (similar to a carry-skip adder).
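For what it's worth, here is a toy C sketch of that dependence, assuming only 16- and 32-bit encodings in the fetched line (which is all current implementations need) and that the line starts on an instruction boundary; the function name is made up:

    #include <stddef.h>
    #include <stdint.h>

    /* Record the starting byte offset of each instruction in a fetched line.
     * A 16-bit parcel whose low two bits are 11 begins a 32-bit instruction;
     * anything else is a 16-bit (compressed) instruction. */
    static size_t find_starts(const uint16_t *parcels, size_t nparcels,
                              size_t *starts, size_t max_starts)
    {
        size_t n = 0;
        for (size_t i = 0; i < nparcels && n < max_starts; ) {
            starts[n++] = i * 2;                        /* byte offset in the line */
            i += ((parcels[i] & 0x3) == 0x3) ? 2 : 1;   /* skip 32-bit or 16-bit */
        }
        return n;
    }

The loop makes the problem obvious: the Nth start is a running sum of all the earlier lengths, so a wide decoder either has to compute that prefix sum quickly or speculate on the possible alignments.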
I am a hardware designer. See my comment above; it's going to be ugly. Fast adders are still slow (and awkward), but they only have to propagate a single bit of information; this is much messier.
As a side note, carry-skip doesn't really work in modern VLSI; I guess you were probably thinking of carry-lookahead.
Ok, so you are saying that not only are there implicit super-instructions via macro-op fusion, there are also variable-length instructions in there too? I'm not a RISC-V expert, but damn, that kind of ruins even the last tiny shred of its original value proposition of being simple. Sure, the core instruction set is easy, but once you add extensions it's just plain ugly.
Nitpick: the longest valid x86 instruction is 15 bytes, or 120 bits (longer instructions might be structurally sound, but no hardware or software decoder will accept them).
Variable length isn't actually a huge problem at the instruction fetch level (and modern compilers will align x86 instructions to keep ifetch moving smoothly), but it does make both hardware and software decoding significantly more complex.
You seem very confused. RISC-V supports variable length instructions from 16 bits up to 192 bits (and maybe longer in future), and it's easy to tell in an instruction stream when the next instruction starts (although the stream is not self-synchronising).
RISC-V is designed to be extensible. Supporting many extensions obviously demands more than 32-bit instruction coding, and we expect there will be some (corner) cases where very long instructions are desirable for particular extensions.
As far as I understand, a RISC-V CPU will have fixed-length instructions, with some of them potentially compressed. I don't think you are going to see 32-, 64-, and 128-bit instructions mixed in the same program.
But I might be wrong about this. Do you have any source that suggests that instructions of different length should be mixed in one RISC-V program other than use of compressed instructions?
No need to be such a dick about it. I was frank about not being certain about this. I have read section 1.5, and I cannot see it supporting your claim. The very first sentence says:
"The base RISC-V ISA has fixed-length 32-bit instructions that must be naturally aligned on 32-bit boundaries."
Later it talks about:
"For implementations supporting only a base instruction set, ILEN is 32 bits. Implementations supporting longer instructions have larger values of ILEN."
It seems clear to me that a standard RISC-V implementation today uses fixed-size 32-bit instructions aligned on 32-bit boundaries. There may, however, be support for future architectures with longer instructions. None of this suggests that a regular RISC-V implementation has to assume that instructions can be any length.
These things are not even part of the standard yet. So please, don't be such a dick about something that isn't all that clear at the moment.