Odd that ARM even bothered to reply. And the arguments are exactly what you'd expect from an entrenched leader. Basically it's "the benefits of standardization trump openness", though not stated that way.
A few bits struck me as amusing, like the chart with all the extensions to the ARM ISA over the years, most of which have been abandoned in modern cores. Citing the original VFP standard or Jazelle as "innovation" seems laughable.
If I were to write an ARM-vs-RISC-V paper, I'd start with the important things that are actually still missing from RISC-V, like an MMU spec.
The point is that openness provides the same benefits as a standard: a ground floor that everyone can stand on and build on, with no cost of entry (unlike a business-run standard).
Well, that's not really the whole story, though. Look at Linux. Linux is "open." Linux is also a much harder target to write software for than OS X or Windows, because a lot of people do a lot of different things with that openness, and so you can't count on a specific version of certain APIs or ABIs being there. If modern CPUs looked like modern Linux distributions, a lot more effort would be required to make software portable and get it running widely. (The irony here is that ARM resembles this Tower of Babel situation a lot more than x86 does.)
In Linux, if I want to fork a process to create a sub-process, I call:
pid_t pid = fork();                              /* returns 0 in the child */
if (pid == 0)
    execl("./a.out", "a.out", (char *)NULL);     /* the child becomes a.out */
//now a.out is running as a sub-process while the parent continues
In Windows:
STARTUPINFO sui = { sizeof(sui) };
PROCESS_INFORMATION pi;
BOOL bRet = CreateProcess("myapp.exe", NULL, NULL, NULL, FALSE, 0, NULL, NULL, &sui, &pi);
//now one instance of my app is running...
In conclusion, no. Linux/POSIX is far nicer than Windows/NT. There's a reason we've been using the same interfaces since 1973: they're very clean and very nice.
Both of your examples explode pretty badly once you start adding the additional Linux/POSIX code to fully match the functionality that the Windows stuff has.
Show me the full linux/POSIX code to open a file with different security access modes, with different sharing models, with different dispositions (when should the new file be created, if it should be created, and what should happen to an existing file with same path), with hinting to the OS how to handle the file (should it be encrypted, should it be deleted once all handles are closed to it, etc.).
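For concreteness, a rough sketch of what that looks like as a single Windows call; the path and the particular flag choices below are invented for illustration, not anything from the parent comment:

#include <windows.h>

/* Hypothetical sketch: one CreateFileW call selects access mode, sharing,
   disposition, and per-handle hints (delete-on-close here) together. */
HANDLE open_scratch(void)
{
    return CreateFileW(L"C:\\temp\\scratch.dat",
                       GENERIC_READ | GENERIC_WRITE | DELETE,  /* access modes */
                       FILE_SHARE_READ,            /* sharing: others may read, not write */
                       NULL,                       /* default security descriptor */
                       CREATE_NEW,                 /* disposition: fail if it already exists */
                       FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE,  /* hints to the OS */
                       NULL);
}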
Show me the full linux/POSIX code to open a process while setting the security ACLs and whatnot for it, setting its priority level, setting the environment strings, setting up the stdin/stdout/stderr file descriptors, setting the window position (if any), etc.
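And a minimal sketch of the process side, with made-up names (myapp.exe, child.log) and error handling omitted, just to show that priority, environment, and std handles are all parameters of the one call:

#include <windows.h>

/* Hypothetical sketch: spawn "myapp.exe" at below-normal priority with its
   stdout/stderr redirected to a log file, all via CreateProcessW parameters. */
BOOL spawn_logged(void)
{
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };      /* handle is inheritable */
    HANDLE log = CreateFileW(L"child.log", GENERIC_WRITE, FILE_SHARE_READ,
                             &sa, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);

    STARTUPINFOW si = { sizeof(si) };
    si.dwFlags    = STARTF_USESTDHANDLES;                     /* set up stdin/stdout/stderr */
    si.hStdInput  = GetStdHandle(STD_INPUT_HANDLE);
    si.hStdOutput = log;
    si.hStdError  = log;

    PROCESS_INFORMATION pi;
    wchar_t cmd[] = L"myapp.exe";                             /* lpCommandLine must be writable */
    BOOL ok = CreateProcessW(NULL, cmd, NULL, NULL,
                             TRUE,                            /* child inherits the log handle */
                             BELOW_NORMAL_PRIORITY_CLASS,     /* priority via creation flags */
                             NULL,                            /* inherit the environment block */
                             NULL, &si, &pi);
    if (ok) { CloseHandle(pi.hThread); CloseHandle(pi.hProcess); }
    CloseHandle(log);
    return ok;
}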
Linux/POSIX aren't nicer than Windows/NT--they're simpler APIs for a simpler world. That doesn't make them better or worse, they're just different tools.
Linux/Unix has a significantly lower barrier to entry compared to Windows and a much nicer command-line interface that just works by default. Want to install a C compiler? Just run apt-get install gcc and you are golden. Same for Python, Ruby, Haskell, and any other language under the sun. I've barely used Windows since I outgrew my fascination with computer games; it has nothing compelling to offer for someone working in a scientific environment who is not forced to use Microsoft Office.
Yeah, sure, your one cherry-picked use case is awesome. To counter that, I recently tried putting together a wiki server using apt-get, and for some reason apt-get installed an out-of-date version of the software that didn't work with the apt-get version of Apache I was using, so I ended up having to manually download it anyway and patch a bunch of configuration files. The Windows version just used an installer that set it all up right with a few clicks.
Maybe now you've 'outgrown' video games you can outgrow thinking that your user experience is authoritative. Better, nicer, whatever, these are all just words people use to praise things that they like.
But saying Linux / Unix has a lower barrier to entry than Windows is the kind of thing only a long-time Linux user can say with a straight face. Unless you are talking about money, and then, well... yeah.
What if you don't have aptitude (hint: apt-get won't work)? What if I don't want a command-line interface? What if I want to actually debug a large C/C++ codebase?
Python/Ruby/etc. are also available on Windows, usually with friendly installers.
Look, I'm all for Linux/POSIX from an operations standpoint, but the programming story is merely different, and to pretend otherwise makes one look foolish.
EDIT:
Also, as cobrausn pointed out, what about when the official sources for the package are hella out of date? Not so friendly then, eh?
But what exactly can they do on Windows that they couldn't figure out in something like ElementaryOS? Get infected with malware? I think most people are actually pretty clueless about how to use Windows effectively, especially now that we are deep into the Windows 8 world.
MS Office is a big one; Excel is pretty much unrivaled when it comes to the spreadsheet game. Open/LibreOffice are nice but not as good as MS Office, plus everyone already knows how to use it, so businesses don't have to spend money retraining them on it.
Don't get me wrong, I use OS X/Ubuntu daily and die when I have to use Windows with its lack of POSIX compliance and abomination of a shell, but that doesn't mean it's the right choice for the masses.
Okay, but Office is not Windows. It runs on Macs, and there are plenty of virtualization options for businesses to run Office for people without a Windows desktop. RHEL 7 in particular is built for it. And Microsoft's web apps are getting better all the time. I wouldn't be surprised if the web app catches up to the desktop app in the near future. And everyone's grandma does not know how to use Excel.
I found that programming for Windows is a pain in the ass compared to GNU/Linux. If I need a library, I only need to do an apt-get install XXX or download the source code, and it will compile with usually zero problems.
I'm not talking about writing software, so much as I am talking about distributing software. Yes, you can use apt-get to install whatever libraries you need, but you can't guarantee that users are going to have access to the same version as you through their distro's repo, so you have to either statically link the version of each library you wrote the code with, or wait for your software to be picked up by a maintainer for all the common distros out there.
Packages do fix this problem, though. You specify dependencies in the package you make for your target distro, and either you duplicate that dependency graph across distros (bad idea) or you let each distro's packagers handle it (good idea).
For example, you can make a deb that works on Debian, Ubuntu, and any of its derivatives with its dependency graph. You can do the same with Fedora. And the Arch ecosystem will just use PKGBUILDs of the rpm or deb to package it themselves.
This only works if you let the distros handle all the work for you. But say you need an Apache version RHEL 6 doesn't have, so now you have to build from source. And now you have to build PHP from source too, because RHEL's PHP package won't run with your new Apache. On Windows, this requires running two MSI files and hitting OK a few times. On Linux, it means compiling everything from source. Linux apps are less portable between distributions than Windows apps are among Windows versions (hell, thanks to WINE a randomly picked Windows app is more likely to run unmodified on both Fedora and Ubuntu than an actual Linux app is); the distribution just hides that work from you most of the time.
Things are a lot better for this in 2014 than they were in 2004 (much less 1994) but I still regularly run into things that need quite a bit of handholding to build.
OS X has good package managers, but they aren't as intertwined with the OS as aptitude or yum, for better or worse. Homebrew and MacPorts are the most popular package managers.
Re: Linux, I find that's less true than it used to be, at least on x86; I run a few pieces of non-free, binary-only software (Xilinx tools, Renoise) and it's problem free. Huge statically linked blobs, though.
Note that the whole ARM ecosystem relies on a crapload of open source stuff, like the entire GNU toolchain, kernel and on up.
So any criticism of the call for openness in the RISC-V paper from the ARM camp is sheer, sheer hypocrisy.
In economic terms, the free stuff that helps ARM be popular is a "complementary good". When you sell something, you want the complementary goods to be commoditized and as cheap as possible, while keeping your ingredient as proprietary as possible.
The obvious turning of the tables:
"No, no! Open source the CPU cores, and buy my compiler instead! That's how you keep costs low, and everything from fragmenting."
Agree that the free GNU stuff is key to ARM's success today, but the original armcc compilers came from ARM; the GNU toolchain came much later. There are other non-GNU compilers for ARM that are very widely used. Remember that ARM was super popular long before phones.
If you're interested in the development of an open-source SoC using RISC-V then do keep an eye on http://lowrisc.org. We will have more public soon - I'm talking at the OpenRISC conference (http://orconf.org) next month.
The flip side of "no fragmentation" is, there's no ARM core with hardware multithreading, and there won't be one until ARM decides to make one, which AFAIK may well be "never". You can license a core with hardware multithreading if you need it - say, MIPS from Imagination - but the ISA won't be ARM-compatible. To the extent that ARM's "software ecosystem" is valuable, it's a pity nobody can make a multithreaded ARM-compatible core. Similarly for other features.
On the other hand, x86, which is "open" to all of Intel, AMD and VIA who reached a patent war stalemate, has the problem of incompatible instruction set extensions, as documented in "Stop the instruction set war": http://www.agner.org/optimize/blog/read.php?i=25
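One visible cost of that fragmentation: portable x86 code ends up probing for extensions at run time and dispatching to different code paths. A minimal sketch using the GCC/Clang builtins, with trivial placeholder kernels standing in for real ones:

#include <stdio.h>

/* Hypothetical sketch of runtime dispatch forced by diverging x86 extensions. */
__attribute__((target("avx2")))
static long sum_avx2(const int *a, int n)        /* compiled with AVX2 enabled */
{
    long s = 0;
    for (int i = 0; i < n; i++) s += a[i];
    return s;
}

static long sum_plain(const int *a, int n)       /* baseline fallback */
{
    long s = 0;
    for (int i = 0; i < n; i++) s += a[i];
    return s;
}

int main(void)
{
    int v[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    /* __builtin_cpu_supports() (GCC/Clang) queries CPUID at run time */
    long s = __builtin_cpu_supports("avx2") ? sum_avx2(v, 8) : sum_plain(v, 8);
    printf("sum = %ld\n", s);
    return 0;
}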
Not sure about your first point. ARM does license out its architecture/ISA (i.e. for you to implement your own processor with an ARM instruction set) in addition to complete cores (Cortex-A57, etc...).
This is how Apple and nVidia can design their own ARM-compatible chips (Apple A8/NVidia Tegra K1 "Denver") without relying on Cortex cores.
I haven't read this yet, but I can't imagine what ARM would say that I wouldn't already expect from a company selling a proprietary product confronted with an open-source competitor. I think the fact that ARM even thought they should "address this" says quite a lot about RISC-V.
It's written by their marketing department. Basically, they say open architecture poses the risk of fragmentation and designing instruction sets is expensive, so it makes sense for everyone to pay them to do it.
Important to remember that ARM is talking about instruction sets and not cores. There's still a lot of differentiation going on among SoC vendors but the common ISA really helps with the app ecosystem.
If you're really serious you don't have to get a core from ARM; you can do your own (like XScale was) and still benefit from the common ISA. Not sure how expensive that is, though.
Anyway, it would be great to have a healthy competitor for ARM but right now it's hard to see what problem that's trying to solve.
Does the ISA matter a lot these days? If we look at the mobile world, specifically Android, most programs are written in Java. So, as long as the compiler, the OS, and the JVM are ported to a new ISA, most apps will run happily.
Also, is binary ISA translation restricted by patents?
That's true in theory. In practice, a not insignificant number of popular Android apps (mostly, but not limited to, games) are written in non-Java languages. That limits those apps to ARM unless the developers bother to provide versions for other ISAs, which practically nobody does.
Not as much as it used to but getting a next-generation device off the ground involves a lot of custom low-level code that's very specific to instruction set and architecture. If you're just doing a me-too product you almost don't need to worry about anything above the OS level.
I'd say the ISA still matters. A lot of problems in the GNU/Linux space come down to device development, which starts with the processor. Since all these projects target ultra-mobile or notebook form factors, you need integrated SoC boards with everything ready to go, custom ordered, but the mounting costs of proprietary tech like ARM add up and help kill such endeavors, because you need enough scale to subsidize the license costs, at the least.
It doesn't matter a lot, no. But you do have to pick one for your platform, and decide if you want to be tied to, say, Intel's specific offerings, or (in principle) anyone with an ARM license. If RISC-V gets off the ground, you'll be able to pick a completely open standard.
Most apps...? We live in different worlds, clearly! I know people use Java for boring corporate line of business apps and boring corporate web server projects, but none of the apps I actually care about are written in Java, and I see no trends suggesting that this will ever change.