I was at a talk of his last year, and there are a number of fault-tolerant MPI algorithms being drawn in. MPI hasn't been updated in ages; I don't think that necessarily means we need to ditch it, just that the standard needs to be modernized. I don't feel very strongly about this, since working with MPI is a huge pain in the ass and the challenge of modernizing it seems gargantuan.
Also, I'm not familiar with Spark, but isn't Chapel a decade old at this point and barely works at all? I tried their compiler last summer and it took 5 minutes to compile hello world; hopefully it's improving.
Certainly Prof Hoefler has done a lot of work driving the design of updated remote-memory access for MPI-3, and any further progress would be welcomed; but I don't think any modernizing of MPI can fix the basic problem. At the end of the day, it's just too low-level for application developers, while being too high-level for tool developers. There are parts of MPI which don't share this problem so much - the collective operations, and especially MPI-IO; but the disconnect between what people need to build either tools or scientific applications and what MPI provides just seems too great.
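To make the "right level of abstraction" point concrete, here's a sketch (plain Python, not real MPI) of why a collective like MPI_Allreduce is the part of MPI that works well: hand-rolling the same result over point-to-point messages means every application re-implements a reduction tree, while the collective hides topology and algorithm choice behind one call. The recursive-doubling algorithm below is one common way such collectives are implemented internally.

```python
def allreduce_sum(rank_values):
    """Recursive-doubling allreduce, simulated over a list indexed by rank.
    With real MPI (e.g. mpi4py) this whole function is a single call:
        total = comm.allreduce(x, op=MPI.SUM)
    """
    n = len(rank_values)
    assert n & (n - 1) == 0, "sketch assumes a power-of-two rank count"
    vals = list(rank_values)
    step = 1
    while step < n:
        # each "rank" exchanges with partner rank ^ step and sums;
        # after log2(n) rounds, every rank holds the global sum
        vals = [vals[r] + vals[r ^ step] for r in range(n)]
        step *= 2
    return vals

print(allreduce_sum([1, 2, 3, 4]))  # every "rank" holds 10
```

The point is the gap in levels: the one-line collective is what application developers actually want, while the message-by-message version underneath is what raw point-to-point MPI makes them write.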
For Chapel, it depends on what you count; it borrows very heavily from ZPL, which is much older, but Chapel itself was only released in 2009. It is already competitive with MPI in performance in simple cases, while operating at a much higher level of abstraction. Whether Chapel or Spark is the right answer in the long term, I don't know; but there are a tonne of other options out there worth exploring.
Again I'm not sure if I agree or disagree with this. My hatred of MPI is only outweighed by the fact that I can use it... and my code works.
I think a large part of the inertia behind MPI is legacy code. Often the most complex part of HPC scientific codes is the parallel portion and the abstractions required to implement it (halo decomposition, etc.). I can't imagine there are too many grad students out there who are eager to re-write a scientific code in a new language that is unproven and requires developing a skill set that is not yet useful in industry (who in industry has ever heard of Chapel or Spark??). Not to mention that re-writing legacy codes means you're delaying getting results. It's just a terrible situation to be in.
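For readers unfamiliar with the "halo" plumbing being referred to: a hypothetical sketch, simulated with plain Python lists rather than real MPI sends/receives. Each process owns a chunk of the global grid plus one "halo" cell copied from each neighbour, so a stencil can be applied locally; in a real code this exchange happens every timestep and is exactly the fiddly part people dread rewriting.

```python
def split_with_halos(grid, nparts):
    """Decompose `grid` into nparts chunks, each padded with one halo
    cell copied from its neighbour (0.0 at the physical boundaries).
    In real MPI the halo values arrive via send/recv with neighbour
    ranks, not by peeking into another chunk's memory as we do here."""
    size = len(grid) // nparts
    chunks = [grid[p * size:(p + 1) * size] for p in range(nparts)]
    padded = []
    for p, chunk in enumerate(chunks):
        left = chunks[p - 1][-1] if p > 0 else 0.0           # halo from left rank
        right = chunks[p + 1][0] if p < nparts - 1 else 0.0  # halo from right rank
        padded.append([left] + chunk + [right])
    return padded

def stencil(padded_chunk):
    """3-point averaging stencil on the chunk interior; the halo cells
    supply the neighbour data at the chunk edges."""
    return [(padded_chunk[i - 1] + padded_chunk[i] + padded_chunk[i + 1]) / 3
            for i in range(1, len(padded_chunk) - 1)]

grid = [float(i) for i in range(8)]        # global 1-D domain
parts = split_with_halos(grid, nparts=2)   # "rank 0" and "rank 1"
result = sum((stencil(c) for c in parts), [])
```

Even this toy 1-D version needs careful index bookkeeping; the 3-D versions in production codes, with corner and edge exchanges, are where much of the complexity lives.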
Chapel's made by Cray. If what you're saying is true then Cray's not done a very good job of advertising Chapel. God knows they have the capability to advertise properly.
Oh, sure. I don't think anyone should start rewriting old codes; but as new projects start, I think we have a lot more options out there than we did 10 years ago, and it's worth looking closely at them before starting, rather than defaulting to something. Especially since, once you start, you're probably pretty much locked into whatever you chose for a decade or so.
Chapel has been used for incompressible moving-grid fluid dynamics, so it's certainly feasible. For that problem the result was ~33% the lines of code of the MPI version. There is a performance hit, but the issues are largely understood; if (say) a meteorological centre were to put its weight behind it, a lot of things could get done.
It's also pretty easy to see how UPC or Co-array Fortran (which is part of the Fortran standard now, so isn't going anywhere any time soon) would work. They'd fall closer to MPI in complexity and performance.
You couldn't plausibly do big 3D simulations in Spark today; that's way outside what it was designed for. Analysing the results, though, especially of a suite of runs - that might be interesting.
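To illustrate the kind of "suite of runs" post-processing a Spark-style pipeline fits well, here's a hedged sketch in plain Python with hypothetical per-run summary records; with PySpark, the same shape would be a `sc.parallelize(runs)` followed by chained `filter`/`map` operations, distributed across a cluster when the suite is large.

```python
from statistics import mean

# hypothetical per-run summaries (run id, grid resolution, peak value)
runs = [
    {"id": 0, "resolution": 128, "peak": 3.1},
    {"id": 1, "resolution": 256, "peak": 3.4},
    {"id": 2, "resolution": 256, "peak": 3.6},
]

# filter step: keep only the high-resolution runs
high_res = [r for r in runs if r["resolution"] >= 256]

# reduce step: aggregate a statistic across the selected runs
avg_peak = mean(r["peak"] for r in high_res)
print(avg_peak)  # 3.5
```

This declarative filter/aggregate style is where data-parallel frameworks shine, as opposed to the tightly coupled nearest-neighbour communication of the simulations themselves.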
A decade is no age for a language, though. Creating a language with a compiler can be done very quickly, but creating a good language with a good compiler and a good standard library takes time. And then it needs to catch on. That takes about a decade, or more.
Scala is 12 years old, Go is 6 years old and Clojure is 8 years old.
Yeah, good point. I just felt it might be misleading of the author to suggest Chapel as an alternative when you cannot possibly write a useful program with it.
There are numerous benchmarks implemented in Chapel, some of which are competitive with other implementations (see the paper referenced in the article). There is a growing standard library and literally thousands of test codes that represent a broad set of functionality. That said, Chapel is not yet a production-grade language, nor is it promoted as such.
Chapel may not be an appropriate replacement for all MPI programs, but it can be used for some programs today.
I agree with this, but it also sort of risks being a self-fulfilling prophecy; everyone uses MPI because everyone uses MPI, and no one uses Chapel yet because no one uses Chapel yet. At some point, those of us willing to be early adopters need to just start.
http://htor.inf.ethz.ch/