
The biggest hit in performance that people see is the lack of a JIT compiler. If you are able to write your code without loops, Octave performs comparably to Matlab. No, we still do not have a JIT compiler. It is difficult to attract this kind of talent, because the kind of people who use Octave know mathematics and engineering more than they know compilers and languages.
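To make that concrete, here is a minimal, illustrative sketch (sizes are arbitrary and timings will vary by machine) of what "without loops" means in practice: the explicit loop pays interpreter overhead on every iteration, which is exactly what a JIT would remove, while the vectorised form spends its time inside compiled built-ins.

  % Illustrative sketch: the same computation as an explicit loop
  % and as a vectorised expression.
  n = 1e6;
  x = rand (n, 1);

  % Explicit loop: every iteration goes through the interpreter,
  % which is the overhead a JIT compiler would remove.
  tic;
  y1 = zeros (n, 1);
  for i = 1:n
    y1(i) = 2*x(i)^2 + 3*x(i) + 1;
  endfor
  t_loop = toc;

  % Vectorised: the whole expression runs inside compiled built-ins.
  tic;
  y2 = 2*x.^2 + 3*x + 1;
  t_vec = toc;

  printf ("loop: %.3f s   vectorised: %.3f s\n", t_loop, t_vec);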



No. The biggest hit is not the JIT. Octave has less support for SIMD operations, whereas Matlab uses Intel's proprietary Math Kernel Library (MKL). You will notice that Octave is significantly slower in matrix and vectorized operations, where a JIT doesn't help much.
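To illustrate the distinction (a hypothetical micro-benchmark; the numbers depend entirely on the machine and the linked BLAS): a plain matrix multiply involves no interpreted loop at all, so a JIT would not change its speed, only the BLAS behind it would.

  % A pure matrix multiply: no user-level loop, so a JIT is irrelevant;
  % the time is spent in whatever BLAS (reference BLAS, OpenBLAS, MKL, ...)
  % Octave happens to be linked against.
  n = 2000;
  A = rand (n);
  B = rand (n);
  tic;
  C = A * B;    % dispatched to the BLAS dgemm routine
  toc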


Octave is more or less BLAS-agnostic. You can build it with any BLAS, including MKL, and people do this (but because the MKL licence is too restrictive, you can't redistribute the result). While you may see speedups this way, if your code isn't vectorised, as most code out there is not, you will still see big slowdowns.


Along with the myths about performance, the licence for the Intel stuff isn't what many people think it is... Anyhow, there's no need for MKL, since at least the operations that matter are as fast with OpenBLAS on x86 up to AVX512 (and infinitely faster on other architectures, where MKL doesn't run at all). You can get that with the Debian packaging of Octave, for instance, and you can switch it system-wide on RHEL/Fedora.

Thanks for working on Octave. I wish people would put money into it rather than (in the case of my university) paying enough for Matlab licences to fund one or two full-time staff who could develop and support free tools like Octave and R. Multiply that by N universities in this country alone...


I haven't studied the MKL license, but in the Python world, Christoph Gohlke is kind enough to freely distribute some Python packages (notably NumPy and SciPy) built against MKL.

http://www.lfd.uci.edu/~gohlke/pythonlibs/


I wonder if Octave would benefit from something like Graal, for some common JIT infrastructure.


We had a toy LLVM attempt that compiled a few trivial loops but hooked into the unstable C++ LLVM API. Nobody really knew how to maintain it, and we got tired of chasing the unstable API, so we let it bitrot to death.

I don't expect that any other JIT backend (libgccjit could be another possibility) would require any less knowledge of compilers and language design. Does Graal have some magic for us? Can it give us a JIT compiler even if most of us are not compiler writers or language designers?


From https://wiki.openjdk.java.net/display/Graal/Publications+and...

Forget “this language is fast”, “this language has the libraries I need”, and “this language has the tool support I need”. The Truffle framework for implementing managed languages in Java gives you native performance, multi-language integration with all other Truffle languages, and tool support - all of that by just implementing an abstract syntax tree (AST) interpreter in Java.

Truffle applies AST specialization during interpretation, which enables partial evaluation to create highly optimized native code without the need to write a compiler specifically for a language. The Java VM contributes high-performance garbage collection, threads, and parallelism support.

So you wouldn't need the know-how to write a compiler; instead you'd write an interpreter that Graal can optimize.


Maybe even hooking it up to an LLVM compiler would be good.


What kind of work and skills would it take? I would be free this summer and am looking to contribute to a good and worthy project (just for kicks and the bragging rights).


Would a language engine like v8 help in this case? Or is it the fact that you can't optimize the actual mathematical operations within loops?



