Australia has the most installed solar capacity per capita in the world, and has installed nearly 4 GW of solar in the last 12 months alone. They're also making strong moves on battery storage.
They started behind the curve, but they're moving quickly, relatively speaking at least.
Interestingly, "chief" is the same word as "chef" just imported from the Normans. French had a consonant shift, while English did not, and then imported the word again with the new pronunciation and spelling.
That vowel shift is not from French, but rather part of the English Great Vowel Shift [1]. The original loanword from Medieval Norman French to Middle English was /tʃeːf/ (spelled "chef"), compared to the re-borrowed /ʃɛf/.
I can see the argument for operator overloading being necessary, but I don't understand the argument for generics. There's basically never been a time I've wanted generics while coding Go, except a couple of times with float64 vs. []float64. I also see the need for float32 vs. float64, but that's a very small use case for generics in terms of scope (we can and do autogenerate the float32 code; see the sketch below).
There are a couple of cases with float64 vs. complex128 matrices, but I have been annoyed by those silent changes in Matlab where the answer is wrong but the code continues anyway.
Sorry, just saw your thing below. I see your point about [2]float64 vs. [3]float64, but that still feels like mostly an operator overloading thing (I realize it isn't exclusively). Most of the time I've dealt with that (say, [3][3]float64 vs. [2][2]float64), the contexts were different enough that generics would not have been useful, because there would still have to be type switching.
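To make the float32/float64 case concrete, this is roughly the shape of thing generics would buy, compared with autogenerating a float32 copy of each kernel. A minimal sketch using Go's type parameters; the name Dot and the constraint here are made up for illustration, not Gonum API:

```go
// Hypothetical generic dot product over float32 or float64 slices
// (illustrative only, not Gonum API). Assumes len(x) == len(y).
func Dot[T float32 | float64](x, y []T) T {
	var sum T
	for i, v := range x {
		sum += v * y[i]
	}
	return sum
}
```

Dot([]float64{...}) and Dot([]float32{...}) would then share one implementation instead of two generated copies.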
I don't use Gonum with a webserver + large calculations, so I can't answer definitively. No one has reported problems, but that could just be a lack of usage. One thing, though: matrix multiplication (which is a kernel for higher-level operations) is written in a blocked format, and the code can be preempted on any of those blocks, so I wouldn't expect it to be a problem.
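Roughly, a blocked multiply looks like this (a simplified sketch, not the actual Gonum kernel; the block size and names are illustrative). In practice each block's work goes through a kernel call, which is where the scheduler gets a chance to preempt:

```go
// Simplified sketch of blocked matrix multiplication, C += A*B, for n×n
// row-major matrices stored in flat slices.
func blockedMul(c, a, b []float64, n, block int) {
	for ii := 0; ii < n; ii += block {
		for jj := 0; jj < n; jj += block {
			for kk := 0; kk < n; kk += block {
				// One block's worth of work; in a real implementation this
				// inner piece is a separate kernel call, giving the scheduler
				// a natural point to preempt the goroutine.
				for i := ii; i < imin(ii+block, n); i++ {
					for k := kk; k < imin(kk+block, n); k++ {
						aik := a[i*n+k]
						for j := jj; j < imin(jj+block, n); j++ {
							c[i*n+j] += aik * b[k*n+j]
						}
					}
				}
			}
		}
	}
}

func imin(x, y int) int {
	if x < y {
		return x
	}
	return y
}
```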
Yeah, skimming your source it seems most of your loops involve calling some function, and even if that's inlined I believe the Go compiler will put a speculative yield call in there.
The algorithms are (basically) equivalent, and are translations from the Fortran (though row major instead of column major). As far as I know there are no major differences in the answers, though for extremely poorly conditioned matrices (condition numbers of 1e14 or so) you shouldn't expect consistent answers from any implementation.
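For reference, the main mechanical change in that translation is where element (i, j) lives in the flat backing slice (sketch; the helper names here are mine, not Gonum's):

```go
// Row major (Go/Gonum): rows are contiguous in memory.
func atRowMajor(a []float64, stride, i, j int) float64 { return a[i*stride+j] }

// Column major (Fortran/LAPACK): columns are contiguous in memory.
func atColMajor(a []float64, ld, i, j int) float64 { return a[j*ld+i] }
```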
The performance story is complex. Typically we're the same speed on small matrices (and Go is faster once you include the cgo overhead). We currently have significant speed penalties on large matrices (300x300 or so), but Kunde21 is working on assembly kernels for the BLAS functions to close that gap.
I'm surprised your performance is anywhere near that of standard BLAS implementations. The Go compiler doesn't support explicit SIMD or auto-vectorization, so that's a big performance gain just sitting there.
For small vectors and matrices the cgo overhead swamps the assembly speedups. For large vectors cache misses dominate, and the assembly doesn't matter as much. It does matter significantly for medium vectors and large matrices. In that case we provide cgo wrappers and are working on SIMD kernels.
We aren't at full feature parity, but we're pretty close. There are some big things we are missing (ODE, FFT), and we have a bunch of things they don't have (statistical distance measures being one example). We are trying to be pure Go, so it's not as simple as providing a wrapper API. Working on it, though!
The power method is not matrix-matrix multiplication (which is not N^3, BTW [1]), but rather repeated matrix-vector multiplication. So the power method is N^2 * k, where k is the number of iterations required to reach precision (usually polylogarithmic).
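As a rough sketch of where that cost comes from (plain Go, illustrative rather than any library routine): each iteration is one N^2 matrix-vector product plus a normalization, repeated k times.

```go
package main

import (
	"fmt"
	"math"
)

// powerMethod runs the power method on an n×n row-major matrix a.
// Each iteration is one O(n^2) matrix-vector product plus a normalization,
// so the total cost is O(n^2 * k) for k iterations.
func powerMethod(a []float64, n, iters int) []float64 {
	x := make([]float64, n)
	for i := range x {
		x[i] = 1 // arbitrary nonzero starting vector
	}
	y := make([]float64, n)
	for it := 0; it < iters; it++ {
		// y = A*x: the n^2 part.
		for i := 0; i < n; i++ {
			s := 0.0
			for j := 0; j < n; j++ {
				s += a[i*n+j] * x[j]
			}
			y[i] = s
		}
		// Normalize so the iterate stays well scaled.
		norm := 0.0
		for _, v := range y {
			norm += v * v
		}
		norm = math.Sqrt(norm)
		for i := range x {
			x[i] = y[i] / norm
		}
	}
	return x // approximates the dominant eigenvector
}

func main() {
	a := []float64{2, 1, 1, 2} // dominant eigenvector is (1, 1)/sqrt(2)
	fmt.Println(powerMethod(a, 2, 50))
}
```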
All this being said, scalability is _obviously_ a non-issue when talking about a matrix of programming languages. All methods are constant time.
Oh right, of course, because you're iterating the distribution. Duh.
And yeah, matrix multiplication is not N^3 in theory, but most implementations are. I've heard that some (MKL, maybe) get around 2.8, but I haven't had anyone point me to the code. My personal attempts at implementing Strassen were slower than a tuned N^3 implementation, at least for matrices that fit in memory.
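For reference, the ~2.8 figure matches Strassen's exponent: seven half-size multiplies per level gives T(n) = 7*T(n/2) + O(n^2), which solves to O(n^(log2 7)) ≈ O(n^2.807). Whether any particular vendor library actually uses it is a separate question.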