
If you're interested in the mathematical theory behind sub-cubic algorithms for matrix multiplications, you can start from here: https://en.wikipedia.org/wiki/Matrix_multiplication_algorith...
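
For a taste of how sub-cubic multiplication is even possible, here is a minimal sketch of Strassen's scheme, the first algorithm to beat n^3. The code is my own illustration, assumes numpy, and only handles square power-of-two sizes (real implementations pad odd sizes and tune the cutoff):

  import numpy as np

  def strassen(A, B, cutoff=64):
      # Sketch only: assumes square matrices whose size is a power of two.
      n = A.shape[0]
      if n <= cutoff:
          return A @ B  # ordinary product on small blocks
      k = n // 2
      a, b, c, d = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
      e, f, g, h = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
      # 7 recursive half-size products instead of 8
      m1 = strassen(a + d, e + h, cutoff)
      m2 = strassen(c + d, e, cutoff)
      m3 = strassen(a, f - h, cutoff)
      m4 = strassen(d, g - e, cutoff)
      m5 = strassen(a + b, h, cutoff)
      m6 = strassen(c - a, e + f, cutoff)
      m7 = strassen(b - d, g + h, cutoff)
      return np.vstack([np.hstack([m1 + m4 - m5 + m7, m3 + m5]),
                        np.hstack([m2 + m4, m1 - m2 + m3 + m6])])

Recursing on 7 products instead of 8 gives O(n^log2(7)) ~ O(n^2.807); the algorithms behind the current exponent are galactic and never used in practice.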

I conjecture that for every j > 0 in R, a number n exists so that any two n x n matrices can be multiplied together in O(n^(2+j)) steps.

(Now proven for 2+j = w = 2.3728596, i.e. for every j >= 0.3728596)




> I conjecture that for every j > 0 in R, a number n exists so that any two n x n matrices can be multiplied together in O(n^(2+j)) steps.

Is this stated correctly? Because it seems almost meaningless as stated. You start with "for every j, there exists an n such that...". That would mean that for the rest of the statement, n and j are constant. So you are just saying that you can multiply constant sized matrices in constant time. Technically true, but I feel like you are trying to claim something stronger.
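
Spelled out, the difference in quantifier order (writing A_j for a hypothetical per-j algorithm; the notation is mine, not the parent's):

  Stated:   \forall j > 0 \; \exists n : two n x n matrices multiply in O(n^{2+j})   -- n is then fixed, trivially true
  Intended: \forall j > 0 \; \exists A_j \; \forall n : A_j multiplies n x n matrices in O(n^{2+j})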


It should simply say:

for any j > 0 there exists an algorithm multiplying n x n matrices in time O(n^{2+j}).


You are correct, I apologize for the confusion! :)


Assuming your claim is not equivalent to "matrix multiplication is in O(n^2)", it is false under conventional mathematics (i.e. where uncountable sets exist): j ranges over the uncountable set R, but there are only countably many algorithms...


You do not need a different algorithm for each real. Just take the algorithm that proves the statement for a rational p: it proves the statement for all reals bigger than p. (Hence having as many algorithms as there are rationals, countably many, is enough.)


Then that's basically saying there is an O(n^2) algorithm, which I dealt with in my first sentence.


Are algorithms on a Real RAM [1] countable? That would mean that there are constants you can't express.

[1] https://en.wikipedia.org/wiki/Real_RAM


It does seem to be getting harder to make progress over time, though. Maybe it'll bottom out at j = 1/e. I won't even call my guess a conjecture; it just happens to be a convenient constant near the current value. It would be a funny prank for math to pull on us.


Predicting that this holds for any j > 0 seems rather bold. Would you care to share your intuition why you think that's the case?


Two N x N matrices can be multiplied naively with the schoolbook algorithm in O(N^3) steps.
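
Concretely, the schoolbook version is just three nested loops; a quick sketch (mine, in Python, with matrices as lists of lists):

  def schoolbook(A, B):
      # N^3 multiply-adds for two N x N matrices
      N = len(A)
      C = [[0] * N for _ in range(N)]
      for i in range(N):
          for j in range(N):
              for k in range(N):
                  C[i][j] += A[i][k] * B[k][j]
      return C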

It's also clear that any algorithm needs at least on the order of N^2 steps (a lower bound of Omega(N^2)): each matrix has N^2 entries, and merely accessing each element once already requires a double for loop:

  for i in range(N):
      for j in range(N):
          # read matrix1[i][j] and matrix2[i][j]: already N^2 accesses

so it has to be j >= 0


His question was: what is the reasoning behind there existing an algorithm running in time O(n^(2+epsilon)) for arbitrarily small epsilon > 0?


Yeah, we're agreed that j cannot be less than 0. But your conjecture was about _every j > 0_. Do you have any specific line of reasoning that suggests j can be arbitrarily close to 0 (i.e. that 0 is the greatest lower bound)? Why do you not think there's some other specific limit k \in (0, 0.3728596] beyond which j cannot be improved?



