
Predicting that this holds for any j > 0 seems rather bold. Would you care to share your intuition why you think that's the case?


Two N x N matrices can be multiplied naively with the schoolbook algorithm in O(N^3) time.

It's clear that any algorithm needs at least Omega(N^2) time, because it must read each of the N^2 entries of the input matrices at least once, which already requires a double for loop:

  for i in rows
      for j in cols
          # do something with elements matrix1[i, j], matrix2[i, j], ...

so for any running time of the form N^(2+j), it has to be that j >= 0.
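For concreteness, the schoolbook O(N^3) algorithm just adds a third loop to the double loop above. A minimal Python sketch (the function name matmul and the list-of-lists representation are just for illustration):

```python
# Naive schoolbook multiplication of two square matrices
# given as lists of lists: three nested loops, O(N^3) time.
def matmul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):          # rows of A
        for j in range(n):      # columns of B
            for k in range(n):  # inner dimension
                C[i][j] += A[i][k] * B[k][j]
    return C
```

The double loop over (i, j) is exactly the Omega(N^2) part; the inner loop over k is what fast algorithms like Strassen's manage to shave down.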


His question was: what is the reasoning behind there existing an algorithm running in time n^(2+epsilon) for arbitrarily small epsilon > 0?


Yeah, we're agreed that j cannot be less than 0. But your conjecture was about _every j > 0_. Do you have any specific line of reasoning which suggests that j can be arbitrarily close to 0 (0 is the greatest lower bound)? Why do you not think there's some other specific limit k \in (0, 0.3728596] beyond which j cannot be improved?



