If you have not solved the Rubik's cube (or have not investigated it in general), I highly advise you to try solving it on your own. Just read nothing about it.
I did it with the help of programming and it is one of the coolest problems I solved (finding an algorithm to solve any instance).
Although I used a computer, it took me something like one or two months to really crack it, and a lot of RAM to come up with a working solution.
Fun fact: My first approach (I just knew there was this colorful cube problem, but not that it's that hard) was a simple recursive Haskell program where I thought I could just brute-force the solution in an afternoon (I did not know about the problem's math at that point, but looked it up afterwards) ... damn, was I wrong.
I remember that the approach involved restricting the operations (turning a specific side) on the cube: by allowing just 1 side to rotate one gets len([0, pi/2, pi, 3pi/2]) == 4 reachable states, but by allowing _2_ sides to rotate the number jumps to something like ~4*10^7 combinations already.
At that point my simple, wonderful Haskell program just ate up memory, and I had to rewrite it in another language (after first solving the 2x2x2 cube to keep my motivation). I tried Python (very slow) and Java (which was the fastest, but I could not get rid of some GC panics), so in the end it was good old C that found the solution, encoding one state into 24 bytes and building various fat, unbalanced binary trees that provided lookup tables for specific phases of the algorithm.
Granted, my solution was not the best, but my point is that I would not have found it via Haskell: I was interested in specific upper bounds on sub-problems, and being memory constrained was a nightmare with Haskell (my initial program was nice, it type-checked and for sure terminated, but termination would have required probably a petabyte of RAM - in any case far too much).
Being lazily evaluated, Haskell makes it hard to reason about performance in anything more than trivial code (which a Rubik's cube solver certainly is).
All those suspended computations may eat your RAM for no good reason at all. For example:
sum (replicate n 0)
has an O(n) space leak, while
foldl' (+) 0 (replicate n 0)
works fine, in O(1) space.
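To make that concrete, here is a small self-contained version of both snippets (the `main` wrapper and the value of `n` are mine); depending on the GHC/base version and optimization level, the first print can pile up roughly n unevaluated thunks while the second stays flat:
import Data.List (foldl')  -- the strict left fold

main :: IO ()
main = do
  let n = 10000000 :: Int
  print (sum (replicate n 0))          -- lazy fold: can accumulate n (+)-thunks
  print (foldl' (+) 0 (replicate n 0)) -- forces each partial sum: constant space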
My point is that it is easy to eat all your RAM using Haskell even in the simple `sum` case above, and more so in more involved computations.
Of course, if you can compute your solution in constant space using C, you can also do it in Haskell. But writing it in Haskell requires careful thought about how your code is evaluated.
EDIT: It is about trade-offs.
If you want declarative and readable code, with little memory-layout noise (allocate memory, assign variables, etc.), then Haskell is a reasonable choice. You can write fairly readable code for complex algorithms. If you are an expert, you can also make it efficient on the first go (speed- and space-wise). If speed is of utmost importance, then you can write more compiler-specific code, but with a lot of added noise.
Or you can start with C and write less declarative, less readable code (than Haskell). It will be harder to get right (the more complex the computation, the harder to debug), but when you are done you probably have efficient code. If you have the experience, you may be able to make it fairly readable too.
Also, Haskell recently gained a nice LANGUAGE Strict pragma. It won't save you from things like lazy pairs or other imported things, of course, but it can at least be good for one-file scripts where you want to solve Rubik's cube.
But if you try it in GHCi you will see that it breaks (GHC 8.0.1). So it depends on how you "interpret" the code. `ghc -O` is sophisticated enough to make the right decisions when compiling your code. You probably don't want a space leak, so it doesn't generate one.
GHC will have more trouble making the right choice in more involved computations. It will revert to "just do what the coder says".
Consider this C code for example:
unsigned int sum = 0;
const unsigned int n = 1000;
for (unsigned int i = 0; i < n; i++) {
    sum += 0;
}
If you look at it, you can tell this is O(1) in space for any valid `n`. A sophisticated enough compiler may replace it with
unsigned int sum = 0;
and be done with it. But even an unsophisticated (but correct) compiler will produce O(1)-space code.
EDIT: format
EDIT: What I was trying to say in my original comment is that, in Haskell, space leaks may appear even in the most innocent-looking code, and that it may require a bit of experience to identify and avoid them.
Well, sure, running code in an interpreter does fewer optimizations than a full compilation. I can write an interpreter for your C code which runs out of memory if you want :) So your "you can tell" depends on the particular compiler or interpreter implementation, and no, you can't tell what my compiler will do.
I'll have to agree with that. But then I would also say that I had to make some assumptions, right? :)
I think GHCi is helpful in this situation, however. It clearly distinguishes between leaky and non-leaky code. This leak is a product of "just doing what the coder says" (go to "Data.Monoid.sum" and do what it says there with such and such...). I'm assuming that, for this code, GHCi is bug-free.
I think
sum (replicate n 0)
is as dangerous as
foldl (+) 0 (replicate n 0)
or
foldr (+) 0 (replicate n 0)
. `sum (replicate n 0)` may be the right thing to do in some cases unknown to me.
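Roughly, this is where the space goes: before anything is actually added, the lazy folds expand into nested expressions whose size grows with the list, while foldl' collapses as it goes. A hand-traced sketch (not tool output):
-- foldl  (+) 0 [0,0,0]  ~>  ((0 + 0) + 0) + 0   -- builds a chain of thunks on the heap
-- foldr  (+) 0 [0,0,0]  ~>  0 + (0 + (0 + 0))   -- nests applications just as deeply
-- foldl' (+) 0 [0,0,0]  ~>  forces each partial sum immediately, so nothing piles up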
There is `{-# LANGUAGE Strict #-}`, as you mention, and BangPatterns for strict arguments. But I would assume those are used only by coders already familiar with Haskell and its leaky pitfalls.
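For illustration, a hand-rolled strict accumulator with BangPatterns might look like the sketch below (the function name is made up); `{-# LANGUAGE Strict #-}` would add the bangs for you, but only inside your own module, not in imported code like Prelude's `sum`:
{-# LANGUAGE BangPatterns #-}

-- Strict left fold written by hand: the bang on `acc` forces each partial
-- sum before the next recursive call, so memory stays flat.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []     = acc
    go !acc (x:xs) = go (acc + x) xs

main :: IO ()
main = print (sumStrict (replicate 10000000 0))  -- prints 0 in constant space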
This is one of my biggest regrets: learning to solve it at a young age (10) and not actually solving it, but rather learning an existing algorithm. I have had much satisfaction as an adult in solving 4x4 and 5x5 cubes, but even that tends to be derivative of existing 3x3 solutions (though I did create some sequences myself, and those proved easier to scale than traditional ones).
Even then <spoiler alert>, I recall vividly the moment my 10-year-old self understood the 3-dimensional nature of "a side", and how a side was not solved until the top row of each of its neighbors was also correct. It was quite an intellectual thrill for a 10-year-old to make that transition.
These days I try to focus on block building rather than memorized sequences. It's a little more intellectually stimulating. If I do go the memorized sequence route, I try to string 2 such sequences together.
See what's happening here, boys and girls? Here I am, barely remembering the first steps of a `simple` algorithm to solve the cube, and someone from the Internet regales us with a story of how he not only solved the cube but wrote some recursive Haskell mathemagic to do it for him.
They say we shouldn't compare ourselves to others too much, but you're sometimes kind of making it real hard, HN :).
You don't really need to be a mathematician to do it. All it takes is patience and being good at spotting patterns and observing in general. My brother-in-law solved one before the Internet era all by himself. It took him a few months, but he eventually did it.