
> program isomorphism in general is basically the halting problem.

How far does this go? On the face of it, having some system determine whether program X halts at all could be different from determining whether program X halts given that some program Y halts. The fact that program Y halts would, I guess, count as an additional axiom in the system, and depending on the details of program Y it could be more or less helpful in determining that X halts?




Yep! If you want to learn a lot more about this stuff, search for the buzzword "codata." Basically, we can define type systems that "wall off" a fragment of the language (which may or may not contain all the things you want to compute in practice; we don't know!) that follows simple recursive rules built up from a base case. This fragment is called "data", acting on it programmatically is called "recursion", and proving things about it is called "induction". This fragment is not Turing-complete, so the halting problem is decidable for it. Everything else is "codata", acted on with "corecursion" and reasoned about with "coinduction".
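
To make that split concrete, here's a sketch in Haskell-ish notation (Haskell itself doesn't enforce the data/codata distinction -- total languages like Agda or Coq do -- and the names `Stream`, `nats`, and `takeS` are just for illustration):

    -- Codata: a Stream has no base case, so every value is infinite.
    data Stream a = Cons a (Stream a)

    -- Corecursion: instead of consuming one constructor per call,
    -- we *produce* one constructor per step. Coinduction reasons about
    -- such definitions by checking that every step is productive.
    nats :: Integer -> Stream Integer
    nats n = Cons n (nats (n + 1))

    -- Observing a codata value only ever forces finitely many constructors.
    takeS :: Int -> Stream a -> [a]
    takeS k (Cons x xs)
      | k <= 0    = []
      | otherwise = x : takeS (k - 1) xs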

One super-awesome thing about the "data" used in codata systems is that, once you've proven that something halts, there is no formal distinction between lazy and eager evaluation -- it simply becomes a choice that the compiler makes, perhaps guided by pragmas inserted by a smart programmer.
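
As a sketch of that claim in plain Haskell (the names here are mine, purely for illustration): for a total computation over finite data, a lazy version and a strictness-annotated version must agree on the result, so only the operational cost differs.

    {-# LANGUAGE BangPatterns #-}

    -- Lazy right fold: builds the sum as nested thunks.
    sumSquares :: [Integer] -> Integer
    sumSquares = foldr (\x acc -> x * x + acc) 0

    -- Eager left fold with a strict accumulator: same answer on any
    -- finite list, different operational behaviour.
    sumSquaresStrict :: [Integer] -> Integer
    sumSquaresStrict = go 0
      where
        go !acc []       = acc
        go !acc (x : xs) = go (acc + x * x) xs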

One not-so-awesome thing: consider the usual definition of the natural numbers, `data Nat = Zero | Succ Nat`, where every number is either zero or the successor of another natural number. The obvious notion of recursion here is stripping off one `Succ` per recursive call. Anything that follows that pattern is proven to halt, so it's a whitelisted computation that can be used in other data-computations.
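
For instance (a sketch in the same notation; `plus` is just an illustrative name):

    data Nat = Zero | Succ Nat

    -- Structural recursion: each call strips exactly one Succ off the
    -- first argument, so a termination checker accepts it with no hints.
    plus :: Nat -> Nat -> Nat
    plus Zero     n = n
    plus (Succ m) n = Succ (plus m n)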

But now, consider the usual algorithm for GCD:

    gcd' m n | n == 0    = m
             | m < n     = gcd' n m 
             | otherwise = gcd' n (mod m n)

Is this corecursive or recursive? It's clearly recursive. But does the computer know that? We have to clue it in. We need to tell it that our definition of recursion here -- "it's recursive if we only go from n to n - 1" -- generalizes to a larger theorem: "it's recursive if we only go from n to some number strictly less than n". Even then, we have to prove that `mod m n < n`, and the system needs to be guided to understand that it's okay if `m` increases as long as `n` is strictly decreasing (the arguments can get swapped along the way, after all).

So these are your "additional axioms in the system." They come up when your "recursion" method is not the most obvious way to traverse the data structure; in this case we're using argument swaps and jumping around in the graph of N to traverse two data structures simultaneously, and that creates a bunch of headaches.
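
One way to see the measure (a sketch only -- `gcd''` is an illustrative name, and plain Haskell won't actually check any of this) is to restate the function so that the strictly decreasing argument is the obvious one: assuming non-negative inputs, the second argument shrinks on every call because `mod m n < n` whenever `n > 0`.

    gcd'' :: Integer -> Integer -> Integer
    gcd'' m 0 = m
    gcd'' m n = gcd'' n (m `mod` n)   -- second argument: mod m n < n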


Wow, thanks a lot for the detail! That is quite a bit more practical than I imagined.



