There are just many more points of failure in a computer proof than in a human proof (I don't like calling the latter "theoretical"; that doesn't seem like a useful distinction, and it carries connotations of not being "real" when it's opposed to "practical"). Maybe they used numbers so big that there was an overflow. Maybe they relied on some library that itself had bugs that didn't express themselves until it was run on large numbers. Hell, even bits flipping due to cosmic rays is a legitimate concern, and not as far-fetched as you might think.
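
To make the overflow failure mode concrete, here is a small C sketch (a hypothetical checker loop, not anything from the actual proof under discussion): the same sum is accumulated in 32 and 64 bits, and the 32-bit total silently wraps, so a verdict based on it can come out wrong with no diagnostic at all.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical sketch: the same sum accumulated in 32 and 64 bits.
       Unsigned 32-bit arithmetic wraps mod 2^32 with no runtime warning,
       so a check based on the narrow total can pass or fail for the
       wrong reason. */
    int main(void) {
        uint32_t narrow = 0;
        uint64_t wide = 0;
        for (uint32_t i = 1; i <= 100000; i++) {
            narrow += i * i;            /* wraps silently past 2^32 */
            wide   += (uint64_t)i * i;  /* exact */
        }
        printf("32-bit sum: %u\n",   (unsigned)narrow);
        printf("64-bit sum: %llu\n", (unsigned long long)wide);
        return 0;
    }

The two totals disagree by a huge margin, and nothing in the program's output hints that anything went wrong in the narrow one.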

There are different kinds of pitfalls in computer proofs, and I wonder how we can account for them. The errors in a computer program are far less forgiving than the errors in a human proof. A typo in a human proof doesn't grind the whole proof to a halt, and the human reader can usually correct it mentally in passing, but computer proofs are more brittle.
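
As an illustration (a minimal Lean 4 sketch, not from the proof in question), a one-character typo turns a valid proof into one that fails to check at all:

    -- The intended proof: commutativity of addition on Nat,
    -- discharged by the core lemma Nat.add_comm.
    theorem myComm (a b : Nat) : a + b = b + a := Nat.add_comm a b

    -- Drop one character and the whole file fails to elaborate:
    -- theorem myComm' (a b : Nat) : a + b = b + a := Nat.add_com a b
    -- (Nat.add_com is an unknown identifier, so checking stops here.)

A human referee would read straight past add_com where add_comm was meant; the kernel will not.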




Eh... A decent compiler will cause a proof to grind to a halt on a typo.

I don't really see the difference, honestly, besides quantitative complexity. A computer-run exhaustive proof is less brittle than a manual exhaustive proof. I think the complexity of a proof is just a function of its length, and for a computer proof that length includes the toolchain.



