Hacker News

So Gödel proved that there are true statements that are unprovable. My hunch is that there is a fine-grained version of this result: that for any given conjecture there is some distribution on the length of its proof. If true, that would mean we'd better get used to dealing with long, nasty proofs, because they are a necessary part of mathematics... perhaps even, in some kind of Kolmogorov-complexity-esque fashion, the almost-always bulk of it.
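The Kolmogorov-complexity analogy can be made concrete with the standard counting argument (a toy sketch about strings and descriptions, not about proofs specifically — the function name and parameters here are illustrative): there are 2^n binary strings of length n, but fewer than 2^(n-c) descriptions shorter than n-c bits, so at most a 2^(-c) fraction of strings can be compressed by c or more bits. The analogous intuition for proofs would be that short proofs are scarce.

```python
def frac_compressible(n: int, c: int) -> float:
    """Upper bound on the fraction of length-n binary strings that
    admit a description shorter than n - c bits.

    There are 2**n strings of length n, but only 2**(n - c) - 1
    descriptions of length < n - c, so the compressible fraction
    is strictly less than 2**(-c)."""
    return (2 ** (n - c) - 1) / 2 ** n

# Fewer than 1 in 1024 of the 30-bit strings can be compressed
# by 10 or more bits, regardless of the description language.
print(frac_compressible(30, 10))
```

The bound is language-independent up to an additive constant, which is why "most objects are incompressible" survives any choice of formal system.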



Agree that something like this does seem likely. This line of thought also highlights the work of Chaitin, and the fact that the current discussion around AI is just the latest version of the early-2000s quasi-empiricism[1] debate that never really got resolved. Things like the 4-color theorem would seem to be just the puny tip of a really big iceberg, and it's probably not going away.

The new spin on these older unresolved issues IMHO is really the black-box aspect of our statistical approaches. Lots of mathematicians who are fine with proof systems like Lean, and with some million-step process that can in principle be followed, are also happy with more open-ended automated search and exploration of model spaces, proof spaces, etc. But can they ever really be happy with a million-gigabyte network of weighted nodes masquerading as some kind of "explanation"? I'm not a mathematician, but I sympathize. Given the difficulty of building/writing/running it, that looks more like a product than like "knowledge" to me (compare this to how Lean can prove Gödel on your laptop).
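For readers who haven't used a proof assistant, the "in principle followable" property looks like this — a toy machine-checked proof in Lean 4 (nothing to do with Gödel; just an illustration of every step being inspectable, which is exactly what a network of weights doesn't give you):

```lean
-- Right identity of addition: holds by definitional reduction.
theorem add_zero' (n : Nat) : n + 0 = n := rfl

-- Left identity requires induction; each case is an explicit,
-- checkable step rather than an opaque statistical judgment.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

The kernel re-checks each step mechanically, so trusting the theorem only requires trusting a small checker, not the process that found the proof.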

Maybe the bitter pill of poor-quality explanations is easier to swallow once the technology itself is a little easier to actually handle. People hate ugly things less if they are practical, and actually something you can build pretty stuff on top of.

[1] https://en.wikipedia.org/wiki/Quasi-empiricism_in_mathematic...


I’m not sure that’s quite true. Say the proof of proposition P requires a minimum of N symbols. You could prove it in one paper that’s N symbols long and that nobody can read, or you could publish k readable papers, with an average length on the order of N/k symbols, and develop a theory that people can use.

I think even if N is quite large, that just means it may take decades or millennia to publish and understand all k necessary papers; but maybe it's still worth the effort, even if we could get the length-N paper right away. What are you going to do with a mathematical proof that no one can understand anyway?



