The complete story of Gödel incompleteness (billwadge.com)
81 points by herodotus on March 12, 2024 | 34 comments


> If we allow experimental evidence

That's a remarkably big if. (Consider the history of the parallel postulate: https://en.wikipedia.org/wiki/Parallel_postulate#History )


> With a little more effort we can establish the same for any enumerable set of axioms.

This is a case of presentism: it is only because of Gödel's work that we now accept the implications of his findings and their costs.

RE is semi-decidable, meaning we can confirm yes-instances in finite time, but a no-instance may never be rejected.

RE ∩ co-RE = R, but the author is only referencing RE.
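To make the asymmetry concrete, here's a minimal Python sketch (my own toy example, not from the article):

    from itertools import count

    def semi_decide(search_succeeds_at):
        """Generic semi-decider for an RE set: run an unbounded search
        and accept if it ever succeeds. Yes-instances return True in
        finite time; no-instances never return (we can't answer 'no')."""
        for step in count():
            if search_succeeds_at(step):
                return True
        # unreachable: there is no branch that returns False

    print(semi_decide(lambda x: x + x == 10))  # True, after 5 steps
    # semi_decide(lambda x: x + x == 7) would loop forever; that
    # one-sided behavior is exactly RE without co-RE.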

Failure as negation has its place, but it is also what will probably block strong AI.

Also note that Presburger arithmetic, i.e. FOL with (+, =) (or, separately, with (*, =)), is fully decidable. But its decision procedures require doubly exponential time in the worst case (the Fischer-Rabin lower bound), which isn't practical.
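For a feel of what "fully decidable" means in practice, a small sketch using the z3 SMT solver (my choice of tool, not the commenter's; requires pip install z3-solver), which can decide quantified linear integer arithmetic:

    from z3 import Int, ForAll, Exists, Or, Not, Solver

    x, y = Int('x'), Int('y')
    # A Presburger sentence: every integer is even or odd.
    sentence = ForAll([x], Exists([y], Or(x == 2 * y, x == 2 * y + 1)))

    s = Solver()
    s.add(Not(sentence))  # valid iff the negation is unsatisfiable
    print(s.check())      # z3 should report unsat, i.e. the sentence holds

The decision procedure exists; the doubly exponential worst case is why nobody builds general mathematics on it.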

Same with attention in feed-forward networks requiring exponential time, with a reduction in expressivity.

This is a lower bound for negation through exhaustion in zeroth-order logic, via the Strong Exponential Time Hypothesis (SETH).

The loss of reliable negation (¬) in all but the weakest logics has, in the general case, huge practical implications for all but the most trivial problems.

Unless you are lucky enough to have a fully decidable (i.e. recursive) model, the costs are high.

PA was shown to be consistent via transfinite induction (Gentzen's proof, using induction up to ε₀), BTW. But that works because the ordinals up to ε₀ are well-ordered.

But transformers with soft attention being limited to solving problems in TC^0, with failure as negation, is a very real cost of Gödel's incompleteness theorems.


> Failure as negation has its place, but it is also what will probably block strong AI.

Interesting, could you please explain more on this?


The deduction principle turns out to be invalid for weak rejections, and strong AI requires it.

Consider strong negation:

Did Homer write the Iliad? No, he did not.

Vs. a (too computationally expensive) three-valued system with true, false, other, and no knowledge of Homer:

Did Homer write the Iliad? No, Homer did not exist.

Vs. negation as failure in binary-threshold ANNs:

Did Homer write the Iliad? No.
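A toy Python sketch of the contrast (the knowledge base and its contents are hypothetical, just to make the answer modes concrete):

    from enum import Enum

    class Truth(Enum):
        TRUE = "yes"
        FALSE = "no"
        UNKNOWN = "unknown"

    FACTS = set()    # nothing recorded about Homer at all
    REFUTED = set()  # no explicitly known negations either

    def negation_as_failure(fact):
        """Closed world: whatever isn't derivable is flatly 'no'."""
        return Truth.TRUE if fact in FACTS else Truth.FALSE

    def strong_negation(fact):
        """Open world: 'no' only when the refutation itself is known."""
        if fact in FACTS:
            return Truth.TRUE
        if fact in REFUTED:
            return Truth.FALSE
        return Truth.UNKNOWN

    q = ("wrote", "Homer", "Iliad")
    print(negation_as_failure(q))  # Truth.FALSE -- the bare "No."
    print(strong_negation(q))      # Truth.UNKNOWN -- can't assert he didn't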

Transformers can explicitly find known unknowns, unknowable unknowns (e.g. future unknowns), etc.

But exhaustive unknowns may or may not be valid.

There is a lot more to it, but strong AI requires universal quantification, while ML is existential quantification.

The above is just one way to think about why word-sense disambiguation and ATP are thought to be AI-complete.


How relevant are these considerations for practical use cases?

Specifically, for all practical purposes, it is sufficient to have probabilistic guarantees. Suppose an AI were able to generate mathematical proofs as well as humans do; it wouldn't really matter if in theory "they are limited to solving problems in TC^0 with failure as negation".


While TC^0 can divide or multiply (through, say, FFT), it actually can't do counters or addition, which I am pretty sure requires TC^1, i.e. log-depth threshold circuits.

LSTM is much more powerful.


> Same with attention in feed-forward networks requiring exponential time, with a reduction in expressivity.

Huh? Can you say more about that? What takes exponential time with transformers?


> We prove that the time complexity of self-attention is necessarily quadratic in the input length, unless the Strong Exponential Time Hypothesis (SETH) is false. This argument holds even if the attention computation is performed only approximately, and for a variety of attention mechanisms.

https://proceedings.mlr.press/v201/duman-keles23a/duman-kele...
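For reference, here is what the quadratic cost is about: a NumPy sketch of naive single-head self-attention (my own illustration; no masking, weights passed in):

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        """Naive self-attention over n tokens: the n x n score matrix
        is the quadratic bottleneck the paper argues is unavoidable
        under SETH, even approximately."""
        n, d = X.shape
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(d)                 # O(n^2 d) time, O(n^2) space
        scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
        w = np.exp(scores)
        w /= w.sum(axis=-1, keepdims=True)            # row-wise softmax
        return w @ V

    rng = np.random.default_rng(0)
    n, d = 512, 64
    X = rng.normal(size=(n, d))
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    out = self_attention(X, Wq, Wk, Wv)  # doubling n quadruples the score work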


Thanks! But quadratic time complexity (which I had heard attention requires, and which doesn't surprise me) is not exponential time.

I acknowledge that they relate this to the SETH assumption (which surprises me a little). But this doesn't mean that transformers take exponential time. I don't think I understood what you meant by the "with a reduction in expressivity" part of the statement, so maybe that is why I'm not following.


Though note that was a question of possible redundancy, not possible inconsistency.

Still, the outcome is probably roughly the same. We'd replace the inconsistent axiom with something similar and 99.99% of existing practical math would still follow, and only a few intentionally specious constructions would fail. (It kind of begs the question of whether math follows from the axioms we want or axioms follow from the math we want). Plus perhaps some new math would start to unfold as we begin to explore the inconsistent axiom's subtleties.


> (It kind of begs the question of whether math follows from the axioms we want or axioms follow from the math we want). Plus perhaps some new math would start to unfold as we begin to explore the inconsistent axiom's subtleties.

Only tangentially related, but the same idea comes to mind reading Terence Tao's masterpiece on "Smoothed asymptotics" for divergent infinite sums (e.g. the infamous 1+2+3+4+… = -1/12):

https://terrytao.wordpress.com/2010/04/10/the-euler-maclauri...

Our intuitive interpretation (Σn must be infinite! and surely positive! never -1/12) fails miserably for such infinite series, in the sense that "practical experiments" (QM) hint at reality preferring that bizarro -1/12 interpretation instead. Who is at fault here – our seemingly iron-clad intuition or the experiments? And why the disconnect?
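The key identity from Tao's post, as best I recall it (so treat this as a sketch): for a smooth compactly supported cutoff η with η(0) = 1,

    \sum_{n=1}^{\infty} n\,\eta(n/N) = -\frac{1}{12} + C_{\eta} N^2 + O(1/N),
    \qquad C_{\eta} = \int_0^{\infty} x\,\eta(x)\,dx

The divergence is quarantined in the C_η N² term, which depends on the cutoff; the -1/12 is the universal constant term that survives any choice of η, and that's the piece the physics sees.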

Like you say, what new math unfolds once we accept and internalize this new interpretation and adjust our intuition? Tao's piece offers an excellent basis for that. While we may come up with any interpretations and axioms we like, experiment is the final arbiter on which of these "math worlds" are real.


Would anyone happen to have a recommendation for someone hoping to make Gödel's incompleteness theorem "click"? It feels like every time I reapproach it, I have to start the intuition building all over again.


Why are you interested? I recommend the short book "Gödel's Proof" (Nagel and Newman).

Gödel's work is wrapped up in historical context which is both interesting and distracting from the core idea. For example, Bertrand Russell's work and book aren't really essential; it's just the system Gödel worked in to do his proof.


Gödel Without (Too Many) Tears, by Peter Smith. Also his An Introduction to Gödel’s Theorems.

https://www.logicmatters.net/books/


It's very entry-level (my level :)) but I found Veritasium's video about it really clear and instructive: https://www.youtube.com/watch?v=HeQX2HjkcNo


There's a wonderful book on the subject by Raymond Smullyan, of knights-and-knaves recreational math puzzle fame, which helped make it click for me.

https://lib.undercaffeinated.xyz/get/pdf/5823


K&K I would recommend as a first read on logic. I found GEB to be a bit long-winded.


The corresponding section in the beginning of Scott Aaronson’s “Quantum computing since Democritus” (the online version works[1]; see also the related blog post[2]). It’s short and to the point (unlike GEB) and enough to understand why something in this vein must be true. It’s not detailed enough, though, to understand each of the technical conditions in the standard statement individually (no explanation of ω-consistency and the like).

[1] https://www.scottaaronson.com/democritus/lec3.html

[2] https://scottaaronson.blog/?p=710


Is there an entry-level explanation that explicitly goes over the following points? Let me illustrate:

>The Incompleteness Theorem says that, given any consistent, computable set of axioms, there's a true statement about the integers that can never be proved from those axioms.

Upon reading something like this I immediately have questions like: if this is so, then how do we know that this statement about the integers is true at all? What does it mean for something to be true within a set of axioms when you can't prove it? Why don't we say that the truth of this statement, within those axioms, is undetermined? If we, on the outside, know that it's true, why can't we forcefully plug that truth back into the theory?

OK, I know the answer to that last one: you can do it, but then you can also do an incompleteness proof for the new theory. But still, if the "problem" is only with self-referential statements, why can't we somehow isolate all self-referential statements and have a theory that's complete and consistent except for some caveats? That seems vastly better than just incomplete, period.

Sorry if that makes no sense; I know this topic is famous for attracting cranky discourse. It just feels like all the popular explanations stop just short of really grappling with the real weirdness of the theorem.


Right, so definitely check Aaronson; the entirety of what I linked is a couple dozen pages at most.

But the statement you quoted, while true, is deliberately provocative. There are two standard statements:

- The one for the First Incompleteness Theorem implies the existence of a statement P independent of the theory, i.e. one that can be neither proved nor disproved using it. (That is, the theory proves false iff theory + P proves false iff theory + not-P proves false.) Here we don't opine on whether P is true or not.

- The one for the Second Incompleteness Theorem points to a specific statement: the consistency of the theory. (That you can state this within the theory itself is nontrivial, but it is also much of the point of, and the reason for, the whole hubbub.) Here we know it's true because we assumed it! That is, in the metalogic which we're (implicitly, unless you dive in very deep) using to reason about all this, we have the disjunction (put in symbols after the list):

- Either the theory proves false, then it proves anything at all (boring);

- Or it does not prove false, then it does not prove th(e true statement th)at it does not prove false.
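In symbols (my own notation, not from the thread), writing Con(T) for the theory's consistency statement:

    \mathrm{Con}(T) \;:\equiv\; \neg\,\mathrm{Prov}_T(\ulcorner 0 = 1 \urcorner),
    \qquad T \vdash \bot \quad\text{or}\quad T \not\vdash \mathrm{Con}(T)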

Why we can't just cut out self-referential stuff in our logic... for the same reason we can't (usefully) just cut out infinite loops in our programming language, I think: it doesn't take much to have either of those. The statements are not self-referential in any exotic way. What you need is, AFAIU, for the theory to be able to talk about (at least in terms of halting or not) programs in a Turing-complete language, a proof verifier for the theory written in that language, and an interpreter for itself in it. Encoding all of that in Peano arithmetic is predictably awful (especially if your name is Gödel, you have never seen a programming language, and Turing completeness hasn't been invented yet). But none of it is exactly advanced functionality for a logic system; the self-interpreter isn't even a part of it, it's just something all Turing-complete languages can do. (This part is more wishy-washy than I'd like, but that's what you get for asking an amateur.)
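A sketch of that standard reduction in Python, with the theory-specific pieces left as hypothetical stubs (none of this is from the comment; it's the textbook argument that a consistent, complete, computably axiomatized theory T would let you decide halting):

    from itertools import count

    def verifies(proof: str, statement: str) -> bool:
        """Hypothetical proof checker for T. Decidable by assumption
        (T is computably axiomatized), but left as a stub here."""
        raise NotImplementedError

    def all_strings():
        """Enumerate every finite binary string (candidate proofs)."""
        for n in count(1):
            for i in range(2 ** n):
                yield format(i, f"0{n}b")

    def would_decide_halting(program: str, inp: str) -> bool:
        """If T proved or refuted every statement, this would terminate
        on every input and decide the halting problem. It can't, so no
        such T exists."""
        claim = f"{program} halts on {inp}"
        for proof in all_strings():
            if verifies(proof, claim):
                return True
            if verifies(proof, "not " + claim):
                return False
        # completeness guarantees one branch eventually fires;
        # consistency (plus soundness for this statement) guarantees
        # it fires correctly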


> all the popular explanations stop just short of really grappling with the real weirdness of the theorem.

No offense, but if I'm reading your comment correctly you're making it out that nobody familiar with the proof has ever considered what "truth" really is. That's... well, there's a saying amongst physicists: "you're not even wrong." The semantics of language and math have a copious amount of literature behind them. Not to mention that even asking the question is, forgive me, a tad juvenile.

Also, recursively applying known unknowns back into the statement (if I understood that correctly) is itself incomplete: how could a system be "complete" if there are unknowns?

Forgive me if it seems I, too, have ventured into the cranky side of the discourse.


"Godel, Escher, Bach" was my first introduction to it - you might find it an interesting book


Speaking as a PhD in pure math: It's a great book, but I would not recommend it at all for someone trying to get the gist of the theorem. GEB is a book for those who already have a strong interest in, or predilection for, the intricacies of mathematical logic. It's a long book and is likely to be off-putting for those wanting a practical approach.

Raymond Smullyan's book (another reply to the OP) is a much better choice.


Fair point. I don't have a mathematical grounding, but found it a thought-provoking and accessible intro. But different people will find different approaches compelling.


Read "Gödel, Escher, Bach" by Douglas Hofstadter.


As the late Martin Gardner opined:

"Every few decades an unknown author brings out a book of such depth, clarity, range, wit, beauty and originality that it is recognised at once as a major literary event. This is such a work."

Nevertheless, Infinity and the Mind: The Science and Philosophy of the Infinite by Rudy Rucker is, in my opinion, a better choice for the mathematically inclined layman interested in Gödel's discoveries and plenty of related mathematics. Rucker's book even includes an account of his (somewhat over-effusive) meeting with the great man.


I can prove a subset of Peano arithmetic is complete and consistent, though. And that's usually enough.


Not using that same subset, however (unless working with a Self-Verifying Theory [0]).

[0] https://en.wikipedia.org/wiki/Self-verifying_theories


I just can't get this dumb question out of my head:

How could any "incompleteness theorem" ever be completely true?


In the same way as a theorem that says "the sequence of prime numbers is infinite" can be finitely long, or a theorem that says "the square root of 2 is irrational" can be rational :-).

If the theorem just said "everything is incomplete", maybe that would be a problem, but that isn't what it says.

Gödel's (first) incompleteness theorem says: if you have a formal system with such-and-such properties, then there have to be statements it can neither prove nor disprove. The theorem itself isn't a formal system with such-and-such properties. It isn't talking about itself. (Amusing though it would be if it were, since one of the neat things about Gödel's proof is the way it gets mathematical statements to kinda-sorta talk about themselves.)


The correct way to understand it is this:

"We can't pack (i.e. compress) all the mathematical knowledge in a finite set of axioms and rules."

This statement doesn't look paradoxical.


"Before this, logic had been strictly syntactical and proof theoretic"

Frege disagrees:

"Just as the concept point belongs to geometry, so logic, too, has its own concepts and relations; and it is only in virtue of this that it can have a content. Toward what is thus proper to it, its relation is not at all formal."

https://plato.stanford.edu/entries/frege/#FreConLog


TFA uses sloppy phrasing, but it's correct. The claim is not that earlier logicians such as Frege viewed logical concepts as meaningless. It's that, before Tarski and Gödel, no one clearly distinguished syntax and semantics, and so earlier logicians lacked the conceptual resources to describe the difference between truth and provability, let alone investigate their relationship. (Frege only ever considers a single semantics for any formal system.)


"It's that, before Tarski and Gödel, no one clearly distinguished syntax and semantics"

I think this is right, but Hilbert already came close, much earlier.

But "Before this, logic had been strictly syntactical and proof theoretic" is at least misleading.

Traditional logic was more conceptual. Frege's "formalism" was a great innovation, but he remained within the conceptual tradition (see the Frege-Hilbert controversy).

"What Hilbert offers us, in 1899, is a systematic and powerful technique that can be used across all formalized disciplines to do just this: to prove consistency and independence. In doing so, he lays the groundwork, in concert with various of his contemporaries, for the emergence of contemporary model-theoretic techniques."

https://plato.stanford.edu/entries/frege-hilbert/



