Unless you can prove that humans exceed the Turing computable, which amounts to showing that the Church-Turing thesis is false, the headline is nonsense.
Since you don't even appear to have dealt with this, there is no reason to consider the rest of the paper.
If I'm understanding correctly, they are arguing that the paper only requires that an intelligent system will fail for some inputs, and suggesting that things like propaganda are inputs for which the human intelligent system fails. On that reading, the existence of human intelligence does not necessarily refute the paper's argument.
I'm not sure if this will help, but happy to elaborate further:
The set of Turing computable functions is computationally equivalent to the lambda calculus, which in turn is equivalent to the general recursive functions. You don't need to understand those terms, only to know that these formalisms define the set of functions we believe includes all computable functions. (There are functions that we know are not computable, such as a general solution to the halting problem.)
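As an illustration of that equivalence, the lambda-calculus side can even be transcribed into an ordinary programming language. Here is a toy sketch in Python using Church numerals (the encoding is standard; the helper names are my own):

```python
# Church numerals: natural numbers encoded as pure functions, showing that
# the lambda calculus can express arithmetic with nothing but application.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Decode a Church numeral by counting how many times f is applied.
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
```

The same addition could be done by a Turing machine shuffling symbols on a tape; the two formalisms compute exactly the same set of functions.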
That is, we don't know of any possible way of defining a function that can be computed that isn't in those sets.
This is basically the Church-Turing thesis: that a function on the natural numbers is effectively computable (note: this has a very specific meaning; it's not about performance) if and only if it is computable by a Turing machine.
Now, any Turing machine can simulate any other Turing machine. Possibly in a crazy amount of time, but still.
The brain is at least a Turing machine in terms of computability if we treat "IO" (speaking and hearing, for example) as the "tape" (the storage medium in the original description of the Turing machine). We can see this because the smallest known universal Turing machine is a trivial machine with 2 states and 3 symbols, which any moderately functional human is capable of "executing" with pen and paper.
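To make the "pen and paper" point concrete, executing a Turing machine is pure table lookup. A minimal sketch of a simulator, using the standard 2-state, 2-symbol "busy beaver" as the program (the simulator code is illustrative, not from the thread):

```python
# Minimal Turing machine simulator. The transition table is the 2-state,
# 2-symbol busy beaver: it halts after 6 steps leaving four 1s on the tape.
table = {
    ("A", 0): (1, +1, "B"),   # (state, read) -> (write, move, next state)
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

tape, head, state, steps = {}, 0, "A", 0   # blank tape reads as 0
while state != "HALT":
    write, move, state = table[(state, tape.get(head, 0))]
    tape[head] = write
    head += move
    steps += 1

print(steps, sum(tape.values()))  # 6 4
```

Every step is mechanical: read the cell, look up the rule, write, move. Nothing here exceeds what a person with pen and paper can do.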
(As an aside: it's almost hard to construct a useful computational system that isn't Turing complete; "accidental Turing completeness" happens regularly, because it is very easy to end up with a Turing complete system.)
An LLM with a loop around it and temperature set to 0 can trivially be shown to be able to execute the same steps, using the context as input and the next token as output to simulate the tape, and so such a system is Turing complete as well.
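A toy sketch of the "loop around a deterministic model" idea. The `next_token` function below is a stand-in for a greedily decoded (temperature 0) LLM, not a real model; the point is only that a deterministic context-to-token function plus a feedback loop computes over the context as if it were a tape:

```python
# Stand-in for a deterministically decoded LLM: a pure function from
# context to next token. The loop feeds each output back into the context,
# which plays the role of the Turing machine's tape.
def next_token(context: str) -> str:
    nums = [int(t) for t in context.split()]
    if len(nums) >= 8:                   # the "model" emits a stop token
        return "<stop>"
    return str(nums[-1] + nums[-2])      # next Fibonacci number

context = "1 1"
while True:
    tok = next_token(context)
    if tok == "<stop>":
        break
    context += " " + tok                 # the loop extends the "tape"

print(context)  # 1 1 2 3 5 8 13 21
```

Replace `next_token` with a function that implements a universal Turing machine's transition rules over the context, and the loop executes arbitrary programs.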
(Note: in both cases this could require a program, but since for any Turing machine of a given size we can "embed" parts of the program by constructing a more complex Turing machine, with more symbols or states that encode some of the program's actions, such a program can always be embedded in the machine itself by constructing a complex enough Turing machine.)
Assuming we use a definition of intelligence that a human will meet, then, because all Turing machines can simulate each other, the only way of showing that an artificial intelligence cannot theoretically be constructed to at least meet the same bar is to show that humans can compute more than the Turing computable.
If we can't then "worst case" AGI can be constructed by simulating every computational step of the human brain.
Any other argument about the impossibility of AGI inherently needs to contain something that disproves the Church-Turing thesis.
As such, it's a massive red flag when someone claims to have a proof that AGI isn't possible but hasn't even mentioned the Church-Turing thesis.
> then the only way of showing that an artificial intelligence can not theoretically be constructed to at least meet the same bar is by showing that humans can compute more than the Turing computable.
I would reframe: the only way of showing that artificial intelligence can be constructed is by showing that humans cannot compute more than the Turing computable.
Given that Turing computable functions are a vanishingly small subset of all functions, I would posit that that is a rather large hurdle to meet. Turing machines (and equivalents) are predicated on a finite alphabet / state space, which seems woefully inadequate to fully describe our clearly infinitary reality.
Given that we know of no computable function that isn't Turing computable, and the set of Turing computable functions is known to be equivalent to the lambda calculus and equivalent to the set of general recursive functions, what is an immensely large hurdle would be to show even a single example of a computable function that is not Turing computable.
If you can do so, you'd have proven Turing, Kleene, Church, and Gödel wrong, and disproven the Church-Turing thesis.
No such example is known to exist, and no such function is thought to be possible.
> Turing machines (and equivalents) are predicated on a finite alphabet / state space, which seems woefully inadequate to fully describe our clearly infinitary reality.
1/3 symbolically represents an infinite process. The notion that a finite alphabet can't describe infinity is trivially flawed.
Function != Computable Function / general recursive function.
That's my point - computable functions are a [vanishingly] small subset of all functions.
For example (and close to our hearts!), the Halting Problem. There is a function from valid programs to halt/not-halt. This is clearly a function, as it has a well defined domain and co-domain, and produces the same output for the same input. However it is not computable!
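The standard diagonal argument behind that non-computability can be sketched in code: given any candidate halting decider, build a program that does the opposite of whatever the decider predicts about it (the names here are illustrative):

```python
def make_paradox(halts):
    """Given any candidate halting decider, construct a program it misjudges."""
    def paradox():
        if halts(paradox):      # decider says we halt...
            while True:         # ...so loop forever
                pass
        return "halted"         # decider says we loop, so halt immediately
    return paradox

# Whatever a candidate answers about its own paradox program is wrong.
# E.g. a candidate that claims every program loops forever:
says_loops = lambda prog: False
p = make_paradox(says_loops)
print(p())  # "halted" -- contradicting the decider's verdict
```

Since this construction defeats every possible `halts`, no computable halting decider exists, even though the halting function itself is perfectly well defined.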
For sure a finite alphabet can describe an infinity, as you show, but not every infinity. For example, almost all real numbers cannot be defined/described with a finite string over a finite alphabet (they can of course be defined with countably infinite strings over a finite alphabet).
Non-computable functions are not relevant to this discussion, though, because humans can't compute them either, and so inherently an AGI need not be able to compute them.
The point remains that we know of no function that is computable to humans that is not in the Turing computable / general recursive function / lambda calculus set, and absent any indication that any such function is even possible, much less an example, it is no more reasonable to believe humans exceed the Turing computable than that we're surrounded by invisible pink unicorns, and the evidence would need to be equally extraordinary for there to be any reason to entertain the idea.
Humans do a lot of stuff that is hard to 'functionalise', computable or otherwise, so I'd say the burden of proof is on you. What's the function for creating a work of art? Or driving a car?
You clearly don't understand what a function means in this context, as the word function is not used in this thread in the way you appear to think it is used.
For starters, to have any hope of a productive discussion on this subject, you need to understand what "function" means in the context of the Church-Turing thesis (a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine). Note that not only does "function" have a very specific meaning there, but "effective method" also does not mean what you're likely to read into it.
My original reframing was: the only way of showing that artificial intelligence can be constructed is by showing that humans cannot compute more than the Turing computable.
I was assuming the word 'compute' to have broader meaning than Turing computable - otherwise that statement is a tautology of course.
I pointed out that Turing computable functions are a (vanishingly) small subset of all possible functions - of which some may be 'computable' outside of Turing machines even if they are not Turing computable.
An example might be the three-body problem, which has no general closed-form solution, meaning there is no equation that always solves it. However our solar system seems to be computing the positions of the planets just fine.
Could it be that human sapience exists largely or wholly in that space beyond Turing computability? (by Church-Turing thesis the same as computable by effective method, as you point out). In which case your AGI project as currently conceived is doomed.
For example learning from experience (which LLMs cannot do because they cannot experience anything and they cannot learn) is clearly an attribute of an intelligent machine.
LLMs can tell you about the taste of a beer, but we know that they have never tasted a beer. Flight simulators can't take you to Australia, no matter how well they simulate the experience.
If that is true, you have a proof that the Church-Turing thesis is false.
> LLMs can tell you about the taste of a beer, but we know that they have never tasted a beer. Flight simulators can't take you to Australia, no matter how well they simulate the experience.
For this to be relevant, you'd need to show that there are possible sensory inputs that can't be simulated to a point where the "brain" in question - be it natural or artificial - can't tell the difference.
Which again, would boil down to proving the Church-Turing thesis wrong.
>If that is true, you have a proof that the Church-Turing thesis is false.
We're talking the physical version, right? I don't have any counterexamples that I can describe, but I could hold that that's because human language, perception and cognition cannot capture the mechanisms that are necessary to produce them.
But I won't as that's cheating.
Instead I would say that although I can't disprove PCT, it's not proven either, and unlike other unproven things like P != NP, this is about physical systems. Some people think that all of physical reality is discrete (quantized); if they are right, then PCT could be true. However, I don't think this is so, as I think it means you have to consider time as unreal, and I think that's basically as crazy as denying consciousness and free will. I know that a lot of physicists are very clever, but those of them who have lost the sense to differentiate between a system for describing parts of the universe and a system that defines the workings of a universe we cannot comprehend are, in my experience, not good at parties.
>For this to be relevant, you'd need to show that there are possible sensory inputs that can't be simulated to a point where the "brain" in question - be it natural or artificial - can't tell the difference.
I dunno what you mean by "relevant" here - you seem to be denying that there is a difference between reality and unreality? Like a Super Cartesian idea where you say that not only is the mind separate from the body, but that the existence of bodies, or indeed the universe they are instantiated in, is irrelevant and doesn't matter?
Wild. Kinda fun, but wild.
I stand by my point though: computing functions about how molecules interact with each other and lead to the propagation of signals along neural pathways to generate qualia is only the same as tasting beer if the qualia are real. I don't see that there is any account of how computation can create a feeling of reality, or of what it is like to taste something. At some point you have to hit bottom and actually have an experience.
I think that may depend on how someone defines intelligence. For example, if intelligence includes the ability to feel emotion or appreciate art, then I think it becomes much more plausible that intelligence is not the same as computation.
Of course, simply stating that isn't in and of itself a philosophically rigorous argument. However, given that not everyone has training in philosophy, and that it may not even be possible to prove whether "feeling emotion" can be achieved via computation, I think it's a reasonable argument.
I think if they define intelligence that way, it isn't a very interesting discussion, because we're back to Church-Turing: Either they can show that this actually has an effect on the ability to reason and the possible outputs of the system that somehow exceeds the Turing computable, or those aspects are irrelevant to an outside observer of said entity because the entity would still be able to act in exactly the same way.
I can't prove that you have a subjective experience of feeling emotion, and you can't prove that I do - we can only determine that either one of us acts as if we do.
And so this is all rather orthogonal to how we define intelligence, as whether or not a simulation can simulate such aspects as "actual" feeling is only relevant if the Church-Turing thesis is proven wrong.
There are lots and lots of things that we can't personally observe about the universe. For example, it's quite possible that everyone in New York is holding their breath at the moment. I can't prove that either way, or determine anything about it, but I accept the reports of others that no mass breath-holding event is underway... and I live my life accordingly.
On the other hand, many people seem unwilling to accept the reports of others that they are conscious and have freedom of will and freedom to act. At the same time, these people do not live as if others were not conscious and bereft of free will. They do not watch other people murdering their children and state "well, they had no choice". No, they demand that the murderers are punished for their terrible choice. They build systems of intervention to prevent some choices and promote others.
It's not orthogonal, it's the motivating force for our actions and changes our universe. It's the heart of the matter, and although it's easy to look away and focus on other parts of the problems of intelligence at some point we have to turn and face it.
Church-Turing doesn't touch upon intelligence nor consciousness. It talks about "effective procedures". It claims that every effectively computable thing is Turing computable. And effective procedures are such that "Its instructions need only to be followed rigorously to succeed. In other words, it requires no ingenuity to succeed."
Church-Turing explicitly doesn't touch upon ingenuity. It is entirely compatible with Church-Turing that humans are capable of some weird decision making that is not modelable by a Turing machine.
Assuming the Church-Turing thesis is true, the existence of any brain now or in the past capable of proving it is proof that such a program may exist.
If the Church-Turing thesis can be proven false, conversely, then it may be possible that such a program can't exist - it is a necessary but not sufficient condition for the Church-Turing thesis to be false.
Given we have no evidence to suggest the Church-Turing thesis to be false, or for it to be possible for it to be false, the burden falls on those making the utterly extraordinary claim that they can't exist to actually provide evidence for those claims.
Can you prove the Church-Turing thesis false? Or even give a suggestion of what a function that might be computable but not Turing computable would look like?
Keep in mind that explaining how to compute such a function step by step would need to contain at least one step that can't be explained in a way that allows the step to be computed by a Turing machine, or the explanation itself would instantly disprove your claim.
The very notion is so extraordinary as to require truly extraordinary proof and there is none.
A single example of a function that is not Turing computable but that human intelligence can compute should be a low burden to produce, if we really can exceed the Turing computable.
> Assuming the Church-Turing thesis is true, the existence of any brain now or in the past capable of proving it is proof that such a program may exist.
Doesn't that assume that the brain is a Turing machine or equivalent to one? My understanding is that the exact nature of the brain and how it relates to the mind is still an open question.
If the Church-Turing thesis is true, then the brain is a Turing machine / Turing equivalent.
And so, assuming Church-Turing is true, the existence of the brain is proof of the possibility of AGI, because any Turing machine can simulate any other Turing machine (possibly too slowly to be practical, but that still rules out impossibility).
And so, any proof that AGI is "mathematically impossible" as the title claims, is inherently going to contain within it a proof that the Church-Turing thesis is false.
In which case there should be at least one example of a function a human brain can compute that a Turing machine can't.
Given what I see in these discussions, I suspect your use of the word "spontaneously" is a critical issue for you, but also not for me.
None of us exist in a vacuum*, we all react to things around us, and this is how we come to ask questions such as those that led Gödel to the incompleteness theorems.
* unless we're Boltzmann brains, in which case we have probably hallucinated the existence of the question in addition to all evidence leading to our answer
An accurate-enough physical simulation of Kurt Gödel's brain.
Such a program may exist- unless you think such a simulation of a physical system is uncomputable, or that there is some non-physical process going on in that brain.