I imagine a group of dogs sitting around and asking "How are we so good at thinking about fun ways to play with squeaky toys?".
The truth is that our ability to reason about ourselves is limited by our ability to reason. Perhaps there are aliens out there who would laugh at our cognitive abilities, theirs being so much better than ours.
Less complicated systems successfully reason about more complicated systems all the time. Ditto for self-reasoning. See: bootloaders, update systems, and package managers.
In order to prove that some kind of meta-cognition is inherently beyond our grasp, you don't just have to prove that the system we are attempting to reason about is more complex than ourselves, you also have to prove that the problem isn't meaningfully reducible. Otherwise we can and will eventually figure out the mental tools we need to tackle the problem, and tackle it.
The same applies to brute physical strength. Humans have no problem building machines vastly stronger, tougher, larger, more precise etc than ourselves even though narrow-minded reasoning might lead you to believe that this was impossible ("a tool can only cut something less hard/strong than itself," "a ruler can only measure less precisely than itself" etc).
I think you're describing an analogue of Turing-completeness. It's not (to me) a question of whether we can reason about something: it's a question of how long it takes, and how much knowledge is involved in the process.
What you're describing sounds like asking a PDP-11 to run GPT-3. Technically possible, in the broadest sense of the word. But a computer that can run GPT-3 successfully will look at that PDP-11 in much the same way that we look at a dog playing with a chew toy.
On the contrary, I think your example proves my point quite well. I understand very little about PDP-11s and only slightly more about GPT-3's inner workings, yet I have no trouble reasoning about whether or not a PDP-11 is suitable for running GPT-3 or something even more difficult to formally reason about, say Microsoft Windows. I have a mental model of computer performance and compute requirements that simplifies the question from a difficulty of "Oh, it's Turing complete, halting problem, let's throw our arms in the air like this is an infomercial!" through "You need to understand literally everything about PDP-11s and Windows" all the way to "50 years of exponential growth is a hella large factor to try squeezing down anything by." It's a trivial question hiding in the skin of an intractable question, and it perfectly exemplifies why it's silly to believe that human cognition will forever remain intractable.
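The "50 years of exponential growth" intuition is easy to make concrete. Here's a rough sketch, assuming the classic Moore's-law cadence of compute doubling roughly every two years (the cadence is an assumption, not a measured fact about PDP-11s):

```python
# Back-of-the-envelope: how large a factor does 50 years of
# exponential growth in compute produce?
# Assumption: doubling every ~2 years (Moore's-law-style cadence).
years = 50
doubling_period_years = 2.0
factor = 2 ** (years / doubling_period_years)
print(f"growth factor over {years} years: {factor:,.0f}")  # ~33.5 million
```

A factor in the tens of millions is exactly the kind of number that lets you answer "can a PDP-11 run Windows?" without understanding either system in detail.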
In order for a problem to forever remain in "let's throw our arms in the air like it's an infomercial" territory, it must not merely be difficult in its most pedantically defined complete form, it also must stymie the search for useful relaxations and workarounds. Nobody fears running a program on account of being unable to prove that it will halt: they just kill the program if it locks up, or (equivalently) set a timeout. Personally, I'd just avoid throwing my arms in the air like an infomercial altogether.
EDIT: substituted GPT3 -> Windows because arguments about GPT-3 and/or a set of incarnations being Turing Complete would be irrelevant to the main point.
To make a very flat surface plate, you can grind three less-flat surface plates against each other (three because if you only have two, the common surface guaranteed by symmetry can still have curvature). To make a very precise cylinder, you can grind a less-precise cylinder in a "V" formed by two flat surfaces. To make a very regular lead screw, you can grind a less regular lead screw with a less regular reversible nut. Now you can construct a micrometer, and from there your mill/lathe and you're off to the races :)
That's how you bootstrap precision, but taking precision from a basic form and putting it into a complex form is a whole other art. These days we use numerical techniques, but historically geometric construction would have been the ticket. For instance, take a string, use a "standard twig" or something to mark 3+4+5 sections of equal length, cut & tie it into a loop, tension it into a triangle with sides of length 3, 4, and 5 using the marks, and now you've got a right angle.
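The 3-4-5 string trick works because 3² + 4² = 5², so by the converse of the Pythagorean theorem the angle opposite the 5-side is exactly 90°. A quick sanity check via the law of cosines:

```python
import math

# A triangle with sides 3, 4, 5: the angle between the 3-side and
# the 4-side (opposite the 5-side) should come out to 90 degrees.
# Law of cosines: cos(C) = (a^2 + b^2 - c^2) / (2ab)
a, b, c = 3.0, 4.0, 5.0
angle = math.degrees(math.acos((a**2 + b**2 - c**2) / (2 * a * b)))
print(angle)  # 90.0
```

Any Pythagorean triple (5-12-13, 8-15-17, ...) would work the same way; 3-4-5 just needs the fewest "standard twig" marks.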
Modern metrology looks a bit different because time symmetry started beating spatial symmetry. Badly. Absurdly badly. Like "you get twice the digits for the same price" badly. You get 6 digits for pennies in a quartz oscillator, and I know the metrologists have chased their clock stability out to at least 19 digits, probably more by now. Also, you can just transmit the reference over the air to globally coordinate accuracy for dirt cheap; you don't even have to ship blocks of platinum-iridium in inert atmosphere. Modern metrology is basically the art of "rebasing" other types of measurement onto measurements of time, because time measurements are so ridiculously great.
Ah, now. See. Your mistake is thinking you can reason better than a dog. The reason a dog brings the toy to the human is because they know that the human is better at throwing and the dog is better at fetching. Teamwork, y'see.
Now go forth and learn, and one day you too may be as smart as a dog ;-)
Maybe also dogs can do everything humans can but they decide not to because they see the stressful lives we live and want no part of that. I welcome our dog overlords.
>>The reason a dog brings the toy to the human is because they know that the human is better at throwing and the dog is better at fetching. Teamwork, y'see.
Honestly, I always thought that the dog was just being diligent and making sure that its humans did their daily exercise routine by throwing a toy.
>Perhaps there are aliens out there who would laugh at our cognitive abilities, theirs being so much better than ours.
Yeah, but their brains would have to be much bigger, use a lot more energy, or have a fundamentally different architecture (i.e. be manufactured instead of evolved).
For the amount of perception and computation our brains perform, under the hard constraint of being a biological process, we have fantastically efficient brains.
My computer, extremely slow compared to the likes of DeepMind, has a 750-watt power supply, while human brains consume about 12 watts on average.