It’s a beautiful theory, and I’d like to believe it, but with ubiquitous virtual particles, the last particle in the universe would never truly be alone and so could never “forget” time
Hawking radiation is a very slow process; one can acquire additional matter (e.g., hydrogen atoms from interstellar or intergalactic space) to compensate for the matter loss
Both explanations are merely simplifications of a truly complex phenomenon, so both are valid. Reducing space-time curvature is equivalent to the absorption of a virtual particle with negative energy
Evaporation isn’t as bad as false vacuum decay; I thought the news would be about that. A fast vacuum decay would be much worse: a civilization can withstand gradual matter loss, but not the ultimate false vacuum decay
Who knows? Maybe if and when that happens, civilizations will be advanced enough to try to reverse it: manipulating the local Higgs field, pocket universes, and counter-decay waves
The problem is that false vacuum decay spreads at the speed of light, leaving no time for preparation; the laws of physics would be completely altered without warning. Perhaps future civilizations could resist it, but with our current understanding, it’s doomsday in the blink of an eye
NP is interesting because it is about the cost of computation, and LLMs are computation. A DTM can simulate an NTM, just not (as far as we know) in polynomial time.
It is invoked because an LLM with chain-of-thought needs a polynomial amount of scratch space to capture P, and P is contained in NP.
I didn't suggest that it was a definition of Intelligence.
The Church–Turing thesis states that any algorithmic function can be computed by a Turing machine.
That includes a human with a piece of paper.
But NP is better thought of as the set of decision problems verifiable by a TM in polynomial time. No TM, or equivalently lambda calculus or any algorithm, can solve the Entscheidungsproblem, which is what Turing defined the halting problem (HALT) to show.
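To make "verifiable in polynomial time" concrete, here is a minimal sketch (Python, my own illustration, not from any particular source) of a certificate checker for SUBSET-SUM, a classic NP problem: finding a witness subset may require search, but checking a proposed witness is one linear pass.

    # Sketch: SUBSET-SUM is in NP because a proposed solution (a certificate)
    # can be verified by a deterministic machine in polynomial time, even
    # though no polynomial-time algorithm for *finding* one is known.

    def verify_subset_sum(nums, target, certificate):
        """Return True iff `certificate` (indices into nums) proves a yes-instance."""
        if len(set(certificate)) != len(certificate):            # indices must be distinct
            return False
        if any(i < 0 or i >= len(nums) for i in certificate):    # and in range
            return False
        return sum(nums[i] for i in certificate) == target       # one linear pass

    print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))    # True  (4 + 5 == 9)
    print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [0, 1]))    # False (3 + 34 != 9)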
PAC learning depends on set shattering: at some point it has to 'decide' whether an input is a member of a set. No matter how complicated the parts built on top of that set are, it is still a binary 'decision'.
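To illustrate what shattering means (a toy sketch of my own, using 1-D threshold classifiers as the hypothesis class): a point set is shattered if every possible 0/1 labelling of it is realized by some hypothesis, and that 0/1 label is exactly the binary decision in question.

    from itertools import product

    # Toy illustration of shattering with threshold classifiers h_t(x) = [x >= t].
    # Thresholds can shatter any single point but never two distinct points
    # (you cannot label the smaller point 1 and the larger point 0), so the
    # VC dimension of this class is 1.

    def shatters(points, thresholds):
        for labelling in product([0, 1], repeat=len(points)):        # every 0/1 labelling
            realized = any(all((x >= t) == bool(y) for x, y in zip(points, labelling))
                           for t in thresholds)
            if not realized:
                return False                                          # some labelling unreachable
        return True

    candidate_ts = [t / 10 for t in range(-50, 51)]
    print(shatters([0.3], candidate_ts))        # True:  one point is shattered
    print(shatters([0.3, 0.7], candidate_ts))   # False: labelling (1, 0) is impossible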
We know that is not exclusively how biological neurons work. They have many other features, like spike trains, spike timing, dendritic compartmentalization, etc.
Those do not fit within the fundamental limits of computation as we understand them today.
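For a picture of what a spike train is (a toy leaky integrate-and-fire sketch with made-up parameters, not a claim about any specific biological model): the output is a sequence of spike times, i.e. information carried in timing rather than in a single binary label.

    # Toy leaky integrate-and-fire neuron. The output is a spike train: the
    # *times* at which the membrane potential crosses threshold, not a single
    # 0/1 decision. Parameters here are arbitrary illustrative values.

    def lif_spike_times(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
        v, spikes = 0.0, []
        for step, i_in in enumerate(input_current):
            v += dt * (-v / tau + i_in)        # leaky integration of the input
            if v >= v_thresh:                  # threshold crossing -> emit a spike
                spikes.append(step * dt)
                v = v_reset                    # reset the membrane potential
        return spikes

    print(lif_spike_times([0.08] * 200))       # a regular spike train (times in ms)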
HALT generalizes to Rice's theorem, which says that all non-trivial semantic properties of programs are undecidable.
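The usual way to see why HALT is undecidable can be written as a short sketch; `halts` below is a hypothetical decider assumed for contradiction, not a real function.

    # Turing's diagonal argument as code. Assume, for contradiction, that
    # halts(program, arg) correctly decides whether program(arg) halts.

    def halts(program, arg):
        raise NotImplementedError("assumed oracle; no such total decider exists")

    def paradox(program):
        # Do the opposite of whatever halts() predicts about program run on itself.
        if halts(program, program):
            while True:                        # predicted to halt -> loop forever
                pass
        return "halted"                        # predicted to loop -> halt immediately

    # paradox(paradox) defeats any answer: halts(paradox, paradox) can be neither
    # True nor False without contradicting paradox's own behaviour. Rice's theorem
    # generalizes this to every non-trivial semantic property of programs.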
Once again: NP is the set of decision problems verifiable by a DTM in polynomial time, and that is why NP is important here.
Unfortunately, the above is also a barrier to formally defining a class of AI-complete problems.
While it may not be sufficient to prove anything about the vague concept of intelligence, understanding the limits of computation is important.
We do know enough to say that the belief that AGI is obtainable without major discoveries is blind hope.
But that is due to the problem of generalization, which runs into fundamental limits of computation.
Konqueror no longer uses its unique KHTML engine and has switched to working on top of WebKit/Safari, making it just a wrapper, similar to Brave. It’s a pity that the last truly independent player in the browser engine market is gone, but such are the realities.