That kind of sucks, because there are AI LLMs just about everywhere else now. Even those customer service "live chat" windows are typically AI-first. What are Ask Jeeves doing?
I agree with this to some extent. Another perspective: think about the element [(1, 2, 3, 4, ...)] of the ultrapower; let's call it omega. On some level, all of these questions are really just questions about what properties omega has: is it even or odd, prime or composite, etc. Simultaneously deciding all of these questions in a coherent way is equivalent to specifying an ultrafilter. Similarly, when we ask whether some function f(x) is > g(x) asymptotically, we are basically asking whether f(omega) > g(omega). This is just a different view of the same thing.
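To make the correspondence explicit, here is the standard fact about the ultrapower R^N / U that this view rests on (U is the ultrafilter, [f] the class of the sequence f):

    % order in the ultrapower is decided by the ultrafilter:
    [f] > [g] \iff \{\, n \in \mathbb{N} : f(n) > g(n) \,\} \in U
    % i.e. f(omega) > g(omega) exactly when the set of n with
    % f(n) > g(n) counts as "large" according to U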
For instance, your question happens to be equivalent to asking whether sin(omega) > cos(omega). This is true iff the fractional part of the hyperreal number omega/(2*pi) lies between 1/8 and 5/8. Thus we have reduced the asymptotic statement to a question about an arithmetical property of one particular hyperreal number.
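Spelling out the reduction, as a sketch using the identity sin x - cos x = sqrt(2) * sin(x - pi/4); by transfer, the same chain of equivalences holds at the hyperreal omega:

    \sin x > \cos x
    \iff \sqrt{2}\,\sin\!\left(x - \tfrac{\pi}{4}\right) > 0
    \iff x \bmod 2\pi \in \left(\tfrac{\pi}{4}, \tfrac{5\pi}{4}\right)
    \iff \operatorname{frac}\!\left(\tfrac{x}{2\pi}\right) \in \left(\tfrac{1}{8}, \tfrac{5}{8}\right)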
Choosing an ultrafilter basically amounts to simultaneously determining all properties of omega. There are many different ultrafilters, each providing a different "universe" that decides all possible predicates in a coherent way. That this is possible at all (given the axiom of choice) is highly interesting. However, it doesn't seem necessary for asymptotic analysis.
Of course, if there is some "canonical" or "most natural" ultrafilter to choose from, with some magical property universally deemed important, then it would settle your question and all such questions in a natural way.
<iframe> is different from what the author is asking for; it has its own DOM, among other things. He wants something like SSI but client-side. He explains some of the problems right after the part you cut off above:
"We’ve got <iframe>, which technically is a pure HTML solution, but they are bad for overall performance, accessibility, and generally extremely awkward here"
The point is that the author would not really be much happier if Microsoft had added a few lines admitting that substantial portions of the code were taken from Spegel. They probably will do this, but I doubt he will be satisfied with the result either way.
The point of the comment above, which I mostly agree with, is that the MIT license exists to permit anyone, including large corporations, to do this kind of thing. Since this doesn't seem like an outcome the author is happy with, maybe a different license would be a better fit.
I think the idea in the original post was to adjust the goalposts of the original halting problem to get something easier to solve. Instead of looking for programs that "eventually" halt while reproducing the required outputs, one can look for programs that halt "in some reasonable amount of time." The time-bounded halting problem is easier to solve than the original (it is decidable). As one increases the amount of time one views as "reasonable," one gets a sequence of time-bounded halting problems that can be viewed as successively better approximations of the original. There are many similarly "approximate" or "relaxed" versions of the halting problem that are "good enough" in real life and are perfectly decidable.
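To make the time-bounded version concrete, a minimal sketch (modeling a "program" as a generator so that each next() call counts as one step; the names here are mine, not from the post):

    // Time-bounded halting decider: simulate at most `budget` steps.
    // Unlike the unrestricted halting problem, this is trivially decidable;
    // raising the budget gives the sequence of approximations described above.
    function haltsWithin(prog: () => Generator<unknown>, budget: number): boolean {
      const run = prog();
      for (let i = 0; i < budget; i++) {
        if (run.next().done) return true; // halted within the budget
      }
      return false; // says nothing about what happens after `budget` steps
    }

    // Example: a program that halts after 1000 steps.
    function* counts(): Generator<number> {
      for (let i = 0; i < 1000; i++) yield i;
    }
    console.log(haltsWithin(counts, 500));  // false: budget too small
    console.log(haltsWithin(counts, 5000)); // true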
Not only is there nothing here that approximates a "solution" to an unsolvable problem, but the fact that you can decide more programs the more time you spend simulating them is the actual statement of the generalised halting theorems. Indeed, this is literally how they are summarised: "Informally, these theorems say that given more time, a Turing machine can solve more problems." [1]
When someone says that you can, in practice, sort of get around the halting theorem by noting that you can solve more problems given more time, do you see how that's not an approximation of a "solution" that gets around any theorem, but the very point of the theorems? If you observe that you can solve more things given more time, you are watching the halting theorem and its generalisations at work, not getting around them in practice.
But that's not the end of things because now we can ask what it means to "get around" or "approximately get around" a theorem that basically says that you can buy a bike for $50 but you can't buy a car for that price. Saying, "I don't mind riding a bike" is clearly not getting around anything, but saying "most people wouldn't mind riding a bike" could, indeed, be a way to state that the limitations of the theorem don't affect many people in practice. But such a statement also isn't true.
If someone wants to claim that in practice we rarely come across intractability where it matters, that would, indeed, be interesting, but I also don't think it's true. So if someone wants to say that most problems that are interesting in practice are decidable, that may well be true, but even Turing realised that decidability isn't a very interesting property. Decidability was replaced by tractability in the 70s, just as halting was replaced by the more general and more precise time hierarchy in 1965. We then knew, more precisely, that not only are there problems that cannot be solved in any time bound, but that there are problems solvable in 2^N steps that cannot be solved in N steps.
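For reference, the 1965 result is the Hartmanis-Stearns time hierarchy theorem: for time-constructible f and g,

    f(n)\,\log f(n) = o\big(g(n)\big)
      \;\Longrightarrow\;
    \mathrm{DTIME}\big(f(n)\big) \subsetneq \mathrm{DTIME}\big(g(n)\big)
    % in particular, \mathrm{DTIME}(n) \subsetneq \mathrm{DTIME}(2^n)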
And if someone wants to say that we don't frequently come across intractable problems in practice, then that is not true.
If anything, the limits we run across in practice are more constrained than the known theoretical ones. Even though we have yet to prove a separation between P and NP, or even between P and PSPACE, we do find it much harder, in practice, to solve problems that are NP-complete than problems known to be in P, and PSPACE-complete problems are harder in practice than NP-complete ones.
The halting problem can be approximated by a sequence of increasingly accurate computable functions - "partial halting oracles" which give the right answer on "many" inputs, with each better than the last.
The sequence "converges to" or "approximates increasingly well" the true halting function, in that for any input there is some point in the sequence after which all subsequent partial halting oracles analyze its behavior correctly.
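Writing H_n for the n-th partial oracle and HALT for the true halting function, the claim is pointwise, not uniform:

    % pointwise convergence (what is claimed here; true):
    \forall p\ \exists N\ \forall n \ge N:\; H_n(p) = \mathrm{HALT}(p)
    % uniform version (false: a single H_N would decide halting):
    \exists N\ \forall p:\; H_N(p) = \mathrm{HALT}(p)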
The halting problem is "unsolvable" because the goalposts are very high. An algorithm to "decide" the halting problem can have no "failure modes" of any kind, even if the probability of failure is vanishingly small. It must work on every single program. As soon as you limit the scope of the programs you care about analyzing in any reasonable way, like "deterministic programs without randomness that use at most 32GB RAM," the proof no longer applies.
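A sketch of why such a restriction changes everything: with bounded memory there are only finitely many configurations, so halting becomes decidable by pigeonhole. Here `step` is a hypothetical single-step function that returns null when the program halts:

    // A program with at most B bits of state has at most 2^B configurations,
    // so a deterministic run either halts or revisits a configuration
    // (and therefore loops forever).
    function haltsFiniteState(step: (s: string) => string | null, start: string): boolean {
      const seen = new Set<string>();
      let state: string | null = start;
      while (state !== null) {
        if (seen.has(state)) return false; // repeated configuration: infinite loop
        seen.add(state);
        state = step(state);
      }
      return true; // reached a halting state
    }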
The complexity classes you refer to don't conflict with any of this. In the general case, it is undecidable to analyze what complexity class an algorithm (or decision problem) is in, for instance, but this isn't usually summarized as "computational complexity analysis is an unsolvable problem."
My point is that focusing on the halting theorem is silly, because we have had much more precise and less binary generalisations of it since 1965. Finding "practical approximations" that are easier than the hard limits is not only not easy; it would be a huge deal.
> The sequence "converges to" or "approximates increasingly well" the true halting function, in that for any input there is some point in the sequence after which all subsequent partial halting oracles analyze its behavior correctly.
This is irrelevant. It is obviously the case that a "there exists X such that for all Y" is very different from "for all Y there exists X", yet the latter is by no means an "effective approximation" of the former.
At the end of the day, we're always looking for some algorithm that is useful for a large class of inputs, and we know that any such algorithm cannot violate the time hierarchy in any way. It will be able to efficiently solve problems that are easy and unable to efficiently solve problems that are hard. Having any algorithm solve more problems will require it to run longer.
It may be the case that a large set of practical problems are easy, but it is also the case that a large set of practical problems are hard. Only a world-changing discovery, such as a proof that P = PSPACE or that there are tractable and useful approximations for all of PSPACE, would change that.
That doesn't mean, of course, that there aren't many interesting easy problems that we've yet to solve.
> This is irrelevant. It is obviously the case that a "there exists X such that for all Y" is very different from "for all Y there exists X", yet the latter is by no means an "effective approximation" of the former.
If there's an algorithm that gives the correct answer to the halting problem for "lots of inputs," I think that's very relevant - particularly if we can get a sequence of such algorithms that get closer and closer to the behavior of a true halting oracle!
> At the end of the day, we're always looking for some algorithm that is useful for a large class of inputs, and we know that any such algorithm cannot violate the time hierarchy in any way. It will be able to efficiently solve problems that are easy and unable to efficiently solve problems that are hard. Having any algorithm solve more problems will require it to run longer.
I don't think it's clear that a time hierarchy even exists for randomized Turing machines, but even if it does, this is again only true in an asymptotic sense...
> It may be the case that a large set of practical problems are easy, but it is also the case that a large set of practical problems are hard. Only a world-changing discovery, such as a proof that P = PSPACE or that there are tractable and useful approximations for all of PSPACE, would change that.
Figuring out whether an algorithm is in P/PSPACE/etc. to begin with is much harder than solving the halting problem!
> They are much better at the thing they are good at, and there are some new capabilities that are big, but they are still fundamentally next-token predictors.
I don't really get this. Are you saying autoregressive LLMs won't qualify as AGI, by definition? What about diffusion models, like Mercury? Does it really matter how inference is done if the result is the same?
They will, we just need meatspace people to become dumber and more predictable. Making huge strides on that front, actually. (In no small part due to LLMs themselves, yeah.)