That doesn't matter.
You could imagine a system that accumulates a pair of terms: (actual result, junk). Instead of deleting something, it just adds it to the junk part of the pair.
Maybe the junk part itself contains a computation that never ends, but that doesn't matter, because you only ever extract your result from the left part of the pair.
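A toy sketch of that idea in Python (the function name `max_with_junk` is mine, not from the thread): compute the maximum of a list without ever deleting a value. Losing candidates go into a junk list, and the answer is read off the left component of the pair at the end.

```python
def max_with_junk(xs):
    """Compute max(xs) without discarding anything: losers go to junk."""
    best, junk = xs[0], []
    for x in xs[1:]:
        if x > best:
            junk.append(best)  # stash the old best instead of deleting it
            best = x
        else:
            junk.append(x)     # stash the loser instead of deleting it
    return best, junk          # result lives in the left slot of the pair

result, junk = max_with_junk([3, 1, 4, 1, 5])
```

Nothing is lost: the result together with the junk is a permutation of the input, which is the whole point of the trick.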
cvoss, ezwoodland, tromp, and v64 made a good point. As v64 points out, I was thinking of combinatory completeness, not Turing completeness. Dropping that requirement, as cvoss, ezwoodland, and tromp point out, you can simulate deletion (that's what quantum computing does, btw; see the no-deleting theorem).
> S combinator always duplicates its last parameter, never deletes it. That's why K is needed for universality.
This is somewhat covered in the first link of the background, a Modern Introduction to Combinators:
> So how might this work for S combinator expressions? Basically any sophisticated computation has to live on top of an infinite combinator growth process. Or, put another way, the computation has to exist as some kind of “transient” of potentially unbounded length, that in effect “modulates” the infinite growth “carrier”.
> One would set up a program by picking an appropriate combinator expression from the infinite collection that lead to infinite growth. Then the evolution of the combinator expression would “run” the program. And one would use some computationally bounded process (perhaps a bounded version of a tree automaton) to identify when the result of the computation is ready—and one would “read it out” by using some computationally bounded “decoder”.
I understand it as "you don't need the S combinator application sequence to halt for it to compute something".
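For concreteness, here are S and K as curried Python lambdas (my own rendering of the standard definitions): S hands its last argument z to *both* x and y, so it only ever duplicates, while K simply drops its second argument, which is the deletion the quote is about.

```python
# Standard definitions, written as curried lambdas:
S = lambda x: lambda y: lambda z: x(z)(y(z))  # duplicates z
K = lambda x: lambda y: x                     # deletes y

# With K available, you get the identity: S K K z reduces to
# K z (K z), and the outer K deletes (K z), leaving z.
I = S(K)(K)
```

The last line is the classic SKK = I derivation; it only works because K can throw its second argument away.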
As far as I understand it, Turing completeness is a weaker property than combinatory completeness, which is what Craig's theorem addresses. The nonexistence of a singleton combinatory basis doesn't necessarily imply the nonexistence of a Turing-complete single combinator.
I don't follow your argument. Why must we have the ability to "delete" sub-expressions?
Consider a computational model that, rather than working by successively rewriting an expression in a way that honors some equivalence relation over expressions, works by explicitly building up the sequence of such expressions. In that kind of system, every computational state properly contains the previous state. Things grow and grow and never get "deleted". Yet such a system can clearly be universal.
> The decline in allocative efficiency should be more of a main focus — we need to throw more of our intellectual capital at understanding how to increase competitiveness and the market potential for innovative firms and technologies, in the same way that we've focused on understanding how to make better technologies.
I love programming as theory building. One key thing: if the program is built for a business, then the theory must revolve around the business. For example, it should include things such as "ease of hiring devs for this code base", "stability of the current business model", and "how important each feature is for the business".
An invariant is a property that is preserved on every iteration.
Proof of termination in imperative languages can be done by proving that a natural number decreases with every step.
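In plain Python, with bare asserts standing in for the verifier, the two ideas might be sketched like this (`int_sqrt` and its annotations are my own illustration): the invariant is checked on every iteration, and the variant is a natural number that strictly decreases, which is what guarantees termination.

```python
def int_sqrt(n):
    """Largest r with r*r <= n, via linear search."""
    r = 0
    while (r + 1) * (r + 1) <= n:
        assert r * r <= n                  # invariant: holds every iteration
        variant_before = n - r * r         # variant: a natural number...
        r += 1
        assert n - r * r < variant_before  # ...that strictly decreases
    return r
```

A natural number can only decrease finitely many times, so the loop must terminate; that is the whole argument.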
Dafny implements this at the compiler level (and has a curly-brace syntax!).
Coq uses other methods more tailored towards recursion.
You are right that if every loop must terminate, it is not Turing complete.
So some valid programs will not compile.
There are some interesting programs that potentially never terminate (like servers, daemons, OSes, games, etc.) to which formal methods can be applied too; for instance, to prove that they preserve a certain property, or that they don't terminate, or that they terminate only under certain conditions.
I find the theory extremely elegant and pleasurable, but it's obviously not everyone's cup of tea, as shown by its lack of widespread use.
LLMs might create a revival in the coming years, for the following reasons:
1) cost of formalization goes down
2) cost of proving goes down
3) cost of programming goes down
4) provable code quality becomes a differentiator in a sea of programs
This can be proved by induction. Or you can cite Craig's theorem (the lesser-known one) for that. See [1]
Honestly, I don't see the endgame here.
[1] https://math.stackexchange.com/questions/839926/is-there-a-p...