If I may chip in, I wouldn't call it obvious or straightforward, but multiset rewriting[1] can be implemented in terms of multiplication alone (as in Fractran), and multiplication can be implemented in origami[2], so there might be something there.
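To make the Fractran connection concrete, here's a minimal interpreter sketch in C (my own illustration, not from [1] or [2]; it assumes the state fits in 64 bits, whereas real FRACTRAN needs arbitrary precision). Each prime is a multiset element and its exponent the multiplicity, so multiplying by a fraction p/q performs one rewrite step:

    #include <stdio.h>

    /* Each prime factor of n is a multiset element; its exponent is the
     * element's multiplicity. Multiplying by num/den (when it divides
     * evenly) removes den's factors and adds num's: one rewrite step. */
    typedef struct { unsigned long long num, den; } frac;

    unsigned long long run(const frac *prog, int len, unsigned long long n) {
        for (;;) {
            int i = 0;
            while (i < len && n % prog[i].den != 0)
                i++;                           /* first applicable rule */
            if (i == len)
                return n;                      /* no rule applies: halt */
            n = n / prog[i].den * prog[i].num;
        }
    }

    int main(void) {
        /* Conway's adder: on input 2^a * 3^b, the program {3/2}
         * halts at 3^(a+b), turning every "2" into a "3". */
        frac adder[] = { { 3, 2 } };
        printf("%llu\n", run(adder, 1, 8 * 81ULL)); /* 2^3 * 3^4 -> 2187 = 3^7 */
    }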
I nearly did as well, but apparently it's just an article discussing the pros and cons of AI. Seems appropriate to head such an article with an AI picture.
Yeah, but the article ends up defending gen AI for game development, and it also confuses video game AI (a giant switch statement that drives an NPC's state machines) with gen AI. This dude just has red flags everywhere.
I think you might be better off getting rid of the "AI slop" entirely. Without getting into the whole ethical debate (it's worth having, but not here), putting it front and center on the website for a new retrocomputing magazine is kind of like putting an article about new features in Microsoft Word front and center on a website for mechanical typewriter enthusiasts.
I disagree. Making high quality niche publications like this economically feasible is exactly how AI could be used to actually benefit society. I see no evidence that this site is a plagiarism mill.
All the retrocomputing people I know are computer nerds who like playing with shiny new software, including LLMs.
The generated images take away much more than they add for me, unfortunately. Attempting to harken back to an era of retro computing while using something that screams modern corporate slop is an easy way to kill the vibe. I'd recommend against it; good luck though.
In the '90s, Henry G. Baker wrote a paper titled "The Thermodynamics of Garbage Collection" about linear logic, stack machines, reversibility, and the cost of erasing information:
A subset of FRACTRAN programs is reversible, and I would love to see rewriting computers explored as a potential avenue for reversible circuit building (similar to the STARAN CPU):
> Unfortunately, it seems that they went against the project's very principle by inventing a new language, new VM and toolchain instead of simply targeting one of the existing platforms.
The intentions of Uxn are not directly in line with using, say, a Commodore 64 for preservation and as a portability layer; implementing such a machine for each new system is a monumental project. The project's core principle is to design something perfectly tailored to hosting a handful of specific programs, to document it in a way that, if needed, others could create their own systems based on their own vision, and not to centralize all preservation efforts around a handful of retrocomputing emulators.
It's more akin to using brainfuck or subleq, or Another World's VM, or even Alan Kay's Chifir, where the goal is to target a virtual machine so small (< 100 LOC) that it can be easily ported, as opposed to a system so complex that it might take someone months to implement a passable C64, Amiga, or ST80 emulator.
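For a sense of scale, here's roughly what "so small it can be easily ported" means: a complete subleq VM is just the loop below. (A sketch of mine; memory size and halt conventions vary between subleq dialects.)

    #include <stdio.h>

    /* A one-instruction subleq machine: the entire VM is this loop. */
    int main(void) {
        /* subleq a b c: mem[b] -= mem[a]; if mem[b] <= 0, jump to c.
         * Halt convention here: a negative program counter stops the VM.
         * This tiny program computes 10 - 3 into mem[7], then halts. */
        int mem[] = { 6, 7, 3,    /* mem[7] -= mem[6]            */
                      8, 8, -1,   /* mem[8] -= mem[8] (0), halt  */
                      3, 10, 0 }; /* data: 3, 10, scratch zero   */
        int pc = 0;
        while (pc >= 0) {
            int a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
            mem[b] -= mem[a];
            pc = (mem[b] <= 0) ? c : pc + 3;
        }
        printf("%d\n", mem[7]);   /* prints 7 */
    }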
It can be really hard to get accurate emulation with a somewhat loosely defined, high-level VM such as some of your examples. If it's that small and simple, programmers might accidentally create dependencies on implementation idiosyncrasies. Just look at what happens in the retrocomputing scene when emulators aren't perfectly cycle- and hardware-accurate: applications that rely on very low-level details of the architecture, the demoscene being a good example, don't work on certain emulators.
If we want to create a new "forever VM", that VM would have to very strictly define behavior across I/O, graphics, audio, and other areas. I don't want the application to stutter or run too fast on future emulators. I want the emulation to be perfectly cycle-accurate.
Exactly, you get it. That's the goal of the project: no undefined behavior, no hazy specifications. :) I've dabbled in this space for quite a while now, and I can assure you that dependencies on implementation idiosyncrasies get increasingly worse with complex VMs.
To add a bit to this: although Dusk OS doesn't share stage0's goal of mitigating the "trusting trust" attack, I think it effectively achieves it. Dusk OS kernels are less than 3000 bytes; the rest boots from source. One can easily audit those 3000 bytes manually to ensure that nothing has been inserted.
That being said, the goal of stage0 is ultimately to compile GCC, and there's no way to do that with Dusk OS.
That being said (again), this README in stage0 could be updated, because I do think that Dusk is a good counterpoint to this critique of Forth.
Oh, amazing! I've heard of DuskOS before but I didn't realize its C compiler was written in Forth.
It looks like it makes quite a few changes to C, so it can't really run unmodified C code. I wonder how much work it would take to convert a full C compiler into something DuskCC can compile.
One of my goals with Onramp is to compile as much unmodified POSIX-style C code as possible without having to implement a full POSIX system. For example, Onramp will never support a real fork() because the VM doesn't have virtual memory, but I do want to implement vfork() and exec().
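For readers unfamiliar with the pattern: vfork() borrows the parent's address space until the child execs or exits, so it doesn't need the virtual-memory machinery a real fork() does. A generic POSIX sketch of the idiom (not Onramp's actual implementation):

    #include <unistd.h>
    #include <sys/wait.h>
    #include <stdio.h>

    int main(void) {
        pid_t pid = vfork();
        if (pid == 0) {
            /* Child: only exec or _exit is safe after vfork(). */
            execlp("echo", "echo", "hello from the child", (char *)NULL);
            _exit(127);            /* reached only if exec failed */
        }
        int status;
        waitpid(pid, &status, 0); /* parent resumes once the child execs */
        printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }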
It can't compile unmodified C code targeting POSIX; that's by design. Allowing this would import way too much complexity into the project.
But it does implement a fair chunk of C itself. The idea is to minimize the magnitude of the porting effort and make it mechanical.
For example, the driver for the DWC USB controller (the controller on the Raspberry Pi) comes from Plan 9. There was a fair amount of porting to do, but it was mostly removing unnecessary hooks. The code itself, where the real logic happens, stays pretty much the same and compiles just fine with Dusk's C compiler.
Here's a little zine on multiset rewriting (unordered term rewriting). John Conway said (about Fractran, in The Book of Numbers) that it is such a simple paradigm of computation that no book is needed to learn it, and that it can be taught in 10 seconds.
No, it can operate on a data structure as well. There's string rewriting, which does operate on text (though even that can be stored in a structure amenable to applying rewrite rules, rather than brute-force copying it or something silly). For term rewriting, there are plenty of efficient ways to store and operate on the information besides just textually.
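As one illustration of a non-textual representation (my sketch, not anything specific from the thread): terms stored as trees of (symbol, children), where a rewrite step splices subtrees by pointer instead of copying characters around.

    #include <stdio.h>
    #include <string.h>

    typedef struct term {
        const char  *symbol;
        int          arity;
        struct term *args[2];     /* enough for this tiny demo */
    } term;

    /* Rule: double(X) -> plus(X, X), applied at the root if it matches. */
    term *rewrite_double(term *t) {
        if (strcmp(t->symbol, "double") == 0 && t->arity == 1) {
            static term plus;         /* demo only: no allocation */
            plus.symbol = "plus";
            plus.arity = 2;
            plus.args[0] = t->args[0]; /* X is shared, not copied */
            plus.args[1] = t->args[0];
            return &plus;
        }
        return t;                      /* no match: term unchanged */
    }

    void print_term(const term *t) {
        printf("%s", t->symbol);
        if (t->arity) {
            printf("(");
            for (int i = 0; i < t->arity; i++) {
                if (i) printf(", ");
                print_term(t->args[i]);
            }
            printf(")");
        }
    }

    int main(void) {
        term x = { "x", 0, { NULL, NULL } };
        term d = { "double", 1, { &x, NULL } };
        print_term(rewrite_double(&d));   /* prints plus(x, x) */
        printf("\n");
    }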
Hmm, maybe I misunderstand. All the rules must be applied to fixpoint or elimination for every input, right? And the larger the program (rule set), the worse the performance, since more rules must be evaluated at each "tick" of the program, unless you play tricks with rule ordering.
Yes, that's often the objective. If they're properly written they will terminate, but not all sets of rules terminate. It's possible for rules to cause divergence or cycles. For example:
(1) A -> A B
(2) A -> B
    B -> A
(1) never terminates, always adding a new B on each application but never removing the A. (2) doesn't grow, but never terminates either, since each symbol is forever replaced with the other.
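A quick way to see the cycle in (2) is to run it with counts per symbol. A toy multiset-rewriting sketch of my own (symbols encoded as indices; the first applicable rule fires, then we restart):

    #include <stdio.h>

    #define NSYM 2 /* symbols: A = 0, B = 1 */

    typedef struct { int lhs[NSYM], rhs[NSYM]; } rule;

    /* Fire the first rule whose left-hand side is present; return 0
     * when no rule applies, i.e. a normal form has been reached. */
    int apply_once(const rule *rules, int nrules, int *state) {
        for (int r = 0; r < nrules; r++) {
            int ok = 1;
            for (int s = 0; s < NSYM; s++)
                if (state[s] < rules[r].lhs[s]) { ok = 0; break; }
            if (ok) {
                for (int s = 0; s < NSYM; s++)
                    state[s] += rules[r].rhs[s] - rules[r].lhs[s];
                return 1;
            }
        }
        return 0;
    }

    int main(void) {
        /* Rule set (2) above: A -> B, B -> A. It never reaches a
         * normal form, so we cap the steps to observe the cycle. */
        rule cycle[] = { { {1, 0}, {0, 1} }, { {0, 1}, {1, 0} } };
        int state[NSYM] = { 1, 0 }; /* start with one A */
        for (int step = 0; step < 6; step++) {
            printf("step %d: A=%d B=%d\n", step, state[0], state[1]);
            if (!apply_once(cycle, 2, state)) break;
        }
    }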
Strand is one of the languages that particularly blew my mind. I think it was Armstrong who said that Strand was "too parallel".
> Strand is a logic programming language for parallel computing, derived from Parlog[2] which itself is a dialect of Prolog[3]. While the language superficially resembles Prolog, depth-first search and backtracking are not provided. Instead execution of a goal spawns a number of processes that execute concurrently.
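For a flavor of that execution model in a mainstream language: concurrent logic languages like Strand coordinate spawned goals through single-assignment variables, where a reader suspends until some producer binds the variable. A loose C/pthreads analogy (my sketch, nothing like Strand's actual runtime):

    #include <pthread.h>
    #include <stdio.h>

    /* A single-assignment "logic variable": bound at most once,
     * and readers block until it is bound. */
    typedef struct {
        pthread_mutex_t m;
        pthread_cond_t  c;
        int bound, value;
    } lvar;

    void lvar_bind(lvar *v, int value) {
        pthread_mutex_lock(&v->m);
        v->value = value;
        v->bound = 1;
        pthread_cond_broadcast(&v->c);
        pthread_mutex_unlock(&v->m);
    }

    int lvar_read(lvar *v) {       /* suspend until bound */
        pthread_mutex_lock(&v->m);
        while (!v->bound)
            pthread_cond_wait(&v->c, &v->m);
        int value = v->value;
        pthread_mutex_unlock(&v->m);
        return value;
    }

    lvar x = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 };

    void *goal_produce(void *arg) { (void)arg; lvar_bind(&x, 42); return NULL; }
    void *goal_consume(void *arg) { (void)arg; printf("got %d\n", lvar_read(&x)); return NULL; }

    int main(void) {
        /* The "conjunction" produce(X), consume(X): both goals are
         * spawned at once; no search, no backtracking. */
        pthread_t a, b;
        pthread_create(&a, NULL, goal_consume, NULL);
        pthread_create(&b, NULL, goal_produce, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
    }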
[1] https://www.cs.mcgill.ca/~jking/papers/origami.pdf
[2] https://www.pythabacus.com/Origami%20Fractions/folding.htm