RISC OS is still available for the Raspberry Pi. You get to BASIC by pressing F12 and typing ‘BASIC’, and it has a built-in ARM assembler. The manuals are all available online: there’s a BASIC manual and the Programmers’ Reference Manual for the whole OS.
(It’s exactly like a BBC B because it’s the same version of BASIC, ported to ARM; it even emulates some of the hardware, because back in the day they wanted software to run without porting)
In theory, typing ‘*CONFIGURE LANGUAGE 20’ (BASIC being module 20) should turn it into a machine that boots straight into BASIC, but when I tried it just now the setting was ignored because the keyboard drivers aren’t loaded soon enough.
I started off with a BBC Micro, followed by an Acorn A3000. My first 'PC' was a 486 card for the RISC PC - now there's an interesting architecture: the machine had two processor slots, but didn't require the processors to have the same architecture. You could use the 486 as a very janky floating point accelerator for the ARM chip as well as to run DOS and Windows.
An interesting thing is that RISC OS is still available for the Raspberry Pi and it's a direct descendant from the operating system of the BBC Micro - not emulated. It still has the same level of direct hardware access, so if you ever wanted to use peek and poke (well, those are the ! and ? operators in BBC BASIC) on some modern graphics hardware, there's a way to do it. There's a built-in ARM assembler in there too.
What I think was really different about the time was the quality of the documentation. Nothing modern has the same sense of empathy for the user or achieves the same combination of conciseness and comprehensiveness. For instance, here's the BBC Micro's Advanced User Guide: https://stardot.org.uk/mirrors/www.bbcdocs.com/filebase/esse... (it's of particular historical note, because today's ARM architecture grew out of this system). You could build the entire computer from parts using just this 500 page manual, and you'll note that it's not actually a huge amount more complicated than Ben Eater's 6502 breadboard computer.
Weird thing: RISC OS actually has backwards compatibility with some of the old APIs so some of the stuff in the advanced user guide still works today on a Raspberry Pi (plus it comes with a BBC Micro emulator which was originally written because Acorn didn't want their new machine to fail due to a lack of software). These days there's also https://bbcmic.ro of course :-)
The Programmers' Reference Manual for RISC OS is similarly well written, and surprisingly quite a lot of it is still relevant: most things still work on a Raspberry Pi, and even modern operating systems still work pretty much the same way on the architecture. While things like MEMC, IOC and VIDC are long dead, there's a pretty direct lineage from these older chips to the modern hardware too.
Alan Kay has done a few lectures about exactly this phenomenon: his talk 'Normal Considered Harmful' in particular is worth a look as it pretty much goes straight to the heart of the issue: https://www.youtube.com/watch?v=FvmTSpJU-Xc
I wrote a parser generator quite a long time ago that I think improves the syntax quite a lot, and which has an interesting approach to generalisation: you can write conditions on the lookahead (which are just grammars that need to be matched in order to pick a given rule when a conflict needs to be resolved). This construct makes it much easier to write a grammar that matches how a language is designed.
Here's an ANSI-C parser, for example: https://github.com/Logicalshift/TameParse/blob/master/Exampl... - this is an interesting example because `(foo)(bar)` is fully ambiguous in ANSI C: it can be a cast or a function call depending on if `foo` is a type or a variable.
An advantage over GLR or backtracking approaches is that this still detects ambiguities in the language, so it's much easier to write a grammar that doesn't end up running in exponential time or space. Plus, when an ambiguity is resolved by the generalisation, the chosen interpretation is specified by the grammar rather than being arbitrary (backtracking) or left until later (GLR).
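To make the `(foo)(bar)` ambiguity concrete, here's a minimal Rust sketch (my own illustration, not TameParse's mechanism): it shows the classic 'lexer hack' that resolves this case in conventional C parsers, where the reading depends entirely on whether `foo` currently names a type.

```rust
use std::collections::HashSet;

/// Decide how `(foo)(bar)` should parse, given the set of identifiers
/// currently known to name types. The ANSI C grammar alone can't
/// distinguish the two readings; some form of context is required.
fn classify(foo: &str, typedefs: &HashSet<&str>) -> &'static str {
    if typedefs.contains(foo) {
        "cast" // (T)(expr): cast the expression to type T
    } else {
        "call" // (f)(arg): call the function f with argument arg
    }
}

fn main() {
    let typedefs: HashSet<&str> = ["size_t", "FILE"].into_iter().collect();
    assert_eq!(classify("size_t", &typedefs), "cast");
    assert_eq!(classify("printf", &typedefs), "call");
    println!("ok");
}
```

A lookahead condition expressed in the grammar itself can replace this side-channel between lexer and parser, which is what keeps the grammar closer to how the language is actually specified.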
I was working on improving error handling when I stopped work on this, but my approach here was not working out.
(This is a long-abandoned project of mine, but the approach to ambiguities and the syntax seem novel to me and were definitely an improvement over anything else I found at the time. The lexer language has a few neat features in it too)
I've been working on a 2D rendering toolkit that increasingly looks to me like it deserves a mention on these lists: https://github.com/logicalshift/flo_draw (but I'm not on Reddit...). Layers, vector sprites, dynamic textures and a streaming API that fits well with 'reactive' designs are among the features that make it stand out from what else is out there. It's super simple to get going too.
It started life as a rendering layer for FlowBetween, so I could swap in whatever looked like it was 'winning' later on, but I wound up writing my own renderer as there wasn't anything quite there yet. It still has that design, so another unique thing is that it's possible to use the same API with whatever rendering layer you want.
Speaking of FlowBetween, one thing I have wanted to do for ages is to get rid of the platform-specific GUIs and use something universal. It should be easy because FlowBetween sends straightforward instructions to an independent GUI layer, but I keep bouncing off for a few reasons:
- it's a big ole task so I definitely want to pick something that's stable and also lets me hedge my bets in terms of being easy to migrate away from
- most importantly, FlowBetween needs pressure data from tablets and a lot of frameworks just don't provide it (this is also in a terrible state in browsers)
- lots of GUI crates are designed as frameworks and so try to dictate the entire design of any app that uses them, which is no good for FlowBetween which tries to keep its internal design choices independent of its choice of GUI
At the moment, I suspect that some sort of imgui framework is best along with an entirely manual implementation of tablet pointer data: fits with my existing design and isn't 'contagious' in a way that could make it hard to migrate to something else later on.
Finding the intersections between individual curves is only the first part of this operation: you also need a way to determine which edges are on the outside of the new path (flo_curves uses raycasting for this, same operation that the OP focuses on, essentially) and deal with a fairly large pile of edge cases - literally edge cases here. Things like overlapping edges, nearly overlapping edges, what happens if a ray passes through an intersection point or directly across a straight edge, precision issues, curves with loops, etc.
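As a sketch of the raycasting part, here's a deliberately simplified, hypothetical version in Rust (polygons with straight edges only, and none of the hard cases above handled): cast a horizontal ray and count crossings, with an odd count meaning "inside".

```rust
/// Even-odd raycast test: cast a horizontal ray from `p` towards +x
/// and count how many polygon edges it crosses. An odd count means
/// `p` is inside. A real path-arithmetic implementation also has to
/// cope with curved segments, rays through vertices, rays along an
/// edge, overlapping edges and floating-point precision.
fn point_in_polygon(p: (f64, f64), poly: &[(f64, f64)]) -> bool {
    let mut inside = false;
    let n = poly.len();
    for i in 0..n {
        let (x1, y1) = poly[i];
        let (x2, y2) = poly[(i + 1) % n];
        // Does this edge straddle the ray's y coordinate?
        if (y1 > p.1) != (y2 > p.1) {
            // x coordinate where the edge crosses the ray's line
            let x = x1 + (p.1 - y1) * (x2 - x1) / (y2 - y1);
            if x > p.0 {
                inside = !inside;
            }
        }
    }
    inside
}

fn main() {
    let square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)];
    assert!(point_in_polygon((0.5, 0.5), &square));
    assert!(!point_in_polygon((1.5, 0.5), &square));
    println!("ok");
}
```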
The main difference between the ARM1 (which was never sold in any quantity) and the ARM2 was the addition of the multiply instruction (and the multiply-and-accumulate instruction). These were the only multi-cycle arithmetic operations the chip had, and what's more you couldn't load arbitrary constants into a register with the MOV instruction or use constants with the multiply instruction itself, so they were a bit inconvenient as well as comparatively slow.
But you could write:
MOV R1, #3
MUL R0, R0, R1
to multiply the number in R0 by 3, which was pretty convenient. In spite of the limitations, this gave the instruction set a certain 68000 quality (except much faster for a given clock speed). The ARM had a thing called the barrel shifter, though, which let you apply an arbitrary shift to the last operand of any arithmetic operation. All arithmetic ops take 1 processor cycle, so you could write this instead to multiply by 3 in a single cycle:
ADD R0, R0, R0, LSL #1
I.e., add R0 to itself shifted left by 1 (multiplied by 2). Other constant multiplications could be constructed with the SUB instruction too (multiplying by 7, say, as a shift left by 3 minus the original). Some constants required multiple instructions, but I think the maximum was something like 4 or 5 instructions for any constant; I wrote an assembler in the mid-90s that could figure this out for you automatically, so I used to know for sure.
This is basically a single-instruction version of the 6502 trick (handy, because the first OS for the ARM was a hurried port of a 6502 operating system), which fits with the ARM's original inspiration as a 32-bit version of the 6502. As most instructions complete in one CPU cycle, the ARM could have fairly monstrous integer performance for the mid-to-late 80s if you knew how to program it.
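The search such an assembler has to do can be sketched as a toy breadth-first search (my own hypothetical reconstruction, not the original code): each ARM data instruction combines two already-computed multiples of the register, with one of them barrel-shifted.

```rust
use std::collections::HashSet;

/// Toy breadth-first search for the minimum number of ARM-style
/// "ADD/SUB/RSB with a barrel-shifted operand" instructions needed
/// to multiply a register by `target`. Each step computes
/// a + (b << k), a - (b << k) or (b << k) - a from multipliers
/// that are already available (0 and the original value, 1).
fn shift_add_steps(target: i64) -> Option<usize> {
    let mut reachable: HashSet<i64> = [0, 1].into_iter().collect();
    for steps in 0..=5 {
        if reachable.contains(&target) {
            return Some(steps);
        }
        let mut next = reachable.clone();
        for &a in &reachable {
            for &b in &reachable {
                // Real ARM shifts go up to 31; 16 keeps the toy search small
                for k in 0..16 {
                    next.insert(a + (b << k)); // ADD Rd, Ra, Rb, LSL #k
                    next.insert(a - (b << k)); // SUB Rd, Ra, Rb, LSL #k
                    next.insert((b << k) - a); // RSB Rd, Ra, Rb, LSL #k
                }
            }
        }
        reachable = next;
    }
    None
}

fn main() {
    assert_eq!(shift_add_steps(3), Some(1));  // ADD R0, R0, R0, LSL #1
    assert_eq!(shift_add_steps(7), Some(1));  // RSB R0, R0, R0, LSL #3
    assert_eq!(shift_add_steps(10), Some(2)); // x5 first, then shift left
    println!("ok");
}
```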
If Rust's your language, I wrote a library that should be pretty good at 2D things: https://github.com/logicalshift/flo_draw - I wrote it while working on another project (FlowBetween) where I found debugging would be easier if I could just render something on-screen but rendering stuff on screen always required a ridiculous amount of setup.
It has some nice options for feeding its own output back into itself, as it uses streams rather than callbacks, so it's quite good for procedural rendering tasks (the 'Wibble' example is a good place to start with that)
I've been building out some backend stuff lately so there's a bunch of new features waiting to go in. https://github.com/Logicalshift/flo_draw has some demonstrations of the sort of procedural animation features I'm planning on adding, for instance.
Well, my main side project is the same as it's been for the last couple of years, an animation/vector editing tool written in Rust: https://github.com/logicalshift/flowbetween
It's sort of starting to make the transition between a pile of ideas and an actually useful tool at the moment. The whole idea is to be a vector editing application that works more like a bitmap tool when it comes to painting, so there's a flood-fill tool and a way to build up paths just by drawing on the canvas rather than having to manually mess around with control points.
The way I built the UI is unique too, I think. Choices of UI libraries for Rust were quite limited when I started, so I built it to be easy to move between different libraries. I don't think there's any other UI layer in existence that is as seamless for switching between platforms (or which can turn from a native app into a web app with a compiler flag, without resorting to something like Electron)
I suspect that https://github.com/hecrj/iced would now be another UI library that’s as seamless for switching between platforms. Flutter might qualify too (or might not).
I’ve been keeping an eye on the various UI libraries when they come up: right now it seems to take me around a month to add a new one so I’m waiting for one to get traction.
Something else that’s a problem is that as a drawing app, FlowBetween wants to be able to get access to data from a digitizer: pen pressure and tilt in particular. A lot of UI libraries don’t think to pass that through from the operating system, or have an awkward API (browser support is also very spotty for this)
Yeah, lack of support for different input media has been a real pain point for me: most of the developers of these things have mice only, and don't stop to think about touch or pen input. I use a Surface Book which has mouse, touch and pen, and I like to use all three forms at various times.
If you’re trying to do touch and pen on non-web platforms, things tend to be very messy if you want to handle all three types of pointer (mouse, touch and pen) optimally.
But browser support spotty? I find the pointer events API a marvellous abstraction over platform differences, doing the right thing automatically for >99% of cases, and making the remaining cases possible. The only thing I feel it actually lacks is standardised gesture support for touch. I wrote a simple pressure-capable drawing app a couple of years back in the very early days of pressure-sensitivity (back when Edge was the only browser on Windows that supported it, so I targeted Edge only until other browsers got it), and I found it a refreshingly straightforward system to work with. And since then, everyone implements things like tilt and pressure.
So I’m curious to hear what you’re quibbling over, as someone that’s been using this stuff in anger more recently than I.
I suspect some of my experience is now out of date, as it's spread out over quite some time. The most recent issue I had to deal with was in Chrome: when drawing the canvas at high resolution it was a bit slow at blitting some bitmaps and so was running at 30fps. Something in the pointer events implementation is tied to the framerate, so the events also lagged behind, which made drawing on the canvas quite difficult as the display was 250-500ms behind the user. I eventually 'fixed' it by turning the resolution down, but it was a real pain finding which part of the application had fallen behind (FlowBetween being designed not to lag but to catch up when the display can't keep up). That's quite a subtle one: the pointer events lagging is easily mistaken for the frame rate lagging.
Other browsers don't do this, but they've had a few other issues. What I remember in particular: some only supported pressure information through the touch API, and some seemed to supply pressure information via different APIs on different platforms, so both pointer events and touch events were needed.
All of these are maturity issues rather than real problems with the API, though, and I haven't re-checked some of the older issues recently. That Chrome issue was still happening back in January, so it might still be around, but the others I last encountered over a year ago, so they may have been fixed by now.
If you haven’t been using it, make sure to use PointerEvent.getCoalescedEvents where available, which unlinks the events from the display frame rate. Anything using pointer events for drawing should use it. (But remember that events can come in at any speed, e.g. a 240fps pen should coalesce four events per 60fps frame—so make sure you can cope with lots of events.)
I believe that the pointer events API is in current browsers now uniformly superior in functionality to the touch events API which it obsoletes.
I like to draw, and I suppose my frustrations with other animation packages that I’ve tried were the main inspiration. It’s quite nice to have something that combines two hobbies into one.
When I started I picked Rust because I’d been learning it and wanted to try using it on a more substantial project. I’m very happy with it as a choice of language: it definitely has a difficult learning curve, especially with the way borrowing looks similar to references in a garbage-collected language but works very differently. However, it’s a very expressive language: something about it makes it very easy to write code quickly that’s still very easy to follow later on.
Awesome. I dabbled with Rust several years ago, but have been thinking about diving back into it... Do you have any recommendations about where to start?
The official Rust Programming Language book is excellent and was all I really needed to learn the language. I had a small project to work on that I didn't mind rewriting after my first attempt (my build server is a NUC and I wanted to write some software to flash the LEDs on the front to indicate the build state)
I suspect everyone goes through a phase of hating borrowing when learning Rust: it's helpful to know that it's something that eventually 'clicks' and really stops being an issue. It didn't exist when I was learning, but 'Learning Rust with Entirely Too Many Linked Lists' looks like it would have helped a lot.
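For anyone wondering what the 'click' looks like in practice, here's a tiny sketch of the rule that trips most people up: any number of shared borrows can coexist, but a mutable borrow is exclusive, and (since non-lexical lifetimes) a borrow ends at its last use rather than at the end of the scope.

```rust
fn main() {
    let mut scores = vec![1, 2, 3];

    // Any number of shared borrows may coexist...
    let first = &scores[0];
    let last = scores.last();
    println!("{:?} {:?}", first, last);
    // ...and once they're no longer used, the borrows end
    // (non-lexical lifetimes), so mutation becomes legal again:
    scores.push(4);

    // A mutable borrow is exclusive: while `all` is live, neither
    // `&scores` nor a direct `scores.push(...)` would compile.
    let all = &mut scores;
    all.push(5);
    assert_eq!(all.len(), 5);
    println!("ok");
}
```

It's exactly the "shared XOR mutable" restriction that feels alien coming from a garbage-collected language, and exactly the thing that stops being an issue once it clicks.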