Thanks for taking the time to explain and for the rundown of the architecture. Sounds a bit like an LMAX disruptor for a DB, which honestly is quite a natural design for performance. Kudos for the Zig implementation as well; I've never seen such a serious project written in it.
Personally, I still see challenges in developing on top of a system with data in two places unless there's a nice way to sync between them. I would have seen the mutable/immutable classification as more of unlogged vs. changes fully logged in the DB, but I'm just doing armchair analysis here.
Exactly, the Martin Thompson talk I linked above is about the LMAX architecture. He gave it at QCon London, I think in May 2020, and we were designing TigerBeetle in July 2020, pretty much lapping it up (I'd already been a fan of Thompson's Mechanical Sympathy blog for a few years by that point).
I think the way to see this is not as "two places for the same type of data" but rather as "separation of concerns for radically different types of data" with different compliance/retention/mutability/access/performance/scale characteristics.
It's also a natural architecture, and nothing new: it's how you would probably want to architect the "core" of a core banking system. We literally lifted the design for TigerBeetle directly out of the central bank switch's internal core, so that it would be dead simple to "heart transplant" it back in later.
The surprising thing, though, was when small fintech startups, energy companies, and gaming companies started reaching out. The primitives are easy to build with and unlock significantly more scale. Again, it's like how using object storage in addition to Postgres is probably a good idea.
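To make the "separation of concerns" point a bit more concrete, here's a rough sketch of what that split might look like from an application's side, assuming the tigerbeetle-node client and node-postgres (exact transfer field names may differ by client version, and the table/column names here are made up): mutable, low-volume customer metadata stays in Postgres, while the immutable, high-volume debit/credit record is appended to TigerBeetle, with the two linked only by account id.

    import { createClient, id } from 'tigerbeetle-node';
    import { Pool } from 'pg';

    // Assumed local single-replica cluster and a Postgres pool from the environment.
    const tigerbeetle = createClient({
      cluster_id: 0n,
      replica_addresses: ['3000'],
    });
    const postgres = new Pool({ connectionString: process.env.DATABASE_URL });

    async function recordPayment(
      debitAccountId: bigint,
      creditAccountId: bigint,
      amount: bigint,
    ): Promise<void> {
      // Mutable metadata: update the (hypothetical) accounts table in Postgres.
      await postgres.query(
        'UPDATE accounts SET last_payment_at = now() WHERE tb_account_id = $1',
        [debitAccountId.toString()],
      );

      // Immutable ledger data: append the transfer itself to TigerBeetle.
      const errors = await tigerbeetle.createTransfers([{
        id: id(),                        // client-generated unique transfer id
        debit_account_id: debitAccountId,
        credit_account_id: creditAccountId,
        amount,
        pending_id: 0n,
        user_data_128: 0n,
        user_data_64: 0n,
        user_data_32: 0,
        timeout: 0,
        ledger: 1,                       // assumed ledger number
        code: 1,                         // assumed transfer code
        flags: 0,
        timestamp: 0n,
      }]);
      if (errors.length > 0) {
        throw new Error(`transfer rejected: ${errors[0].result}`);
      }
    }

The point of the sketch is only that the two stores hold genuinely different kinds of data, so the "sync" surface between them is small: the ledger rows never change once written, and Postgres only keys into them by id.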