Titanium fires sure are scary. But there's a good amount of chicken and egg here: expensive material limits demand, which limits progress on manufacturing techniques, which keeps part prices high. I would expect that significant manufacturing method progress would be made if there was a step change in the price of titanium stock.
And I wouldn't overstate the machining difficulty. Sure, it's a pain in the rear, and expensive, but can be done on regular machines with the right tools, techniques, and processes. I've made a couple of titanium parts myself.
There's a significant history of government effort to improve working with titanium. Construction Physics wrote a nice review [0].
The current levels of workability, cost, and alloying are what we have after that chicken-and-egg cycle. Titanium is expensive because it is hard to manufacture, not just hard to work with, and that limits demand. As far as we now know, titanium is what it is: it's the nature of the material, not a lack of investment.
More realistically, the ROI isn't there for most applications. Good aluminum is pretty darn good: massively easier to work, cheaper, etc. Newer super steels have even made serious inroads on titanium parts because of workability and toughness.
Titanium-chlorine fires are even more magnificent than titanium-oxygen fires. Wet chlorine (>150 ppm water) is too corrosive for ferrous metals, so titanium is often used for pipes carrying wet chlorine.
If something ignites one of these pipelines, there's absolutely no way to put it out: it has both the fuel (titanium) and the oxidizer (chlorine), and it burns mega-hot until one of them is fully consumed along the entire length of the pipeline. The pipelines can sometimes be shockingly long (a mile-ish).
But there’s also the base chemistry: titanium doesn’t behave like steel, and the chemical differences are why it is such a pain to work with, not inexperience.
The chemical difference between titanium and steel is mainly that titanium has a much higher reactivity with oxygen and nitrogen, the main constituents of air.
As with aluminum, this high reactivity is masked in finished products made of titanium, because any titanium object is covered by a protective layer of titanium dioxide.
What is worse for titanium than for aluminum is that titanium has a low thermal conductivity, so a small part of a titanium workpiece can become very hot during processing. This does not happen with aluminum, where the rest of the metal acts as a heatsink.
These hot spots, which do not form on aluminum during processing, make titanium much more susceptible to reacting with the air, or even to starting a fire.
Titanium, even as "commercially pure", has a much higher strength than aluminum, which requires higher machining forces and further increases the chances of overheating.
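To put rough numbers on the hot-spot point, here's a crude back-of-envelope in Python. The conductivities are approximate handbook values, and the cut geometry is completely made up; only the ratio between the metals matters.

    # Crude 1D steady-state conduction: dT = q * L / (k * A).
    # Thermal conductivities in W/(m*K), approximate handbook values.
    k = {"aluminum": 237.0, "steel": 50.0, "titanium": 22.0}

    q = 200.0   # heat flowing from the cut into the workpiece, W (made up)
    L = 0.01    # distance the heat must travel to reach the bulk, m (made up)
    A = 1e-4    # conduction cross-section, m^2 (made up)

    for metal, cond in sorted(k.items(), key=lambda kv: -kv[1]):
        print(f"{metal:9s} local temperature rise ~ {q * L / (cond * A):5.0f} K")

The absolute numbers mean nothing, but the roughly 10x gap between aluminum and titanium is the point: the same cutting heat that aluminum wicks away piles up at the tool-chip interface in titanium.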
> As with aluminum, this high reactivity is masked in finished products made of titanium, because any titanium object is covered by a protective layer of titanium dioxide.
My understanding is that rust fails to protect iron the same way. Is that right? If so, why the difference?
Yes, it is right. The difference is that in the case of aluminium and titanium (but also stainless steel), the oxide grows in a uniform way, covering all the metal. These protective layers are very thin and act as barriers stopping oxygen from reaching the metal underneath.
In the case of iron, oxidation occurs at different points on the surface, and the oxide layer initially leaves most of the metal exposed. The oxide is also not effective at stopping oxygen, so the rust layer keeps growing until it forms flakes that fall off, exposing more of the metal. The process repeats until all the metal is consumed.
Once rust starts, it is porous and flaky and allows more oxygen to infiltrate and hit the next layer of iron. It is porous and flaky because iron oxidation creates a mix of FeO and Fe2O3, which have different crystal structures, so it doesn't form a nice protective barrier.
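There's a classic rule of thumb that captures this: the Pilling-Bedworth ratio, the volume of oxide produced per volume of metal consumed. Roughly, a ratio between 1 and 2 means the oxide fits the surface and protects it; above 2 the oxide is in compression and cracks/flakes. A quick sketch with textbook-approximate densities and molar masses (my numbers, so treat them as approximate):

    # Pilling-Bedworth ratio: PBR = (M_oxide * rho_metal) / (n * M_metal * rho_oxide)
    # where n = metal atoms per oxide formula unit.
    # Rule of thumb: 1 < PBR < 2 protective; PBR > 2 cracks and flakes.
    metals = {
        # metal: (M_metal, rho_metal, M_oxide, rho_oxide, n, oxide)
        "Al": (26.98, 2.70, 101.96, 3.95, 2, "Al2O3"),
        "Ti": (47.87, 4.51, 79.87, 4.23, 1, "TiO2"),
        "Fe": (55.85, 7.87, 159.69, 5.24, 2, "Fe2O3"),
    }

    for metal, (m_m, rho_m, m_ox, rho_ox, n, oxide) in metals.items():
        pbr = (m_ox * rho_m) / (n * m_m * rho_ox)
        print(f"{metal} -> {oxide}: PBR = {pbr:.2f}")

Aluminum (~1.3) and titanium (~1.8) land in the protective band; Fe2O3 comes out around 2.1, consistent with the flaking you describe.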
Rust can protect iron in that way: bluing is a common process that creates a protective oxide coating. However, rust is fragile and often flakes off, allowing the process to continue. For other metals, the oxide is strong enough to protect the pure inner layers.
This depends on the alloy involved as well. In general, though, rust is not good protection for iron.
Doing distributed systems work in Lean is possible, but right now is much harder than something like TLA+ or P. It's possible that a richer library of systems primitives in Lean ('mathlib for systems') could make it easier. Lean is a very useful tool, but right now isn't where I'd start for systems work (unless I was doing something specific, like trying to formalize FLP for a paper).
Hey Will. I'm a huge fan of the work you all are doing, and of FoundationDB, but I don't believe it's accurate that DST was invented at FoundationDB (or, maybe it was, but was also used in other places around the same time or before).
For example, the first implementations of AWS's internal lock service (Alf) used DST as a key part of the testing strategy, sometime around 2009. Al Vermeulen was influential in introducing it at AWS, and I believe it built on some things he'd worked on before.
Still, Antithesis is super cool, and I really admire how you all are changing the conversation around systems correctness. So this is a minor point.
Also a huge proponent of Antithesis and their current work, but there definitely were some notable precedents at or around that time e.g. MODIST from 2009 (https://www.usenix.org/legacy/event/nsdi09/tech/full_papers/...), which similarly tried to build a "model checker for distributed systems".
As another interesting historical side note, I have wondered about the similarities between Antithesis and "Corensic", a startup spun out of UW around a similar time period (circa 2009). Apparently, they "built a hypervisor that could on-demand turn a guest operating system into deterministic mode" (see "Deterministic Multiprocessing" at
https://homes.cs.washington.edu/~oskin/). My impression is that their product was not a significant commercial success, and the company was acquired by F5 Networks in 2012 (https://comotion.uw.edu/startups/corensic/).
Overall, I don't over-index on novelty, and think it is generally good for ideas to be recycled/revived/re-explored, with updated, modern perspectives. I believe that most rigorous systems designers/engineers likely converge to similar ideas (model checking, value of determinism, etc.) after dealing with these types of complex systems for long enough. But, it is nevertheless interesting to trace the historical developments.
Corensic was impressive tech. I actually debriefed with one of their founders years ago. IIRC, their product was focused on finding single-process concurrency bugs.
Deterministic hypervisors are by no means new. Somebody once told me that VMware used to support a deterministic emulation mode (mostly used for internal debugging). Apparently they lost the capability some time ago.
Hi Marc, thank you for the correction! We started doing it around 2010, and were not aware of any prior art. But I am not surprised to hear that others had the idea before us. I will give Al credit in the future.
I (one of the authors) did some distributed systems work with Promela about a decade ago, but it never felt like the right fit in the domain. It's got some cool ideas, and may be worth revisiting at some point.
> Maybe one can transform slow code from high level languages to low level language via LLMs in future.
This is one of the areas I'm most excited for LLM developer tooling. Choosing a language, database, or framework is a really expensive up-front decision for a lot of teams, made when they have the least information about what they're building, and very expensive to take back.
If LLM-powered tools could take 10-100x off the cost of these migrations, it would significantly reduce the risk of early decisions, and make it a ton easier to make software more reliable and cheaper to run.
It's very believable to me that, even with today's model capabilities, 10-100x is achievable.
I remember, many years back, one of the Go language authors wrote a C-to-Go translator and used it to convert the compiler, runtime, GC, etc. into Go.
Today, experts like that could create base transformers from high-level languages and frameworks to low-level languages and frameworks, with all of it exposed via LLM interfaces.
One could ask why do all this instead of generating a fast binary directly from the high-level code. But generating a textual transformation gives developers the opportunity to understand, tweak, and adjust the transformed code, which generating a binary directly would not.
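To make that concrete, here's a minimal sketch of what such a loop could look like, in Python with a Go target. Everything here is hypothetical: call_llm stands in for whatever model API you use, and the check step assumes the project already has a test suite.

    import subprocess

    def call_llm(prompt: str) -> str:
        """Placeholder for a real model API call (hypothetical)."""
        raise NotImplementedError

    def translate(c_source: str, feedback: str = "") -> str:
        note = f"Previous attempt failed with:\n{feedback}\n" if feedback else ""
        return call_llm(
            "Translate this C code to idiomatic Go, preserving behavior.\n"
            + note + c_source
        )

    def check() -> tuple[bool, str]:
        # Compile, then run the existing test suite. Text in, text out:
        # every intermediate artifact stays reviewable by a human.
        for cmd in (["go", "build", "./..."], ["go", "test", "./..."]):
            r = subprocess.run(cmd, capture_output=True, text=True)
            if r.returncode != 0:
                return False, r.stderr
        return True, ""

    def migrate(c_source: str, out_path: str, attempts: int = 5) -> bool:
        feedback = ""
        for _ in range(attempts):
            with open(out_path, "w") as f:
                f.write(translate(c_source, feedback))
            ok, feedback = check()
            if ok:
                return True  # a developer still reviews the diff before merging
        return False

The important property is in that last comment: because the output is source code rather than a binary, a failed or ugly translation is something a developer can read, fix, and feed back in.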
Which features would you like to see the team build first? Which limits would you like to see lifted first?
Most of the limitations you can see in the documentation are things we haven't gotten to building yet, and it's super helpful to know what folks need so we can prioritize the backlog.
Indexes! Vector, trigram, and maybe geospatial. (Some may be in by now; I didn't follow the service as closely as others.)
Note, it doesn't have to be pgvector, pg_trgm, or PostGIS; just the index component, even as a clean-room implementation, would make this way more useful.
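For anyone who hasn't looked inside pg_trgm, the core concept is tiny. This toy Python sketch is my approximation of it (the real extension's padding and normalization may differ in detail):

    def trigrams(s: str) -> set[str]:
        # pg_trgm-style: lowercase, pad each word with two leading and
        # one trailing space, then take every 3-character window.
        grams = set()
        for word in s.lower().split():
            padded = "  " + word + " "
            grams |= {padded[i:i + 3] for i in range(len(padded) - 2)}
        return grams

    def similarity(a: str, b: str) -> float:
        ta, tb = trigrams(a), trigrams(b)
        return len(ta & tb) / len(ta | tb) if ta or tb else 0.0

    print(similarity("postgres", "postgresql"))  # ~0.67

The index part is then essentially an inverted index from each trigram to the rows containing it, which is what makes unanchored LIKE/ILIKE and fuzzy matching cheap. That's the piece that would make the service way more useful.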
My understanding is that the way Aurora DSQL distributes data widely makes bulk writes extremely slow/expensive. So no COPY, no INSERT with >3k rows, no TRUNCATE, etc.
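If that's right, the practical workaround is to chunk writes client-side. A rough psycopg2 sketch; the table, columns, and the ~3k ceiling are all assumptions from this thread, not verified limits:

    from psycopg2.extras import execute_values

    def bulk_insert(conn, rows, batch_size=1000):
        # Keep each statement well under the rumored per-statement row
        # limit; each batch commits separately, so a failure mid-load
        # leaves earlier batches durable.
        with conn.cursor() as cur:
            for i in range(0, len(rows), batch_size):
                execute_values(
                    cur,
                    "INSERT INTO events (id, payload) VALUES %s",  # hypothetical table
                    rows[i:i + batch_size],
                )
                conn.commit()

Slower than a real COPY, obviously, but it keeps each unit of work inside whatever the per-statement limit turns out to be.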
This makes sense, especially with the move of OSDI to being annual, and NSDI accepting more and more general systems-y work (e.g. we published the Firecracker paper at NSDI, which wouldn't have made sense even 5 years earlier). ATC was left in a difficult niche, but still a valuable venue for "hackier" systems work, industry systems papers of the less quantitative kind, and a few others.
I'd love to see OSDI evolve to accept more of this work, and look at the ATC work that has stood the test of time and accept more work like that. Maybe SOSP and Eurosys too. I bet USENIX is going to figure this out - they're generally a smart and well-run organization.
Fun fact: we won best industry paper at ATC'23 for "On-demand container loading in AWS Lambda" (https://www.usenix.org/conference/atc23/presentation/brooker). I was super happy with it as a paper and got a ton of good feedback on it. About six months earlier, it'd been desk-rejected by the chair of another systems conference who asked that we don't submit such low-quality work to them.
This is great, really worth reading if you're interested in transactions.
I liked it so much I wrote up how the model applies to Amazon Aurora DSQL at https://brooker.co.za/blog/2025/04/17/decomposing.html It's interesting because of DSQL's distributed nature, and the decoupling between durability and application to storage in our architecture.
DSQL is so cool. I have been following since the release, and once it supports more of the Postgres feature set + extensions it'll be a killer. Fantastic architecture deep dive at re:Invent as well.