
What exactly is a "type-related disadvantage"?

As far as I'm aware, Ada has a much more expressive type system, and not by a hair. By miles. Being able to define custom bounds-checked ordinals, being able to index arrays with any enumerable type. Defining custom arithmetic operators for types. Adding compile-time and runtime checks to types with pre/post conditions, iteration variants, predicates, etc... Discriminant records. Record representation clauses.

I'm not sure what disadvantages exist.



References and lifetimes are where Rust wins over Ada, although I agree that Ada has many other great type features.

Access types are unable to express ownership transfer without SPARK (and a sufficiently recent release of gnatprove), and without it the rules for accessibility checks are so complex they're being revamped entirely. And access types can only express lifetimes through lexical scope, which combined with the lack of first-class packages (a la OCaml) means library code just can't define access types with reasonable lifetimes.

Also, I appreciate that Rust is memory safe by default and without toying with a dozen pragmas. Ada needs a profile that guarantees that code can't be the source of erroneous execution and constrains bounded errors further.


... and formal verification is where Ada / SPARK wins over Rust.

I mean, we can go on but I think it quite ends there, as far as safety goes. :D

There is a reason for why Ada is used in industries that are mission-critical.

> Ada needs a profile that guarantees that code can't be the source of erroneous execution and constrains bounded errors further.

Not really, you can just use Ada / SPARK and it is all checked at compile-time. Look for my comments where I mention Ada.


Only if SPARK is powerful enough to verify the code you need to write in the first place, which from my experience with SPARK is not a given as soon as access types are involved. Also Rust doesn't need to exclude high level language features just to prove memory safety, whereas SPARK can't even verify any use of controlled types.

If SPARK really were enough, I'd just write all Ada in SPARK of course.


You're partly right that SPARK is conservative, but the claim that it "can't handle access or controlled types" is misleading.

SPARK does support Ada access types (pointers) under a formal permission/ownership model added to GNATprove in recent years. This system - drawn from Rust's borrow-checker ideas - enforces a Concurrent Read, Exclusive Write (CREW) discipline. It enables verification of typical pointer-based structures (singly-linked lists, trees, other acyclic data) via ownership transfers and borrows that the prover can reason about automatically[1]. So long as you follow the ownership rules (no uncontrolled aliasing, no cycles), GNATprove can verify pointer-based code, ensuring the absence of memory leaks and dangling references.

As for controlled types, SPARK excludes them in the verifiable subset because `Initialize`, `Adjust`, and `Finalize` introduce implicit operations that complicate sound reasoning for the automated prover[2]. However, that doesn’t prevent you from using controlled types - you just need to isolate them. The standard pattern is to define them in a `private` part under `pragma SPARK_Mode(Off)` and expose only SPARK-safe wrappers[3]. The controlled operations still run at runtime, but they are hidden from the proof engine. This way, you retain deterministic finalization and prove correctness for the SPARK-visible interface.

In short: SPARK doesn't "exclude high-level features just to prove memory safety" - it modularizes them. It restricts what the prover sees so proofs remain sound and automatic, but you can still use these features through encapsulation. Rust enforces memory safety through its type system at compile time; SPARK aims to prove memory safety plus richer functional properties (absence of runtime errors, contractual correctness) via formal verification. Their goals differ, and SPARK's restrictions are a deliberate trade-off that enable stronger, machine-checked guarantees for high-assurance software.

If you want me to elaborate on the specifics, let me know. Additionally, see https://news.ycombinator.com/item?id=45494263.

Additionally, I am willing to provide more information about what SPARK typically can prove automatically and why some gaps exist.

[1] Jaloyan, Moy, Paskevich - Borrowing Safe Pointers from Rust in SPARK (permission-based alias analysis)

[2] SPARK 2014 User’s Guide, Language Restrictions (implicit operations, excluded features)

[3] AdaCore SPARK tutorials/blogs - exposing controlled types behind SPARK-safe interfaces


> permission/ownership model added to GNATprove in recent years.

So recently that the last time I tried, Alire failed to install a version of gnatprove that supported it. I hope next time I try it actually works, because I do like the semantics they came up with from the examples I'd seen.

> that doesn’t prevent you from using controlled types - you just need to isolate them. [...] `pragma SPARK_Mode(Off)`

No need to elaborate, I know that SPARK_Mode => Off is the SPARK equivalent of unsafe in Rust; that is, regular Ada. So yes, it does exclude controlled types, and trying to twist SPARK's lack of semantics for them into a pro is not a good argument IMO.


> SPARK_Mode => Off is the SPARK equivalent of unsafe in Rust

As I said before: structurally, yes - both mark an implementation as opaque to the verifier/compiler. Semantically, though, SPARK is proof-driven: the verified interface still generates obligations for all callers, and the off-proof body is modularized. Rust's `unsafe` lifts restrictions, but does not automatically enforce correctness across the boundary. That distinction is why SPARK can provide whole-program, machine-checked guarantees while `unsafe` only shifts responsibility to the programmer.

... but I have already said this.


There is more than one formal verification library for Rust, although none is as integrated as SPARK I believe.


How does that work?

Why can't other languages have a "formal verification library"?


> How does that work?

Usually taking the IR (MIR) from rustc and translating it to a verifier engine/language, with the help of metadata in the source (attributes) when needed. E.g. Kani, Prusti, Creusot and more.

> Why can't other languages have a "formal verification library"?

I don't think there is a reason that prevents that, and perhaps some have, however it turns out that modelling shared mutability formally is really hard, and therefore the shared xor mutable rule in Rust really helps verification.


Wouldn't C have such a library already if it really was "possible"? You can turn off strict aliasing.


Not a library, but it kinda does: Frama-C will formally verify C code using special comments to write the contracts in, and I hear it can prove more pointer properties than SPARK, although it takes much more effort to maintain a Frama-C codebase due to all the other missing language features.


Does that mean I can use C + Frama-C so I won't have to use Rust (or Ada / SPARK)? :P


Well, I'm not an expert in formal verification, but there are such libraries; I listed a few of them above, and you can go and check them. So it is possible.

C doesn't have the shared xor mutable rule - with strict aliasing or without.


I checked them, but I am not convinced and I am not sure why it was brought into this thread.

SPARK has industrial-strength, integrated verification proven in avionics (DO-178C), rail (EN 50128), and automotive (ISO 26262) contexts. Rust's tools are experimental research projects with limited adoption, and they're bolted-on and fundamentally different from SPARK.

SPARK is a designed-for-verification language subset. Ada code is written in SPARK's restricted subset with contracts (Pre, Post, Contract_Cases, loop invariants) as first-class language features. The GNAT compiler understands these natively, and GNATprove performs verification as part of the build process. It's integrated at the language specification level.

Rust's tools retrofit verification onto a language designed for different goals (zero-cost abstractions, memory safety via ownership). They translate existing Rust semantics into verification languages after the fact - architecturally similar to C's Frama-C or VCC (!). The key difference from C is that Rust's type system already guarantees memory safety in safe code, so these tools can focus on functional correctness rather than proving absence of undefined behavior.

Bottom line is that these tools cannot achieve SPARK-level verification for fundamental reasons: `unsafe` blocks create unverifiable boundaries, the trait system and lifetime inference are too complex to model completely, procedural macros generate code that can't be statically verified, interior mutability (`Cell`, `RefCell`) bypasses type system guarantees, and Rust can panic in safe code. Most critically, Rust lacks a formal language specification with mathematical semantics.

SPARK has no escape hatches, so if it compiles in SPARK, the mathematical guarantees hold for the entire program. SPARK's formal semantics map directly to verification conditions. Rust's semantics are informally specified and constantly evolving (async, const generics, GATs). This isn't tooling immaturity though, it's a consequence of language design.


> SPARK has no escape hatches

It does, you can declare a procedure in SPARK but implement it in regular Ada. Ada is SPARK's escape hatch.

This is just as visible in source code as unsafe is in Rust, a procedure body will have SPARK_Mode => Off.


Yes, SPARK provides an explicit escape hatch, but it is meaningfully different from Rust's `unsafe`.

In SPARK: you keep the spec in the verifiable world (a SPARK-visible subprogram declaration) and place the implementation outside the proof boundary (for example in a body or unit compiled with `pragma SPARK_Mode(Off)`). The body is visible in source but intentionally opaque to the prover; callers are still proved against the spec and must rely on that contract.

In Rust: `unsafe` is a lexical, language-level scope or function attribute that disables certain compiler/borrow checks for the code inside the block or function. The unchecked code remains inline and visible; the language/borrow-checker no longer enforces some invariants there, so responsibility shifts to the programmer at the lexical site.

Practical differences reviewers should understand:

- Granularity: `unsafe` is lexical (blocks/functions); SPARK's hatch is modular/procedural (spec vs body).

- Visibility: Rust's unchecked code is written inline and annotated with `unsafe`; SPARK's unchecked implementation is placed outside the prover but the spec remains visible and provable.

- Enforcement model: Rust's safety holes are enforced/annotated by the compiler's type/borrow rules (caller and callee responsibilities are explicit at the site). SPARK enforces sound proofs on the interface; the off-proof body must be shown (by review/tests/docs) to meet the proven contract.

- Best practice: keep off-proof bodies tiny, give strong pre/postconditions on the SPARK spec, minimize the exposed surface, and rigorously review and test the non-SPARK implementation.

TL;DR: `unsafe` = "this code block bypasses language checks"; SPARK's escape hatch = "this implementation is deliberately outside the prover, but the interface is still what we formally prove against."


I'm not sure I see much of a difference still. Here's what I think is a more correct Rust comparison:

> In [Rust]: you keep the spec in the verifiable world (a [type checker-visible function signature]) and place the implementation outside the proof boundary (for example in a body or unit compiled with `unsafe`). The body is visible in source but intentionally opaque to the [type checker]; callers are still proved against the spec and must rely on that contract.

These "practical differences" seem questionable to me as well.

> Granularity: `unsafe` is lexical (blocks/functions); SPARK's hatch is modular/procedural (spec vs body).

`unsafe` can trivially be expanded from a block to a function body, thereby transforming it to a "modular/procedural" hatch. `unsafe` on function signatures is more akin to SPARK's `Assume`, albeit with the precise assumption (hopefully) in the documentation.

> Visibility: Rust's unchecked code is written inline and annotated with `unsafe`

This... doesn't seem like a difference to me? According to you, you can place SPARK's unchecked code "in a body [] compiled with `pragma SPARK_Mode(Off)`". That sure sounds like writing unchecked code "inline" and annotated with SPARK's equivalent of `unsafe` to me.

> SPARK's unchecked implementation is placed outside the prover but the spec remains visible and provable.

You can say an analogous thing about Rust: The unchecked implementation is placed outside the [type checker] but the [function signature] remains visible and [checkable].

> Enforcement model: Rust's safety holes are enforced/annotated by the compiler's type/borrow rules (caller and callee responsibilities are explicit at the site).

This is... confusing? I'm honestly not sure what you're trying to get at.

> SPARK enforces sound proofs on the interface; the off-proof body must be shown (by review/tests/docs) to meet the proven contract.

The Rust equivalent is a non-`unsafe` function signature with an `unsafe` body, and what you say is equally as applicable.

> Best practice: keep off-proof bodies tiny, give strong pre/postconditions on the SPARK spec, minimize the exposed surface, and rigorously review and test the non-SPARK implementation.

This... doesn't describe a difference at all? It's equally as applicable to the analogous Rust construct.

> `unsafe` = "this code block bypasses language checks"

Technically wrong? Bit of a nitpick, but `unsafe` technically does not disable any language checks; it gives you access to unchecked capabilities. It's a slight difference but an important one, since it means that if you have code that doesn't compile and doesn't use unchecked capabilities (e.g., a borrow checking error with regular references), wrapping the code in `unsafe` will not allow your code to compile.

> SPARK's escape hatch = "this implementation is deliberately outside the prover, but the interface is still what we formally prove against."

Again, you can say basically the same thing for non-`unsafe` functions in Rust.


I think you're conflating surface similarity with semantic equivalence - Rust's `unsafe` is not the same as SPARK's off-proof/`SPARK_Mode(Off)` hatch.

> `unsafe` can trivially be expanded from a block to a function body, thereby transforming it to a "modular/procedural" hatch. `unsafe` on function signatures is more akin to SPARK's `Assume`, albeit with the precise assumption (hopefully) in the documentation.

Not exactly. The key difference is semantic, not syntactic. SPARK's modular/procedural hatch is proof-driven: the spec (pre/postconditions, invariants) remains fully visible to the prover, generating verification conditions for callers. The off-proof body is intentionally opaque and becomes an external assurance artifact. Rust's `unsafe` merely lifts compiler restrictions on the enclosed code; it does not automatically generate proof obligations, regardless of whether it is block- or function-scoped.

> This... doesn't seem like a difference to me? According to you, you can place SPARK's unchecked code "in a body [] compiled with `pragma SPARK_Mode(Off)`". That sure sounds like writing unchecked code "inline" and annotated with SPARK's equivalent of `unsafe` to me.

Superficially yes, but the semantics differ. In SPARK, the spec is verified for all callers; the body is hidden but the interface guarantees are enforced. In Rust, the `unsafe` body is visible to the compiler and only relaxes type/borrow checks. No automatic verification of functional behavior occurs across the boundary without external tools.

> You can say an analogous thing about Rust: The unchecked implementation is placed outside the [type checker] but the [function signature] remains visible and [checkable].

Not really. Rust enforces type and ownership correctness, not behavioral correctness. SPARK's prover generates semantic proof obligations for each call based on the spec. Rust's function signature alone does not provide that level of guarantee.

> This is... confusing? I'm honestly not sure what you're trying to get at.

The point is that SPARK enforces caller correctness automatically via the prover using the spec. The off-proof body is a separate assurance item. In Rust, `unsafe` transfers responsibility to the implementer; the compiler does not enforce semantic contracts. The difference is enforcement model: SPARK's model gives machine-checked guarantees for safe clients, Rust does not.

> The Rust equivalent is a non-`unsafe` function signature with an `unsafe` body, and what you say is equally as applicable.

Not quite. Without an external verifier and additional annotations, Rust does not automatically prove functional correctness across an `unsafe` boundary. SPARK's off-proof body still guarantees that callers satisfy the verified spec, which is enforced at compile time.

To reiterate in a way that may be understood better: without additional external verification, Rust does not automatically prove anything about functional behavior across the `unsafe` boundary. SPARK does, by design: callers are verified against the spec even if the body is off-proof. Rust requires a separate verifier (Prusti, Creusot, Kani, Verus) plus annotations to reach the same level of guarantee[1].

> This... doesn't describe a difference at all? It's equally as applicable to the analogous Rust construct.

Best practices may overlap (small unchecked regions, rigorous review), but that doesn't make them equivalent. SPARK's off-proof body is part of a proof-integrated model; Rust's `unsafe` is merely a compiler capability marker. The guarantees differ.

> Technically wrong? Bit of a nitpick, but `unsafe` technically does not disable any language checks; it gives you access to unchecked capabilities. It's a slight difference but an important one, since it means that if you have code that doesn't compile and doesn't use unchecked capabilities (e.g., a borrow checking error with regular references), wrapping the code in `unsafe` will not allow your code to compile.

Correct, that nuance is valid. It does not change the core distinction though: SPARK produces verifier-checked obligations for the interface, whereas Rust's `unsafe` simply gives the programmer unchecked access without automated semantic proofs.

> Again, you can say basically the same thing for non-`unsafe` functions in Rust.

No. Non-`unsafe` Rust functions are type-safe, but they do not automatically generate semantic verification conditions for callers. SPARK does, and this is the fundamental difference: proof-driven interface vs. capability marker.

TL;DR: SPARK's "escape hatch" is a proof-driven verification boundary: the spec is machine-checked and callers are proved against it; the implementation may be off-proof but the interface guarantees remain enforced. Rust's `unsafe` is a language capability marker that grants low-level operations and transfers responsibility to the programmer; it does not, by itself, produce verifier-checked behavioral contracts (you need external tools like Prusti/Creusot/Kani/Verus for that).

I can re-explain the differences in a more digestible manner if that would help, instead of going back and forth.

[1] See https://news.ycombinator.com/item?id=45508022 as well.


There's a reason I keep saying "analogous" and apparently I didn't explain it well enough. I'm quite aware that Rust doesn't have the same ability to verify behavior that SPARK does, and I'm not trying to argue that `unsafe` is literally equivalent to `SPARK_Mode(Off)` or that Rust is capable of everything that SPARK is. What I'm trying to get at is that `unsafe` is not "meaningfully different" because it serves a similar purpose to `SPARK_Mode(Off)` for the respective language constructs. Or perhaps put another way, my claim is that if Rust hypothetically gained a proof system with appropriate information in its function signatures the way `unsafe` would be used in function bodies would be both pretty much exactly the way it's currently used and also morally equivalent to the way `SPARK_Mode(Off)` is used.

I'm not going to respond directly to most of what you say because most of those responses would basically be "I said 'analogous' for a reason", but there are a few additional notes:

> SPARK's modular/procedural hatch is proof-driven: the spec (pre/postconditions, invariants) remains fully visible to the prover, generating verification conditions for callers. The off-proof body is intentionally opaque and becomes an external assurance artifact. Rust's `unsafe` merely lifts compiler restrictions on the enclosed code; it does not automatically generate proof obligations, regardless of whether it is block- or function-scoped.

The proof bit is missing the point. What I was trying to get at is that `unsafe` in the function body, like `SPARK_Mode(Off)`, does not affect the part of the function that interacts with external code. `SPARK_Mode(Off)` does not affect how the prover uses the spec, and `unsafe` does not affect how rustc uses the function signature. The use of `SPARK_Mode(Off)`/`unsafe` is contained entirely within the function body; i.e., they are "opaque".

> Superficially yes, but the semantics differ. In SPARK, the spec is verified for all callers; the body is hidden but the interface guarantees are enforced. In Rust, the `unsafe` body is visible to the compiler and only relaxes type/borrow checks. No automatic verification of functional behavior occurs across the boundary without external tools.

I'm not sure this is entirely accurate. Again, `unsafe` in a function body does not affect how the compiler uses the function signature in the rest of the program; in this way, `unsafe` works similarly to `SPARK_Mode(Off)` in that "the [function signature] is verified for all callers; the body is hidden but the interface guarantees are enforced". Obviously SPARK can guarantee more than Rust, but I'm not trying to argue otherwise.

> Not quite. Without an external verifier and additional annotations, Rust does not automatically prove functional correctness across an `unsafe` boundary. SPARK's off-proof body still guarantees that callers satisfy the verified spec, which is enforced at compile time.

This is a bad wording on my part; "The Rust equivalent" should have been "The analogous Rust construct". Again, I'm trying to show that `unsafe` is not that different from `SPARK_Mode(Off)` for analogous language constructs.


> I'm quite aware that Rust doesn't have the same ability to verify behavior that SPARK does ... what I'm trying to get at is that `unsafe` is not "meaningfully different" because it serves a similar purpose to `SPARK_Mode(Off)` for the respective language constructs.

I understand your point about the structural analogy that in both languages you can have a function body where the usual automated checks are relaxed or treated as opaque. That said, the resemblance is syntactic/structural, not semantic. SPARK's off-proof body is embedded in a proof-driven model: the spec generates verification conditions for callers automatically, and those obligations are enforced by the prover. Rust's `unsafe` merely lifts type/borrow restrictions for the implementer; there are no automatic, prover-enforced obligations for callers, and correctness must be established by review/tests or by external verification tools.

> `unsafe` in the function body does not affect how the compiler uses the function signature in the rest of the program; in this way, `unsafe` works similarly to `SPARK_Mode(Off)` in that "the [function signature] is verified for all callers; the body is hidden but the interface guarantees are enforced".

Structurally, yes, both expose an interface and localize unchecked operations but the enforced guarantees are not comparable. In SPARK, the spec is visible to the prover and the implementation is opaque to the prover; the prover enforces machine-checked contracts for all callers. In Rust, the function body remains visible to `rustc` (and `unsafe` only grants access to unchecked capabilities); the compiler enforces type/ownership rules but does not ensure higher-level behavioral correctness across `unsafe` boundaries. That semantic difference is what makes SPARK "proof-driven" vs. Rust `unsafe` a "capability marker"[1].

TL;DR: They may look analogous at the interface/body level, but only SPARK actually enforces semantic guarantees. `unsafe` is a marker that shifts responsibility to the programmer; it does not create verifier-checked obligations for callers.

[1] By "capability marker" (privileged/block-scoped permission, escape/privilege annotation, whatever) I mean that `unsafe` is an annotation that grants permission to perform low-level/unchecked operations; it does not, by itself, create machine-checked behavioral contracts for callers.


I still feel you're missing my point. What you're describing as differences are generally differences between SPARK and Rust, not really a difference between `unsafe` and `SPARK_Mode(Off)`. For example:

> Rust's `unsafe` merely lifts type/borrow restrictions for the implementer

This is a commonality (for respective language constructs), not a difference, since you can just as easily say "`SPARK_Mode(Off)` merely lifts [SPARK] restrictions for the implementer". That is one of the primary reasons both `unsafe` and `SPARK_Mode(Off)` exist!

> there are no automatic, prover-enforced obligations for callers

To the extent that we are considering proof obligations beyond what is expressible in Rust's type system, this is a difference between Rust and SPARK that is completely independent of whether `unsafe` is used or not.

> and correctness must be established by review/tests or by external verification tools.

Again, this is common to both `unsafe` and `SPARK_Mode(Off)`. Neither Rust's compiler nor the SPARK verifier can guarantee that the implementing code covered by `unsafe`/`SPARK_Mode(Off)` actually fulfills the contract specified in the respective interfaces. The onus of doing so is placed on the programmer.

> Structurally, yes, both expose an interface and localize unchecked operations but the enforced guarantees are not comparable.

Again, this is a difference between Rust and SPARK that is independent of the use of `unsafe`/`SPARK_Mode(Off)`, not a difference between `unsafe` and `SPARK_Mode(Off)`.

> `unsafe` is a marker that shifts responsibility to the programmer

And once again, this also applies to `SPARK_Mode(Off)`. Straight from the Ada docs (emphasis added):

> When a non-SPARK completion is provided for a SPARK declaration, the user has an obligation to ensure that the non-SPARK completion is consistent (with respect to the semantics of SPARK) with its SPARK declaration.

That's just like what `unsafe` does in function bodies!

> [1] By "capability marker" (privileged/block-scoped permission, escape/privilege annotation, whatever) I mean that `unsafe` is an annotation that grants permission to perform low-level/unchecked operations; it does not, by itself, create machine-checked behavioral contracts for callers.

I don't see how this doesn't describe `SPARK_Mode(Off)`. You can say "`SPARK_Mode(Off)` is an annotation that grants permission to perform [Ada operations not in the SPARK subset; i.e., "unchecked" operations]."

Furthermore, I don't think anyone is claiming that `unsafe` in function bodies "create[s] machine-checked behavioral contracts for callers" (not to mention `SPARK_Mode(Off)` doesn't do that either). That would be kind of contrary to its purpose, after all.

One other thing:

> In Rust, the function body remains visible to `rustc`

This is wrong in this context. When compiling code, rustc looks at function bodies in isolation - i.e., only when compiling that specific function body. Function bodies are opaque in the context of checking function calls - in other words, callee function bodies are not visible to the compiler. The presence or absence of `unsafe` in the function body makes zero difference in how the function signature is treated, just like how `SPARK_Mode(Off)` makes no difference in how the prover treats the spec.


You're right that `unsafe` and `SPARK_Mode(Off)` are structurally analogous escape hatches - both let implementers use operations outside the verified/safe subset. I accept that the VCs-for-callers difference exists whether or not `SPARK_Mode(Off)` is used - it's a SPARK-vs-Rust language difference, not a construct difference.

I was conflating the constructs with their language contexts. The constructs are analogous; the languages are not.

That said: the original thread is comparing Ada/SPARK vs Rust for high-assurance systems, and in that context, the language-level difference is what matters. When you use `SPARK_Mode(Off)`, you still get integrated caller verification. When you use `unsafe`, you don't - you need external tooling. That ecosystem difference drives language choice for formal verification work, even though the constructs themselves are analogous.


> That said: the original thread is comparing Ada/SPARK vs Rust for high-assurance systems, and in that context, the language-level difference is what matters.

Sure, and I don't think anyone is generally disputing that Rust and SPARK have different compile-time capabilities. The only point I was trying to make in this specific subthread is that `unsafe` and `SPARK_Mode(Off)` are not as different as you originally claimed.


I think I did say:

> although none is as integrated as SPARK I believe

And yes, they're experimental (for now). But some are also used in production. For example, AWS uses Kani for some of their code, and recently launched a program to formally verify the Rust standard library.

Whether the language was designed for it does not matter, as long as it works. And it works well.

> `unsafe` blocks create unverifiable boundaries

Few of the tools can verify unsafe code is free of UB, e.g. https://github.com/verus-lang/verus. Also, since unsafe code should be small and well-encapsulated, this is less of a problem.

> the trait system and lifetime inference are too complex to model completely

You don't need to prove anything about them: they're purely a type level thing. At the level these tools are operating, (almost) nothing remains from them.

> procedural macros generate code that can't be statically verified

The code that procedural macros generate is visible to these tools and they can verify it well.

> interior mutability (`Cell`, `RefCell`) bypasses type system guarantees

Although it's indeed harder, some of the tools do support interior mutability (with extra annotations I believe).

> Rust can panic in safe code

That is not a problem - in fact most of them prove precisely that: that code does not panic.

> Most critically, Rust lacks a formal language specification with mathematical semantics

That is a bit of a problem, but not much since you can follow what rustc does (and in fact it's easier for these tools, since they integrate with rustc).

> Rust's semantics are informally specified and constantly evolving (async, const generics, GATs)

Many of those advancements are completely erased at the levels these tools are operating. The rest does need to be handled, and the interface to rustc is unstable. But you can always pin your Rust version.

> This isn't tooling immaturity though, it's a consequence of language design.

No it's not. Rust is very amenable to formal verification despite, as you said, not being designed for it (due to the shared xor mutable rule, as I said). Perhaps even more amenable than Ada.

Also this whole comment seems unfair to Rust since, if I understand correctly, SPARK also does not support major parts of Ada (maybe there are no wholly unsupported features, but not all features are fully supported). As I said, I know nothing about Ada or SPARK, but if we compare the percentage of the language the tools support, I wouldn't be surprised if that of the Rust tools is bigger (despite Rust being a bigger language). These tools just support Rust really well.


> Whether the language was designed for it does not matter, as long as it works. And it works well.

It matters fundamentally. "Works well" for research projects or limited AWS components is not equivalent to DO-178C Level A certification where mathematical proof is required. The verification community distinguishes between "we verified some properties of some code" and "the language guarantees these properties for all conforming code."

> Few of the tools can verify unsafe code is free of UB

With heavy annotation burden, for specific patterns. SPARK has no unsafe - the entire language is verifiable. That's the difference between "can be verified with effort" and "is verified by construction."

> You don't need to prove anything about [traits/lifetimes]: they're purely a type level thing

Trait resolution determines which code executes (monomorphization). Lifetimes affect drop order and program semantics. These aren't erased - they're compiled into the code being verified. SPARK's type system is verifiable; Rust's requires the verifier to trust the compiler's type checker.

> The code that procedural macros generate is visible to these tools

The macro logic is unverified Rust code executing at compile time. A bug in the macro generates incorrect code that may pass verification. SPARK has no equivalent escape hatch.

> some of the tools do support interior mutability (with extra annotations I believe)

Exactly - manual annotation burden. SPARK's verification is automatic for all conforming code. The percentage of manual proof effort is a critical metric in formal verification.

> That is not a problem - in fact most of them prove precisely that: that code does not panic

So they're doing what SPARK does automatically - proving absence of runtime errors. But SPARK guarantees this for the language; Rust tools must verify it per codebase.

> you can follow what rustc does (and in fact it's easier for these tools, since they integrate with rustc)

"Follow the compiler's behavior" is not formal semantics. Formal verification requires mathematical definitions independent of implementation. This is why SPARK has an ISO standard with formal semantics, not "watch what GNAT does."

> Rust is very well amenable to formal verification [...] Perhaps even more amenable than Ada

Then why doesn't it have DO-178C, EN 50128, or ISO 26262 certification toolchains after a decade? SPARK achieved this because verification was the design goal. Rust's design goals were different - and valid - but they weren't formal verifiability.

> SPARK also does not support major parts of Ada

Correct - by design. SPARK excludes features incompatible with efficient verification (unrestricted pointers, exceptions in contracts, controlled types). This is intentional subsetting for verification. Claiming Rust tools "support more of Rust" ignores that some Rust features are fundamentally unverifiable without massive annotation burden.

The core issue: you're comparing research tools that can verify some properties of some Rust programs with significant manual effort, to a language designed so that conforming programs are automatically verifiable with mathematical guarantees. These are different categories of assurance.


> Lifetimes affect drop order and program semantics. These aren't erased - they're compiled into the code being verified.

mrustc[0] is an existence proof that this statement is wrong. mrustc is a bootstrap compiler for Rust, written in C++, that notably ignores lifetimes, since it assumes that the code it is compiling (old rustc versions) already has correct lifetimes. And despite ignoring lifetimes, its output is bit-for-bit identical to what rustc produces, so clearly those lifetimes can't have been that important :P

(More seriously, lifetimes affect program correctness, but do not affect program semantics, let alone drop order. They are definitely erased as well. Not sure how you got this so wrong).

[0]: https://github.com/thepowersgang/mrustc


I could be wrong, but the Ferrocene Language Specification, FLS, adopted by Rust as its specification, may be severely incomplete.

For instance, it describes atomics, but I am not sure that it defines Rust's memory model or the semantics of atomics anywhere.

This Ferrocene page looks very short, and has a lot of links to Rust API documentation.

https://public-docs.ferrocene.dev/main/specification/concurr...

Conversely, the Rustonomicon has this page that says that Rust just uses C++20's memory model for its atomics. Yet I do not believe that memory model is defined anywhere in FLS.

https://doc.rust-lang.org/nomicon/atomics.html

I do not know if FLS defines unwinding of panics anywhere either.

Is FLS severely incomplete regarding rustc and Rust, despite being adopted by Rust as its specification? It almost seems fake.


It is certainly incomplete. Virtually all specifications for programming languages are. It is good enough for safety critical work, which is a higher bar than most.

The idea is that you adopt them and improve them over time. It is more complete than the reference, which is the previous project in this area.


Rust does have a qualified compiler: Ferrocene. It supports ISO 26262 (ASIL D), IEC 61508 (SIL 4) and IEC 62304 currently, with more on the way, including plans for DO-178 I believe. It’s being driven by actual adoption, so they’ve started in automotive and will move to other industries eventually.


Do you happen to know how Ferrocene relates to AdaCore's Rust compiler?

https://www.adacore.com/press/adacore-announces-the-first-qu...

> ISO 26262 (ASIL D)

Isn't that only for a very small subset of Rust and its standard library?

Also, do you happen to be able to explain this comment?

https://reddit.com/r/rust/comments/1nhk30y/ferrous_systems_j...

> I think it's important to note, the certification is only for a subset of the run-time, which means some language features will not be available. Also, the certification is only to SIL-2 level, so any projects requiring SIL-3 or 4 will still not be able to use the Ferrocine compiler!


I know that Ferrocene and AdaCore were working together, but then parted ways. I am assuming they're both doing roughly the same thing: qualifying the upstream compiler with some patches. I know that Ferrocene's patches are mostly adding a new platform, they've upstreamed all the other stuff.

> Isn't that only for a very small subset of Rust and its standard library?

It is currently for the compiler only. This ties into the next bit, though:

> Also, do you happen to be able to explain this comment?

Yeah, you can see me posting in that thread, though not that specific sub-thread. Rust has a layered standard library: core, alloc (which layers memory allocation on top of core), and std (which layers OS specific things on top of that). There's three parts to this comment:

First, because it's only core, you don't get the stuff from alloc and std. So, no dynamic memory allocation or OS specific things like filesystem access. That's usually not a problem for these kinds of projects. It's also what they mean by 'some language features will not be available', but they're mistaken: all language features are available; it's some standard library features that are not. No language features require allocation, for example.

Second, they qualified a subset of libcore for IEC61508. A founder of Ferrous mentions that ISO 26262 is coming next; they just had a customer that needed IEC61508 quickly, so they prioritized that. This is how it relates to the above: for ISO 26262, it's just the compiler currently.



> It matters fundamentally. "Works well" for research projects or limited AWS components is not equivalent to DO-178C Level A certification where mathematical proof is required

As said in a sibling comment, certification for Rust is starting to appear. Rust is young and its usage in regulated industries is just barely beginning. Ada and SPARK are old and mature. It's not a fair comparison - but that doesn't mean Rust couldn't get there.

> > Few of the tools can verify unsafe code is free of UB
>
> With heavy annotation burden, for specific patterns

> > some of the tools do support interior mutability (with extra annotations I believe)
>
> Exactly - manual annotation burden.

SPARK does not support the equivalent (shared mutable pointers) at all. Rust tools can verify it, with a heavy annotation burden. What's better?

> Trait resolution determines which code executes (monomorphization). Lifetimes affect drop order and program semantics. These aren't erased - they're compiled into the code being verified. SPARK's type system is verifiable; Rust's requires the verifier to trust the compiler's type checker.

Has the Ada compiler been formally verified? No. So you're trusting the Ada type checker just the same.

The Ada specification was, if I understand correctly, formally defined. But there are efforts to do that to Rust as well (MiniRust, a-mir-formality, and in the past RustBelt).

> The macro logic is unverified Rust code executing at compile time. A bug in the macro generates incorrect code that may pass verification. SPARK has no equivalent escape hatch.

And if you have a script that generates some boilerplate code into your Ada project, is the script logic verified? The generated code is, and that's what's important. Even with full formal verification - proving that the program fulfills its goals - you don't need to verify helpers like this, only the code they generate. If that code works, then even if the script is buggy, who cares.

> So they're doing what SPARK does automatically - proving absence of runtime errors

Exactly - that's the point, to prove free of runtime errors.

I'm not sure what you mean by "SPARK guarantees this for the language; Rust tools must verify it per codebase" - does SPARK not need to verify separate codebases separately? Does it somehow work magically for all of your codebases at once?

It's clear at this point that neither of us will get convinced, and I think I said everything I had about this already.

Have fun developing with Ada!


rustc and Rust have some rather serious type system holes and other kinds of issues.

https://github.com/Speykious/cve-rs

https://github.com/lcnr/solver-woes/issues


That first one is an implementation bug that's never been discovered in the wild.

Regardless, all projects have bugs. It's not really germane to qualification, other than that qualification assumes that software has bugs and that you need to, well, qualify them and their impact.


> Certification for Rust is starting to appear. Rust is young and its usage in regulated industries is just barely beginning.

Ferrocene has ISO 26262 qualification for the compiler, not verification tools. That's compiler correctness, not formal verification of programs. SPARK's GNATprove is qualified for proving program properties - fundamentally different.

> SPARK does not support the equivalent (shared mutable pointers) at all. Rust tools can verify it, with a heavy annotation burden. What's better?

SPARK supports controlled aliasing through ownership aspects and borrow/observe annotations - but these are designed into the verification framework, not bolted on. The key difference: SPARK's aliasing controls are part of the formal semantics and verified by GNATprove automatically. Rust's unsafe shared mutability requires external verification tools with manual proof burden. SPARK deliberately restricts aliasing patterns to those that remain efficiently verifiable: it's not "can't do it", it's "only allow patterns we can verify".

> Has the Ada compiler formally verified? No. So you're trusting the Ada type checker just as well.

Qualified compilers (GNAT Pro) undergo qualification per DO-178C/ED-12C. The difference: SPARK's semantics are formally defined independent of the compiler. Rust verification tools must trust rustc's implementation because Rust has no formal specification. When rustc changes behavior (happens frequently), verification tools break. SPARK's specification is stable.

> And if you have a script that generates some boilerplate code into your Ada project, is the script logic verified?

External build scripts are different from language features. Procedural macros are part of Rust's language definition and can access compiler internals. If you use external code generation with SPARK, you're explicitly stepping outside the language's guarantees - and safety standards require justification. Rust embeds this escape hatch in the language itself.

> I'm not sure what you mean by "SPARK guarantees this for the language; Rust tools must verify it per codebase"

SPARK: If your code compiles in the SPARK subset, overflow/division-by-zero/array bounds are automatically proven impossible by language rules. You can add contracts for functional correctness.

Rust tools: You must annotate code, write invariants, and run verification per program. The language provides memory safety via the type system, but not freedom from arithmetic errors or functional correctness. These must be proven per codebase with tools.

The distinction: language-level guarantees vs. per-program verification effort.

> It's clear at this point that neither of us will get convinced

Fair enough, but the fundamental difference is whether verification is a language property or a tool property. Both approaches have merit for different use cases.


> No it's not, Rust is very well amenable to formal verification, despite, as you said, not being designed for it (due to the shared xor mutable rule, as I said). Perhaps even more amenable than Ada.

I would like to add a few clarifications that I may not have mentioned in my other reply.

You are correct that Rust's ownership/borrow model simplifies many verification tasks: the borrow checker removes a great deal of aliasing complexity, and that has enabled substantial research and tool development (RustBelt, Prusti, Verus, Creusot, Aeneas, etc.). That point is valid.

However, it is misleading to claim Rust is plainly more amenable to formal verification than Ada. SPARK is a deliberately restricted subset of Ada designed from the ground up for static proof: it ships with an integrated, industrial-strength toolchain (GNATprove) and established workflows for proving AoRTE and other certification-relevant properties. Rust's type system gives excellent leverage for many proofs, but SPARK/Ada today provide a more mature, production-proven path for whole-program safety and certification. Which is preferable therefore depends on what you need to verify - research or selected components versus whole-program, auditable certification evidence.

SPARK/Ada is used in many mission-critical industries (avionics, rail, space, nuclear, medical devices) for a reason: the language subset, toolchain, and development practices are engineered to produce certifiable evidence and demonstrable assurance cases.

Rust brings superior language ergonomics and strong compile-time aliasing guarantees, but it faces structural barriers that make SPARK's level of formal verification fundamentally unreachable. These are not matters of tooling immaturity, but of language design and semantics:

- Rust allows pervasive unsafe code, which escapes the borrow checker's guarantees. Every unsafe block must be modelled and verified separately, defeating whole-program reasoning. SPARK forbids such unchecked escape hatches within the verified subset.

- Rust's semantics include undefined behavior and panics, which cannot be statically ruled out by the compiler. SPARK, by contrast, can prove statically that such run-time errors are impossible.

- Rust's rich features (lifetimes, traits, interior mutability, macros, async, etc.) greatly complicate formal semantics. SPARK deliberately restricts such constructs to preserve provable determinism.

- Rust lacks a single, formally specified, stable verification subset. SPARK's subset is precisely defined and stable, with a formal semantics that proofs can rely on across versions.

- Rust's verification ecosystem is fragmented and research-oriented (Prusti, Verus, Creusot, RustBelt), whereas SPARK's GNATprove toolchain is unified, production-proven, and already qualified for use in DO-178C, EN-50128, and IEC-61508 workflows.

- Certification for Rust toolchains (qualified compilers, MC/DC coverage, auditable artifacts) is only beginning to emerge; SPARK/Ada toolchains have delivered such evidence for decades.

In short, Rust's design - allowing unsafe code, undefined behavior, and a complex evolving feature set - makes SPARK-level whole-program, certifiable formal verification structurally impossible. SPARK is not merely a verifier bolted onto Ada: it is a rigorously defined, verifiable subset with an integrated proof toolchain and an industrial certification pedigree that Rust simply cannot replicate within its current design philosophy.

If your objective is immediately auditable, whole-program AoRTE proofs accepted by certifying authorities today, SPARK is the practical choice.

I hope this sheds some light on why SPARK's verification model remains unique and why Rust, by design, cannot fully replicate it.


I'm a long term Ada developer who has also used SPARK in the past (SPARK was/is great). I've been looking into Rust out of curiosity. I will dispute that "Rust brings superior language ergonomics". To me, Rust's syntax is unnecessarily cryptic and ugly. In particular, Ada's syntax and semantics for exact, low-level data representation and hardware interfacing is much more ergonomic than Rust's.


Big agree. Overly terse and symbolic languages feel very cryptic and ugly to me as well.

I very much agree with you. It is indeed cryptic and ugly to me as well. I would rather have Ada's verbosity, or C's simplicity.


> However, it is misleading to claim Rust is plainly more amenable to formal verification than Ada. SPARK is a deliberately restricted subset of Ada designed from the ground up for static proof

Hold on, there's some sleight of hand going on here. The person you're responding to was comparing Rust and Ada, both languages not intentionally originally designed for formal verification. You, on the other hand, are comparing Rust, a language that was not originally designed for formal verification, and Ada/SPARK, which is designed for formal verification by definition. That's not a proper comparison for what GP was claiming!

A correct comparison would probably involve identifying "a deliberately restricted subset of [Rust] designed from the ground up for static proof" and comparing its capabilities against what SPARK offers, as well as (somehow) comparing how "easy"/"hard" producing that subset would be compared to defining SPARK from Ada.


> Hold on, there's some sleight of hand going on here. [...] A correct comparison would probably involve identifying "a deliberately restricted subset of [Rust] designed from the ground up for static proof" and comparing its capabilities against what SPARK offers as well [...]

I understand your point: comparing a hypothetical Rust subset specifically designed for formal verification to SPARK is a fairer apples-to-apples framing. My earlier point was contrasting full Rust and SPARK to highlight that SPARK’s integrated, industrial-strength proof model is unmatched in terms of whole-program, auditable guarantees.

Rust's ownership and borrow system indeed provides leverage for verification tools, and subsets verified with Prusti, Creusot, Kani, or Verus can achieve interesting results. However, the structural design of Rust (unsafe blocks, interior mutability, macros, undefined behavior, evolving semantics) means that these subsets require careful annotations and external verification; they do not automatically produce the same machine-checked, whole-program, caller-verified obligations that SPARK/Ada guarantees today.

TL;DR: Yes, restricted Rust subsets can be verified and are amenable to research-level proofs, but SPARK provides industrial-strength, auditable, whole-program proofs by design, which Rust cannot replicate without extensive external tooling. The comparison isn't about Rust being "bad" at verification - it's about the integrated guarantees SPARK provides that Rust's language design fundamentally does not.

The takeaway for the readers here is that SPARK's off-proof body is like a verified contract with an opaque implementation; Rust's unsafe body is like a permission slip that gives the programmer access without automatic enforcement.


> My earlier point was contrasting full Rust and SPARK to highlight that SPARK’s integrated, industrial-strength proof model is unmatched in terms of whole-program, auditable guarantees.

Sure, but that's basically entirely unrelated to what the person you're replying to said, since they were specifically talking about Ada, not SPARK. I don't think anyone is disputing that a language not originally designed for formal verification is comparable to a language that is, and that kind of comparison isn't really interesting because it's apples and oranges.


To be honest, I disagree that this is sleight of hand. Ada itself includes contracts (pre/postconditions, type invariants, Global/Depends annotations) as first-class language features. These work at runtime in full Ada and are verified statically in SPARK. Rust has no comparable built-in contract system.

So when comparing "Ada vs Rust" for formal verification, it's entirely relevant that Ada's design includes verification primitives that Rust lacks. SPARK isn't a separate language - it's a subset of Ada that leverages Ada's existing contract features for static proof. The contracts are part of Ada whether you use SPARK or not.

The GP's claim about Rust being "more amenable to formal verification" ignores that Ada was designed with verification in mind (built-in contracts), while Rust requires external tools and annotations to achieve similar capabilities. That's not apples-to-oranges; it's the core architectural difference between the languages.


The sleight of hand was changing a comparison of Rust and Ada to a comparison of Rust and SPARK. If you had actually compared Rust and not-SPARK Ada then as in this comment here, I probably wouldn't have said anything.

That being said, now that I'm spending time on this anyways I think there's still at least a few bits of nuance...

> Ada itself includes contracts (pre/postconditions, type invariants, Global/Depends annotations) as first-class language features.

For what it's worth, I believe none of these features were part of the first few Ada specifications. As far as I can tell, Pre, Post, and Type_Invariant (and design-by-contract features in general) were added in Ada 2012. In a similar vein, Global/Depends annotations appear to be SPARK features (!), so as explained above they're basically irrelevant for a Rust vs. non-SPARK Ada comparison.

Why does this matter? SPARK first appeared in 2009, before the features you reference were added to Ada. I think there's some interesting discussion to be had as to the extent Ada was originally designed to be amenable to formal verification as opposed to how much it evolved to be amenable to formal verification - and in the case of the latter, to what extent Rust could evolve in a similar manner.

And kind of piggy-backing off of that:

> SPARK isn't a separate language - it's a subset of Ada that leverages Ada's existing contract features for static proof.

My understanding is that current SPARK uses Ada's contract features, but since SPARK released before said contract features existed in Ada special comments were used for earlier versions of SPARK instead, which effectively means that before SPARK 2014 Ada "require[d] external tools and annotations" for formal verification.

> ignores that Ada was designed with verification in mind (built-in contracts)

As stated above, built-in contracts were added to Ada, which means that by definition Ada was not originally designed with those features.


You're correct on the technical details: Pre/Post/Type_Invariant were added in Ada 2012, and Global/Depends are SPARK-specific annotations. Fair enough.

But saying "SPARK first appeared in 2009" is incorrect. SPARK dates to the mid-1980s at the University of Southampton.[1][2][3] SPARK was already in industrial use by the early 1990s, selected for Risk Class 1 systems on the Eurofighter programme.[3] The 2009 date is when Praxis/AdaCore released SPARK Pro under GPL[4] - that's commercialization, not creation.

This completely undermines your [irrelevant] evolution argument. SPARK didn't appear after Ada 2012 added contracts. It existed 25 years before them using special comments. When Ada 2012 added contracts, SPARK adopted the native syntax.

And there's no sleight of hand. My original comment explicitly said "Ada / SPARK" - not Ada alone. When discussing safety-critical development, you use them together. That's the deployed ecosystem.

Whether Ada's contract syntax was in the 1983 spec or added in 2012 is irrelevant. Ada 83 was designed for safety-critical embedded systems with strong typing, clear semantics, and explicit data flow - design goals that made formal verification feasible. That's why SPARK could exist starting in 1987.

The practical reality is that Ada/SPARK provides production-ready formal verification with qualified toolchains for DO-178C, EN-50128, and ISO 26262. Rust has experimental research projects (Prusti, Verus, Creusot, Kani) with no certification path and limited industrial adoption. SPARK-level formal verification in Rust is largely theoretical - Rust's design (unsafe blocks creating unverifiable boundaries, no formal specification, evolving semantics, procedural macros, interior mutability) makes whole-program certifiable verification structurally impossible, not just immature. It's almost a dream at this point.

[1] B.A. Carre and T.J. Jennings, "SPARK - The Spade Ada Kernel", University of Southampton, 1988, https://digital-library.theiet.org/content/journals/10.1049/...

[2] B. Carre and J. Garnsworthy, "SPARK - an annotated Ada subset for safety-critical programming", TRI-ADA 1990, https://dl.acm.org/doi/10.1145/255471.255563

[3] https://cacm.acm.org/research/co-developing-programs-and-the...

[4] https://www.adacore.com/press/spark-pro


> But saying "SPARK first appeared in 2009" is incorrect.

Yes, you're correct and that claim of mine is wrong. Sorry about that. The reason I claimed 2009 is that that is what Wikipedia said and I couldn't easily find sources saying otherwise. You seem to have had more success in finding good sources, although your source link for [1] seems to be wrong?

> The 2009 date is when Praxis/AdaCore released SPARK Pro under GPL

Not exactly sure that this is where the Wikipedia article pulled the date from. Any reasonable reading should conclude that this is not the same thing as when SPARK "first appeared" and the Wikipedia page doesn't have a citation for that date.

> This completely undermines your [irrelevant] evolution argument.

Not really? Consider again what I said:

> Why does this matter? SPARK first appeared in 2009, before the features you reference were added to Ada. I think there's some interesting discussion to be had as to the extent Ada was originally designed to be amenable to formal verification as opposed to how much it evolved to be amenable to formal verification - and in the case of the latter, to what extent Rust could evolve in a similar manner.

I think this comment would still be applicable if you change "2009" to "1988"/"1990"/etc. Keep in mind that this comment was in response to your claim that "it's entirely relevant that Ada's design includes verification primitives that Rust lacks" where "verification primitives" were contract facilities, so no matter if SPARK were developed in 2009 or 1988/1990 it would still have been developed before Ada gained contracts.

Somewhat of a sidenote, but your wording seems to imply you have two of my arguments mixed up? The only place I said "irrelevant" has nothing to do with where I discussed evolution, so it's unclear to me why you inserted "irrelevant". Unless it's commentary on my evolution argument, in which case it seems needlessly confusing?

> SPARK didn't appear after Ada 2012 added contracts.

I never claimed that?

> It existed 25 years before them using special comments. When Ada 2012 added contracts, SPARK adopted the native syntax.

Yes, and I said just that. The reason that is an important distinction is because before SPARK 2014 adopted contracts it was "just" an external tool - i.e., "[Ada] requires external tools and annotations to achieve [formal verification]."

> And there's no sleight of hand. My original comment explicitly said "Ada / SPARK" - not Ada alone.

The sleight of hand is the change of topic, not the exact words you used.

> Whether Ada's contract syntax was in the 1983 spec or added in 2012 is irrelevant.

It's relevant because you explicitly claimed "Ada was designed with verification in mind (built-in contracts)". When contracts were added directly impacts the accuracy of your claim - if contracts were added later, then it obviously cannot have been originally designed for verification using contracts.

> Ada 83 was designed for safety-critical embedded systems with strong typing, clear semantics, and explicit data flow

OK, so now we're (sort of) getting to what the comment I originally responded to should have been about! How do those compare against what Rust offers? I think it's easy to claim that Rust also has strong typing. The other two items are a bit vague, but I think there could be an argument that Rust has the third item, and depending on what exactly you mean by the second, it either can already exist (if you stick to well-defined parts of the language) or nothing prevents it from existing in the future (if you're talking about a formal spec, in which case I'd also argue that the existence of a formal spec is generally independent of the design of a language).

Good further discussion might involve looking at the Ada 83 rationale to see if you can find support for a claim that it was designed for verification. It's a fair bit of text to look through and interpret and a quick search didn't turn up anything obvious, but you might be better equipped to handle that than me.

> Rust's design (unsafe blocks creating unverifiable boundaries, no formal specification, evolving semantics, procedural macros, interior mutability) makes whole-program certifiable verification structurally impossible, not just immature. It's almost a dream at this point.

And here again you're comparing Rust and SPARK instead of Rust and Ada! The former comparison is entirely uninteresting because it's basically a tautology - SPARK intentionally excludes the parts of Ada that it can't formally verify, so obviously the stuff that remains is well-suited for formal verification.

Seriously, consider an Ada analogue:

> Ada's design (access types, side effects in expressions, aliasing etc.) makes whole-program certifiable verification structurally impossible, not just immature. It's almost a dream at this point.

And yet SPARK was able to make formal verification work for Ada by excluding those unverifiable constructs (and in fact later added (partial?) support for one of them back in, inspired by Rust!). Why would a similar operation not be possible for Rust (especially given the fact that verification tools for Rust already exist)?

Not to mention the argument is not entirely consistent either. SPARK has "[`SPARK_Mode(Off)`] creating unverifiable boundaries" (i.e., the verifier can't check the "boundary" between spec and implementation). The lack of a formal specification is not a "design" issue, it's a prioritization issue. Nothing prevents Rust from gaining a formal spec in the future, and indeed that work is already taking place, both via prose (e.g., the work done by Ferrocene) and via a real formal spec (e.g., MiniRust). "Evolving semantics" is a bit vague for me to discuss in too concrete terms - on one hand, there's parts of Rust where "evolving semantics" is wrong (consider Rust's backwards compatibility and the edition mechanism), and parts of Rust where "evolving semantics" is correct in two different ways (gaining new capabilities/semantics vs. deciding how to deal with generally underspecified bits of the language). SPARK can also be said to have "evolving semantics" since it gains new capabilities over time, but I suspect that's not what you meant.


You made several good points. I agree with some of them and I think the remainder is a matter of framing. Below is a hopefully direct, precise reply that (a) acknowledges the valid objections, (b) restates the crux of your original argument cleanly, and (c) points the discussion toward the practical/architectural distinction that matters for verification and certification.

Chronology does not overturn the substantive claim I was making. My position is not: "SPARK is merely an older tool and therefore superior".

My position is:

  1. Ada and SPARK represent a continuous coherence of purpose: Ada was designed from early on for high-integrity, safety-critical systems (strong typing, conservative semantics, explicitness), and SPARK is the engineering of that design into a verifiable subset and toolchain. SPARK was not an afterthought bolted on to make Ada provable; it is the outgrowth of Ada's long-standing high-integrity philosophy and of work done in the 1980s/1990s to make that philosophy formally checkable[1][2][3][4][5].
  
  2. The practical distinction that matters in real-world certification and high-assurance engineering is not merely "does the language have an escape hatch" or "did a verification subset exist earlier or later". It is: does the language + toolchain provide an integrated, auditable, certifiable workflow that produces machine-checked caller-facing proofs and supplier artifacts acceptable to certification authorities today? SPARK + GNATprove + Ada toolchain do that now. The Rust ecosystem offers "promising" research tools and engineering efforts, but it does not yet provide the same integrated, certification-proven toolchain and supplier artifacts out of the box.

  3. From an academic or R&D viewpoint, your point about "you could build a Rust subset with verification primitives" is correct and constructive: people are doing exactly that. But the practical, operational gap between "research-verified components" and "industry-auditable, whole-program certification" is material. That gap is where the claim "SPARK is still the practical option for whole-program certifiable verification today" rests.
A few clarifications that may help clear up confusion in the thread:

  - When I say SPARK provides "industrial-strength, auditable" verification I mean the whole artifact chain: language contracts or annotations, a prover that produces verification conditions and discharge evidence, and vendor/supplier materials practitioners can use in DO-178C / EN-50128 / ISO 26262 certification efforts. That is different from saying "SPARK can verify X and Rust cannot verify X in any research setting".

  - When you observe that SPARK originally used annotations/comments and later adopted Ada 2012 syntax, that is historically correct and expected. The fact that the syntax later became native to Ada strengthens the point that Ada and SPARK are conceptually aligned, not orthogonal.

  - When you say "nothing prevents Rust from gaining a formal spec or evolving toward verification", that is also true; there are active efforts in that direction. The question I keep returning to is: how much engineering, process, and qualification effort is required before Rust + tools equals SPARK's current production story? My answer is: a lot. May not be impossible, but it is definitely not trivial, and not the same as "Rust already provides that".
TL;DR:

The relevant point remains that Ada+SPARK are an integrated, production-ready verification ecosystem today, whereas Rust is a promising base for verification research that has not yet produced the same integrated, certifiable toolchain.

---

> Good further discussion might involve looking at the Ada 83 rationale to see if you can find support for a claim that it was designed for verification. It's a fair bit of text to look through and interpret and a quick search didn't turn up anything obvious, but you might be better equipped to handle that than me.

Ada 83 Rationale: it explains the design goals (reliability, maintainability, suitability for real-time/embedded and defense systems) and the language design decisions (strong typing, modularity, explicitness)[1].

Origin and requirements driving Ada: describes the DoD process (HOLWG / Steelman) that produced a set of requirements targeted at embedded, safety-critical systems and explains why a new language (Ada) was commissioned. It shows Ada was created to serve DoD embedded-system needs.[2]

Standardization process and the language rationale across revisions (shows continuity of safety/verification goals)[3].

Guide for the Use of the Ada Programming Language in High-Integrity / Safety-Critical Systems (technical guidance / conformance): this guide and related validation/ACATS materials describe expectations for Ada compilers and use in safety-critical systems and explain the certification-oriented aspects of Ada toolchains. It is useful to show the ecosystem and qualification emphasis, as per your request.

[1] https://archive.adaic.com/standards/83rat/html/ratl-01-01.ht...

[2] https://archive.adaic.com/pol-hist/history/holwg-93/holwg-93...

[3] https://www.adaic.org/ada-resources/standards/ada-95-documen...

[4] https://www.open-std.org/JTC1/SC22/WG9/n350.pdf

[5] See my other links regarding SPARK's history.


> You made several good points. I agree with some of them and I think the remainder is a matter of framing. Below is a hopefully direct, precise reply that (a) acknowledges the valid objections, (b) restates the crux of your original argument cleanly, and (c) points the discussion toward the practical/architectural distinction that matters for verification and certification.

...I had my suspicions, and this and the rest of the comment really isn't helping.

> My position is not: "SPARK is merely an older tool and therefore superior".

I don't believe I was claiming that that was your position either?

> My position is:

Most of these points are basically completely tangential to the rest of the thread.

> The fact that the syntax later became native to Ada strengthens the point that Ada and SPARK are conceptually aligned, not orthogonal.

This completely misses the point I was trying to make.

> The question I keep returning to is: how much engineering, process, and qualification effort is required before Rust + tools equals SPARK's current production story?

As does this, because that's certainly not what I was responding to.

> My answer is: a lot. May not be impossible, but it is definitely not trivial

That's absolutely not what you claimed earlier!

> and not the same as "Rust already provides that".

Did anyone claim that in the first place?

> The relevant point remains that Ada+SPARK are an integrated, production-ready verification ecosystem today, whereas Rust is a promising base for verification research that has not yet produced the same integrated, certifiable toolchain.

And for at least a third time, you're comparing SPARK and Rust, which is not an interesting topic of discussion here.

> Ada 83 Rationale: it explains the design goals (reliability, maintainability, suitability for real-time/embedded and defense systems) and the language design decisions (strong typing, modularity, explicitness)[1]. <snip>

I'm not sure this or the rest of the comment is relevant for what I was getting at. I was wondering whether the Ada 83 rationale contained discussion about design elements specifically for verification. Not just things that are usable for verification, such as strong types; I'm looking for decisions specifically made with formal verification in mind.


> I'm not sure this or the rest of the comment is relevant for what I was getting at. I was wondering whether the Ada 83 rationale contained discussion about design elements specifically for verification. Not just things that are usable for verification, such as strong types; I'm looking for decisions specifically made with formal verification in mind.

Quotes to look at from my previous comment:

> Origin and requirements driving Ada: describes the DoD process (HOLWG / Steelman) that produced a set of requirements targeted at embedded, safety-critical systems and explains why a new language (Ada) was commissioned. It shows Ada was created to serve DoD embedded-system needs.[2]

> Standardization process and the language rationale across revisions (shows continuity of safety/verification goals)[3].

> Guide for the Use of the Ada Programming Language in High-Integrity / Safety-Critical Systems (technical guidance / conformance): this guide and related validation/ACATS materials describe expectations for Ada compilers and use in safety-critical systems and explain the certification-oriented aspects of Ada toolchains. It is useful to show the ecosystem and qualification emphasis, as per your request.

Check out the links, specifically [2] and [3]. You can find the URLs in my previous comment.


Again, I'm looking for decisions specifically made with formal verification in mind (e.g., "we made this decision specifically because it would be useful for formal verification"), not just decisions that are usable for validation but whose reasoning did not specifically consider the possibility of formal verification. Barely anything in those quotes even starts to address that, let alone provide any detail on how.

Link [2] does not seem to have any directly relevant bits based on a quick search. There is a short section which seems potentially relevant, but that work was not officially adopted and it's unclear to what extent it influenced Ada 83, especially since some of that work extended through 1987. I'm also not entirely sure how on point it is; unfortunately, it lacks the detail needed for me to tell for sure.

Link [3] is not directly relevant since it concerns Ada 95, whereas my initial comment was specifically about Ada 83. In addition, it doesn't really say anything on its own; it's basically an index of other documents, and I'm not currently well-equipped to go trawling through them for relevant quotes.

The last paragraph is not relevant because I'm looking for information on design decisions. A guide on how to use a language covers a very different subject matter.

I'm not even sure why I am the one doing the digging, really...


> Ada needs a profile that guarantees that code can't be the source of erroneous execution and constrains bounded errors further.

I forgot to add a couple of things in my other comment.

You're conflating a few distinct concepts here.

Ada already provides memory safety by default: bounds checks, range checks, and null checks are all on unless explicitly suppressed. Dereferencing a null access value or indexing out of range raises `Constraint_Error`; it never leads to undefined behavior. So Ada does not need a special profile for that kind of safety.

What you seem to be describing - "a profile that guarantees that code can't be the source of erroneous execution" - is closer to what the SPARK subset provides. SPARK can prove the absence of run-time errors (AoRTE) statically using GNATprove, including ownership and lifetime correctness for access types. That is a verification discipline, not a tasking profile.

Ravenscar, on the other hand, is a concurrency profile. It constrains tasking and protected object behavior to make real-time scheduling analyzable and deterministic; it has nothing to do with preventing run-time errors or erroneous execution. It just limits what forms of tasking you can use.

So: Ada itself is already memory-safe by construction, Ravenscar ensures analyzable concurrency, and SPARK provides formal proofs of safety properties. Those are three orthogonal mechanisms, and it sounds like you're blending them together in your comment. Ada performs its safety checks at run-time by default, whereas SPARK allows proving them at compile-time[1].

I encourage you to read the site linked below. If you are serious, you could go through https://learn.adacore.com as well.

[1] https://docs.adacore.com/spark2014-docs/html/ug/en/usage_sce...


(Disclaimer: I know next to nothing about Ada).

> Being able to define custom bounds checked ordinals

That Rust doesn't have (builtin, at least).

> being able to index arrays with any enumerable type

In Rust you can implement `std::ops::Index` for your own types (or thin wrappers around arrays) and index them with whatever you want.

> Defining custom arithmatic operators for types

Again, definitely possible by implementing traits from `std::ops`.

> adding compile and runtime typechecks to types with pre/post conditions, iteration variants

If you refer to the default runtime verification, that's just syntax sugar for assertions (doable in Rust via a macro). If you refer to compile-time verification via SPARK, Rust's formal verification tools usually offer this as well.

> predicates

Doable via newtypes.

> Discriminant records

That's just generic ADTs if I understand correctly?

> Record representation clauses

Bitfields aren't available, but you can build them yourself (and there are ecosystem crates that do); other than that, you can control a type's layout precisely.



