Interview about Austral, a systems programming language with linear types (lambdaclass.com)
111 points by unbalancedparen on Dec 25, 2023 | 43 comments


As someone working on a language myself, I found this to be very high-quality material! It aligns with a lot of my thinking, and it’s educational.

A random part of interest: I followed the link about how linear types and exceptions don’t mix: https://borretti.me/article/linear-types-exceptions In it, the author explains that linear values always need to be explicitly destroyed, and if you end up in a “catch” block, you can’t easily go back and destroy values that are now out of scope because control flow was interrupted partway through using them. “Why not just destroy things when they go out of scope?” I thought. The author addresses this: types that you have to “consume” “at most once” are called affine types, the compiler can clean them up for you, and that solves the problem with exceptions. Rust has affine types, so you don’t have to manually “destruct” every string/object/resource with the appropriate destructor for its type, as you do with linear types (there, too, the compiler will make sure you do it, but you still have to write it).
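
As a concrete illustration of the affine side (my own minimal Rust sketch, not from the article): the compiler inserts the destructor call, and it still runs when a panic unwinds the stack.

    // Rust's affine types let the compiler insert the destructor call,
    // and it runs even during panic unwinding.
    struct Resource(&'static str);

    impl Drop for Resource {
        fn drop(&mut self) {
            println!("cleaned up {}", self.0);
        }
    }

    fn main() {
        let _r = Resource("temp buffer");
        // Even if the next line panicked, `_r` would still be dropped
        // during unwinding -- no explicit destructor call needed.
        println!("doing work");
    }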

So why does Austral use linear types, not affine types? The reasons given don’t really resonate with me (but I’m not a systems programmer): the compiler would have to insert hidden function calls; secondarily, for temporary/intermediate values, the order of invocation might not be obvious; plus, a programmer not doing anything with a value may well be a mistake. I’m really glad the reasons are written out, however.

In other areas I feel very aligned with the author, such as on the value of required type annotations and local inference rather than global inference, and I’ve saved the link to refer back to the way the opinion is stated.


> So why does Austral use linear types, not affine types?

Affine types give a safety guarantee: a value can't be used more than once. The bad thing (double free, use after free) does not happen.

Linear types give that same safety guarantee, plus a liveness guarantee: you must use it, possibly in some nontrivial way.

A function that takes an affine value as an argument is enforcing a contract about the past behaviour of the caller, leading up to the call: having the affine value is proof that certain other functions were called in the right way to produce it. But returning an affine value gives you weaker guarantees about future behaviour, because you can use it zero times. At most you know that it will get Dropped. But maybe you want to enforce more interesting things than the Drop trait can express. Returning a linear value lets you do this: maybe the linear Foo you return can only be disposed of in conjunction with a linear Bar, like

    fn consume(x: Foo, y: Bar) -> Baz
And perhaps now Baz itself is linear, which has to be consumed in some other way ... at any rate, returning a linear value is proof that in the future, the program will advance through a particular state machine of function calls, where the states and transitions are defined by the available signatures. If Foo is linear but there's no simple function like

    fn drop(x: Foo) -> Unit
then buckle in, the compiler says you're not getting off the ride until it's over.
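
To make this concrete, here's a rough Rust approximation using consuming functions (the typestate pattern) with made-up types; note Rust is only affine, so unlike real linear types the compiler won't complain if you simply drop a value mid-ride:

    // Hypothetical names; a sketch of a state machine via consuming
    // functions. Each transition takes its state by value, so stale
    // states can't be reused.
    struct Foo;
    struct Bar;
    struct Baz;

    fn make_foo() -> Foo { Foo }
    fn make_bar() -> Bar { Bar }

    // The only way to dispose of a Foo is together with a Bar:
    fn consume(_x: Foo, _y: Bar) -> Baz { Baz }

    // ...and the only way out of a Baz is this:
    fn finish(_z: Baz) {}

    fn main() {
        let foo = make_foo();
        let bar = make_bar();
        let baz = consume(foo, bar);
        // Calling `consume(foo, bar)` again is a compile error: both moved.
        finish(baz);
        // With true linear types, *omitting* `finish(baz)` would also be
        // an error; in Rust it is merely allowed (affine).
    }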


It seems like a language could support both. That is, a type could either be automatically droppable or not.


Yes, this. The notable thing in Austral is not that linear types are used somewhere, it’s that it’s all linear, even when destruction is just deallocation and could easily be done by the compiler (even in the presence of exceptions).


You might want finer-grained control over when deallocation happens.


Linear types are more powerful than affine types for implementing code that cannot go wrong, as enforced by the type system: state machines reified in application code.

Affine is fine if there's a compiler-inserted catch-all operation for when a value drops out of scope: it can call deallocate or similar when an exception comes through the call stack.

If the final operation is some function that returns something significant, takes extra arguments, or interacts with the rest of the program in some must-happen sense, then implicitly calling a destructor doesn't cover it.

There are some interesting ideas around associating handlers with functions to deal with exceptions passing through, but I think I've only seen that in one language. The simple/easy approach is to accept that exceptions and linear types are inconsistent.


One way you can think of the exception unwinding stuff is that each call provides not just one continuation (the usual return address) but two (also one for the cleanup), and that these are combined using the & connective of linear logic.

This means both paths must use the same set of resources, and exactly one of them must be invoked. Interestingly, in linear logic this is equivalent (using De Morgan dualities) to a single continuation that expects a sum type: A^⊥ & B^⊥ = (A ⊕ B)^⊥.


That's exactly the challenge though. The puzzle isn't in the compiler internals, it's in the language syntax.

The happy case is you have an instance of a linear type, you call some unrelated function which doesn't use that instance, then lexically later in the caller you do something with it. All typechecks easily, all is well.

If that unrelated function wants to throw an exception, reinstate some other continuation, or in any other way not execute the use of the instance after said function call, you have a design puzzle.

Affine types solve it by saying the object didn't need to be used anyway, so whatever. No type problems. Whatever deals with garbage collection will handle it.

Linear type systems usually solve it by refusing to let the called function branch to somewhere other than the nominal return site, aka no call/cc or exceptions in the language. Possibly function colouring to say no call/cc or exceptions in anything branched to within the lifetime of a linear variable.

The obvious answer to a compiler dev is that call/return is sugar anyway - all the variables "live across a call site" are implicitly arguments to the branch, so they're available to whatever continuation the callee invokes. The plumbing is all fine. Typechecking is abstract interpretation so that's all good too, one can totally build the thing.

The missing piece is how you get the programmer to describe what they intend to happen to the linear instance on any continuations which are not the one denoted by the code immediately lexically after the function call 'instruction'. Where the syntax in question is an exception, it means "branch to somewhere implicitly recorded in a side table", so for the linear type use to work out you need to somehow annotate what is supposed to happen to the thing in that case. I don't know of a good notation for that.

(functions taking sum types are prone to being implemented as a branch on which field is active, so you're definitely right that success+failure continuations can be transformed into one taking a sum - it's a reversible transform - but I don't think it helps the notation challenge).


I was talking about language syntax, and in particular the way proof terms for linear logic (which linear type systems come from) already provide a syntax for this.

But in fact we also have multiple options on the more programmer-familiar side of this- `try/finally` blocks, `defer`, etc. The problem isn't limited to linear types so it's already been pretty thoroughly explored.
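
For example, in Rust the defer-style option can be hand-rolled as a drop guard, since Drop runs on both the normal and the unwinding path (a sketch, not tied to any particular linear-types proposal):

    // A minimal defer-style guard: the closure runs when the guard goes
    // out of scope, whether by normal return or by panic unwinding.
    struct Guard<F: FnMut()>(F);

    impl<F: FnMut()> Drop for Guard<F> {
        fn drop(&mut self) {
            (self.0)();
        }
    }

    fn main() {
        let _cleanup = Guard(|| println!("cleanup ran"));
        println!("doing work");
        // "cleanup ran" prints even if the line above panicked.
    }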


Linear types and handlers are even more inconsistent, given that the continuations could be run more than once if one isn't careful.


This is why, in Koka, there are `initially` and `finally` blocks to control resource cleanup: https://koka-lang.github.io/koka/doc/book.html#sec-resource

I think linear types and exceptions can play well together if you use controlled effects like this.


You can destroy them if you know they weren't destroyed in the try block. If they might have been destroyed in the try block, then they'd be in a dreaded "maybe destroyed" state. What you really want is to somehow destroy resources in finally blocks. This is true whether or not you are using linear types, actually, whenever you need to work with things that are explicitly destroyed.


> So why does Austral use linear types, not affine types?

Because linear types are simpler. The major reason here is that you don't need a "fancy" lifetime-inference engine like the Rust borrow checker.


You don't need lifetimes or lifetime inference for affine types, either. Lifetimes/regions/borrowing are an orthogonal extension that you can add on top of either affine or linear types.

(In fact Austral includes a region/borrowing system as well! It is a bit more explicit than Rust, along the lines of Rust's pre-NLL borrow checker, and with concrete binding forms instead of inference for regions, but this is also unrelated to the affine/linear distinction.)

One reason for linear types over automatic scope-based destruction is that the final destruction can take arguments and produce results in a more streamlined way. This is nice for e.g. handling errors on file close.
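
A sketch of what that looks like, in Rust syntax with a hypothetical wrapper type: a close that consumes the handle and returns the error, rather than a Drop that has to swallow it.

    use std::fs::File;
    use std::io;

    struct OwnedFile(File);

    impl OwnedFile {
        fn create(path: &str) -> io::Result<OwnedFile> {
            Ok(OwnedFile(File::create(path)?))
        }

        // Consumes the handle and surfaces the flush/close error.
        // With linear types the compiler would *force* this call;
        // in Rust you merely hope nobody lets it drop silently.
        fn close(self) -> io::Result<()> {
            self.0.sync_all()
        }
    }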


One problem with explicit drops that Rust runs into is that the thread can get SIGKILLed at any point without running destructors, which can complicate sync primitives and cause deadlocks in other threads if RAII is used for them. People do use it for that effectively, but even if you have support for explicit drops it's really hard to ensure they actually run.


I can't quite parse your first line or two. Are you saying that explicit drops make SIGKILL a problem because the compiler can't automatically add in the right cleanups? Whereas if the compiler is in charge of adding all the drops, it can insert those into a signal handler?


Rust certainly doesn't insert automatic cleanup in signal handlers. I don't think there's actually any meaningful difference between linear types and automatic drop here.


> One reason for linear types over automatic scope-based destruction is that the final destruction can take arguments and produce results in a more streamlined way. This is nice for e.g. handling errors on file close.

Couldn't the language allow something like Zig's `defer` op except tie that explicit destructor to the type?


Sure, this would just desugar to normal control flow and be checked for linearity in the usual way.

I'm not sure what you mean exactly by "tie to the type," though? Just some kind of standard name for it? The benefit of linearity here is that you can change its signature while still getting the compiler to enforce its usage, so you probably wouldn't want it to look like e.g. Rust's Drop trait.


Because it doesn't have to be something you'd ordinarily think of as a "destructor". e.g. you might have multiple possible functions that consume the value, because you're modelling a state machine with several outgoing edges from the type-state. You get to choose the one you call.


`defer` allows for executing arbitrary expressions based on scope (the "destructor" called depends on the code path)


Curious, aren't lifetimes and borrowing there in order to control the aliasing that linear types preclude?

I was seeing linear types as a kind of manual SSA + definite-assignment analysis + unused-variable analysis, with the goal of facilitating some typestate analysis by the compiler.

I know barely anything on the subject, just wondering.


Lifetimes and borrowing are a convenient thing to have together with linear or affine types, but you can use them on their own as well- e.g. Rust lets you borrow Copy types.

At their core, the definition of linear and affine type systems don't reference SSA or dataflow analysis either. Type systems are generally described using the notation of logic (see this intro if you're unfamiliar: https://langdev.stackexchange.com/a/2693/910), and then to get linearity you define the inference rules such that a use of a resource removes it from the context (one way to write this: https://plato.stanford.edu/entries/logic-linear/#SeqCal).
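
For a flavour of what "removes it from the context" means, here are two standard rules of the linear lambda calculus, written informally (see the links above for proper presentations):

    -------------    (Id: the context is exactly one use of x;
    x : A ⊢ x : A     no unused resources may remain)

    Γ ⊢ f : A ⊸ B     Δ ⊢ e : A
    ----------------------------    (App: the context splits; each
          Γ, Δ ⊢ f e : B             resource feeds exactly one side)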

Things like definite assignment are more user-facing convenience features that are also built on top of the core linear logic.


Oops, I still don't understand. With linear types, you don't have any aliasing, so I don't see the value of borrowing. (Maybe I'm misunderstanding the idea behind linear types?)

Seems to me, perhaps wrongly, that since there is no aliasing and values are always used, once a value is not returned by a function to be consumed, its lifetime has effectively ended. So it's automatic?

The point I was making about SSA, DFA, etc. was that this puts those aspects into userland via the type system.


With linear types alone you don't have any aliasing. Borrowing is a controlled way to recover some aliasing while retaining the benefits of linearity.
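
Concretely, in Rust syntax: a borrow lets you read a value without spending your one permitted use of it.

    fn peek(s: &String) -> usize {
        s.len() // reading through a borrow doesn't consume the value
    }

    fn consume(s: String) -> String {
        s.to_uppercase() // this *is* the one use
    }

    fn main() {
        let s = String::from("hello");
        let n = peek(&s);    // aliasing, but temporary and tracked
        println!("{n}");
        let _t = consume(s); // the actual use; `s` is gone afterwards
    }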


I believe I read that Austral's borrow checker is Rust's old version, possibly verbatim?


I haven't seen anything like that. I don't think Rust ever had linear types. This blog post has an explanation of Austral's linear type checking:

https://borretti.me/article/how-australs-linear-type-checker...


I very much doubt that linear types are simpler in a lot of cases.

Do they look simpler in cheesy anecdotes? Sure.

But again, graphs and linked structures come into the fray, and it's immediately noticeable that there will be some problems here. Or maybe I am crazy in thinking that "being forced to update every single link just because one node updated" is a problem.

Even if this is only superficial, I foresee some not-insignificantly-annoying issues in how one needs to manage graphs and linked structures.


I think the point is they're simpler to specify and implement, and easier for the programmer to understand- not that they result in simpler code necessarily. From the language rationale:

> Is it simple? Yes: the type system rules fit in a napkin. There’s no need to use an SMT solver, or to prove theorems about the code, or do symbolic execution and explore the state space of the program. The linearity checks are simple: we go over the code and count the number of times a variable appears, taking care to handle loops and if statements correctly. And also we ensure that linear values can’t be discarded silently.

https://austral-lang.org/spec/spec.html#rationale-linear-typ...
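
As a toy illustration of how simple that counting can be, here's my own sketch in Rust over a made-up three-form expression language (ignoring shadowing and branching):

    enum Expr {
        Var(String),
        Pair(Box<Expr>, Box<Expr>),
        Let(String, Box<Expr>, Box<Expr>), // let x = rhs in body
    }

    fn count_uses(e: &Expr, name: &str) -> usize {
        match e {
            Expr::Var(v) => (v == name) as usize,
            Expr::Pair(a, b) => count_uses(a, name) + count_uses(b, name),
            Expr::Let(_, rhs, body) => count_uses(rhs, name) + count_uses(body, name),
        }
    }

    // Every let-bound (linear) variable must appear exactly once in its body.
    fn check_linear(e: &Expr) -> Result<(), String> {
        match e {
            Expr::Var(_) => Ok(()),
            Expr::Pair(a, b) => check_linear(a).and_then(|_| check_linear(b)),
            Expr::Let(x, rhs, body) => {
                check_linear(rhs)?;
                check_linear(body)?;
                match count_uses(body, x) {
                    1 => Ok(()),
                    n => Err(format!("`{x}` used {n} times; linear values need exactly 1")),
                }
            }
        }
    }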


My reading was that it is simpler to implement.


Related:

Austral Programming Language - https://news.ycombinator.com/item?id=36898612 - July 2023 (118 comments)

What Austral proves - https://news.ycombinator.com/item?id=34845895 - Feb 2023 (21 comments)

Austral: A systems language with linear types and capabilities - https://news.ycombinator.com/item?id=34168452 - Dec 2022 (120 comments)


> Just as Pascal introduced modules

This is wrong; the notion of modules predates the units of UCSD Pascal.

https://en.m.wikipedia.org/wiki/Modular_programming

I would expect an interview to do proper fact-checking.


I thought the concept of "linear types" as defined here was simply another name for "uniqueness types"[1]. But the Wikipedia article claims there's a difference.

[1] https://en.wikipedia.org/wiki/Uniqueness_type


Linear and uniqueness types sort of collapse into the same thing when an object is required to stay linear or unique for its entire life cycle.

They become distinct, and sort of dual to each other, when you relax this restriction: linearity ensures that no copies or aliases are produced going forward, while uniqueness ensures that no copies or aliases have ever been produced in the past.

In other words, if you call a function that is linear in its parameter, you know it won't form any additional copies of the argument, but the function can't assume it has the only reference to that argument, so it can't e.g. update it destructively. Conversely, a function that takes a unique parameter can make that assumption, but its caller can no longer assume that the argument it passed in is unique.
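
A Rust-flavoured glimpse of the uniqueness half (with the caveat that Rust's `&mut` is a borrow, not a full uniqueness type): holding a `&mut T` is proof that no aliases currently exist, which is exactly what licenses destructive update.

    fn destructive_update(v: &mut Vec<i32>) {
        // Safe precisely because `&mut` guarantees we hold the only
        // live reference: nobody can observe the intermediate state.
        v.clear();
        v.push(42);
    }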

See also this recent paper on the two: https://granule-project.github.io/papers/esop22-paper.pdf


Very informative, thank you.


The main thing I want from a language with a borrow checker is faster compilation time than Rust's.


It’s exciting seeing new languages pop up offering memory safety.


Wasn't Rust created for systems programming, with strict types?

Can I get the CliffsNotes on what this is doing to solve some pinch point that Rust isn't?


Async drop is a problematic case in Rust. Somehow grafting in linear types is one of the ideas that has been floated to solve this.

There is also a subset of developers (typically those used to the control of C and/or working at the extremes of fault tolerance) who don't want the hidden control flow of implicit Drop / RAII, so linear types offer an alternative.

Personally, I'd also like best-effort linear types so I can catch errors from closing file handles opened for write. I can manually close to get the error, but I want help ensuring I actually do it.
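
In today's Rust the best-effort workaround is to flush manually before the implicit drop, e.g.:

    use std::fs::File;
    use std::io::Write;

    fn write_out() -> std::io::Result<()> {
        let mut f = File::create("out.txt")?;
        f.write_all(b"data")?;
        f.sync_all()?; // surfaces errors the implicit Drop would swallow
        Ok(())
        // ...but nothing *forces* the sync_all call; that's the gap
        // linear types would close.
    }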


> But Rust is a very pragmatic language, and the problem with pragmatism is that it never ends

I'll bite: why can't pragmatism feel when it's hitting the diminishing returns curve and, you know, fight for a modicum of principle?

That is: pragmatic pragmatism should fall short of dogmatism.


You can absolutely stop being pragmatic when it's hitting diminishing returns, but that means that you stop being pragmatic not that pragmatism has ended.


Seems a bit of a paradox, no? Is it not pragmatic to go easy on the pragmatism when appropriate?


I think what he means is that there will always be new special interests and use cases that could get pragmatic language support. If you're invested in a culture that provides this support, it will never end.



