
> we are finally getting low level software that no longer needs it

Ada has had memory safety for decades – not to mention Lisp, Java, etc. if you can live with garbage collection. Even PL/I was better than C for memory safety, which is why Multics didn't suffer from buffer overflows and Unix did. But the Linux kernel (along with lots of other software which we would like to be reliable) is still mostly written in C, for better or worse.





> Ada has had memory safety for decades

Only with SPARK (i.e. formal verification). Which, similar to other projects of this age (e.g. Rocq, and how the CompCert C compiler was implemented and proved correct), doesn't seem to be low-friction enough to get widescale adoption.

> not to mention Lisp, Java, etc. if you can live with garbage collection.

Like I said, high level languages that won't benefit from this at all have existed for ages... and the majority of software is written in them. This is one of the stronger arguments against it...

> But the Linux kernel (along with lots of other software which we would like to be reliable) is still mostly written in C, for better or worse.

Fil-C shows this can be solved at the software layer for things in this category that can afford the overhead of a GC. That does mean a larger performance penalty than the hardware proposal, but it is also more correct, since hardware changes can never fix the unintended code generation that results from undefined behavior.

The Linux kernel is probably an example of an actual long-tail project that would benefit from this for a reasonably long time, though, since it's not amenable to "use a modified C compiler that eliminates undefined behavior with GC and other clever tricks" and it's unlikely to get rewritten or replaced with a memory-safe alternative quickly, given the number of companies collaborating on it.


> Fil-C shows this can be solved at the software layer for things in this category that can afford the overhead of a GC.

Mainline clang and g++ are also getting better with things like -fbounds-safety and -fsanitize=address. As I understand it, they typically have some overhead, but I'm willing to accept that overhead to have a kernel, web browser, etc. without memory errors. The decision that memory safety is too costly seems to have been made when CPUs were orders of magnitude slower than they are today. Hopefully hardware support will reduce the overhead to negligible proportions and enable memory safety as a default rather than an esoteric add-on or proprietary feature.
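
To make the trade-off concrete, here is a toy sketch of the kind of bug those flags are meant to catch (my own illustration, assuming nothing more than a stock clang++ or g++ with AddressSanitizer):

    // heap buffer overflow that AddressSanitizer reports at runtime
    // assumed build line: clang++ -g -fsanitize=address overflow.cpp
    #include <cstdio>

    int main() {
        int *buf = new int[4];
        for (int i = 0; i <= 4; ++i)   // off-by-one: the last iteration writes buf[4]
            buf[i] = i;
        std::printf("%d\n", buf[0]);
        delete[] buf;
        return 0;
    }

Without the sanitizer this will usually run with no visible error; that silent corruption is exactly what the runtime checks (and their overhead) are buying protection against.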


Nah, it was only a UNIX and C thing; see the world of systems languages and OSes written in them outside Bell Labs.

Had UNIX and C carried a price tag on their source code comparable to the competition's, instead of a symbolic price and an annotated source code book, history would have played a different tune.


What competing OSes/languages do you think might have/should have surpassed Unix and C?

VMS could have been one, for example.

PL/I variants were widely used, and Apple was a Pascal fan; it is the company that actually created Object Pascal, not Borland.

Modula-2 was around as well, unfortunately without an OS to go to market alongside it.

The real question is: with everything on an equal footing regarding price, and without access to source code to just type in or copy, which operating systems would people have been willing to pay for?


> Only with SPARK (i.e. formal verification)

Ada's original memory safety was still a lot better than C's. As noted, PL/I was not 100% memory safe, but it was good enough to prevent buffer overflows in Multics.


AFAIK, most of Windows and macOS (and iOS) is written in memory-unsafe languages as well (C, C++, and Objective-C).

Still, the key word is "still".

Hence why Objective-C got GC, which, after its failure to play well with C semantics, was replaced with ARC, and afterwards Swift came to be.

Microsoft now has a new policy in place, via the Secure Future Initiative, that only existing code bases should be kept in C and C++; all new projects are to use either managed languages or Rust.


macOS is (a) Unix (officially even!) and inherits many of its features and issues.

To Apple's credit though they seem to be using a memory-safe language (Swift) for new code and libraries (at least at user level) and may be rewriting old code as well, and they have also added MIE/EMTE to Apple Silicon. They also ship clang/clang++ with support for -fbounds-safety and -fsanitize=address.

Objective-C also supports Automatic Reference Counting, which helps with memory management. (Apple also implemented a garbage collector for Objective-C 2.0, but abandoned it in favor of ARC. I am aware that reference counting is technically a form of garbage collection.)


The reason being, as can be seen in the archives, that the conservative tracing GC had several gotchas when working with existing code, so segfaults were common.

The way ARC works in Objective-C, by automating the retain/release call patterns already required by existing frameworks, was much safer to implement, without such crashes.

Similar to all those C++ smart pointers automating COM reference counting.
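
To make the analogy concrete, here is a rough sketch (with a made-up minimal interface standing in for IUnknown, not the real COM headers) of a smart pointer that automates the AddRef/Release calling convention much as ARC automates retain/release:

    #include <utility>

    // minimal COM-like interface, assumed here purely for illustration
    struct IUnknownLike {
        virtual void AddRef() = 0;
        virtual void Release() = 0;   // by convention, deletes the object when the count hits zero
        virtual ~IUnknownLike() = default;
    };

    // smart pointer automating the reference-counting call pattern,
    // similar in spirit to ATL's CComPtr or WRL's ComPtr
    template <typename T>
    class RefPtr {
        T *p_ = nullptr;
    public:
        RefPtr() = default;
        explicit RefPtr(T *p) : p_(p) { if (p_) p_->AddRef(); }
        RefPtr(const RefPtr &other) : p_(other.p_) { if (p_) p_->AddRef(); }
        RefPtr &operator=(RefPtr other) { std::swap(p_, other.p_); return *this; }
        ~RefPtr() { if (p_) p_->Release(); }
        T *operator->() const { return p_; }
    };

Code holding a RefPtr never calls AddRef or Release by hand, just as ARC code never writes retain/release, so the usual mismatched-count bugs disappear.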


Sure, but nobody is actually writing foundational software (as we are now calling it) in Lisp, Java or Ada (and Ada also has no good answer for use-after-free, which is a huge class of vulnerabilities).

This is the first point in history where it's actually feasible in real non-research life to write an entire system in only memory safe languages. (And no, the existence of `unsafe` doesn't invalidate this point.)


I see plenty of foundational software in the biggest mobile OS, IoT devices and cloud computing infrastructure.

Ada only has use-after-free if unchecked deallocation is used, and since we are way beyond Ada 83, alternatives do exist in Ada 2022.

If anything we will only get more foundational software in safer languages, when the generation that only accepts C and C++ for specific domains is no longer among us.

Unfortunately for me as well, it isn't something I will be able to witness.


> Ada only has use-after-free if unchecked deallocation is used

You mean if you just never deallocate? Or is there a third option? Genuine question; I don't follow Ada closely.

> If anything we will only get more foundational software in safer languages, when the generation that only accepts C and C++ for specific domains is no longer among us.

I'm more optimistic - the Rust in Linux people are making progress and that's probably the thickest den of naysayers. Uutils is actually being used in Ubuntu (and sudo-rs I think?).

It'll probably take a long time until Rust outweighs C but I think we're talking 10-20 years not 30-40.


In practice, Unchecked_Deallocation should be used about as sparingly as unsafe in Rust.

Ada provides a series of features towards that goal.

The first one, already present in Ada 83, is that stack allocation is dynamic, a bit like C99 VLAs, with the difference that it is bounds checked and an exception is thrown if there is not enough space, instead of corrupting the stack.

Also, Ada pointers (access types) have some type constraints, so already with that one can build some kind of arena-like storage that doesn't depend on using pointers all over the place.

Ada 95 introduced controlled types, which are basically RAII in Ada, providing yet another way not to use Unchecked_Deallocation directly in "userspace" code.

Ada 2005 introduced bounded and unbounded container types, further extended and improved in later versions, which allow writing many algorithms and data structures on top of them without having to drop down to low-level memory allocation.

With Ada 2012 contracts, coupled with SPARK 2014 tooling for formal proofs, you can additionally ensure specific conditions are met before doing anything with specific resources, including ownership.
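
For readers who don't know Ada, the controlled-types point maps roughly onto what C++ calls RAII; here is a purely illustrative C++ analogy (not Ada code):

    #include <cstddef>
    #include <stdexcept>

    // The cleanup lives with the type, so calling code never invokes a
    // deallocation routine explicitly (roughly analogous to keeping
    // Unchecked_Deallocation out of "userspace" Ada code).
    class Buffer {
        unsigned char *data_;
        std::size_t size_;
    public:
        explicit Buffer(std::size_t n) : data_(new unsigned char[n]{}), size_(n) {}
        ~Buffer() { delete[] data_; }
        Buffer(const Buffer &) = delete;              // single owner keeps the sketch short
        Buffer &operator=(const Buffer &) = delete;
        unsigned char &at(std::size_t i) {
            if (i >= size_) throw std::out_of_range("Buffer::at");  // bounds checked, as in Ada
            return data_[i];
        }
    };

    void demo() {
        Buffer b(16);
        b.at(3) = 42;
    }   // the destructor frees the storage here; no explicit delete in calling code

In Ada the same effect comes from a controlled type whose Finalize procedure does the cleanup, combined with the bounds-checked containers mentioned above.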



