
AWS outages started not long after this date.


And so the plot thickens...


Title should be "... of investment bank Python": trading and risk have little in common with a retail digital bank like, say, N26.

The problem with these projects is that the folks leading them have never built a real trading system in their entire lives (the ones who have been there for many years worked with end-of-day batch systems), and there is a layer of useless and incompetent "business analysts" who hide their incompetence by finding ways to malign developers.

Pro tip: Don't work for a bank before assessing its open-source repos. They have none? Run in the opposite direction.


When you say "The problem with these banks..." you mean the ones like N26 (not investment banks)? (Seems like coffee hasn't kicked in for me yet)


Edited.


Thx


Remember: Facebook grew from a core product that was essentially a "better" copy of numerous existing products. Same for Google.

Starting entirely new things is not their strength. And it is not the strength of people working in mega corps. So you get nonsense like this, or, if they do succeed, it's only after many years, by hiring from the scrappy, hungry startups who solve the problem first.

If you build anything remotely Metaverse-related and it works, you will get acquired, because now... the Metaverse _needs to_ succeed. And they're not going to do it with the people who work there.


Facebook/Meta is a consumer company. All the worthwhile applications of remote presence are going to be in industrial spaces that combine robotics with VR. I see no way that Facebook/Meta will do anything worthwhile or useful in that space.


> Facebook/Meta is a consumer company.

There's not a single consumer that ever spent 1 dollar on Facebook. It's an advertising company, doing websites and chat apps on the side.


How is advertising not about consumption?

Facebook enables consumer activity so it's a consumer company. It does not build industrial robots or enable industrial production of any form.


It's neither. Their core product is surveillance.


Larry, from his lair in Fiji: “I should sell some shares”


The problem likely isn't the company -- it's the reporting line and the inexperienced middle manager he was reporting to.

Don't suffer these people. Escalate to N+1. Eventually you reach someone sensible.


> There are a few genuinely 'bad people' out there

Do you want to count the number of violent rapes per year? Violent premeditated murders? Let's add violent robbery and assault by people who have food and a place to stay but are looking to level up and buy shiny things?

This is not a matter of circumstance, and there are many 'bad people' out there.


The comment you're responding to suggests that implementing certain societal changes would dramatically reduce crime. Your reply completely ignores their considerations and offers the status quo, i.e. a world completely lacking the changes they mention, as a supposed counter-argument.


How about a legal and regulated sex industry?

I live in a country with next to no violent premeditated murders, next to no violent robbery. And a LOT of very poor people. But there is a public social safety net which you can't fall below. So no one is ever really desperate.

I understand if you're based in the US why you think this way, but it's a self-fulfilling prophecy.


Most of these are mental health issues. ESPECIALLY violent rape. Really, every cop will tell you that the vast majority of crime is not committed by cold-blooded, well-planning career criminals - but by mentally unstable or sick people and/or people suffering from addictions, who do ridiculous things while clearly out of their right mind. Sadly, psychiatry is really in its infancy and its success record at both diagnosis and treatment plain out sucks.


What about cold blooded well planning rapists who work as cops?


> Most of these are mental health issues.


Basically, is there any "rational" reason to be a rapist AT ALL? Especially in today's world, which is full of easy-to-get sex. I think every rapist is mentally not OK.


> Basically, is there any "rational" reason to be a rapist AT ALL?

Are you equating taking irrational actions with being mentally ill? By that logic, everyone is mentally ill.

> Especially in today's world which is full of easy to get sex.

What makes you think sex is easy to get in today’s world? The evidence would contradict that. https://www.psychologytoday.com/us/blog/all-about-sex/202106...

> I think every rapist is mentally not OK.

Sure, but is it an illness which can be alleviated?


>Sure, but is it an illness which can be alleviated?

That's exactly my point. There isn't. So we criminalise it instead. It's inhumane and will most certainly be seen by our grandchildren much the way we see chattel slavery today, but it's simply that we have no other way today.


How do you imagine our grandchildren will solve it?


By doing a lot more research into psychiatry and finding treatments for mental health issues that actually work.


It seems that you could say that about any socially undesirable behavior.


I agree, there is a problem with this. We've been there before: being gay was considered a sickness and/or a crime in most countries until recently, for example. But medical science can adapt to changing social norms; it has before, and it was hardly ever a blocker here - I can't recall if there was ever a point where medical science or the healthcare industry insisted on labelling something a sickness when society or the state claimed it wasn't.


My own feeling is that many of these people were not born evil, but their life circumstances as children (e.g. extreme poverty or abuse by family members) set them on a bad trajectory that's quite hard to correct for in adulthood.

I think giving every child a stable home life will go a long way in reducing crime. Of course some people will slip through the cracks. But I'm in agreement with OP that the number is quite small compared to the number of people currently incarcerated.


>Do you want to count the number of violent rapes by year? Violent premeditated murders?

People do keep count of this, and the (per-capita) rate of violent crime has been falling fairly monotonically since at least the end of WW2. Our genetics haven't changed in that time, but circumstances have.


So what do you do with the violently inclined people? Put them in therapy and hope they get better?


I mean, it depends on what you mean by "violently inclined".

I'm not a sociologist, so I don't know specifically what works and what doesn't, but clearly whatever we've been doing for the last 50 years has led to a smaller percentage of children born with inherently aggressive/antisocial personalities going on to commit violent crimes as an adult.

If you're talking about adults with violent inclinations that have acted on them and committed horrific crimes, then the obvious answer is to segregate them from the rest of society as they're clearly dangerous.

It looks like we're talking at cross purposes. I'm not trying to absolve rapists and murderers of blame, nor am I trying to downplay the danger they pose to the rest of us. At the same time, though, we should acknowledge that violent crime seems to be a mostly-solvable problem on a population level, and simply throwing up your hands and saying "bad people are bad" isn't going to push the stats down.


Some get better, some are disasters waiting to happen.

Source: Norwegians do this. It is not every day, it seems, but there absolutely are multiple cases each year of people with violent pasts who become violent again given the opportunity.

There are two types of prison sentences here: ordinary and "forvaring" (best translation I could come up with is "detainment") which practically means they get to serve a minimum sentence anyway but won't be released until the specialists consider them reasonably safe. Kind of close to life but with a chance of parole, only the parole is meant to be the rule, not the exception.

For now it seems to work well enough, but if the rate climbs I think they will have to make adjustments.


That industry is relationship and influence driven.


Every one of these companies except has a competing OS. Just saying.


Having Linux make the first move, then learning from its successes and mistakes to implement a better version in your own OS, seems like a good move that benefits everyone.

It's not like Microsoft isn't pushing Rust on their own platform. Every time a Windows driver does a use-after-free somebody swears at Microsoft for causing a bluescreen.
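To make that concrete, here's a tiny userspace sketch (my own illustration, not driver code): the pattern that bluescreens a C driver at runtime simply doesn't compile in Rust.

    fn main() {
        let buf = vec![1u8, 2, 3];
        drop(buf);                  // buf's heap memory is freed here
        // println!("{:?}", buf);   // uncommenting this will not build:
        //                          // error[E0382]: borrow of moved value: `buf`
        // The equivalent C (free() followed by a read) compiles fine and
        // fails, if you're lucky, at runtime -- hence the bluescreens.
    }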


From what I've heard (3rd hand :D), Rust has already made its way into the Windows kernel.


All of them are interested in good technical security (actually being secure).


So why not formal verification or static analysis tools applied to the existing codebase (Frama-C)? Or dropping in C derived from a formally verified source (Idris / F-Star)? Or a language with good performance and a verified subset (Ada/SPARK)?

I've never read anything to make me think that Rust is a safer alternative than any of these options; the guarantees of the language are just assurances of its advocates.


It's not like Microsoft does not work on that; they probably have a dozen different formally verified programming languages around at the moment. You already mentioned F*[0], but there is also Checked C[1], VCC[2], Koka[3], Dafny[4], Project Verona[5] and probably some more that I am missing. The point being that they have clearly evaluated the options and have used F-Star for parts of the OS already.

[0]: https://www.fstar-lang.org

[1]: https://www.microsoft.com/en-us/research/project/checked-c/

[2]: https://www.microsoft.com/en-us/research/project/vcc-a-verif...

[3]: https://github.com/koka-lang/koka

[4]: https://github.com/dafny-lang/dafny

[5]: https://www.microsoft.com/en-us/research/project/project-ver...


As one of the developers of this patchset, here is my simple answer: try it. I'd love to see it.

I know Rust; I don't know Frama-C or Idris or Ada. What little I know of Ada indicates that it doesn't actually solve the problem because it doesn't have safe dynamic memory allocation. Ada advocates have told me I'm wrong and there's some new thing (inspired by Rust) that does this. Maybe! I'm not sure there's any benefit to the world in arguing it; someone skilled in Ada should just write something and show how it would actually work.

Write a kernel module with any of those techniques, send in a patch, and see what people say. Any effective means of making the kernel less buggy is great. It doesn't have to be Rust.

I claim that Rust is effective because, well, those patches are there, and frankly I haven't been doing any work for this project in almost a year and we're getting so many enthusiastic contributors. So, apparently, a lot of other people also know or are willing to learn Rust. Maybe it's actually better than the alternatives you mention; maybe it just has better documentation. I don't know. I'm not claiming Rust is perfect, just that it's empirically likely to solve the problem.

If you can get together a community of folks who can do any of these other approaches, that would be fantastic. Try it!
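For a sense of scale, the "hello world" of the patchset looks roughly like this (paraphrased from memory of the rust_minimal sample; exact macro fields and trait names have shifted between revisions of the series):

    use kernel::prelude::*;

    module! {
        type: RustMinimal,
        name: b"rust_minimal",
        license: b"GPL v2",
    }

    struct RustMinimal;

    impl KernelModule for RustMinimal {
        fn init() -> Result<Self> {
            pr_info!("Rust minimal sample (init)\n");
            Ok(RustMinimal)
        }
    }

    impl Drop for RustMinimal {
        fn drop(&mut self) {
            pr_info!("Rust minimal sample (exit)\n");
        }
    }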


> What little I know of Ada indicates that it doesn't actually solve the problem because it doesn't have safe dynamic memory allocation.

It does, just not in the way most people are used to.

Sure, if one wants to code malloc()/free()-style, free() is unsafe in Ada and requires the Ada equivalent of unsafe {}.

However, many type constructions in Ada do dynamic allocation by themselves, including on the stack.

If memory allocation fails, an exception is thrown and a retry with less memory requirements is a possible execution path.

Ada has also had RAII since Ada 95 (controlled types), and yes, SPARK formal specifications now allow for ownership rules.


> If memory allocation fails, an exception is thrown and a retry with less memory requirements is a possible execution path.

So, since we're talking about the Linux kernel, that's a red flag straight away. Notice a whole section of this new LKML post is about their implementation of Rust's alloc crate? Linus doesn't want useless exceptions here and has previously objected on that basis.

Meanwhile... over in the C++ world they have this same behaviour. For them these exception handlers are a fiction; in reality, popular standard libraries (never mind everything else you're using) will blow up in low memory and it's game over. Maybe Ada does a lot better. Maybe.
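For contrast, the style the kernel-side alloc work pushes toward looks roughly like this in userspace Rust (using the stable Vec::try_reserve here; the kernel's patched alloc exposes its own fallible entry points, which I'm not reproducing): allocation failure is an ordinary value you handle, not an exception.

    use std::collections::TryReserveError;

    // Allocation failure surfaces as a Result, not an exception or an abort.
    fn build_buffer(n: usize) -> Result<Vec<u8>, TryReserveError> {
        let mut buf = Vec::new();
        buf.try_reserve(n)?;   // returns Err instead of panicking/aborting
        buf.resize(n, 0);      // capacity already reserved, so no reallocation
        Ok(buf)
    }

    fn main() {
        match build_buffer(usize::MAX) {
            Ok(_) => println!("allocated"),
            Err(e) => println!("allocation failed, fall back to something smaller: {e}"),
        }
    }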


The Linux kernel doesn't have clearance for high-integrity computing, or for being in control of machines responsible for human lives; Ada does.

Ada also has profiles where exceptions aren't allowed, and other mechanisms.

HN is not the place to hold a lecture on Ada features.

It is not a maybe, rather a lack of knowledge.


You seem to be arguing as if this effort should have gone to Ada instead. So go ahead and do it. There's a group of people who are trying to get kernel support for Rust. This is not a statement about Rust being superior to Ada or about Ada's (in)suitability for the same job. It's only a statement that this group of people likes Rust for its safety features. I'm sure they are all aware that there are other, even safer languages, but these people like Rust.

We can lead academic discussions about which language is safer/better/leaner/whatever-er, but the fact is that Rust has a larger and growing mindshare, so this effort is unsurprising. For Ada, the solution isn't to try and prove to anonymous forum-goers that it would've been a better choice if only people recognized the fact. The solution is to mobilize the Ada community to put the same amount of effort into the publicity and marketing (yes, the bad M word) of their ecosystem.

Ada clearly lives on in many safety-critical systems, but judging by the admittedly biased topic frequency on HN and other technical sites, it has a tiny community who mostly keep to themselves. That's fine, but then don't be surprised when it's Hip Language X that gets the headlines and these sorts of efforts.


That effort has been happening for production code in high integrity systems since Ada was released to the world in 1983.

My only point is that many in the Rust community advocate as if Rust were the first language to have these features, when they have actually been in Ada for decades.

As for Ada on Linux, naturally Linus would never go along with it.

A C kernel guy, from a community that worships a paper that hasn't been true in the Pascal world since the early '80s, liking a language with Pascal syntax? Not even at gunpoint.

If you want to see a use case where Ada was chosen over Rust, check Nvidia's project for self-driving vehicle firmware.


I appreciate what you are saying, but it is again just repeating why Ada is better than Rust. I posit that's irrelevant to this discussion. Rust is a safety centered language that seems to have champions and a growing community around it. Can we not just be happy that finally a safety conscious language has a following? If you are convinced Ada is better, nobody is stopping you from being the champion and building the Ada ecosystem. Right now, Ada is for all intents and purposes invisible both in terms of libraries, mindshare and hype.


Ada is alive where it matters most: high-integrity systems.

Plenty of safety-conscious languages have had followings since ALGOL; this is the point that many miss when praising Rust while ignoring our computing history.


>the fact is that Rust has a larger and growing mindshare so this effort is unsurprising.

I think that is a false perception created by the dev-blogging community. Based on GitHub activity (https://madnight.github.io/githut/#/pull_requests/2021/2), Rust usage is declining (absolute numbers show little growth). Based on developer surveys (https://www.jetbrains.com/lp/devecosystem-2020/), very few people use Rust outside of hobby development.

I think the concern is that the core argument of "I think Linux should adopt Rust for safety" is a bit dishonest, given that there are a large number of well-vetted alternatives. "I think Linux should adopt Rust because I like Rust for social reasons" would probably be more correct. That Google staff seem to think Rust is good for the Linux kernel but not, say, Zircon, would be surprising if those same people actually believed that any legal Rust program is safe and inherently trustworthy. "Embrace, extend, extinguish" wasn't that long ago.


I don't use Rust so I have no skin in this game, but based on the two links you provided, Rust ranks the highest among the "safety-minded" languages. It ranks at the same level as Ruby, Swift, Matlab or R in the JetBrains ranking. That's incredible for such a new language.

I mean, neither Ada nor Zircon is even on the lists, so in terms of mindshare, they are clearly blown out of the water by Rust (based on the evidence you provide). To discount Rust based on usage when the alternatives being suggested don't even make the rankings is also a bit dishonest.

In fact, the argument "I think Linux should adopt Rust for safety" makes perfect sense. Here's finally a safety-conscious language that seems to be embraced by a larger community, and even by the big tech players. Instead we get people suggesting arcane alternatives that may be technically better, but have a snowball's chance in hell of becoming commonplace any time soon.


Zircon is a kernel.


That’s minor. It sounds like you agree with the rest of the post?


Rust was considered for Zircon. The only reason it wasn't chosen was that at the time Rust wasn't yet sufficiently mature for embedded. And it really wasn't.

In the meantime, Google makes extensive use of Rust in new components of both Fuchsia and Android and even rewrites some legacy components where it makes sense.


Right. It is a lack of knowledge, and I know I've seen you argue that Ada has facilities for this before, and I don't know enough Ada to answer that one way or another.

I think you and 'grumblenum should team up and write a working kernel module in Ada/Spark and then we can have a fruitful discussion about it, instead of just theorizing about whether it could be done. I'm interested in seeing the problems solved, and I'm indifferent to technology. If you think Ada/Spark can solve them, that would be awesome!

(Editing to reply to a small claim in another subthread, "Linus would never go for it" - we believed the same thing when we started. We were originally thinking that we'd pitch this directly at Linux distros shipping out-of-tree modules that they didn't want to keep having to push security updates for. https://github.com/fishinabarrel/linux-kernel-module-rust was out-of-tree and we tried to keep compatibility with kernels 4.4 onwards. But it turned out Linus isn't opposed to it.)



Fascinatingly, the more you post, the less convinced I am that Ada could ever be suitable for this use case. I looked at all these links to see what's been done:

Muen: If I am reading the allocator in https://git.codelabs.ch/?p=muen.git;a=tree;f=tools/liballoc/... correctly, it simply has no deallocator function at all. It does have a "Clear" function, but there's no sign that it has any ability to statically check that calling it is safe. That is exactly what we're talking about.

Nvidia: No public source code, and it's not obvious what actually exists here, beyond a compiler to Nvidia hardware.

Genode: This project is mostly written in C++. There is a very tiny, experimental amount of Ada in it (< 400 lines of code). There's certainly nothing that speaks to the questions discussed in this thread (safe dynamic allocation/deallocation, security, etc.).

ApexAda: This is a marketing brochure. Also, this marketing brochure talks about "significant deployments in satellite communications, avionics, weapons control systems, and a myriad of other mil/aero applications," all of which - as discussed elsewhere in these comments - are applications that neither need nor want dynamic allocation (and are generally not exposed to deeply-structured untrusted input, for that matter).

Green Hills: This looks interesting - there's a POSIX filesystem and a UDP socket implementation, both of which (probably) want dynamic memory allocation. That is, at last, an application that needs dynamic allocation. Is it actually written in Ada? I see that the RTOS itself supports development in C, C++, and Ada, and they claim to offer a BSD socket implementation, so they at least have C bindings. I don't see any claim about what language it's written in, and Green Hills seems to do a lot of C work (see e.g. https://www.ghs.com/products/micro_velosity.html). If it's in Ada, then I think this gets the closest to what would be needed for the Linux kernel - do you know how their filesystem and socket implementations handle dynamic allocation? (I'll also note that they lost their certification many years ago, so if the answer is that they have safe code because they spent a whole lot of time perfecting a single version of an OS, then that's a very different problem space from Linux, which is constantly changing.)

Wind River: This is a marketing page without much detail, but it looks like it's a hypervisor to run general-purpose OSes on top of, not a general-purpose OS itself, which means it probably does not need dynamic memory allocation, or that if it does, the safety of deallocation can be verified by hand.

---

No Ada developer has done what's being done in this patchset, and it's pretty insulting to the folks putting in the work to claim it's been done and Gish-gallop a bunch of random links and expect people to put in the time to figure out what exactly you're claiming already exists. Everyone knows it's possible to write code in Ada. Nobody is asking if Ada can be used for completely unrelated purposes. You may as well link to GORILLAS.BAS and ask why the Linux kernel isn't using QBasic, a memory-safe language if you use the subset without PEEK and POKE.

Write - or point to - a kernel driver for a UNIX-like operating system written in Ada, that safely handles dynamic allocation. That's the clear bright-line goal. If you think that's not the right goal, say why.


>I'm interested in seeing the problems solved, and I'm indifferent to technology.

I think the first problem is some kind of language specification. That it's not possible to know what behavior one should expect from rustc means that it isn't a solution. It's another problem. You should check out Ken Thompson's great speech on why this is an issue. https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...

Or do you think it's wrong to regard unauditable enhancements to free software paid for by Google with suspicion?


Ken Thompson's speech was made two years before the first draft specification of C was published, so I don't understand how it is connected to language specifications.

Here's a way to address the issue he brings up in that speech that does not involve language specifications: https://dwheeler.com/trusting-trust/


As one of the developers of this patchset, can you comment on how these contributions will be licensed? Can you also comment on how a third party can verify or audit the output of a compiler without a specification?

>just that it's empirically likely to solve the problem.

Can you comment on what empirical evidence you based this statement on?


> As one of the developers of this patchset, can you comment on how these contributions will be licensed?

Yes, I can comment on this. The licensing is the same as any other submission to the Linux kernel. See https://www.kernel.org/doc/html/latest/process/license-rules... for Linux kernel licensing in general and also https://www.kernel.org/doc/html/latest/process/submitting-pa... for the meaning of the Signed-off-by lines.

> Can you also comment on how a third party can verify or audit the output of a compiler without a specification?

I imagine it's pretty similar to how you verify or audit the output of a compiler with a specification. If you can show me an example of the sort of verification or audit you envision, I can perhaps give you a more helpful comment.

We do not make any claims about verified or audited compiler outputs or compliance with specifications, and to the best of my knowledge, the Linux kernel does not currently verify or audit any compiler outputs, so I'm not sure I follow why I would be particularly qualified to comment on it, but I could maybe point you at some interesting research.

> Can you comment on what empirical evidence you based this statement on?

Yes, I did comment on this, in the paragraph you quoted. If you'd like me to expand on something can you clarify?


>I imagine it's pretty similar to how you verify or audit the output of a compiler with a specification.

I must be missing the humor here. You can test against a specification to verify implementation. Testing an implementation against itself is meaningless. I would think that this is obvious. Also I don't typically use the word "empirical" to indicate purely qualitative assessments or intuition. You also provided a dead link.


Fixed the link, thanks.

There's no attempt at humor here - I'm just trying to get you to explain what, precisely, you mean by "test against a specification to verify implementation" so that we're in agreement on terms.

When I hear "test against a specification" and "verify implementation," I think about things like CompCert. But the Linux kernel does not use CompCert, it uses GCC or Clang. To the best of my knowledge, neither is tested against a specification and neither has a verified implementation.

Actually, when we first started discussing mainlining this, we had a bit of discussion about whether it was safe to allow rustc-compiled Rust code to link against GCC-compiled C code. The specific worry was potential compiler bugs, of which the kernel has historically run into many. We eventually decided the risk was low, but the kernel has gone through periods where it insisted you use the exact same compiler for the kernel and for modules because of bugs. Those bugs have gone away through the normal human process of writing better code, not through formal verification. And when they existed, the kernel's response was not to reject those compilers, it was to work around the bugs.

Also, CompCert does not test against the C standard, which is written in English prose. It tests against the CompCert developers' attempt to re-express the C standard in terms of machine-checkable theorems. The C standard is confusing even to human readers, so it would be misleading to say that CompCert has verified their compiler against the standard as published by the standards committee. (Hence my claim that they're similar; you can verify a compiler of any language provided you write your own machine-checkable statements about what you expect it to do, and you still need other humans to read those theorems and convince themselves that they're the right ones.)

For those reasons, I'm trying to understand what your question is in a precise manner. Are you asking about things like CompCert? Because if so, I don't follow the relevance to our work, though again, I'm happy to point you at interesting research.


Without being an expert in either language, given what Ada is used for, I'm sure it offers at least all of the safety Rust offers and probably more.

But Rust has a lot more going for it than safety. Maybe it shouldn't matter for kernel code, but there's the focus on developer "ergonomics": high-level functional constructs and pretty flexible pattern matching, support for both ML-style and Algol-style syntax, very strong language-level tooling for building, testing, packaging, and generating documentation, plus the fact that the language itself is so well documented with great, accessible tutorials.

There's a reason it keeps winning Stack Overflow's most loved language award every year. It's easy to learn for people coming from almost any kind of programming background, supportive of newbs, and equally well-suited for application-level and system-level development, so it ends up with pretty widespread appeal.

So again, I don't know that any of this should matter for selecting what languages you're going to support in the kernel, but as a purely practical matter, it's a lot easier to ask contributors to use a language they already know and use elsewhere than it is to ask them to learn something new that they don't use anywhere else, which is what Ada and Idris and what not would be for most developers.


Rust has far more enthusiasm from potential contributors than any of those options. Onboarding something is meaningless if it's not going to be used and maintained.

FWIW I would trust anything language-level over applying static analysis tools to C (for an evolving codebase like Linux, having the safety be a first-class part of the codebase is a must) or extracting C from another language (the semantics of C are surprisingly weak, such an extraction can only ever be as good as your model of C which will probably be buggy).


> I've never read anything to make me think that Rust is a safer alternative than any of these options

What Rust focuses on is providing a scalable approach to a limited kind of formal verification - focusing purely on type- and memory-safety, not conformance to arbitrary specifications. The latter is of course being worked on, but type safety and memory safety are table stakes.


So what is the issue with Ada/SPARK?


I can't think of a case of a system like that being revived after 10 or more years. My best guess is that the mismatch between the environment the language was written in and the modern world simply becomes too great for anyone to care after a while.

For instance, I have to deal with JSON. There's no reason older languages can't deal with JSON, but it'll often be inconvenient as the libraries try to deal with mismatches in the type system or whatever, and it just can't ever be quite as slick as a more modern language written with some partial, vague combination of the fact that JSON exists in mind, together with the sorts of techniques used to implement JSON support in modern languages. Or sometimes it'll just be the dominant library for dealing with it just has an old design that baked bad assumptions in, and since it's the dominant library it's the one everything uses, so I have to deal with that everywhere.

And it's just the death of a thousand of those cuts. "But you can do JSON in my favored language!" Yeah, but it's just not quite as slick. "But we have an HTTP client in this language!" Yeah, but it's just not quite as slick. "But we have GPU support in my favorite language!" Yeah, but it's just not quite as slick. And you've ground all that "not quite as slick" deeply into your language over the decades.

So nobody has any one reason they can point at as to why Ada isn't going to come back, or why E isn't going to come back, or why Cyclone isn't going to come back... but the reality is that the sum total of the issues is more than sufficient to prevent it from ever happening, and what's going to happen is that a more modern language is going to get the support.

Advocates also try to "defeat in detail", in analogy to the military technique, arguments to the contrary, but the problem is that while that may satisfy the advocate, everyone else is still left looking at a metaphorical battlefield that has an exhausting number of "battles" over basic deficiencies, and being "defeated in detail" by an advocate just translates into exhaustion with the underlying language anyhow.

It probably isn't a "the issue" with Ada/SPARK. It's a horde of small little things, the sum total of which is too much hassle for anyone to want to deal with.


Why are you using past tense for Ada? Adacore is still doing business and Ada 202x will be standardized this year or next.

I’m also not sure why “nice JSON libraries” is a hard requirement for kernel code. Also, see JWX.


Why past tense? Because it's dead as languages go. Defining "death" as "zero users" is not the definition most people use, because it's not useful; things are clearly dead before that point. Ada doesn't have a bright future. It isn't going to rise from the ashes and have everyone agree that yes, it was right all along, and everybody with 10 years of experience on their resume is suddenly in hot demand as the crowds give them ticker tape parades for having been right all along. That never happens. It may well klunk along for more decades... but it's dead.

What does happen with such languages is that someone comes along and creates some sort of new language taking the best of the old language and mixing it with more modern stuff. I personally think there's an Ada-like niche waiting to be exploited there, probably something that also takes some lessons from Haskell. (Haskell is still alive, but IMHO headed toward this sort of death right now, and really could use this sort of revival itself.) An example of this is Elixir... Erlang would probably also be something I'd be calling "dead" if it weren't for Elixir.

(Ironically, often the biggest partisans of the base language are the last to know about the revival language because every deviation from the base language is seen as a flaw from the One True Answer....)


Ada should be considered dead because it doesn't have an npm full of super[1] useful[2] libraries?

> That never happens. It may well klunk along for more decades... but it's dead

Ada is used in 2021 in the exact same places that Ada was used in 2011 (and 2001 before it), if not more. Ada was designed for a specific purpose, one that it still fulfills very well. People have been claiming Ada was dead for decades, yet still it miraculously lives on. Despite Rust, despite Haskell, despite an algal bloom of "formally verifiable subsets of C", and so on.

> What does happen with such languages is that someone comes along and...

Yet... This just... hasn't happened. A successor language more suitable for safety-critical development has just simply not appeared. The manufacturers using Ada —a subset of which can be seen at this[3] link— ostensibly don't seem convinced that any of Ada's many alternatives offer a more appropriate solution for developing safety-critical software.

> I personally think there's an Ada-like niche waiting to be exploited...

The Ada-like niche you describe is, believe it or not, currently occupied by Ada. Ridiculous as this may sound, Ada is actually pretty good at fulfilling its intended function. What needs to change? Ada is not designed to fill the role of Node.js, Java, or C#. It's designed for safety-critical embedded systems, which is where you're most likely to find it today.

> Ironically, often the biggest partisans of the base language are the last to know about the revival language because every deviation from the base language is seen as a flaw from the One True Answer...

Does this apply to your thinking too, or just the proponents of Ada, and other languages you don't like? Are the Rust users just blind to the benefits of Ada? Are you susceptible to the same cognitive blind-spot you accuse us of having?

[1] https://www.npmjs.com/package/pad-left

[2] https://www.npmjs.com/package/is-array

[3] https://www.adacore.com/company/our-customers


> Ada should be considered dead because it doesn't have an npm full of super[1] useful[2] libraries?

Wait! There is Alire[1][2]! There is a list of "crates", you can easily include them in your project, and so forth. This is pretty much the same as https://crates.io or https://pkg.go.dev, which means people really should throw their misconceptions away and actually do some research before concluding that the language (Ada) is dead. A couple of quick searches quickly tell me that Ada is pretty much alive, with all of these "modern libraries with modern solutions", and full-on safety without performance loss.

This comment is intended for the person you replied to. :)

[1] https://alire.ada.dev/

[2] https://blog.adacore.com/first-beta-release-of-alire-the-pac...


So dead "because I said so" in essence?


Very few users over a long period of time means very few are working to keep the language up to date.

Too long of this pattern and it becomes much more trouble than it's worth to use on a business level.


Having 7 compiler vendors happily in business, with customers who actually pay for compilers, and an upcoming language revision seems healthy enough.


No more than you're saying it's not dead "because I said so".

They gave a bunch of reasons.


Yeah. "Not just quite as slick". What does it even mean? Ada/SPARK has everything you would find in a modern language... and more. People hold lots of misconceptions over the language, that is why I often bring it up and talk about it. That is why I often provide links to stuff people would easily find if they were to look stuff up about the language. As I said in my other comment, it even has a package manager: https://alire.ada.dev/.

Now, I know that there is a trendy view of "the more the better", and it does not have as many libraries there, but that has to do with the fact that it is not yet widespread among Ada users. Alire is really new.


Not sure what you mean about Haskell. Haskell is seeing more activity than it has ever seen!


What kind of scalable approach are you describing? Rust is not formally verifiable. I'm aware that there are projects attempting to solve this problem; however, as I understand it, you can't currently create verifiable production code.


> focusing purely on type- and memory-safety, not conformance to arbitrary specifications

What are types if not arbitrary specifications? Are you suggesting that formally verified software is not memory safe? What on earth does "scalable" even mean? A verified program cannot scale? If not, why not?


Types are not 'arbitrary' in any real sense. Type systems can be described as a sort of formal verification that leverages and is closely aligned with the syntactic structure of actual code; thus type systems can preserve compositionality, but they don't generally support non-local reasoning, at least not in a very elegant way. This is exactly what can make them potentially more scalable than other sorts of FV, hence more useful for programming "in the large".


Types that can be defined by users are, necessarily, part of a language's syntax, so saying that they are aligned with the syntax is just a tautology. Equating user-defined types with formal verification is just nonsense. C has types. Python has types. Rust has types. That the program does what you think it should do does not logically follow from the mere existence of types. I have no idea what "in the large" is supposed to mean. I feel like GPT-3 is arguing with me.


Maybe I can try to explain the parent comment's point a different way. When saying that Rust works on types and not on arbitrary specifications, I think he is saying that Rust is more limited than languages that support arbitrary specifications. However, by making this tradeoff it achieves a reasonable degree of safety without incurring a ton of overhead.

I believe the comment that types are aligned with the syntax is meant as a contrast to other languages which include specifications written in a format substantially different from or fully removed from the implementation. This can reduce the friction of using type-based verification when compared to formal verification capable of describing an arbitrarily complicated spec.

When discussing the scalability of types, I think he is saying that because types are coupled to the implementation, and don't support non-local reasoning, it is less likely that you will run into issues with type checking as you try to compose many small components into a larger program. In contrast, my impression is that with full formal verification, it can become extremely difficult to properly verify a large system.

Regarding your comparison to C and Python, I think it's clear that the parent was comparing the specific type system and borrow checker that Rust provides vs. formal verification, not making a statement about the general concept of a type system. In particular, I don't think it's reasonable to assume he was saying the existence of types provides any sort of safety. Rather, it's clear he was saying the use of a powerful type system (such as that found in Rust or Haskell) to implement a limited specification of the program's functionality can provide a degree of safety.
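As a toy illustration (my own example, not any real library's API) of the kind of limited, locally checked property he's talking about: a connection type where calling send() before connect() is a compile error, and the check never requires whole-program reasoning.

    use std::marker::PhantomData;

    struct Closed;
    struct Open;

    // The type parameter records which state the connection is in.
    struct Conn<State> { addr: String, _state: PhantomData<State> }

    impl Conn<Closed> {
        fn new(addr: &str) -> Self {
            Conn { addr: addr.to_string(), _state: PhantomData }
        }
        // Consuming self means the Closed handle can't be reused afterwards.
        fn connect(self) -> Conn<Open> {
            Conn { addr: self.addr, _state: PhantomData }
        }
    }

    impl Conn<Open> {
        fn send(&self, msg: &str) { println!("to {}: {}", self.addr, msg); }
    }

    fn main() {
        let c = Conn::new("10.0.0.1");
        // c.send("hi");        // compile error: no `send` on Conn<Closed>
        let c = c.connect();
        c.send("hi");           // fine; the check is purely local to each call site
    }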


https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...

"In other words, the Curry–Howard correspondence is the observation that two families of seemingly unrelated formalisms—namely, the proof systems on one hand, and the models of computation on the other—are in fact the same kind of mathematical objects.

If one abstracts on the peculiarities of either formalism, the following generalization arises: a proof is a program, and the formula it proves is the type for the program."
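A one-liner that makes the correspondence concrete, written here in Lean (any proof assistant would show the same thing): the program and the proof below are literally the same term.

    -- The K combinator, read as a program, is also a proof of A → (B → A).
    example (A B : Prop) : A → (B → A) :=
      fun a _b => a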


I believe what zozbot234 is saying is that Rust focuses on a specific subset of formal verification, and on trying to make a language that enforces that subset in a way that is still somewhat ergonomic.

I'm assuming "scalable" in this case is referring to the fact that fully verifying a piece of software currently takes a large amount of effort. Rust's approach is more "scalable" in the sense that they decided to focus primarily on memory safety, and punted on other forms of verification in the hope that it would be more ergonomic/require less effort.


> I've never read anything to make me think that Rust is a safer alternative than any of these options; the guarantees of the language are just assurances of its advocates.

Note that some formal verification work has been done on the guarantees that the borrow checker provides (which led to finding and fixing an edge-case bug), so there is a little more concrete evidence than just the "assurances of its advocates".

See http://plv.mpi-sws.org/rustbelt/ for more information, also the verification working group: https://rust-lang-nursery.github.io/wg-verification/


I don't think that the mere existence of a "working group" or grant-funded research positions for verifying rustc or a subset thereof is the same as having a verified compiler. That's certainly not a strong argument against using rustc for safety over one of the many verified compilers that actually exist. At least, I didn't see in the RustBelt list of papers anything which appeared to indicate that a verified compiler or language subset does in fact exist. Furthermore, the fact that the proposed patches rely on the nightly compiler and a special variant of Rust's core, which can only be enforced by special feature tags, seems to suggest that Rust-Linux is not being written in this hypothetical, future compiler.


> At least, I didn't see in the "rustbelt" list of papers anything which appeared to indicate that a verified compiler or language subset do in fact exist.

You might want to look harder, because there is certainly a verified language subset (anything that can be written as LambdaRust, which covers a large and substantial swathe of the language).

If you're looking for a formally verified, optimizing compiler for any realistic language, particularly a low level language, you have pretty much one option: CompCert (I'm not sure where this large group of verified compilers you claim exists lives, but I'd love to see them). As someone who would love a verified Rust toolchain, I can't help but notice that very few safety-critical systems actually use CompCert in practice, which makes me a bit suspicious of claims that this is something Rust needs to be competitive in this arena.

Finally, I'll point out that verified compilers like CompCert only promise to translate correct C to correct machine code; they do not necessarily aid you in coming up with correct C in the first place. Miscompilations certainly happen, but not nearly as often as undefined behavior in the source program. So asking why people would use Rust when verified compilers exist for other languages is sort of missing the point.


I think CompCert isn't widely used because of its license. GNAT is also GPL. Everyone lost their minds over Oracle charging a nominal sum for their compiler. I don't think Rust would have its corporate backing if it had similar licensing.

Also, there is only undefined behavior in Rust. The compiler's behavior can change at any time, for any reason, whenever somebody with commit rights thinks that it should. You cannot begin to assert provenance about something that can change at any time.


> I think Compcert isn’t widely used because of its license.

I somewhat doubt this, but even if it is true, the fact remains that people are not actually using it for the vast majority of mission-critical software. This suggests that the elimination of miscompilation bugs, which is what you get from formal verification of a compiler, is not high on their list of priorities.

> Also, there is only undefined behavior in rust. The compilers behavior can change at any time for any reason any time somebody with commit rights thinks that it should.

This is certainly not true of Rust in any practical sense.

I am not saying Rust doesn't need a certified version of its compiler that people can legally rely on for stuff like this (as opposed to practically). I am saying that this objection has nothing to do with the existence of a formally verified compiler, which is also not in practice used to compile mission-critical C programs. I think you are confusing certification of a compiler for mission-critical software with formal verification of the language's compiler, semantics (including type safety of the safe subset--a proof that would not go through for C, as it is explicitly unsafe), and standard library.


It has not been done yet, it is a work in progress, but it is being done. https://ferrous-systems.com/blog/sealed-rust-the-pitch/


My point is that solicitations for grant money do not provide the assurance that products that actually exist do.


Sure, I am here to state the current state of the world and point to the most up-to-date way to find out what the status is, that's it.


So... it hasn't been done


That is what I said, yes.

(Though, as pointed out by a sibling, there is a subset that is.)


Static code analysis is not enough for C: https://t.co/Pbo4iHqEub



Apologies. I thought I pasted the original link but I was wrong. Thanks for fixing it.


Formal verification techniques don't scale, not even to your average CRUD app, let alone to OS scale.


In a number of ways I'd argue that FV of an average CRUD app is a very hard problem, for two reasons:

- the requirements generally change quite frequently

- the requirements are generally a long way from being "formally verifiable"

Where I, personally, have found the most value so far using FV techniques is from modelling "tricky" things whose requirements are quite solid. A few examples from my work:

- A state machine for managing LoRa communication. The requirements in the LoRa spec are pretty good. I took those requirements and designed a state machine I could implement on a low-power device and modelled that SM in TLA+ to verify that there were no deadlocks/attempted transitions to invalid states.

- A state machine for managing the Bluetooth communication between an Android app, a Bluetooth-Iridium (satellite) bridge, and the server on the other side of the Iridium link. Similar to the LoRa case, I used TLA+ to ensure that there were no deadlocks in my proposed SM (and uhh made a number of corrections as I went). Ultimately, due to the design of the BT/Iridium gateway, the exercise resulted in the conclusion that it wasn't actually possible for this process to be 100% reliable but it was possible to detect and retry the edge cases.

- Modelling a somewhat awkward email/SMS invite flow for a mobile app. You could get an invitation over email or SMS, and your account transitioned through a few different states as both your email address and your phone number were verified. Modelling this helped ensure that your account couldn't get into a state where no forward progress was possible or into a state where you could use a half-completed account.
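To give a flavor of what that kind of checking involves, here's a toy, hand-rolled sketch in Rust of the core of what a model checker does (the states and transitions are invented for illustration, nothing like the real LoRa machine): enumerate every reachable state and flag any non-terminal state with no way out.

    use std::collections::{HashSet, VecDeque};

    #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
    enum State { Idle, Sending, AwaitAck, Backoff, Done }

    // The transition relation of a made-up send/ack protocol.
    fn next(s: State) -> Vec<State> {
        use State::*;
        match s {
            Idle => vec![Sending],
            Sending => vec![AwaitAck],
            AwaitAck => vec![Done, Backoff],   // ack received, or timeout
            Backoff => vec![Sending],          // retry
            Done => vec![],                    // terminal by design
        }
    }

    fn main() {
        // Breadth-first exploration of every reachable state.
        let mut seen = HashSet::new();
        let mut queue = VecDeque::from([State::Idle]);
        while let Some(s) = queue.pop_front() {
            if !seen.insert(s) { continue; }
            let succ = next(s);
            if succ.is_empty() && s != State::Done {
                println!("deadlock at {:?}", s);   // a real checker also verifies invariants
            }
            queue.extend(succ);
        }
        println!("explored {} states", seen.len());
    }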


Thanks for the informative answer!

Just a note, I (personally) don’t consider TLA+ a FV technique — it is a really good technique that actually scales much better than formal verification programs like Agda or Coq.


It's formal verification of an abstract machine as opposed to running code. The problem is that formal methods as a field doesn't yet have a rich enough vocabulary to distinguish between its flavors. I use "code verification" and "design verification", which is a little more specific, but has its own issues.

Is TLA+ a sandwich?


> TLA+ is a formal specification language developed to design, model, document, and verify concurrent systems.


But it only proves a model, not the actual code — that’s a major distinction.


You can't write a bug, nor can you write a test, if you don't even have a proper spec. But that happens a lot when requirements are just ambiguous.


https://github.com/Componolit

Let us see what they have, using formal verification (Ada/SPARK).

- JWX is a library for handling JSON data and more. It is implemented in the SPARK programming language and has been proven to contain no runtime errors. As a result, JWX is particularly suited for processing untrusted information.

- The RecordFlux specification language is a domain-specific language to formally specify message formats of existing real-world binary protocols. Its syntax is inspired by Ada. A detailed description of the language elements can be found in the Language Reference.

- Formally verified, bounded-stack XML library

This is just Componolit.

Let us check the F* language:

- F* (pronounced F star) is a general-purpose functional programming language with effects aimed at program verification. It puts together the automation of an SMT-backed deductive verification tool with the expressive power of a proof assistant based on dependent types. After verification, F* programs can be extracted to efficient OCaml, F#, C, WASM, or ASM code. This enables verifying the functional correctness and security of realistic applications. The main ongoing use case of F* is building a verified, drop-in replacement for the whole HTTPS stack in Project Everest. This includes verified implementations of TLS 1.2 and 1.3 and of the underlying cryptographic primitives.

---

Given the above, what exactly do you mean that it does not scale?


Whoops, I completely forgot about libsparkcrypto (https://github.com/Componolit/libsparkcrypto) which is a "formally verified implementation of several widely used cryptographic algorithms using the SPARK 2014 programming language and toolset. For the complete library proofs of the absence of run-time errors like type range violations, division by zero and numerical overflows are available. Some of its subprograms include proofs of partial correctness.".

Features:

* AES-128, AES-192, AES-256

* AES-CBC (all supported AES modes)

* SHA-1

* HMAC-SHA1

* SHA-256, SHA-384, SHA-512

* HMAC-SHA-256, HMAC-SHA-384, HMAC-SHA-512

* PRF-HMAC-SHA-256, PRF-HMAC-SHA-384, PRF-HMAC-SHA-512

* RIPEMD-160

* HMAC-RIPEMD-160

* ECDSA, ECGDSA

It is pretty awesome. :)


Looking at the examples you've provided, I'm pretty sure the Linux kernel's codebase dwarfs their combined codebases into insignificance. Since it is very hard to not have NP-hard asymptotic complexity when formally verifying a codebase as large and as demanding (in terms of performance, etc) as the Linux kernel, I would expect that if you tried to rewrite Linux in the languages you've mentioned and formally verify the rewrite, you will be able to keep the entire AWS cloud (globally) very busy for a very, very long time. On the other hand, if you were to do the rewrite in Rust and use some hypothetical formally verified Rust compiler to type check it (which is mathematically equivalent to such a formal verification in, say, F* limited to only verifying certain properties), it'll probably type check in minutes or even seconds on the laptop I'm currently writing this reply on.

For a less hypothetical example: RedoxOS has a codebase that dwarfs seL4's (but is still much smaller than Linux in turn), yet it type checks with much less compute cost than it takes to formally verify seL4. Or heck, RedLeaf actually formally verifies in a tiny fraction of the time it takes to formally verify seL4, because the Rust type checker not only significantly reduces the amount of requirements that still need to be formally verified but also reduces the complexity of verifying those requirements to linear in the codebase size, thanks to the restrictions that the Rust type system and module system put on valid Rust programs (with a little bit of care when writing the code). Of course, until you have a certified Rust compiler there might still be miscompiles of RedLeaf, and until all the features of Rust used in the Linux kernel are formally specified, we cannot do the same with the Linux kernel as for RedLeaf even if somehow all of Linux got rewritten in Rust. But in Rust at least it is conceivably possible given sufficient resources and sufficient time. I very much doubt it will ever be possible without help from a sufficiently expressive and practical type system.

Oh yeah. Then there's another problem. Most (all?) of the examples you've mentioned have requirements that are well specified and pretty much set in stone. The requirements for the Linux kernel... aren't quite as obliging.


seL4 is an OS that is formally verified.

Its security parameters are well-defined enough that you can use it to launch isolated VMs with... less secure OSes that are sandboxed from each other.


I was under the impression that seL4 exists.


That's true, but as a microkernel, seL4 has a very limited scope. It's not in the same ballpark as, say, the Linux kernel. It's more like a hypervisor. [0]

From a quick google, seL4 has around 10,000 lines of C code, whereas the Linux kernel has around 30,000,000.

[0] https://sel4.systems/About/seL4-whitepaper.pdf


Kinda. https://news.ycombinator.com/item?id=27232401

I think CSIRO disbanding the team was a huge mistake, but in any case they weren't producing the kind of success metrics that sustained their funding.


I know of at least one other company that is picking it up.


Testing is hard. And (very) expensive.


Formal verification is even harder. And even more (very) expensive.


ROTFL. Microsoft is interested in security? Maybe they should start with their own OS. (SolarWinds, WannaCry, ransomware)


'except' Who? ARM?

Well [0]

[0] https://github.com/ARMmbed/mbed-os


Also of note, ARM toyed with the idea of using a Linux distribution as an alternative to Mbed, but it was a short-lived affair; it is already dead, having lasted only one release.

https://os.mbed.com/docs/mbed-linux-os/deprecated-product/we...


Very true. Fuchsia was also initially prototyped with a Linux kernel and, well, the rest is 'history'.


Note that Mbed Linux was created after Mbed already existed; they just toyed with the idea of offering the Mbed userspace on top of Linux.


Linux isn't an OS, and every one of these companies relies on it for mission-critical infrastructure.


Sounds like someone pissed off a senior manager by speaking the truth in an email with a sensitive cc list.

Hey OP, shed light on the truth. Don't shy away from doing the right thing on the way out.

Buy some shares and now you’re a shareholder.


> Sounds like someone pissed off a senior manager by speaking the truth in an email with a sensitive cc list.

> Hey OP shed light on the truth. Don’t shy away from doing the right thing on the way out.

Uhh...what? Did we read the same blog post? This is an oddly specific fantasy you've conjured.


Probably they have direct experience of it?

Google has a variety of mechanisms that can be used to effectively get rid of people without them understanding why. For instance they allow anyone to give feedback on anyone else during promo cases which can sink promotions or entire careers, and which isn't merely anonymous but completely confidential. That's been mentioned elsewhere on this thread. The result is that people can appear to be doing very well with glowing feedback, happy managers, successful projects and then their career suddenly goes off the rails without that person ever being told why, for no better reason than a single person didn't like them. This blog post gives off strong vibes of this sort of thing, given the vagueness about why he had to leave.

This problem interacts badly with another problem Google has: it started with a culture of extreme internal openness compared to most organizations. For example almost all internal mailing lists were open, almost all internal code was open to all engineers, people were encouraged to send feedback and resolve problems directly across teams instead of always going through management and so on. Hence the mono-repo so large they had to reimplement Perforce to keep it running.

In an environment where everyone is culturally very similar, has a lot of experience of the world outside of Google to keep them grounded, a clear and motivating goal and minimal politics, this can just about work. It worked OK for the first 6-7 years or so I was there. It ... stopped working. The combination of very open communication, an increasingly radicalized workforce (due to their heavy dependence post 2010 on recruiting new grads to keep the hiring pipelines full), and formal systems for secretly sinking people's careers = a lot of very messy problems in which loyal workers get screwed.

The vast majority never talk about it in public, at least not under their own name, but there are a lot of Evans out there.


Great, one more thing to add to my never-ending list of things I want to get good at.

