At some level this is just concern trolling. There is nothing the Rust developers could possibly do or say that would alleviate the concern you've just expressed. You are asking for something that is impossible.
What could they possibly "deliver" beyond a strong commitment to fix the code in a timely manner themselves?
It is not concern trolling. It is a harsh disagreement.
Some kernel developers really do feel that any Rust in the kernel will eventually mean that Rust gets accepted as a kernel language, that they will eventually have to support it, and that the only way to prevent this is to stop any Rust development right now.
And yes, there's nothing that the R4L group can offer to get around that belief. There isn't any compromise on this. Either Rust is tried, then spreads, then is accepted, or it's snuffed out right now.
A big mistake by R4L people is seeing anti-Rust arguments as "unfair" and "nontechnical." But it is a highly technical argument about the health of the project (though sometimes wrapped in abusive language). Rust is very scary, and calling out scared people as being unfair is not effective.
There is nothing to deliver that would satisfy this argument. Pretending like the disagreement is about a failure of the R4L folks to do "enough" when in fact there is nothing they could do is toxic behavior.
If you go back digging in the LKML archives, Christoph's initial response to Rust was more of a "let's prove it can be useful first with some drivers".
That has now been done. People (particularly Marcan) spent thousands of hours writing complex and highly functional drivers in Rust and proved out the viability, and now the goalposts are being moved.
R4L people are allowed to get upset about people playing Lucy-with-the-football like this and wasting their f***ing time.
> There is nothing the Rust developers could possibly do or say that would alleviate the concern you've just expressed.
They could do exactly what Ted Ts'o suggested in his email [1] that Marcan cited: They could integrate more into the existing kernel-development community, contribute to Linux in general, not just in relation to their pet projects, and over time earn trust that, when they make promises with long time horizons, they can actually keep them. Because, if they can't keep those promises, whoever lets their code into the kernel ends up having to keep their promises for them.
Many of them have, in fact, done all of those things, and have done them over a time horizon measured in years. Many of the R4L developers are paid by their employers specifically to work on R4L and can therefore be considered reasonably reliable and not drive-by contributors.
Many existing maintainers are not "general contributors" themselves.
It is unreasonable (and a recipe for long-term project failure) to expect every new contributor to spend years doing work they don't want to do (and are not paid to do) before trusting them to work on the things they do want (and are paid) to do.
Christoph refused to take onboard a new maintainer. The fight from last August was about subsystem devs refusing to document the precise semantics of their C APIs. These are signs of fief-building that would be equally dangerous to the long-term health of the project if Rust was not involved whatsoever.
I disagree. If you want to provide technical leadership by massively changing the organization and tooling of a huge project that has been around a long time, it should be absolutely mandatory to spend years building trust and doing work that you don't want to do.
That's just how programming on teams and trust and teamwork actually works in the real world. Especially on a deadly serious not-hobby project like the kernel.
Sometimes you are gonna have to do work that doesn't excite you. That's life doing professional programming.
Everything Ted Ts'o recommended is just common-sense teamwork-101 stuff, and it's generally good advice for programmers in their careers. The inability of Rust people to follow it will only hurt them and doom their desire to be accepted by larger, more important projects in the long run. Programming on a team is a social affair, and pretending you don't have to play by the rules because you have such great technical leadership is arrogant.
> It is unreasonable (and a recipe for long-term project failure) to expect every new contributor to spend years doing work they don't want to do (and are not paid to do) before trusting them to work on the things they do want (and are paid) to do.
It is absolutely reasonable if the work they want to do is to refactor the entire project.
It's like saying to people that they cannot add, for example, an NPU subsystem to the kernel because they should first work for ten years in other subsystems, like filesystems, which they know little about.
Sounds absurd? Just replace the subsystems above with C/Rust and the rest is the same.
The folks who maintain Rust are responsible for the Rust code; if they don't deliver what is needed, it is their Rust subsystem that will fail, not the C codebase, so it's in their own interest to keep things smooth.
My feeling is that some people think C is the elite language and Rust is just something kids like to play with nowadays; they do not want to learn why some folks like that language or what it is even about.
I think the same dynamic shows up when Linux people hate systemd: they usually have the single argument that it's against the Unix spirit, and no other arguments, without understanding why others might like that init system.
> It's like saying to people that they cannot add, for example, an NPU subsystem to the kernel because they should first work for ten years in other subsystems, like filesystems, which they know little about. Sounds absurd? Just replace the subsystems above with C/Rust and the rest is the same.
No it's not. What you're missing is that if the Rust folks are unable, for whatever reasons, to keep their promises, it falls on the up-tree maintainers to maintain their code. Which, being Rust code, implies that the existing maintainers will have to know Rust. Which they don't. Which makes it very expensive for them to keep those broken promises.
To look at it another way, the existing maintainers probably have a little formula like this in their heads:
Expected(up-tree burden for accepting subsystem X) = Probability(X's advocates can't keep their long-term promises) * Expected(cost of maintaining X for existing up-tree maintainers).
For any subsystem X that's based on Rust, the second term on the right hand side of that equation will be unusually large because the existing up-tree maintainers aren't Rust programmers. Therefore, for any fixed level of burden that up-tree maintainers are willing to accept to take on a new subsystem, they must keep the first term correspondingly small and therefore will require stronger evidence that the subsystem's advocates can keep their promises if that subsystem is based on Rust.
In short, if you're advocating for a Rust subsystem to be included in Linux, you should expect a higher than usual evidence bar to be applied to your promises to soak up any toil generated by the inclusion of your subsystem. It’s completely sensible.
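The expected-burden formula above can be made concrete with a toy calculation. This is a minimal sketch with purely hypothetical numbers (the function name and every value below are made up for illustration, not taken from anyone's actual estimates):

```python
# Toy illustration of the expected-burden reasoning above.
# All probabilities and costs are hypothetical, chosen only to show
# how the two terms on the right-hand side interact.

def expected_burden(p_broken_promises: float, cost_if_stuck: float) -> float:
    """Expected(up-tree burden) =
    Probability(advocates can't keep promises) * Expected(cost to maintainers)."""
    return p_broken_promises * cost_if_stuck

# A C subsystem: existing maintainers already know the language,
# so the fallback cost of maintaining abandoned code is modest.
c_burden = expected_burden(p_broken_promises=0.2, cost_if_stuck=10.0)

# A Rust subsystem with the same risk of abandonment: the fallback cost
# is much higher, because maintainers would first have to learn Rust.
rust_same_risk = expected_burden(p_broken_promises=0.2, cost_if_stuck=50.0)

# To bring the Rust burden back down to the C level, the probability term
# must shrink by the same factor the cost term grew -- i.e. the advocates
# must clear a correspondingly higher evidence bar.
rust_stronger_evidence = expected_burden(p_broken_promises=0.04, cost_if_stuck=50.0)

print(c_burden, rust_same_risk, rust_stronger_evidence)
```

With these made-up numbers, a Rust subsystem carrying the same abandonment risk imposes five times the expected burden, and only a five-fold stronger assurance of kept promises brings it back to parity with the C case.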
> What you're missing is that if the Rust folks are unable, for whatever reasons, to keep their promises, it falls on the up-tree maintainers to maintain their code.
But that's the thing, the deal was that existing maintainers do not need to maintain that code.
Their role is to just forward issues/breaking changes to rust maintainer in case those were omitted in CC.
You are using the same argument that was explained multiple times already in this thread: no one is forcing anybody to learn rust.
The point is that “the deal” assumes that the Rust folks will keep their promises for the long haul. Which kernel maintainers, who have witnessed similar promises fall flat, are not willing to trust at face value.
What if, in years to come, the R4L effort peters out? Who will keep their promises then? And what will it cost those people to keep those broken promises?
The existing kernel maintainers mostly believe that the answers to the questions are “we will get stuck with the burden” and “it will be very expensive since we are not Rust programmers.”
Isn't it the same as with support for old hardware? Alpha arch, intel itanium, floppy drives?
Those are all in a similar situation: there is no one to maintain them, as none of the maintainers have access to such hardware to even test whether it is working correctly.
From time to time something like that is discovered to have been broken for a long time without anyone noticing, and it is dropped from the kernel.
The same would happen to Rust if no one wanted to maintain it.
Rust for Linux is provided as an experiment, and if it doesn't gain traction it will be dropped, the same way curl dropped it.
The reason the maintainers can drop support for hardware nobody uses is that dropping support won't harm end users. The same cannot be expected of Rust in the kernel. The Rust for Linux folks, like most sensible programmers, intend to have impact. They are aiming to create abstractions and drivers that will deliver the benefits of Rust to users widely, eliminating classes of memory errors, data races, and logic bugs. Rust will not be limited to largely disposable parts of Linux. Once it reaches even a small degree of inclusion, it will be hard to remove without affecting end users substantially.
> You are using the same argument that was explained multiple times already in this thread: no one is forcing anybody to learn rust.
I think this sort of statement is what is setting the maintainers against the R4L campaigners.
In casual conversation, campaigners say "No one is being forced to learn Rust". In the official statements (see upthread where I made my previous reply) it's made very clear that the maintainers will be forced to learn Rust.
The official policy trumps any casual statement made while proselytising.
Repeating the casual statement while having a different policy comes across as very dishonest on the part of the campaigners when delivered to the maintainers.
The issue with systemd was that many people felt that it was pushed onto them while previously such things would just exist and got adopted slowly if people liked it and then actively adopted it. This model worked fine, e.g. there were many different window managers, editors, etc. and people just used what they liked. For init systems, distributions suddenly decided that only systemd is supported and left people who did not want it out in the cold. It is similar with Rust. It is not an offer, but something imposed onto people who have no interest in it (here: kernel maintainers).
If users of other init systems don't want to make the substantial investment in maintaining support for those other init systems, then their complaints aren't worth much.
To start, not resigning when things don't go their way. That tendency is doing a lot to make the claim of rust people saying they will handle the burden of rust code unbelievable.
The standard procedure is to maintain a fork/patchset that does what you want and you maintain it for years proving that you will do the work you committed to.
Once it’s been around long enough, it has a much better chance of being merged to main.
That has already been the case with Asahi Linux - for years. It exists as a series of forked packages.
The thing is, you do still have to present a light at the end of the tunnel. If, after years of time investment and proven commitment, you're still being fed a bunch of non-technical BS excuses and roadblocks, people are going to start getting real upset.
However, it may only get merged in by being conceptually re-thought and reimplemented, like the Linux USB or KGI projects back in the day.
The general pushback for changes in Linux are against large impactful changes. They want your code to be small fixes they can fully understand, or drivers that can be excluded from the build system if they start to crash or aren't updated to a new API change.
You can't take a years-maintained external codebase and necessarily convert it to an incremental stream of small patches and optional features for upstream maintainers, unless you knew to impose that sort of restriction on yourself as a downstream maintainer.