Game-theoretic modelling commonly identifies early escalation followed by slower de-escalation as a winning strategy. That fits the strong-arm approach to politics of exercising power to expand power, which the current US administration seems to follow.
Thus, I wouldn't think that decoupling or falling off a cliff is the intended goal.
No matter how many times Trump uses the strategy he literally wrote a book about, the reaction to the initial escalation is always the same. It is remarkable.
The problem here is that the excessive focus on secondary issues creates the perception of a problem-solving deficit, which reduces support for and the legitimacy of the political system.
It would be nice to focus on solving more existential problems, of which there are plenty.
Seems to me that everyone is focused on the technical merits, without appropriately weighing the effort it takes maintainers to learn a new programming language/toolchain/ecosystem.
Mastering a new programming language to a degree that makes one a competent maintainer is nothing to sneeze at, and some maintainers might be unwilling to do so based on personal interests/motivation, which I'd consider a legitimate position.
I think it's important to acknowledge that not everyone may feel comfortable talking about their lack of competence or disinterest.
This is exactly the position Christoph Hellwig took in the original email chain that kicked off the current round of drama: https://lore.kernel.org/rust-for-linux/20250131075751.GA1672.... I think it's fair to say that this position is getting plenty of attention.
The opposing view is that drivers written in Rust using effectively foolproof APIs require far less maintainer effort to review. Yes, it might be annoying for Christoph to have to document & explain the precise semantics of his APIs and let a Rust contributor know when something changes, but there is a potential savings of maintainer time down the line across dozens of different drivers.
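To make the "foolproof API" point concrete, here's a minimal sketch of the kind of thing a Rust abstraction can do. All names here are made up for illustration, not real kernel bindings: the idea is that the C API's rules (map before use, always unmap, never use after unmap) become properties the compiler checks, so a maintainer reviews the one wrapper instead of re-checking every driver.

```rust
// Made-up illustration, not real kernel bindings: the C API's rules
// ("map before use, always unmap, never use after unmap") become
// properties the compiler enforces.
struct DmaMapping {
    addr: usize,
}

impl DmaMapping {
    // The only way to obtain a mapping is through this constructor,
    // so "use before map" is impossible by construction.
    fn map(buf: &[u8]) -> DmaMapping {
        DmaMapping { addr: buf.as_ptr() as usize }
    }

    fn addr(&self) -> usize {
        self.addr
    }
}

// Drop runs when the mapping goes out of scope, so "forgot to unmap"
// and "double unmap" are impossible by construction.
impl Drop for DmaMapping {
    fn drop(&mut self) {
        // ...this is where the C unmap routine would be called...
    }
}

fn main() {
    let buf = [0u8; 64];
    let mapping = DmaMapping::map(&buf);
    println!("mapped at {:#x}", mapping.addr());
    // mapping is unmapped automatically here; any use after this
    // point is a compile-time error, not a review-time catch.
}
```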
> Yes, it might be annoying for Christoph to have to document & explain the precise semantics of his APIs and let a Rust contributor know when something changes,
Doesn't he need to do that anyway for every user of his code?
I guess the point is that he is able to review the code of every driver written in C using his API, but he can't review the Rust interface himself.
Acknowledged, but said maintainers need to learn to cope with the relentless advance of technology. Any software engineer with a long career needs to be able to do this. New technology comes along and you have to adapt, or you become a fossil.
It's totally fine on a personal level if you don't want to adapt, but you have to accept that it's going to limit your professional options. I'm personally pretty surly about learning modern web crap like k18s, but in my areas of expertise, I have a multi-decade career because I'm flexible with languages and tools. I expect that if AI can ever do what I do, my career will be over and my options will be limited.
To play devil's advocate: for every technology that comes along with an advancement, a handful come along with broken promises. People love to make fun of JavaScript for that, but the only difference there is the cadence. Senior developers know this, and they know that the time and energy needed to separate the wheat from the chaff is exhausting. The advancements are not relentless; the churn is.
That being said, Rust comes with technical advances and also with enough of a community that the non-technical requirements are already met. There should be enough evidence for rational but stubborn people to accept it as a way forward.
Totally tangential, but since I just recently found this out: character-number-character abbreviations like k8s, a16z, and a11y mean that the 8/16/11 characters in the middle have been replaced by their count. I was wondering why kubernetes would be such a long word when you wrote k18s. Maybe it was just a typo on your end, and this system is totally obvious.
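If it helps, the rule is mechanical enough to fit in a few lines; a toy sketch:

```rust
// Toy sketch of the rule described above: keep the first and last
// characters, replace everything in between with its count.
fn numeronym(word: &str) -> String {
    let chars: Vec<char> = word.chars().collect();
    if chars.len() <= 3 {
        return word.to_string();
    }
    format!("{}{}{}", chars[0], chars.len() - 2, chars[chars.len() - 1])
}

fn main() {
    assert_eq!(numeronym("kubernetes"), "k8s");
    assert_eq!(numeronym("accessibility"), "a11y");
}
```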
And sadly, those are going to die out eventually, so the faster we get there, the less potential for something breaking in a way that nobody would be able to figure out.
One benefit they list is storing associated metadata in the database (specifically, different types of hashes are mentioned), which is not so easy with a file system.
I think the bigger benefit, though, is the increased read performance on many small files (saving system-call overhead). To what extent that applies to static files that a smart server might keep in cache, I don't know.
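As a rough sketch of the metadata point (using the rusqlite crate; the schema and names are made up for illustration), the content and its hashes live in one row, so a single query returns both and they can't drift apart the way a separate file on disk can:

```rust
use rusqlite::{params, Connection, Result};

// Rough sketch (made-up schema): the file body and its hashes live in
// one row, so one query returns both and the metadata can't drift out
// of sync with the content the way a separate file on disk can.
fn main() -> Result<()> {
    let conn = Connection::open("site.db")?;
    conn.execute(
        "CREATE TABLE IF NOT EXISTS files (
            path   TEXT PRIMARY KEY,
            body   BLOB NOT NULL,
            sha256 TEXT NOT NULL
        )",
        [],
    )?;
    conn.execute(
        "INSERT OR REPLACE INTO files (path, body, sha256) VALUES (?1, ?2, ?3)",
        params!["/index.html", &b"<h1>hello</h1>"[..], "placeholder-sha256"],
    )?;
    // One query returns the content and its metadata together.
    let (body, sha): (Vec<u8>, String) = conn.query_row(
        "SELECT body, sha256 FROM files WHERE path = ?1",
        params!["/index.html"],
        |row| Ok((row.get(0)?, row.get(1)?)),
    )?;
    println!("{} bytes, sha256 {}", body.len(), sha);
    Ok(())
}
```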
I'm not sure what associated metadata is in this context, but www/path/to/filename.jpg and www/path/to/filename.jpg.json would work and be very file-y. I take their/your point that it isn't directly integrated, though.
And even without WAL (which you should absolutely be using if you're serving web content with SQLite), the lock for most writes lasts for a tiny fraction of a second.
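For reference, WAL is a one-time pragma that persists in the database file; a minimal sketch with rusqlite (the busy_timeout is optional):

```rust
use rusqlite::Connection;

// Minimal sketch: journal_mode=WAL is set per database file and
// persists, so one pragma at deploy time is enough. Readers then see
// a consistent snapshot while a writer is active instead of blocking.
fn main() -> rusqlite::Result<()> {
    let conn = Connection::open("site.db")?;
    conn.pragma_update(None, "journal_mode", "WAL")?;
    // Optional: wait up to 5s for a lock instead of failing immediately.
    conn.busy_timeout(std::time::Duration::from_secs(5))?;
    Ok(())
}
```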
For small writes, sure, but that's still a dramatically larger pause than simply copying a few files into a directory, which pauses nothing at all. And if the website update is hundreds of large files, then the SQLite write is going to be large as well. It then comes down to: is it faster to copy 200M of files to a filesystem, or to write 200M of new data to BLOBs in a single monolithic SQLite file? I'd bet on the former in that race.
I might be misremembering, but if you're using a transaction like in the article with the rollback journal mode rather than WAL, won't SQLite actually hold the lock on the database for the entire time until the transaction is committed? That could be a substantial amount of time if you're writing lots of blobs like in the article, even if each individual blob doesn't take that long.
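If I have it right, that's roughly the scenario below (rusqlite again; paths and sizes are made up). In rollback-journal mode, once the dirty pages spill the page cache, SQLite has to take an exclusive lock and hold it until COMMIT, so readers can be blocked for most of the transaction; in WAL mode they'd keep reading a snapshot throughout:

```rust
use rusqlite::{params, Connection, Result};

// Sketch of the scenario in question: many blob writes in one
// transaction. In rollback-journal mode, once dirty pages spill the
// page cache SQLite must take an exclusive lock and hold it until
// COMMIT, so readers can be blocked for most of the transaction.
// In WAL mode, readers would keep reading a snapshot throughout.
fn main() -> Result<()> {
    let mut conn = Connection::open("site.db")?;
    conn.execute(
        "CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, body BLOB)",
        [],
    )?;
    let tx = conn.transaction()?;
    for i in 0..1000 {
        // 1000 x 1 MiB far exceeds the default page cache, which is
        // what forces the early exclusive lock in rollback mode.
        tx.execute(
            "INSERT OR REPLACE INTO files (path, body) VALUES (?1, ?2)",
            params![format!("/blob/{i}"), vec![0u8; 1 << 20]],
        )?;
    }
    tx.commit()?; // the lock is released here
    Ok(())
}
```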