> A line of code does not depend on a million other things.
But it does! To quote my top-level comment:
> What about race conditions, null pointers indirectly propagated into functions that don't expect null, aliased pointers indirectly propagated into `restrict` functions, and the other non-local UB causes?
In other words: you set some pointer to NULL, this is OK in that part of your program, but then the value travels across layers, you've skipped a NULL check somewhere in one of those layers, NULL crosses that boundary and causes UB in a function that doesn't expect NULL. And then that UB itself also manifests in weird non-local effects!
Rust fixes this by making nullability (and many other things, such as thread-safety) an explicit type property that's visible and force-checked on every layer.
Although, I agree that things like macros and trait resolution ("overloading") can sometimes be hard to reason about. But this is offset by the fact that they are still deterministic and knowable (albeit complex).
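A minimal Rust sketch of the "force-checked on every layer" point; the names (`find_user`, `greeting`) are made up for illustration:

```rust
// Hypothetical layered lookup: nullability is part of the type,
// so every layer must handle the None case before using the value.
fn find_user(id: u32) -> Option<&'static str> {
    if id == 1 { Some("alice") } else { None }
}

fn greeting(id: u32) -> String {
    // Using the Option directly as a &str is a compile error;
    // we are forced to decide what happens when it's None.
    match find_user(id) {
        Some(name) => format!("hello, {name}"),
        None => String::from("unknown user"),
    }
}

fn main() {
    assert_eq!(greeting(1), "hello, alice");
    assert_eq!(greeting(2), "unknown user");
}
```

The None case can't silently cross a layer boundary the way a NULL pointer can in C; a skipped check simply doesn't compile.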
This is true to some degree, but not really that much in practice. When refactoring you just add assertions for NULL (and the effect of dereferencing a NULL in practice is a trap - that it is UB in the spec is completely irrelevant; in fact it helps, because it allows compilers to turn it into a trap without requiring it on weak platforms). Restrict is certainly dangerous, but it is also rarely used and a clear warning sign, comparable to `unsafe`. The practical issues in C are bounds checking and use-after-free. Bounds checking I usually refactor into safe code quickly. Use-after-free is the one area where Rust has a clear advantage.
I agree that in practice NULL checking is one of the easier problems. I used it because it's the most obvious and easy to understand. I can't claim which kinds of unsoundness in C code are more common and problematic in practice. But that's not even interesting, given that safe Rust (and other safe languages) solve every kind of unsoundness.
> in fact it helps, because it allows compilers to turn it into a trap without requiring it on weak platforms
The "shared xor mutable" rule in Rust also helps the compiler a lot. It basically allows it to automatically insert `restrict` everywhere. The resulting IR is easier to auto-vectorize and you don't need to micro-optimize so often (although, sometimes you do when it comes to eliminating bound checks or stack copies)
> Restrict is certainly dangerous, but also rarely used and a clear warning sign, compare it to "unsafe".
It's NOT a clear warning sign, compared to `unsafe`. To call an unsafe function, the caller needs to explicitly enter an `unsafe` block. But calling a `restrict` function looks just like any normal function call. It's easy to miss in a code review or when upgrading the library that provides the function. That's the problem with C and C++, really. The `unsafe` distinction is too useful to omit.
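To illustrate the opt-in: a toy unsafe function (`first_unchecked` is invented for this sketch) cannot be called like a normal one:

```rust
// An unsafe fn can't be called like a plain function; the caller must
// opt in with an `unsafe` block, which stands out in code review.
unsafe fn first_unchecked(v: &[i32]) -> i32 {
    // Caller promises v is non-empty; no bounds check is performed.
    unsafe { *v.get_unchecked(0) }
}

fn main() {
    let v = [7, 8, 9];
    // let x = first_unchecked(&v); // compile error: requires `unsafe`
    let x = unsafe { first_unchecked(&v) };
    assert_eq!(x, 7);
}
```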
You are not wrong, but also not really right. In practice, I never had a problem with "restrict". And this is my point about Rust being overengineered. While it solves real problems, you need to exaggerate the practical problems in C to justify its complexity.
> In practice, I never had a problem with "restrict".
That's exactly because it's too dangerous and the developers quickly learn to avoid it instead of using it where appropriate! Same with multithreading. C leaves a lot of optimization on the table by making the available tools too dangerous and forcing people to avoid them altogether.
Do you have any general, comprehensive benchmarks or statistics that would indicate the opposite? I would include one if I had one at hand, because that would be a stronger argument! But I'm not aware of such benchmarks. I would have to cherry-pick individual projects, and I don't want to.
I still claim that, as a general trend, Rust replacements are faster while also being less bug-prone and taking much less time to write. Another such example is ripgrep.
You're focusing purely on the problem-fixing aspects, but Rust's expressiveness is why I like it.
C can't match that. In C, you're basically acting as a human compiler, writing lots of code that could be generated if you used a more expressive language. Plus, as has been mentioned, Rust supports easy and safe refactoring better than any language outside of the Haskell/ML space.
The advantages of Rust are a package which includes safety, expressiveness, and refactoring support. You don't need to exaggerate anything for that package to make sense.
> This is true to some degree, but not really that much in practice. When refactoring you just add assertions for NULL (and the effect of dereferencing a NULL in practice is a trap - that it is UB in the spec is completely irrelevant.
This happens a lot in discussions about programming complexity. What you are doing is changing the original problem to a much simpler one.
Consider a parsing function parse(string) -> Option<Object>
This is the original problem, "Write a parsing function that may or may not return Object"
What a lot of people do is they sidetrack this problem and solve a much "simpler problem". They instead write parse(string) -> Object
Which "appears" to be simpler but when you probe further, they handwave the "Option" part to just, "well it just crashes and die".
This is the same problem with exceptions, a function "appears" to be simple: parse(string) -> Object but you don't see the myriads of exceptions that will get thrown by the function.
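A concrete sketch of the signature being discussed; the `Object` struct and the parsing logic are assumptions for illustration:

```rust
// The Option in the return type makes the "may fail" case visible in
// the signature, unlike a version that crashes or throws invisibly.
#[derive(Debug, PartialEq)]
struct Object {
    value: i64,
}

fn parse(s: &str) -> Option<Object> {
    s.trim().parse::<i64>().ok().map(|value| Object { value })
}

fn main() {
    assert_eq!(parse("42"), Some(Object { value: 42 }));
    // The failure path is a value, not a crash or a hidden exception.
    assert_eq!(parse("not a number"), None);
}
```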
I guess it depends on what the problem is. If the problem is being productive and being able to program without pain, then "changing the original problem to a much simpler one" is a very good thing. And "crash and die" can be a completely acceptable way to deal with it. Rust's "let it panic" is not at all different in this respect.
But in the end, you can write Option just fine in C. I agree, though, that C sometimes cannot express things perfectly in the type system. But I do not agree that this is crucial for solving these problems. And then, Rust also cannot express everything in the type system. (And finally, there are things C can express but Rust can't.)
> in the end, you can write Option just fine in C.
No, you can't. In the sense that the compiler doesn't have exhaustiveness checks and can't stop you from accessing the wrong variant of a union. An Option in C would be the same as manually written documentation that doesn't guarantee anything.
std::optional in C++ is the same too. Used operator* or operator-> on a null value? Too bad, instant UB for you. It's laughably bad, given that C++ has tools to express tagged unions in a more reliable way.
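For contrast, a sketch of what the checked version buys; `ParseResult` and its variants are made up for the example:

```rust
// Unlike a hand-rolled C tagged union, accessing the wrong variant is
// impossible, and every `match` must cover every variant.
enum ParseResult {
    Num(i64),
    Empty,
    Invalid,
}

fn classify(s: &str) -> ParseResult {
    if s.is_empty() {
        ParseResult::Empty
    } else {
        match s.parse::<i64>() {
            Ok(n) => ParseResult::Num(n),
            Err(_) => ParseResult::Invalid,
        }
    }
}

fn describe(s: &str) -> &'static str {
    // Omitting any arm here is a compile error, not latent UB.
    match classify(s) {
        ParseResult::Num(_) => "number",
        ParseResult::Empty => "empty",
        ParseResult::Invalid => "invalid",
    }
}

fn main() {
    assert_eq!(describe("12"), "number");
    assert_eq!(describe(""), "empty");
    assert_eq!(describe("abc"), "invalid");
}
```

Adding a fourth variant to `ParseResult` later would break the build at every unhandled `match`, which is exactly the exhaustiveness guarantee a C union lacks.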
> And then, also Rust can not express everything in the type system. (And finally, there are things C can express but Rust can't).
That's true, but nobody claims otherwise. It's just that, in practice, checked tagged unions are a single simple feature that allows you to express most things that you care about. There's no excuse for not having those in a modern language.
And part of the problem is that tagged unions are very hard to retrofit into legacy languages that have null, exceptions, uninitialized memory, and so on. And wouldn't provide the full benefit even if they could be retrofitted without changing these things.
Yes, all I am saying is that we have to be honest about what level of problems we are solving when we encounter a complicated solution.
The solution has to scale linearly with the problem at hand; that is what it means to have a good solution.
I agree with the article that Rust is overkill for most use cases. For most projects, just use a GC and be done with it.
> But I do not agree that this is crucial for solving these problems. And then, also Rust can not express everything in the type system.
This can be taken as a feature. For example, is there a good reason this is representable?
`struct S s = 10;`
I LOVE the fact that Rust does not let me get away with half-ass things. Of course, this is just a preference. Half of my coding time is writing one-off Python scripts for analysis and data processing, I would not want to write those in Rust.
> But in the end, you can write Option just fine it C.
Even this question has a deeper question underneath it. What do you mean by "just fine"? Because to me, tagged enums or NULL are NOT the same thing as algebraic data types.
This is like saying floating points are just fine for me for integer calculations.
Maybe, maybe for you it's fine to use floating points to calculate pointers, but for others it is not.
> When refactoring you just add assertions for NULL
This is a line of thinking I used to see commonly when dynamic typing was all the rage. I think the difference comes from people who primarily work on projects where they are the sole engineer vs. ones where they work with n+1 other engineers.
"just add assertions" only works if you can also sit on the shoulder of everyone else who is touching the code, otherwise all it takes is for someone to come back from vacation, missing the refactor, to push some code that causes a NULL pointer dereference in an esoteric branch in a month. I'd rather the compiler just catch it.
Furthermore, expressive type systems are about communication: the contracts between functions. Your CPU doesn't care about types - types are for humans. IMO you have simply moved the complexity from the language into my brain.
This was about refactoring a code base where a type is assumed to be non-null but it is not obvious. You can also express in C, on an interface, that a pointer is non-null.