
That's not the spirit Rust wants to have. You can already disable the borrow checker selectively by using "raw" pointers in places where you think you know better, and this is used very commonly. Every String in Rust has such a raw pointer inside.
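For illustration, a minimal sketch of that selective opt-out:

    fn main() {
        let s = String::from("hi");
        // Raw pointers are exempt from borrow checking;
        // only dereferencing them requires an unsafe block.
        let p: *const u8 = s.as_ptr();
        unsafe { println!("{}", *p); } // prints 104, the byte 'h'
    }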

It doesn't make much sense to globally relax the restrictions of Rust's references to be like C/C++ pointers, because the reference types imply a set of guarantees: they must be non-null (which affects struct layout), always initialized, and have a strict shared/immutable vs exclusive access distinction. If you relax these guarantees, you'll break existing code that relies on having them, and make the `--yolo` flag code incompatible with the rest. OTOH if you don't relax them, then you still have almost all of the borrow checker's restrictions with none of its help in upholding them. It'd be like a flag that disables the sign bit of signed integers: it just makes an existing type mean something else.
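To make the layout point concrete: the non-null guarantee is what lets the compiler use the all-zeroes bit pattern of a reference to represent `None`, so a sketch like this holds today and would break under relaxed references:

    use std::mem::size_of;

    fn main() {
        // `Option<&u8>` needs no extra tag byte, because a reference
        // is guaranteed non-null and null can encode `None`:
        assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>());
    }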


The compiler has deep assumptions about exclusive ownership and moves, which affect destructors and the deallocation of objects.

It doesn't actually depend on the borrow checker. All lifetime annotations are discarded after being checked, and code generation knows nothing about borrow checking. Once the code is checked, it's compiled just like C or C++ would be, on the assumption that the code is valid and doesn't use dangling pointers.

The borrow checker doesn't affect program behavior. It either stops compilation or does nothing at all. It's like an external static analysis tool.
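A small sketch of what that erasure means: the `'a` below is read only by the borrow checker, and codegen never sees it:

    // The 'a annotations are checked, then discarded; the generated
    // machine code is the same as for the elided `fn first(v: &[u32]) -> &u32`.
    fn first<'a>(v: &'a [u32]) -> &'a u32 {
        &v[0]
    }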


Rust hasn't grown 17 ways to initialize a variable yet. Most projects use most features.

When projects choose a subset of language features, it's dictated by their needs (like embedded programs disabling the standard library, or safety-critical libraries forbidding "unsafe" code out of caution). There are some people who vocally hate async, but their complaint is usually that everyone uses async even where it's unnecessary (meaning that it actually has very broad adoption).

This feels very different from having an unwanted C subset, some '98 features that were replaced in '11 and '14, with fixes for them in '20 and '26, and then projects taking years to settle on a new baseline while still bickering over whether exceptions may be allowed or not.

Rust has "editions" that let new projects disable old misfeatures (which it hasn't got many yet). Rust ecosystem is fully on board with the latest version.


Bootstrapping the rustc compiler from scratch is a very different experience from using the language as a user. Bootstrapping requires building mrustc, a proto-compiler written in C++, which is where the tons of flags are needed. rustc itself has a custom multi-stage build system so that it and its standard library are built with themselves, which multiplies the time it takes. It's designed to be consistent and optimized, since most users download a pre-built binary.

However, once you bootstrap it (or are willing to trust someone who did), it's a breeze to build Rust projects with Cargo: `cargo build` just works with no extra flags for the majority of projects. The only finicky builds are the ones that rely on C or C++ dependencies.



Calling your users idiots is not a good look for a maintainer. I don't know offhand what distro you maintain, but I sincerely hope I never have to deal with someone this hostile.

> I'm the guy who sees all the underlying ugliness that the idiot user ignores and pretends doesn't exist

What’s with all the hate against the end users?

“idiot end user”?

Why sign up to be a maintainer when you think so highly of the end user?



Why not use Gentoo?

You're harshly judging the entire language based on the ease of compiling a third-party C++ compiler and the C11 code it emits. These gnarly build commands don't even come from the Rust project, and they aren't using the Rust language.

(I assume you use mrustc, and that you're not going the masochist route of recreating all the development steps starting from a 15-year-old OCaml-based prototype of a language that wasn't Rust yet.)

It's fair to say that bootstrapping Rust sucks. It really does. The non-Rust bootstrap compiler doesn't get even a fraction of the polish that rustc and Cargo get. But it's not representative of how Rust and Cargo work for basically everyone in the world except you (and a couple of other maintainers who chose to do an independent bootstrap from scratch). Bootstrapping is a one-off pain, and then building Rust with a Rust-based compiler is nice and easy.

It'd be nice to have a cleaner bootstrap story for Rust, but it will take a while (waiting for the gccrs C++ reimplementation to advance enough to replace mrustc).



There is a Cranelift backend written in Rust.

Rust is pragmatic about its implementation. The goal isn't some ideological purity (despite the reputation Rust has), but to empower users to write safe and efficient systems software. LLVM works well for that, so replacing it isn't a priority. The Cranelift backend exists to make debug builds faster.


Having first-party bindings for native extensions written in Rust seems like the least disruptive and most beneficial step. It doesn't require the core to change, but it would guarantee support and compatibility.

Growth of CUDA gave it a second chance.

I guess TPUs and JAX give it a third chance, and maybe MLX a fourth, lmao.

To me the real horror is that the exact same syntax can be either a perfectly normal thing to do or a horrible mistake that gives the compiler a license to kill, and it doesn't depend on anything locally explicit, but on details of a definition that lives somewhere else and may be behind multiple layers of indirection.

The difference is that it can behave as if it had multiple different values at the same time. You don't just get some arbitrary value; you can get completely absurd, paradoxical Schrödinger values where `x > 5 && x < 5` may be true, and on the next line `x > 5` may be false, and it may flip on Wednesdays.

This is because the code is executed symbolically during optimization. It's not running on your real CPU. It's first "run" on a simulation of the abstract machine from the C spec, which doesn't have registers or even a real stack to hold an actual garbage value, but does have magic memory where bits can be set to 0, 1, or this-can-never-ever-happen.

Optimization passes ask questions like "is x unused? (so I can skip saving its register)", "is x always equal to y? (so I can stop storing it separately)", or "is this condition using x always true? (so I can remove the else branch)". When using the value is undefined behavior, there's no requirement for these answers to be consistent or even correct, so the optimizer rolls with whatever seems cheapest/easiest.
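A minimal Rust analogue of that, just to show the shape of the mistake (the output is by definition unpredictable):

    use std::mem::MaybeUninit;

    fn main() {
        // Instant UB: in the abstract machine this u32 holds the special
        // "uninitialized" state, not some concrete garbage bit pattern,
        // so the optimizer may treat it as any value, inconsistently.
        let x: u32 = unsafe { MaybeUninit::uninit().assume_init() };
        println!("{}", x > 5); // may print true, false, or anything
    }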


"Your scientists were so preoccupied with whether they could, they didn't stop to think if they should."

With optimizing settings on, the compiler should immediately treat uses of uninitialized variables as errors by default.


So here are your options:

1. Syntactically require initialization, i.e. you can't write "int k;", only "int k = 0;". This is easy to do and 100% effective, but for many algorithms complying has a notable performance cost.

2. Semantically require initialization: the compiler must prove that at least one write happens before every read. Rice's theorem says we cannot have this unless we're willing to accept that some correct programs don't compile, because the compiler couldn't see why they're correct. Safe Rust lives here. Fewer, but still some, programmers will hate this too, because you're still losing perf in some cases to shut up the prover.

3. Redefine "immediately" as "Well, it should report the error at runtime". This has an even larger performance overhead in many cases, and of course in some applications there is no meaningful "report the error at runtime".

Now, it so happens I think option (2) is almost always the right choice, but then I would say that. If you need performance, then sometimes none of these options is enough, which is why unsafe Rust is allowed to call core::mem::MaybeUninit::assume_init, an unsafe function which in many cases compiles to no instructions at all, but is the specific moment where you take responsibility for claiming this is initialized, and if you're wrong about that, too fucking bad.
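For reference, a minimal sketch of that escape hatch (the array contents are just an example):

    use core::mem::MaybeUninit;

    fn squares() -> [u32; 8] {
        let mut buf = MaybeUninit::<[u32; 8]>::uninit();
        let ptr = buf.as_mut_ptr() as *mut u32;
        for i in 0..8 {
            // Every element must be written before assume_init,
            // or the program has undefined behavior.
            unsafe { ptr.add(i).write((i * i) as u32) };
        }
        // Often compiles to no instructions at all; it's purely the
        // point where you take responsibility for the claim.
        unsafe { buf.assume_init() }
    }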


With optimizations, 1 and 2 can be kind of equivalent: if initialization is syntactically required (or variables are defined to be zero by default), the compiler can elide it whenever it can prove the value is never read.

That, however, conflicts with unused-write detection, which can be quite useful (arguably more so than unused-variable detection, as it's both more general and more likely to catch issues). Though I guess you could always ignore a trivial initialization for that purpose.

There isn't just a performance cost to always initializing at declaration. If you don't have a meaningful sentinel value (does zero mean "uninitialized" or does it mean logical zero?), then reading the data that was "initialized with meaningless data just to silence the lint" is still a bug. And that bug is now quite tricky to detect, because the sanitizers can't see it.

Yes, that's an important consideration for languages like Rust or C++, which don't endorse mandatory defaults. It may even be literally impossible to "initialize with meaningless data" in these languages, if the type doesn't have any such "meaningless" values.

In languages like Go or Odin, where "zero is the default" for every type and you can't even opt out, this same problem (which I'd say is a bigger but less instantly fatal version of the Billion Dollar Mistake) occurs everywhere, at every API edge, and even in documentation; you just have to suck it up.

Which reminds me, in a sense, of another option: you can have the syntactic behavior but write the code as though you don't initialize at all, even though you do. This is the behavior C++ silently has for user-defined types. If we define a Goose type (in C++, a "class") which we stubbornly give our users no way to construct (e.g. we make the constructors private, or we explicitly delete them), and then a user writes "Goose foo;" in their C++ program, it won't compile: the compiler isn't allowed to leave this foo variable uninitialized, but it also can't just construct it, so, too bad, this isn't a valid C++ program.


If you have a program that will unconditionally access uninitialized memory, the compiler can halt and emit a diagnostic. But that's rarely what's being discussed in these UB conversations. Instead, the compiler is encountering a program with multiple paths, some of which would hit UB if taken. The compiler cannot just refuse to compile this, since it's perfectly possible that the path is dead. For example, imagine this program:

    int foo(bool x, int* y) {
      if (x) return *y;
      return 0;
    } 
Dereferencing y would be UB. But maybe this function is called only with x=false when y is nullptr. This cannot be a compile error. So instead the compiler recognizes that certain program paths are illegal and uses that information during compilation.

Maybe we should make that an error.

More modern languages have indeed embedded nullability into the type system and will yell at you if you dereference a nullable pointer without a check. This is good.
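For example, a Rust equivalent of the snippet above makes the check unavoidable:

    // `Option<&i32>` is the nullable pointer; a plain `&i32` can't be null.
    fn foo(x: bool, y: Option<&i32>) -> i32 {
        match (x, y) {
            (true, Some(v)) => *v,
            (true, None) => panic!("y is required when x is true"),
            (false, _) => 0,
        }
    }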

Retrofitting this into C++ at the language level is impossible, at least without a huge change in priorities from the committee.


Maybe not the Standard, but maybe not impossible to retrofit into:

    -Werror -Wlet-me-stop-you-right-there

That's what Golang went for. There are other possibilities: D has an `= void` initializer to explicitly leave variables uninitialized. Rust requires values to be initialized before use, and if the compiler can't prove they are, it's either an error or requires an explicit MaybeUninit type wrapper.
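A small example of the Rust behavior, where deferred initialization is fine as long as every path provably writes before any read:

    fn f(cond: bool) -> i32 {
        let x; // declared, but not yet initialized
        if cond {
            x = 1;
        } else {
            x = 2;
        }
        x // OK: the compiler proves x is initialized on every path
    }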

They're not, but the flaws they found are independent of PGP: mainly invalid handling of strings in C, and allowing untrusted ANSI codes in terminal output.

Sequoia is mentioned in only one vulnerability, for supporting much longer lines than gpg. gpg silently truncates and discards long base64 lines, and Sequoia does not. So the "vulnerability" is the ability to feed more data to Sequoia, which doesn't have gpg's silent data loss.

In all other cases, they only used Sequoia as a tool to build data for demonstrating gpg vulnerabilities.


The vulnerability that opens the talk, where they walk through verifying a Linux ISO's signature and hash and then booting into a malicious image, impacts both GnuPG and Sequoia.
