> How do you suppose runtime bounds checks are done in Rust? They certainly also incur a performance penalty in not-trivial cases.

Certainly. I didn't intend to imply otherwise.
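
To be concrete about what those checks cost (a minimal sketch, not from anyone's real code): every xs[i] below is bounds-checked at runtime, and whether the optimizer can elide the check depends on what it can prove about i:

    fn sum(xs: &[u64]) -> u64 {
        let mut total = 0;
        for i in 0..xs.len() {
            // Bounds check on each access. Here LLVM can usually prove
            // i < xs.len() and drop the check, but that isn't guaranteed
            // in less trivial code.
            total += xs[i];
        }
        total
    }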

> Also, "safe by grep audit" means "safe according to a human."

Again, totally correct here.

> The argument of course is that it lowers the surface area of what a human must be trusted to verify. I'm still not convinced by that argument, because human error is a thing. And for actual systems programming, "very rare" may not be true.

Well, in any codebase containing both safe and unsafe code, the amount of unsafe code is strictly less than the amount of safe and unsafe code combined. So it does reduce the amount of code that needs auditing, even in the very atypical case where a ton of the code is unsafe.

It's true that a project may use egregious amounts of unsafe. That would be unfortunate. Rust is still safer than C in that case, since it simply defines more behavior (arithmetic overflow, for example), but I certainly wouldn't pretend that such Rust code should be trusted.
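
For instance (a quick sketch of what "defines more behavior" means here):

    let x: i32 = i32::MAX;
    // Plain `x + 1` panics in debug builds and wraps in release builds;
    // both outcomes are defined, unlike signed overflow in C.
    let wrapped = x.wrapping_add(1); // explicitly wrapping: i32::MIN
    let checked = x.checked_add(1);  // explicitly checked: None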

When writing Rust one should certainly strive to write as little unsafe code as possible, and to always document the invariants required for the unsafe code to be safe.
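
Something like this (a hypothetical example of the convention):

    /// Reads element `i` without a bounds check.
    ///
    /// # Safety
    ///
    /// Callers must guarantee `i < xs.len()`; anything else is
    /// undefined behavior.
    unsafe fn read_unchecked(xs: &[u8], i: usize) -> u8 {
        *xs.get_unchecked(i)
    }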

Rust is not 100% safe 100% of the time, I'm only arguing that safe defaults are critical, and that grep auditing is a powerful tool.




I wrote a little tool to help check Rust crates on GitHub. It's been really interesting seeing how different libs use unsafety. https://github.com/alexkehayias/harbor


Rust is safe in terms of memory access and data races; nobody claims the Rust compiler will catch 100% of the bugs humans can invent.


Rust is memory safe by default. It is not, however, strictly memory safe: it is trivial to overflow a buffer in Rust, for example. I haven't discovered a trivial way to hide doing so, though.


If Rust is not memory safe in safe code, then you've found a bug. Please report it to https://www.rust-lang.org/security.html


If you're relying on any random third-party Rust crates you haven't audited yourself, don't you lose the safety guarantee? A given crate might turn out to have implemented operations on some data structure using unsafe blocks, and then failed to mark its own API functions as unsafe in turn (like the Rust stdlib does, but without the "extensive manual auditing" that the stdlib gets).

AFAIK, cargo doesn't have any feature to point out when a crate contains unsafe code, so you pretty much need to grep the source of every crate you consume for "unsafe".


There's a lot more unsafe code in Rust crates than there should be. That's a fixable problem. Some of it dates from the early days, before the optimizer got smart enough that unsafe code isn't needed. I wrote about this a few days ago in a Rust topic.


While I now mostly agree with you that there is more unsafe code than there should be, I still maintain that the frequency of unsafe in a deptree is usually small enough to be practically auditable, ignoring FFI. It could and should be much less, but it's not too bad. I've done such audits a few times; they haven't been too hard and took very little time.

Auditing FFI is a whole other challenge, however :(


> I still maintain that the frequency of unsafe in a deptree is usually still small enough to be practically auditable

Not in binary libraries, which is why it's important to have a culture of only using unsafe when it really must be used.


Well, yeah, but you don't really download Rust binary libraries yet :)

You do have C libraries which you access through FFI. This is inevitably unsafe. We should be auditing more there, though IMO it's still manageable for most crates.
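
A sketch, with a hypothetical C function, of why that boundary is inevitably unsafe:

    extern "C" {
        // Hypothetical C library function; rustc can't verify anything
        // about what happens on the other side.
        fn c_lib_init(flags: u32) -> i32;
    }

    fn init() -> i32 {
        // SAFETY: assumes c_lib_init is sound for any flags value. That
        // invariant lives in the C library's docs, not in Rust.
        unsafe { c_lib_init(0) }
    }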


Hmmmm. Note to self - actually try to audit a reasonably sized project's unsafe code to see how reasonable it is.


That's why I said "safe code"; I mean not using any unsafe.

The issue you're talking about is related, but different.


Right, I was just trying to put a finer point on your use of "using any unsafe" here. It sounds like you could mean "using" in the lexical sense (writing the token "unsafe" in your code) or in the dynamic sense (having an unsafe block somewhere in your control flow graph).
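
Concretely, the dynamic sense looks like this (sketch): no "unsafe" token appears, but execution runs unsafe code inside Vec::push on your behalf:

    fn grow(v: &mut Vec<u32>) {
        // Lexically safe: nothing to grep for here. Dynamically, Vec::push
        // executes unsafe code (raw pointer writes) internally.
        v.push(42);
    }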


That's a good distinction to draw, thanks.


Let me clarify: in Rust it is trivial to use memory unsafely. It is not, so far as I have found, trivial to hide that fact, because you are required to use the "unsafe" keyword to do so.
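
For instance (sketch), the overflow itself is one line, but it can't be written without the greppable keyword:

    let xs = [1u8, 2, 3];
    // Out-of-bounds read: undefined behavior, but impossible to express
    // without `unsafe`, which is exactly what makes it auditable.
    let oops = unsafe { *xs.get_unchecked(10) };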


The issue here is that that definition of a "safe" language basically excludes all practical languages, including languages like Python, because FFI is possible.

In general when talking about safety in a language it's about the level of explicitness required to trigger unsafety.

I like the distinction made in the nomicon (https://doc.rust-lang.org/stable/nomicon/meet-safe-and-unsaf...): Rust comprises two distinct languages. You have everyday Rust, which is completely memory safe, and "unsafe Rust", which looks similar to everyday Rust but is not safe. `unsafe {}` blocks are your FFI between the two. Looking at unsafe blocks as FFI is, IMO, a very useful mental model, especially for understanding the changes to invariants involved.
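
In code, that mental model looks something like this sketch: the invariant is established on the safe side, and the unsafe block is the crossing:

    fn first(xs: &[u8]) -> Option<u8> {
        if xs.is_empty() {
            return None;
        }
        // SAFETY: the emptiness check above guarantees index 0 is in
        // bounds. The unsafe block is the "FFI boundary": the invariant
        // was established in safe Rust before crossing it.
        Some(unsafe { *xs.get_unchecked(0) })
    }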


In safe Rust it is definitely not trivial to overflow a buffer in a way that violates memory safety.
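
For example (sketch), an out-of-bounds index in safe code is a deterministic panic, not a read past the end of the buffer:

    fn get(xs: &[u8], i: usize) -> u8 {
        // If i >= xs.len() this panics with "index out of bounds";
        // it never reads adjacent memory.
        xs[i]
    }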



