
A side question to HN:

On the security topic, most of Rust's power comes from strong typing, safety checks (bounds), ownership checks... in short, a lot of checks at compile time.

From what I know of Rust by glancing at the docs frequently, C++11/14 also contains these "safe tools": ownership can be handled with smart pointers (unique, shared), value semantics, move semantics (rvalues), and the rest. Bounds checks with the right containers (std::vector, std::array...). Mutability with "const". Atomicity with std::atomic... Type safety via explicit conversions, templates instead of macros, variadic arguments...

Heck, all the drama around C++ seems to come from misuse or from legacy parts of another age (going back to C). To keep people from shooting themselves in the foot, why couldn't we just introduce a strict compilation mode, à la JavaScript's "use strict", into, say, Clang:

#pragma strict

#pragma unsafe
#pragma safe

Nowadays, good C++11 compilers can even deal with things like constexpr. This seems like a reachable goal.

What are the true drawbacks of fully modern C++ code compared to Rust? Is there any real feature that can't be done in C++ because of a deep design problem? Don't get me wrong, I am not trying to diminish the amazing work of Mozilla. Nor am I talking about the other cool features from functional languages (pattern matching, etc.) that Rust has and C++ doesn't.




You're mistaken if you think that anything in C++11/14 makes it memory-safe like Rust guarantees.

C++11/14 lets you move a std::unique_ptr into another variable and then re-use the original, causing a null dereference. You can also have references to the same std::unique_ptr from two different threads, causing a race condition on the underlying pointer.

There is no bounds checking on std::vector by default. You can bounds-check a std::vector (using "at", or various configuration/compiler settings that change the semantics of the standard library), but by default it is not checked.

Fundamentally, the issue is that C++ lets you have null pointers and references (yes, references can be null). It lets you use variables before initializing them. It lets you run out of bounds. It lets you access the same variable from multiple threads without any safety. Yes, idiomatically, none of these should be an issue, but it's not possible to be a perfect programmer.

Rust forces idiomatic memory management and eliminates all these issues. If you make a memory safety mistake, the compiler will force you to fix it.


Your points are clearly valid. But wouldn't some of them be corrected by a strict mode?

- For instance, the reuse of the original variable could be detected by the compiler.
- Bounds checking for std::vector can effectively be enabled. One could imagine a std::strict_vector that does so.

What I am wondering is: would applying the same idiomatic memory management to C++ require huge tweaks to the language, ugly tricks, or new keywords, or could it be done without changing its design, just by enforcing some rules?

I, for instance, have no clue how to deal with your "one unique_ptr, two threads" problem. Could it be handled elegantly in C++?


Linear types (as in Rust) can prevent, at compile time, some of the more trivial use-after-free issues, e.g. for unique_ptr, but I think the main reasons you won't see undefined behavior eliminated from C++ wholesale are that eliminating it a) often requires extensive runtime support (see ASan, UBSan, etc.), b) presents a huge barrier to optimization in certain cases, and c) (thus) would be way too slow for production use. (I.e., if C++ were to go in this direction, someone would basically "fork" C++, or a new, similar language would supplant it.)

Unfortunately, no "sufficiently smart compiler" currently exists that can optimize high-level code well enough to beat what a good micro-optimizing C++ compiler (which may assume that no undefined behavior occurs at runtime) can achieve.


Undefined behavior does permit optimizations, yes, but I think you're overselling its benefit. Rust doesn't have undefined behavior and there are many instances where its strict semantics mean that it is far more optimizable and runtime-efficient than C++ (though some of those optimizations are yet to be implemented).


Perhaps these days you're right -- assuming you only want to support mainstream architectures. These days you can mostly rely on all mainstream architectures to do something sensible with e.g. signed integer overflow[1] or excessive shifting, but that wasn't necessarily the case when most of C++ was standardized. As an example of a similar nature -- as I'm sure you know -- Rust has chosen not to abort/panic on signed overflow, although almost all instances of it are most probably plain logic errors and could lead to security problems[2]. As far as I understand, this was for performance reasons. Granted, this is not quite as disastrous for general program correctness as UB, but it can lead to security bugs.

Point being: Underspecification can give you a lot of leeway in how you do something -- and that can be hugely valuable in practice.

Just as an aside: Personally I tend to prefer safety over performance, but I was persuaded that UB is valuable by comments that Chandler Carruth of Clang (and now ISO C++ committee) fame made about UB actually being essential to optimization in C++. Sorry, can't remember where, exactly, those comments were made.

[1] Everybody's probably using two's-complement (for good reasons).

[2] Not nearly as easily as plain buffer overflows, but there have been a fair few of these that have been exploitable.


Even mainstream architectures don't handle excessive shifts consistently: when shifting an n-bit integer, I believe some mask the shift amount to n - 1, some to 2n - 1, and some don't mask at all (i.e. 1 << 100000 will be zero). Of course, UB (rather than an "unspecified result" or some such) probably isn't the best way to handle the inconsistency.

In any case, I believe Rust retains many of the optimisations made possible via UB in C++ by enforcing things at compile time. In fact, Rust actually has quite a lot of UB... that is, there are many restrictions/requirements Rust 'assumes' are true. For example, the reference types & and &mut have significantly more restrictions around aliasing and mutability than pointers or references in C++. The key difference between Rust and C++ is that it is only possible to trigger the UB with `unsafe`, as the compiler usually enforces rules that guarantee it can't occur. People saying "Rust has no UB" usually implicitly mean "Rust cannot trigger any UB outside `unsafe`".


Rust will now actually panic on integer overflow when not in release mode, as opposed to how things used to be, when it would just accept the overflow silently with wrapping semantics. Here is the discussion from when this change was announced: http://internals.rust-lang.org/t/a-tale-of-twos-complement/1...


Oh, that's good news!


A "safe mode" in C++ is completely impossible in practice, because C++ does not have a module system but an automated text copy-and-paste mechanism (the preprocessor). Hence your "strict" mode would refuse to compile any unsafe constructs in your standard C++ library headers, boost headers, Qt headers, libc headers, or any other headers of popular and mature libraries that made you choose the C++ language in the first place. If you can't re-use any code anyway, why not pick a sane language?



