Is C++26 getting destructive move semantics? (stackoverflow.com)
28 points by signa11 17 days ago | hide | past | favorite | 47 comments


Sounds like the answer is no.

"For trivial relocatability, we found a showstopper bug that the group decided could not be fixed in time for C++26, so the strong consensus was to remove this feature from C++26."

[https://herbsutter.com/2025/11/10/trip-report-november-2025-...]


I’m curious what that showstopper bug actually was.

I was really looking forward to this feature, as it would've helped improve Rust <-> C++ interoperability.



Thanks for the link!


IMO destructive move is rather tangled up with another language feature: the ability to usefully have a variable of type T where all values must be actual values that meet the constraints of a T.

Suppose T is a file handle or an owning pointer (like unique_ptr), and you want to say:

    T my_thing = [whatever]
and you want a guarantee that T has no null value and therefore my_thing is valid so long as it’s in scope.

In C++, if you are allowed to say:

    consume(std::move(my_thing));
then my_thing is in scope but invalid. But at least C++ could plausibly introduce a new style of object that is neither copyable nor non-destructively-movable but is destructively movable.

Interestingly, Go is kind of all in on the opposite approach. Every instance of:

   my_thing, err := [whatever]
creates an in-scope variable that might be invalid, and I can’t really imagine Go moving in a direction where either this pattern is considered deprecated or where it might be statically invalid to use my_thing.

I actually can imagine Python moving in a direction where, if you don’t at least try to prove to a type checker that you checked err first, you are not allowed to access my_thing. After all, you can already do:

    my_thing: T | None = [whatever]
and it’s not too much of a stretch to imagine similar technology that can infer that, if err is None, then my_thing is not None. Combining this in a rigorous way with Python’s idea that everything is mutable if you try hard enough might be challenging, but I think that rigor is treated as optional in Python’s type checking.


After learning Rust, C++ just seems unnecessarily complicated.

I'm not even going to try to understand this.

Using C++ these days feels like continuing to use the earth-centric solar system model to explain the motion of the planets instead of the much simpler sun-centric model: unnecessarily over-complicated.


Like C always will, C++ mostly still echoes the diversity of hardware architectures and runtime environments out there. That diversity is still broader than many people who don't work in weird specialties and fringes realize, and it's useful to have a "modern" language that respects the odd quirks of those systems and how you sometimes need to leverage them to meet requirements.

If you're just writing application software for consumers or professionals, or a network service, and it's destined to run on one of the big three families of operating systems using one of the few big established hardware architectures at that scale, there are definitely alternatives that can make your business logic and sometimes even your key algorithms simpler or clearer, or your code more resistant to certain classes of error.

If you look at Rust and see "this does everything I could imagine doing, and more simply than C++", there's nothing wrong with that, because you're probably right for yourself. But there are other projects out there that other people work on, or can expect to find themselves working on someday, that still befit C++, and it's nice for the language to keep maturing and modernizing for their sake, while maintaining its respect for all the underlying weirdness they have to navigate.


> C++ mostly still echoes the diversity of hardware architectures and runtime environments out there

It doesn't though, or at least none of those echoes are why C++ is complex. Here are some examples of unnecessary complexity.

The rules of 3/5 exist solely because of copy/move/assign semantics. These rules would not need to exist if the semantics were simpler.

Programmers need to be aware of expression value categories (lvalue, rvalue, etc.). Most languages keep these concepts as internal details of their IRs; C++ leaks them into error messages because of the complex semantics of expression evaluation.

SFINAE is a subtle rule of template expansion that gets exploited for compile-time introspection because the latter is simply missing from the language, despite the clear desire for it.

The C++ memory model for atomics is a particular source of confusion and incorrectness in concurrent programs: it decomposes a fairly simple problem domain into an (arguably too small) set of semantics that are easy to misuse, and misuse creates surprisingly complex emergent behaviors that are hard to debug.

These are problems with the language's design and have nothing to do with the hardware and systems it targets.

The thing that bugs me about this topic is that C++ developers have a kind of Stockholm syndrome for their terrible tools and devex. I see people routinely struggle with things that other languages simply don't have (including C and Rust!) because C++ seems committed to playing on hard mode. It's so bad that every C++ code base I've worked on professionally is essentially its own dialect and ecosystem with zero cross pollination (except one of abseil/boost/folly).

There is so much complexity in there that creates no value. Good features and libraries die in the womb because of it.


SFINAE in 2025 is only reasonable in existing old code, or for people stuck on old compilers.

Since C++17 there are better options.

Despite all its warts, most C++ wannabe replacements depend on compiler tools written in C++, and this isn't going to change in the foreseeable future, judging by the roughly two decades it took to replace C with C++ in compiler development circles, even though there is some compatibility.


> If you're just writing application software... there are definitely alternatives...

Tangentially, is there a good alternative to Qt or SDL in Rust yet?


Slint, done by ex-Qt employees.


IMO, even better would just be good Qt bindings that take advantage of the benefits of Rust. I haven't checked. The GNOME bindings are pretty good, but the abstraction does leak through.


The real barrier is the C++ ecosystem. It represents the cost of losing decades of optimized, highly integrated, high-performance libraries. C++ maps almost perfectly to the hardware with minimal overhead, and it remains at the forefront of the AI revolution. It is the true engine behind Python's scientific libraries and even parts of Julia (e.g. EnzymeAD). Rust does not offer advantages that would meaningfully improve how we approach HPC. Once you layer in the necessary unsafe operations, C++ code in practice becomes mostly functional and immutable, and lifetimes matter less beyond a certain threshold when building complex HPC simulations, or when that logic is outsourced to a Python scripting layer.


> C++ maps almost perfectly to the hardware with minimal overhead

Barely.

The C++ aliasing rules map quite poorly into hardware. C++ barely helps at all with writing correct multithreaded code, and almost all non-tiny machines have multiple CPUs. C++ cannot cleanly express associative floating-point semantics, and SIMD optimizations care about this. C++ exceptions have huge overhead when actually thrown.


> C++ exceptions have huge overhead when actually thrown

Which is why exceptions should never really be used for control flow. In our code, an exception basically means "the program is closing imminently, you should probably clean up and leave things in a sensible state if needed."

Agree with everything else mostly. C/C++ being a "thin layer on top of hardware" was sort of true 20? 30? years ago.


In simulations or in game dev, the practice is to use an SoA data layout to avoid aliasing entirely. Job systems or actors are used for handling multithreading. In machine learning, most parallelism is achieved through GPU offloading or CPU intrinsics. I agree in principle with everything you’re saying, but that doesn’t mean the ecosystem isn’t creative when it comes to working around these hiccups.


> The C++ aliasing rules map quite poorly into hardware.

But how much does aliasing matter on modern hardware? I know you're aware of Linus' position on this, I personally find it very compelling :)

As a silly little test a few months ago, I built whole Linux systems with -fno-strict-aliasing in CFLAGS, everything I've tried on it is within 1% of the original performance.


Even with strict aliasing, C and C++ often have to assume aliasing when none exists.


If they somehow magically didn't, how much could be gained?

I've never seen an attempt to answer that question. Maybe it's unanswerable in practice. But the examples of aliasing optimizations always seem to be eliminating a load, which in my experience is not an especially impactful thing in the average userspace widget written in C++.

The closest example of a more sophisticated aliasing optimization I've seen is example 18 in this paper: https://dl.acm.org/doi/pdf/10.1145/3735592

...but that specific example with a pointer passed to a function seems analogous to what is possible with 'restrict' in C. Maybe I misunderstood it.

This is an interesting viewpoint, but is unfortunately light on details: https://lobste.rs/s/yubalv/pointers_are_complicated_ii_we_ne...

Don't get me wrong, I'm not saying aliasing is a big conspiracy :) But it seems to have one of the higher hype-to-reality disconnects for compiler optimizations, in my limited experience.


Back in 2015 when the Rust project first had to disable use of LLVM's `noalias` they found that performance dropped by up to 5% (depending on the program). The big caveat here is that it was miscompiling, so some of that apparent performance could have been incorrect.

Of course, that was also 10 years ago, so things may be different now. There has been ongoing interest from the Rust project in improving the optimisations `noalias` enables, as well as improvements in Clang to the optimisations performed under C and C++'s aliasing model.


Thanks! I've heard a lot of anecdotes like this, but I've never found anyone presenting anything I can reproduce myself.


Strict aliasing is not the only kind of aliasing.


Yes, that's why I described it as "silly" :)

Is there a better way to test the contribution of aliasing optimizations? Obviously the compiler could be patched, but that sort of invalidates the test because you'd have to assume I didn't screw up patching it somehow.

What I'm specifically interested in is how much more or less of a difference the class of optimizations makes on different calibers of hardware.


Well, the issue is that "aliasing optimizations" means different things in different languages, because what you can and cannot do is semantically different. The argument against strict aliasing in C is that you give up a lot and don't get much, but that doesn't apply to Rust, which has a different model and uses these optimizations much more.

For Rust, you'd have to patch the compiler, as they don't generally provide options to tweak this sort of thing. For both Rust and C this should be pretty easy to patch, as you'd just disable the production of the noalias attribute when going to LLVM; gcc instead of clang may be harder, I don't know how things work over there.


Thanks!

CUDA hardware is now specifically designed around the C++ memory model.

It wasn't initially, and then NVIDIA went through a multi-year effort to redesign the hardware.

If you're curious, there are two CppCon talks on the matter.


Your post could be (uncharitably) paraphrased as: "once you have written correct C++ code, the drawbacks of C++ are not relevant". That is true, and the same is true of C. But it's not really a counterargument to Rust; it doesn't much help those of us who have to deliver that correct code in the first place.


> Unnecessarily over-complicated.

Most of the complexity comes from the fact that C++ trivially supports consuming most C code, but with its own lifetime model on top, and that it also offers great flexibility.

Of course things become simpler when you ditch C source compat and can just declare "this variable will not be aliased by anyone else".

AFAIK C++'s constexpr and TMP are less limited than Rust's equivalents.


Also, with all its warts, I find it easier to stay within C++ itself, instead of juggling two macro systems that additionally depend on an external crate.


Rust just doesn't work for a lot of applications. Things like GUI toolkits, web browsers, and game engines are a pain to write without true OOP. Yes, it's "overly complicated" at this point after about 40 years of development, but it's still in the top 3 of the TIOBE index after all these years for a reason.


Of course Rust can handle those use cases fine (GUIs, web browsers, and game engines).

C++ is still high on the TIOBE index mainly because it is indeed old and used in a lot of legacy systems. For new projects, though, there's less reason to choose C++.


Web browsers, yes. With GUIs and games, it's less clear. Of course you can write GUIs and games in any Turing-complete language, but there's still a lot of work to be done in finding the right ergonomics in Rust [1, 2].

[1] https://www.warp.dev/blog/why-is-building-a-ui-in-rust-so-ha...

[2] https://loglog.games/blog/leaving-rust-gamedev/


You are saying "With GUIs and games" as if there is any GUI framework or game engine that doesn't suck.


It's still high because it solves real world problems, so it's still the gold standard for anything ranging from systems programming to scientific computing.


I think you’ve misidentified the reason this stuff is harder in Rust and it has nothing to do with “true OOP” if by that you mean class based inheritance. The primary challenge is mapping how GUIs are traditionally mutated onto Rust semantics. Even then, efforts like Slint show it’s eminently feasible so I’m not sure your argument holds.

It’s important to remember that C++ has a 30-year head start on Rust, especially during a crucial growth period of computing. That's why it tops the TIOBE index. But I fully expect it to go the way of COBOL sooner rather than later, where most new development does not use C++.


If anything, OOP might actually be detrimental to many game engine applications, at least in a modern computing context, given the data layouts and implementation patterns it encourages. Traits and "traditional" OOP are really close together anyway, if you exclude implementation inheritance (which is largely cursed anyway). I think Rust is a great fit specifically for game engines; for gameplay programming I'm not as certain, but for anything where you're mostly managing data pipelines that need to go very fast, be reliable, and not crash, Rust is a great fit.


The TIOBE index does not measure anything useful.

I’d say this if Rust were at the #1 spot too.


I think Rust is unnecessarily safety focused.

Opinions are cool.


But that's its thing. C++ with improved syntax (think Kotlin for C++) isn't really enough to gain popularity. It's like how Dvorak is sorta better than QWERTY, but it doesn't offer anything new, and it isn't so much better that it justifies migrating.


No need to imagine hypotheticals – this exists: https://github.com/hsutter/cppfront


I think Rust has a lot of slippery points too, and their number will grow over the years. So yes, Rust made a lot of good choices over C++, but that doesn't mean it has no problems of its own. Therefore I can't say Rust is simple in this sense. But, of course, it is a good evolutionary step.


I feel like C++ is the counter-argument to backwards compatibility. Even Java is loosening its obsession with it.

Sometimes you just need to move forward.

Python 3 should be studied for why it didn't work as opposed to a lesson not to do it again.


C++ has already broken backwards compatibility a few times, and the way GCC changed their std::string implementation semantics is one of the reasons why nowadays features that require ABI breaks tend to be ignored by compiler vendors.

Backwards compatibility is C++'s greatest asset. I have already taken part in a few rewrites away from C++, precisely because the performance of compiled managed languages was good enough and the whole thing was getting rebooted anyway.


> Python 3 should be studied for why it didn't work as opposed to a lesson not to do it again.

I'm curious about this in particular. It seems like the Python 2 to 3 transition is a case study in why backwards compatibility is important. Why would you say the lesson isn't necessarily that we should never break backwards compatibility? It seems like it almost could've jeopardized Python's long-term future. From my perspective it held on just long enough to catch a tail wind from the ML boom starting in the mid 2010s.


Because you end up accumulating so much cruft and complexity that the language starts to fold under its own weight.

Often you hear the advice that when using C++ your team should restrict itself to a handful of features... and then it turns out that the next library you need requires one of the features you had blacklisted, and now it's part of your codebase.

However, if you don't grow and evolve your language you will be overtaken by languages that do. Your community becomes moribund, and more and more code gets written that pays for the earlier mistakes. So instead of splitting your community, it just starts to atrophy.

Python 2 to 3 was a disaster, so it needs to be studied. I think the lesson is that they waited too long between breaking changes. Perhaps you should never go too many releases without breaking something, so that the expectation of stasis never forms and people upgrade regularly. It's what web browsers do. Originally people were saying "it takes us 6 months to validate a web browser for our internal webapps, we can't do that!" ...but they managed, and now you don't even notice when you upgrade.


Part of the issue is that you can't just generalize it to "breaking changes." Ruby, very similar to Python, underwent a similar break between Ruby 1.8 and Ruby 1.9, but didn't befall the same fate.

The specifics really matter in this kind of analysis.


That commenter’s syntax notation looked pretty neat and intuitive.



