Dynamic linking inhibits inlining entirely, so yes, qsort does not in practice get inlined if libc is dynamically linked. However, compilers can inline definitions across translation units without much trouble if whole-program optimization is enabled.
The use of function pointers doesn't have much of an impact on inlining. If the argument is known at compile time, the compiler has no issue performing the direct substitution, whether it's a function pointer or otherwise.
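A sketch of the known-at-compile-time case (names like `sort_with` and `desc` are mine, not from anything above): when the comparator's address is a template argument, the compiler sees exactly which function is being called and can inline it, even though it's "just" a function pointer.

```cpp
#include <algorithm>
#include <array>
#include <cassert>

// A comparator with a compile-time-known address.
inline bool desc(int a, int b) { return a > b; }

// The pointer is a non-type template parameter, so the call
// target is statically known and eligible for inlining.
template <bool (*Cmp)(int, int)>
void sort_with(std::array<int, 4>& v) {
    std::sort(v.begin(), v.end(), Cmp);
}

std::array<int, 4> sorted_desc(std::array<int, 4> v) {
    sort_with<desc>(v);
    return v;
}
```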
Depending on exactly what you mean, this isn't correct. This syntax is the same as <T: BarTrait>, and you can store that T in any other generic struct that's parametrized by BarTrait, for example.
> you can store that T in any other generic struct that's parametrized by BarTrait, for example
Not really. You can store it in any struct that is instantiated with the same concrete type as the value you received. If you get a pre-built struct from somewhere else and try to store the value there, your code won't compile.
I'm addressing the intent of the original question.
No one would ask this question in the case where the struct is generic over a type parameter bounded by the trait, since such a design can only store a homogeneous collection of values of a single concrete type implementing the trait; the question doesn't even make sense in that situation.
The question only arises for a struct that must store a heterogeneous collection of values with different concrete types implementing the trait, in which case a trait object (dyn Trait) is required.
Static analysis is about proving whether the code emitted by a compiler is actually called at runtime. It's not simply about the presence of that code.
>Static analysis is about proving whether the code emitted by a compiler is actually called at runtime.
That is but one thing that static analysis can prove. It can also prove whether source code will call a move constructor or a copy constructor. Static analysis is about analyzing a program without actually running it; analyzing what code is emitted is just one way a program can be analyzed.
The call to a move constructor/move assignment does not happen at the call site. When a function taking an rvalue reference is called, it can still have two code paths: one that copies the argument, and one that moves it.
All the && does is prevent lvalues from being passed as arguments. It's still just a reference, not a move. Indeed, inside the callee the named parameter is itself an lvalue.
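A minimal sketch of that point (the overload names are mine): a named rvalue-reference parameter is itself an lvalue, so without `std::move` it selects the lvalue overload in the callee.

```cpp
#include <cassert>
#include <string>
#include <utility>

// Overload set to observe which kind of reference binds.
std::string which(const std::string&) { return "lvalue"; }
std::string which(std::string&&)      { return "rvalue"; }

std::string callee(std::string&& s) {
    // s has rvalue-reference type, but the expression `s` is an
    // lvalue: without std::move it binds to the const& overload.
    return which(s);
}
```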
But yeah, you can statically check if there exists a code path that calls the copy cons/copy assign. But you'll need to check if the callee calls ANY type's copy cons/assign, because it may not be the same type as the passed in obj.
At that point, what even is a move? `char *p = smartptr.release()` in the callee is a valid move into a raw pointer, satisfying the interface in the callee. That's a move.[1] How could you detect that?
[1] if this definition of move offends you, then instead remember that shared_ptr has a constructor that takes an rvalue unique_ptr. The move only happens inside the move constructor.
How do you detect all cases of e.g. `return cons(ptr.release())`? It may even compile to the same binary code as `return cons(std::move(ptr))`.
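For instance, both of these hypothetical helpers (names mine) hand the same ownership to a shared_ptr; one goes through the unique_ptr move constructor, the other through `.release()` and the raw-pointer constructor, with the same observable result:

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Two ways to transfer ownership of a unique_ptr into a shared_ptr.
std::shared_ptr<int> via_move(std::unique_ptr<int> p) {
    return std::shared_ptr<int>(std::move(p)); // move-constructor path
}

std::shared_ptr<int> via_release(std::unique_ptr<int> p) {
    return std::shared_ptr<int>(p.release());  // raw-pointer path, same effect
}
```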
In the end, the shared_ptr constructor probably calls `.release()` on the unique_ptr. That's the move.
What the callee does is out of scope. We are talking about a single assignment or construction of a variable. This has nothing to do with tracing execution: it happens in one place, and you can look at that place to see whether it uses a copy or a move constructor.
When talking about C++ move semantics it's easy to talk past each other, so I'm not sure what your claim is. Another commenter said that one can tell whether something is moved or not without looking at the body of the callee. Is that what you're saying? Because you can't.
I apologize if you're making a different claim, but I'm not clear on what that is.
Anyway, for my point, here's an example where neither copy nor move happens, which one can only know by looking at the body of the callee: https://godbolt.org/z/d7f6MWcb5
Equally we can remove the use of `std::move` in the callee, and now it's instead a copy. (of course, in this example with `unique_ptr`, it means a build failure as `unique_ptr` is not copyable)
> [assignment or construction of a variable] happens at one place
Not sure what you mean by that. The callee taking an rvalue reference could first copy, then move, if it wants to. Or do neither (per my example above). Unlike in Rust, the copy/move doesn't get decided at the call point.
You can statically determine at the call site whether the (usually const) lvalue-reference overload or the rvalue-reference overload is called, via ordinary overload resolution. But that's not the point where the move constructor/assignment happens, so for that one has to look in the callee.
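To make that concrete, here's a minimal sketch in the spirit of the godbolt link above (the names `no_move`/`does_move` are mine): two callees with identical signatures, where only the body decides whether unique_ptr's move constructor actually runs.

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Both callees take an rvalue reference; only the second
// actually invokes unique_ptr's move constructor.
void no_move(std::unique_ptr<int>&&) {
    // binds the reference and does nothing: neither copy nor move
}

bool does_move(std::unique_ptr<int>&& p) {
    std::unique_ptr<int> local = std::move(p); // the move happens here
    return local != nullptr;
}
```

Note that both call sites below look identical (`std::move(p)`), yet only one leaves the caller's pointer empty.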
Calling a function that takes an rvalue reference will never use a move constructor to create the parameter. We can statically know that both of your foo functions will not use a move constructor when constructing p.
>By changing only the callee we can cause a move
This move is for constructing t. p still is not constructed with a move constructor.
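One way to see that is with an instrumented type (names mine): counting move-constructor calls shows that binding the rvalue-reference parameter performs no construction at all; the move constructor only runs if the callee builds a local from the parameter.

```cpp
#include <cassert>
#include <utility>

struct Tracked {
    static inline int moves = 0;   // counts move-constructor calls (C++17)
    Tracked() = default;
    Tracked(Tracked&&) noexcept { ++moves; }
};

// Binding the parameter performs no construction at all.
void bind_only(Tracked&&) {}

// The move constructor only fires when the callee builds t from p.
void bind_and_move(Tracked&& p) {
    Tracked t = std::move(p);
    (void)t;
}
```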
I have to disagree with you about MMX. It's possible a lot of software didn't target it explicitly, but on Windows MMX was very widely used, as it was integrated into DirectX, ffmpeg, GDI, the early MP3 libraries (l3codeca, used by Winamp and other popular MP3 players), and the popular DivX video codec.
Similar to AI PCs right now, very few consumers cared in the late '90s. The majority weren't power users creating or editing video/audio/graphics. Most consumers were just consuming, and they never had a need to seek out MMX for that; their main consumption bottleneck was likely bandwidth. If they used MMX indirectly in Winamp or DirectX, they probably had no clue.
Today, typical consumers aren't even using a ton of AI or enough to even make them think to buy specialized hardware for it. Maybe that changes but it's the current state.
MMX had a chicken-and-egg problem; it took a while to "take off", so early adopters really didn't see much from it, but by the time it was commonplace it was doing some real work.
Lifetimes are the input to the borrow checker, so it doesn't make much sense to say you have never been bothered by the borrow checker but you are bothered by lifetimes.
How does lifetime elision affect performance? I thought the compiler just inferred lifetimes that you would have had to manually annotate. Naively, it seems to me that the performance should be identical.
Cloning values, collecting iterators into Vecs and then continuing the transformation rather than keeping it lazy all the way through, skipping structs/enums with references.
I thought they meant the case where you go "ugh, I don't want to write a lifetime here" and then change your code, because you have to. If you don't have to, then yes, there's literally no difference.
>C++ says that all correct programs are valid but the trade is that some incorrect programs are also valid.
C++ does not say this; in fact, no statically typed programming language does. They all reject programs that could in principle be correct but fall afoul of some property of the type system.
You are trying to present a false dichotomy that simply does not exist and ignoring the many nuances and trade-offs that exist among these (and other) languages.
I knew I should have also put the "(in terms of memory safety)" on the C++ paragraph, but I held off because I thought it would be obvious, both because we're talking about the borrow checker and because of the contrast with Rust, which has one.
Yes, when it comes to types C++ will reject theoretically sound programs that don't type correctly. And different type system "strengths" tune themselves to how many correct programs they're willing to reject in order to accept fewer incorrect ones.
I don't mean to make it a dichotomy at all, every "checker", linter, static analysis tool—they all seek to invalidate some correct programs which hopefully isn't too much of a burden to the programmer but in trade invalidate a much much larger set of incorrect programs. So full agreement that there's a lot of nuance as well as a lot of opinions when it goes too far or not far enough.
Nope. C++ really does deliberately require that compilers will in some cases emit a program which does... something even though what you wrote isn't a C++ program.
Yes, that's very stupid, but they did it with eyes open; it's not a mistake. In the C++ ISO document the words you're looking for are roughly (exact phrasing varies from one clause to another) Ill-Formed, No Diagnostic Required (abbreviated IFNDR).
What this means is that these programs are Ill-formed (not C++ programs) but they compile anyway (No diagnostic is required - a diagnostic would be an error or warning).
Why do this? Well, because of Rice's Theorem. They want a lot of tricky semantic requirements for their language, but Rice showed (back in the early 1950s) that all the non-trivial semantic properties are undecidable, so it's impossible for the compiler to correctly diagnose these in all cases. Now, you could (and Rust does) choose to say: if we're not sure, we'll reject the program. But C++ chose the exact opposite path.
No one disputes that C++ accepts some invalid programs, I never claimed otherwise. I said that C++'s type system will reject some programs that are in principle correct, as opposed to what Spivak originally claimed about C++ accepting all correct programs as valid.
The fact that some people can only think in terms of all or nothing is really saying a lot about the quality of discourse on this topic. There is a huge middle ground here and difficult trade-offs that C++ and Rust make.
So don't use it. Rust is not intended to be used by everyone. If you are happy with your current set of tools and find yourself productive with them, then by all means stick with them.