No, in this case addressing the accusation is necessary.
I think what's currently been said is sufficient. You need to make a grown-up version of the statement "None of that is true", but yes, probably best to leave it at that.
Honestly, this being Adafruit, my default assumption is to believe them. Especially with this super vague "please read between the lines because if I actually say something false it'll be libel" accusation.
What do you mean by "move or copy constructor is used when constructing the parameter of foo"?
Nothing is constructed at call time. Check out this example, which compiles just fine, even though Foo is neither copy nor move constructible/assignable: https://godbolt.org/z/Wj57o773d
"&&" is just a type system feature, resolving function polymorphism matching rvalue reference and not lvalue reference. It's not a thing that causes a move.
The call to a move constructor/move assignment does not happen at call time. When a function taking an rvalue reference is called, it can still have two code paths: one that copies the argument, and one that moves it.
All the && does is prevent lvalues from being passed as arguments. It's still just a reference, not a move. Indeed, in the callee it's an lvalue reference.
But yeah, you can statically check whether there exists a code path that calls the copy constructor/copy assignment. But you'll need to check whether the callee calls ANY type's copy constructor/assignment, because it may not be the same type as the passed-in object.
At that point, what even is a move? `char* p = smartptr.release()` in the callee is a valid move into a raw pointer, satisfying the interface in the callee. That's a move.[1] How could you detect that?
[1] if this definition of move offends you, then instead remember that shared_ptr has a constructor that takes an rvalue unique_ptr. The move only happens inside the move constructor.
How do you detect all cases of e.g. `return cons(ptr.release())`? It may even compile to the same binary code as `return cons(std::move(ptr))`.
In the end the shared_ptr constructor probably calls `.release()` on the unique_ptr. That's the move.
What the callee does is out of scope. We are talking about a single assignment or construction of a variable. This has nothing to do with tracing execution. It happens at one place, and you can look at that place to see whether it is using a copy or a move constructor.
When talking C++ move semantics it's easy to talk past each other. So I'm not sure what your claim is. Another commenter said that one can tell if something is moved or not without looking at the body of the callee. Is that what you're saying? Because you can't.
I apologize if you're making a different claim, but I'm not clear on what that is.
Anyway, for my point, here's an example where neither copy nor move happens, which one can only know by looking at the body of the callee: https://godbolt.org/z/d7f6MWcb5
Equally we can remove the use of `std::move` in the callee, and now it's instead a copy. (of course, in this example with `unique_ptr`, it means a build failure as `unique_ptr` is not copyable)
> [assignment or construction of a variable] happens at one place
Not sure what you mean by that. The callee taking an rvalue reference could first copy, then move, if it wants to. Or do neither (per my example above). Unlike in Rust, the copy/move doesn't get decided at the call point.
You can, at the call point, statically determine whether the (usually const) single-ampersand reference overload is called, or the rvalue-reference overload, via standard overload resolution. But that's not the point where the move constructor/assignment happens, so for that one has to look in the callee.
Calling a function that takes an rvalue reference will never use a move constructor to create the parameter. We can statically know that both of your foo functions will not use a move constructor when constructing p.
>By changing only the callee we can cause a move
This move is for constructing t. p still is not constructed with a move constructor.
You should not be downvoted, which you appear to be. Your comparison is both correct and interesting.
Maybe you're being too verbose for your point, and it would help readers if you summarize and narrow the argument to:
In Rust a function signature can force a move to happen at call time (by being non-reference and not Copy), but in C++ a function taking rvalue reference (&&) only signals the callee that it's safe to move if you want, as it's not an lvalue in the caller.
It's an added bonus that Rust prevents reusing the named variable in the caller after the move-call, but it's not what people seem to be confused about.
Look, the act of calling std::move and calling a function taking an rvalue reference in no way invokes a move constructor or move assignment. It does not "move".
It's still just a reference, albeit an rvalue reference. std::move and the function signature are about the type system, not about moving.
(Edit: amusingly, inside the callee it's an lvalue reference, even though the function signature is that it can only take rvalue references. Which is why you need std::move again to turn the lvalue into rvalue if you want to give it to another function taking rvalue reference)
I didn't reply to this thread until now because I thought you may simply be disagreeing about what "move" means (I would say move constructor or move assignment called), but the comment I replied to makes a more straightforward factually incorrect claim, that can easily be shown in godbolt.
If you mean something else, please sketch something up in godbolt to illustrate your point. But it does sound like you're confusing "moving" with rvalue references.
Thanks for the godbolt link, it really helped me understand where my mistake was. I was treating r-value references in my mind as if they are "consuming" the value, but of course, as reference types, they are not.
"Validity" is an extremely low bar in C++, it just means operations with no preconditions are legal, which in the most general case may be limited to destruction (because non-destructive moves means destruction must always be possible).
So you're saying if you use the language to write UB, then you get UB?
Seems kinda circular. OK, you're not the same user who said it can be UB. But what does it then mean to say "sometimes it's UB" if the code is all on the user side?
"Sometimes code is UB" goes for all user written code.
I mean the language doesn't dictate what post-condition your class has for move-ctor or move-assignment.
It could be
- "don't touch this object after move" (and it's UB if you do) or
- "after move the object is in valid but unspecified state" (and you can safely call only a method without precondition) or
- "after move the object is in certain state"
- or even the crazy "make sure the object doesn't get destroyed after move" (it's UB if you call delete after the move, or if the object was created on the stack and moved from).
But of course it's a good practice to mimic the standard library's contract, first of all for the sake of uniformity.
Every second spent arguing this point, or spent saying the words "X, formerly twitter", is free advertising for a multi billion corporation.
Why are you wasting syllables giving it free advertising?
It was sewage when named Twitter; it's sewage now. At least "Twitter" has the benefit of being unambiguous. X is not: X can be literally anything, or specifically X11 or XOrg (x.org).
> It failed to solve the problem of impending IP address depletion
I wouldn't say so. Some mobile carriers and big data centers have used IPv6 to pretty much completely solve the problem of being able to assign a unique address to devices.
For mobile devices, moving 50% of traffic over to IPv6 means buying half as many CGNAT/v6-to-v4 boxes (of various kinds).
And on the v6 inside, unique addresses can be assigned. Legal requirements and court orders are painful when you get "who had A.A.A.A:32800 at time T?" and have to go through three levels of NAT to decode that. So even if a customer only accesses IPv4, having their actual handset be assigned only IPv6 makes things easier and cheaper. Even if they share an outside address, there's only one translation, so the inside is unique.
For big data companies, it means not needing to solve the problem of running out of 10/8 (yes I'm aware of the other private addresses), and having an address plan problem any time they make an acquisition.
And I've seen large providers who build their whole actual network with IPv6, and only tunnel IPv4 on top of it. Huge savings in complexity and cost of IPv4 addresses.
So what I'm saying is that I've seen first hand in multiple large providers of different kinds how IPv6 is delivering incremental payoff for incremental adoption.
It doesn't have to be 100% before we get ROI.
> it is not a success.
About half of even public traffic on the most complex and distributed system ever built is IPv6.
It's going slower than I'd like, but it's definitely paying off.
There are still ATM and X.25 networks out there, so is IPv4 a failure? (admittedly, a bit hyperbolic)
I'm working on a problem right now at a large company to move a thing from IPv4 to IPv6 because the existing IPv4 solution is running out of addresses, and it's impossible (for multiple reasons) to "just add more IPv4". Can't go into details, sorry.
I should've qualified that as address exhaustion on the Internet; the side adventure of private networking has no bearing on the goal that IPng had set out to achieve, which was to address the impending address exhaustion. You say you wouldn't say so, but here we are: IPv4 exhausted, and IPv4 remains the incumbent. If IPv6 had succeeded, we would probably be having this very discussion on an IPv6-enabled site, and the cost difference between a v4 address and a v6 address would be negligible; that is to say, v6 would not be a second-class citizen or an optional bolt-on to the Internet. I mean, that's all that needs to be said about whether it has succeeded in what it needed to do.
> I should've qualified that as address exhaustion on the Internet
Well I addressed that too, so…
> private networking
To some extent this is a distinction without a difference. Again, as I said…
> we would probably be having this very discussion on an IPv6 enabled site
$ host news.ycombinator.com
news.ycombinator.com has address 209.216.230.207
news.ycombinator.com has IPv6 address 2606:7100:1:67::26
When IPv4 is disrupted for me, I only notice because github.com goes away.
> v6 [is] a second class citizen
It is. Except for endpoints (again) as I mentioned…
> the cost difference between a v4 address
The alternative to buying v4 is not just private addresses, as (again, as I was very specific about) private v4 addresses also have a cost.
v4 is priced according to demand. Without IPv6, demand would be much higher, as the alternative (with CGNAT and intra-org problems) would drive up the demand for more public addresses.
To say that "the cost should be equal" for IPv6 to not be a partial/in progress success misses the entire economics of addresses.
The biggest, most complex system in the world shuffles half its traffic on IPv6, and rising, with millions of devices without any form of IPv4 address.
This is mainly due to mobile devices only being issued IPv6 addresses by telco 4G networks. They are the only ones using IPv6 at the millions-of-clients scale.
Everything supports both. We are talking about being issued only IPv6 addresses where you actually use it to connect to stuff.
Most mobile devices are only issued an IPv6 address and therefore when the masses do google searches it uses IPv6 and makes it look like there is huge adoption.
> We are talking about being issued only IPv6 addresses where you actually use it to connect to stuff.
You seem to be asserting that dual-stack machines use IPv4 by default, but that's not really true. If your machine has both IPv4 and IPv6 connectivity, browsers will in fact use IPv6 to connect to sites that support it, like Google. They prefer IPv6 by default and fall back to IPv4 if IPv6 is slower (Happy Eyeballs algorithm).
Of course, random software can mostly use whichever it wants, so I'm not claiming every process on such a machine will use IPv6, but most common stuff does.
Make sure they actually have GUA addresses, not just link-locals.
If you're on a Linux machine, check `ip -c addr show` for an address that's "scope global" and doesn't start with "f". Those are the ones you need. If you have one of those, check `getent ahosts google.com` to see if v6 is being sorted before v4 in DNS lookups, and then `wget google.com` to see if wget prints any errors connecting to the v6 address.
If you have GUA addresses and nothing is outright broken, devices and software that support v6 will prefer it over v4.
Well, I am. MacBook on a home internet connection in Arizona. Using IPv6 by default without me having ever had to do anything special to configure it.
You are simply misinformed. Either your setup doesn’t actually support IPv6 (or it’s much slower than IPv4 due to something being misconfigured), or you turned it off at some point, or you’re making a mistake in how you measure it. Because IPv6 is used by default on systems that support it. You don’t have to take my word for this, you can google it or ask someone else to try it.
Unsurprisingly Google actually does also have IPv4 addresses. What they're measuring isn't "How did you reach our servers?" but instead "Could you have reached our IPv6 servers?"
My understanding (for which I can't give you a citation) is that a tiny fraction of Google visitors are randomly chosen to try to reach IPv6 servers and measure what happens.
Because of Happy Eyeballs if you measure whether your users did use IPv6 you don't find out whether they could have done so, and so your results will be thrown off by happenstance.
APNIC's stats check for that. For the US, it makes the difference between 58.74% capable and 57.85% preferring, so it doesn't produce a huge discrepancy.
I believe your understanding here is incorrect. It doesn’t make sense that Google would claim to measure usage while actually measuring access. I can’t find anything that supports your assertion.
"When large masses of devices that use IPv6 connect to IPv6 servers it makes it look like there is huge IPv6 adoption"
I don't understand your logic. How does a large amount of devices using IPv6 to connect to IPv6 servers only "make it look" like there is IPv6 adoption but somehow it shouldn't count?
Do you need a banking license, or partner with someone who has?