> Casting does not work as expected when optimization is turned on.
> This is often caused by a violation of aliasing rules, which are part of the ISO C standard. These rules say that a program is invalid if you try to access a variable through a pointer of an incompatible type.
Regarding this, is this what reinterpret_cast was at least partially designed for?
Kind of. Casts in C++ were designed to be in your face, to call out that you are doing something unsafe.
reinterpret_cast is meant to be the really unsafe one: the compiler will do the cast blindly, whereas with the other *_casts there is still some type checking involved.
Thanks for the clarification. So this doesn't signal to the compiler that it shouldn't optimize the aliased variables?
The reason I ask rather than test is that VC++ doesn't seem to take advantage of this undefined behavior, so the sample code always works 'as expected'.
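For concreteness, here's a sketch (hypothetical names) of the kind of access the aliasing rules forbid, next to the well-defined alternative:

```c
#include <stdint.h>
#include <string.h>

/* Undefined behavior: reads a float through an incompatible pointer
 * type, violating the ISO C aliasing rules, so an optimizer may assume
 * the float and the uint32_t never overlap and reorder or cache the
 * accesses. MSVC historically does not exploit this, which is why such
 * code tends to "work as expected" there. */
uint32_t bits_ub(float f) {
    return *(uint32_t *)&f;
}

/* Well-defined: copy the bytes instead. Mainstream compilers typically
 * turn this into a single register move, with no actual call. */
uint32_t bits_ok(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return u;
}
```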
I know this is a problem with many programming languages, but why is there not a standard way to precisely represent decimal numbers? I understand the difficulty representing them in binary (well, kind of -- it's been a while since I read about it), but it seems like knowing this, a better solution would be found. Why is this accepted and standard behavior?
Are you familiar with the IEEE 754-2008 specification of decimal floating point format?
From Wikipedia's entry on decimal64[1]: "It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial and tax computations."
I guess I wasn't necessarily referring to solving the problem of non-terminating numbers, but more things like 0.1 * 0.2 = 0.020000000000000004 in Javascript, or the very first bug listed on the linked page.
Perhaps what you mean is that non-terminating expansions like that of 1/3 are the cause of these sorts of problems, but interestingly (at least in JS), 1/3 appears to terminate and results in an "expected" value.
Well, every base has issues with non-termination. In base 10, we have problems like 1/3 = 0.33333... . Base 2 has its own set of non-terminating decimal expansions, including 1 / 3 = 0.01010101... and 1 / 10 = 0.0001100110011... . Since we only have finite storage space, we have to cut the repeating string of digits off somewhere, no matter what base we're in. This can cause rounding issues.
You can't dodge all your rounding problems by changing base. There are binary-coded decimal (BCD) systems that store numbers as strings of decimal digits. You can generally find these in calculators (ever notice that a TI-84 overflows after 9.(9)e99?) or in some financial software. However, you'll still have problems analogous to the 0.1 * 0.2 issue in binary floating-point. For example (assuming shorter-than-usual numbers):
2/3 = 0.66... ~ 0.66667
2/3 + 2/3 ~ 0.66667 + 0.66667 = 1.33334
However, 4/3 = 1.33333... ~ 1.33333, which isn't the same result.
Basically, you can't get away from this problem, you can only push it around to cases you care less about.
(You probably can't find an API for binary-coded decimal in $LANGUAGE unless $LANGUAGE is often used for tasks where you really, really don't want any float-related gotchas, since almost everyone else doesn't care enough.)
Hmm, so BCD seems closer to what I was roughly imagining (though, I haven't really thought this issue through deeply or anything), but you mention it has shortcomings as well. I guess I'll just have to trust that other people have thought about this, and have determined this to be the best solution, even with its flaws.
The computation 1/3 in JS produces a number that terminates, but that number is not 1/3. Open up your console and type 1/3+1/3+1/3: you get 1, as expected. Then type 1-1/3-1/3-1/3: I get about 1e-16, which is not 0.
The statement that "1/3 is terminating" is not true in binary, it is not true in decimal, and the only reason that you sometimes get the expected results is that sometimes the rounding errors will cancel out.
I realize that 1/3 is not terminating in decimal, what I meant was that in JS, it seemed to end neatly after x decimal places, which I found interesting. I hadn't tried the operation you mentioned until you posted that, the results are even more bizarre to me. Seems I need to spend some time relearning floating point math to better understand what I'm seeing.
floating point is just one way of representing non-integral numbers, you can also use fixed point, rational types (integer numerator and denominator), and there are also types for arbitrary precision decimals, like gmp or BigDecimal.
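As a sketch of the rational-type option (hypothetical minimal C, not any particular library's API), exact fractions sidestep the 1/3 rounding entirely:

```c
/* Minimal rational type: exact arithmetic with an integer numerator
 * and denominator, always reduced to lowest terms. */
typedef struct { long num, den; } Rat;

static long gcd(long a, long b) {
    while (b) { long t = a % b; a = b; b = t; }
    return a;
}

static Rat rat(long n, long d) {
    long g = gcd(n < 0 ? -n : n, d); /* assumes d > 0 */
    return (Rat){ n / g, d / g };
}

static Rat rat_add(Rat a, Rat b) {
    return rat(a.num * b.den + b.num * a.den, a.den * b.den);
}
```

With this, 1/3 + 1/3 + 1/3 really is exactly 1/1; the cost is that numerators and denominators grow, which is why serious implementations sit on top of arbitrary-precision integers (the sort of thing gmp provides).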
Clang 3.5 compiles that for me without warnings even with -Weverything. String literals in C (but not C++) aren't const for legacy reasons despite not being modifiable.
-pedantic gives "warning: array initialized from parenthesized string constant" for that, so I'm guessing it's an unintended consequence of a nonstandard extension. Might be worth reporting since at the minimum the warning is wrong.
In my embedded development, I have trained myself to use unions instead of type-punned-pointer casts to access one data type as another.
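A sketch of that union idiom: GCC explicitly documents reading a member other than the one last stored as reinterpreting the bytes, and ISO C blesses it via a footnote, though C++ does not.

```c
#include <stdint.h>

/* Type punning through a union: write one member, read another.
 * GCC documents this as reinterpreting the stored bytes, which is
 * exactly what the embedded use case wants. */
typedef union {
    float f;
    uint32_t u;
} FloatBits;

uint32_t float_bits(float f) {
    FloatBits fb;
    fb.f = f;
    return fb.u; /* same bytes, viewed as an integer */
}
```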
This document says this is a GCC-specific extension. Is this true? Or is it one of those things that's not standardized, but the compiler vendors all do it anyway?
Yes. Use memcpy if you need to do this! This is especially important when you're on a platform that requires aligned pointers. For example, the following code will crash on ARM
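The original snippet isn't quoted here, but the pattern meant is something like this hypothetical sketch: a cast-pointer read at an unaligned address can fault on alignment-strict ARM cores, while memcpy is safe for any alignment.

```c
#include <stdint.h>
#include <string.h>

/* Read a 32-bit value out of a byte buffer at an arbitrary offset. */
uint32_t read_u32(const unsigned char *p) {
    /* return *(const uint32_t *)p;  <- may fault on ARM when p is not
     * 4-byte aligned, and violates the aliasing rules everywhere. */
    uint32_t v;
    memcpy(&v, p, sizeof v); /* well-defined for any alignment */
    return v;
}
```

On compilers like GCC and Clang, this memcpy is optimized into a single load wherever the hardware allows it.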
Out of all the C++ compilers, only a single one supports export, mainly because it is such a HUGE pain in the ass to implement.