I don't think that's a fair representation of the ancient Roman combination social welfare and grain price stabilization system or of the role slaves had in Roman society.
Only if you try to reproduce the signature. Usually the signature is stored separately. That way, the reproduced work's signature applies to it as well.
Humanity did fine with systems other than capitalism. Hunter-gatherer societies were well adapted to their environments, Tibetan monks dealt with human relations differently, etc.
It would be crazy to assume that all of humanity longs for capitalism whatever their situation or belief system, the same way _we_ don't assume any current form of capitalism is specially superior to other alternative forms that could better fit our situations.
Physics is inherently simple. Accurate mathematical modeling of physics is hard.
A layman can understand (real) physics easily, but cannot accurately predict anything except some simple things. Each additional digit of accuracy requires additional complexity in the models, which laymen neither need nor understand.
Unfortunately, real physical physics, with its inaccurate physical models, was completely replaced by the much more accurate mathematical models of the «shut up and calculate» guys, which explicitly ignore the real physical processes behind the scenes in order to perform calculations faster and with more accuracy.
It's similar to how we use AI: we ask questions, AI system performs abstract calculations, then we have an answer.
To advance in physics, we need to create inaccurate physical models of the physical world. For example, hydrodynamic quantum analogs[1] are highly inaccurate, but they allow us to develop intuition about the quantum world.
We were presumably talking about an ideal massless space [Minkowski] in which the speed of light in a vacuum is considered -- that is what c is defined as.
int average(int x, int y) {
long sum = (long)x + y;
if(sum > INT_MAX || sum < INT_MIN)
return -1; // or any value that indicates an error/overflow
return (int)(sum / 2);
}
There is no guarantee that sizeof(long) > sizeof(int), in fact the GNU libc documentation states that int and long have the same size on the majority of supported platforms.
> return -1; // or any value that indicates an error/overflow
-1 is a perfectly valid average for various inputs. You could return the larger type to encode an error value that is not a valid output or just output the error and average in two distinct variables.
> There is no guarantee that sizeof(long) > sizeof(int), in fact the GNU libc documentation states that int and long have the same size on the majority of supported platforms.
That used to be the case for 32-bit platforms, but most 64-bit platforms in which GNU libc runs use the LP64 model, which has 32-bit int and 64-bit long. That documentation seems to be a bit outdated.
(One notable 64-bit platform which uses 32-bit for both int and long is Microsoft Windows, but that's not one of the target platforms for GNU libc.)
I don't know why this answer was downvoted. It adds valuable information to this discussion. Yes, I know that someone already pointed out that sizeof(int) is not guaranteed on all platforms to be smaller than sizeof(long). Meh. Just change the type to long long, and it works well.
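For reference, a minimal sketch of that long long variant (the function name is mine): C99 guarantees long long is at least 64 bits, so the sum of two ints can never overflow, and the halved sum always fits back in an int, which means no error path is needed at all:

```c
#include <limits.h>

int average_ll(int x, int y) {
    /* long long is at least 64 bits (C99), so this addition
       cannot overflow for any pair of int inputs. */
    long long sum = (long long)x + y;
    /* sum/2 always lies in [INT_MIN, INT_MAX], so the cast is safe. */
    return (int)(sum / 2);
}
```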
Copypasting a comment into an LLM, and then copypasting its response back is not a useful contribution to a discussion, especially without even checking to be sure it got the answer right. If I wanted to know what an LLM had to say, I can go ask it myself; I'm on HN because I want to know what people have to say.
average(INT_MAX,INT_MAX) should return INT_MAX, but it will get that wrong and return -1.
average(0,-2) should not return a special error-code value, but this code will do just that, making -1 an ambiguous output value.
Even its comment is wrong. We can see from the signature of the function that there can be no value that indicates an error, as every possible value of int may be a legitimate output value.
It's possible to implement this function in a portable and standard way though, along the lines of [0].
Too late for me to edit: as josefx pointed out, it also fails to properly address the undefined behavior. The sums INT_MAX + INT_MAX and INT_MIN + INT_MIN may still overflow despite being done using the long type.
That won't occur on an 'LP64' platform [0], but we should aim for proper portability and conformance to the C language standard.