
In ancient Rome, slave labor was used.


I don't think that's a fair representation of the ancient Roman combined social-welfare and grain-price-stabilization system, or of the role slaves had in Roman society.


Exact same binary can be hacked in exact same way on all platforms.


That would require breaking each of the separate build processes, which is very unlikely.

This doesn't counter subverted source code; that's not what reproducible builds are for.


If I understand correctly, that would require releasing the publisher's private key, though, correct?


Only if you try to reproduce the signature. Usually the signature is stored separately; that way, the same signature also applies to the reproduced work.
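
For example, with a detached GPG signature (hypothetical file names), the same signature file verifies any bit-identical rebuild of the artifact:

  gpg --verify package.tar.gz.sig rebuilt-package.tar.gz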

What?


Try the HTML version:

  man --html=firefox man


Yep. Just look at countries with unbounded capitalism, like the Russian Federation after the fall of the Soviet Union. It was a mess.

Developed countries install thousands of regulations (100k-2M regulatory acts per developed country) on top of wild capitalism to tame it.

However, even heavily regulated liberal capitalism is better than other systems.


"We" - humanity.

"We" actually experienced all social systems "we" invented so far, which are allowed by "our" productivity.


Humanity did fine with systems other than capitalism. Hunter-gatherer societies were well adapted to their environments, Tibetan monks dealt with human relations differently, etc.

It would be crazy to assume that the whole of humanity longs for capitalism whatever their situation or belief system, just as _we_ don't assume that any current form of capitalism is especially superior to alternative forms that could better suit our situations.


Physics is inherently simple. Accurate mathematical modeling of physics is hard.

A layman can understand (real) physics easily, but cannot accurately predict anything except some simple things. Each additional digit of accuracy requires additional complexity in the models, which laymen neither need nor understand.

Unfortunately, real physical physics, with its inaccurate physical models, has been completely replaced by the much more accurate mathematical models of the «shut up and calculate» guys, which explicitly ignore the real physical processes behind the scenes in order to calculate faster and more accurately.

It's similar to how we use AI: we ask questions, the AI system performs abstract calculations, and then we have an answer.

To advance in physics, we need to create inaccurate physical models of the physical world. For example, hydrodynamic quantum analogs[1] are highly inaccurate, but they allow us to develop intuition about the quantum world.

[1]: https://en.wikipedia.org/wiki/Hydrodynamic_quantum_analogs



Gravitation is slightly faster than c in vacuum.


In GR, the speed of gravitational waves is _exactly equal_ to c.


In reality, the speed of light is slightly lower than the speed of gravitation, because gravitation slows down the speed of light.


We were presumably talking about an ideal massless space [Minkowski] in which the speed of light in a vacuum is considered -- that is what c is defined as.


The AI rewrote it to avoid undefined behavior:

  int average(int x, int y) {
    long sum = (long)x + y;
    if(sum > INT_MAX || sum < INT_MIN)
        return -1; // or any value that indicates an error/overflow
  
    return (int)(sum / 2);
  }


> long sum = (long)x + y;

There is no guarantee that sizeof(long) > sizeof(int), in fact the GNU libc documentation states that int and long have the same size on the majority of supported platforms.

https://www.gnu.org/software/libc/manual/html_node/Range-of-...

> return -1; // or any value that indicates an error/overflow

-1 is a perfectly valid average for various inputs. You could return the larger type to encode an error value that is not a valid output, or just output the error and the average in two distinct variables.
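
For illustration, a minimal sketch of the "two distinct variables" idea (it keeps the widening approach, so it still assumes long long is wider than int -- true on mainstream platforms but not guaranteed by the standard -- but the error indicator no longer collides with a legal average):

  #include <limits.h>
  #include <stdbool.h>

  /* Sketch only: report overflow out-of-band instead of overloading -1. */
  bool try_average(int x, int y, int *out)
  {
      long long sum = (long long)x + y;  /* assumes long long is wider than int */
      if (sum > INT_MAX || sum < INT_MIN)
          return false;                  /* intermediate sum does not fit in int */
      *out = (int)(sum / 2);
      return true;
  }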

AI and C seem like a match made in hell.


> There is no guarantee that sizeof(long) > sizeof(int), in fact the GNU libc documentation states that int and long have the same size on the majority of supported platforms.

That used to be the case for 32-bit platforms, but most 64-bit platforms in which GNU libc runs use the LP64 model, which has 32-bit int and 64-bit long. That documentation seems to be a bit outdated.

(One notable 64-bit platform which uses 32-bit for both int and long is Microsoft Windows, but that's not one of the target platforms for GNU libc.)


I’m not convinced that solution is much better. It can be improved to x/2 + y/2 (which still gives the wrong answer if both inputs are odd).
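
For example, with truncating integer division, average(3, 5) computed as x/2 + y/2 gives 3/2 + 5/2 = 1 + 2 = 3 instead of 4. For non-negative inputs the lost halves can be added back as x/2 + y/2 + (x%2 + y%2)/2; mixed-sign inputs need extra care, since in C the result of % takes the sign of the dividend.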


We're about to see a huge uptick in bugs worldwide, aren't we?


Please stop posting AI-generated content to HN. It’s clear the majority of users hate it, given that it gets swiftly downvoted every time it’s posted.


I don't know why this answer was downvoted. It adds valuable information to this discussion. Yes, I know that someone already pointed out that sizeof(int) is not guaranteed on all platforms to be smaller than sizeof(long). Meh. Just change the type to long long, and it works well.


Copypasting a comment into an LLM, and then copypasting its response back is not a useful contribution to a discussion, especially without even checking to be sure it got the answer right. If I wanted to know what an LLM had to say, I can go ask it myself; I'm on HN because I want to know what people have to say.


It literally returns a valid output value as an error.


An error value is valid output in both cases.


The code is unarguably wrong.

average(INT_MAX, INT_MAX) should return INT_MAX, but it will get that wrong and return -1.

average(0,-2) should not return a special error-code value, but this code will do just that, making -1 an ambiguous output value.

Even its comment is wrong. We can see from the signature of the function that there can be no value that indicates an error, as every possible value of int may be a legitimate output value.

It's possible to implement this function in a portable and standard way though, along the lines of [0].

[0] https://stackoverflow.com/a/61711253/ (Disclosure: this is my code.)
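
Not reproducing that answer here, but as a rough sketch of one overflow-free approach (similar in spirit to C++20's std::midpoint): never form x + y at all, and instead step from one operand by half the distance to the other. Unsigned subtraction wraps around, so the distance is always representable:

  /* Sketch only; the result is rounded toward the first argument. */
  int average(int x, int y)
  {
      unsigned int ux = (unsigned int)x;  /* int -> unsigned is well-defined */
      unsigned int uy = (unsigned int)y;

      if (x <= y)
          return x + (int)((uy - ux) / 2);  /* step up from x */
      else
          return x - (int)((ux - uy) / 2);  /* step down from x */
  }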


Too late for me to edit: as josefx pointed out, it also fails to properly address the undefined behavior. The sums INT_MAX + INT_MAX and INT_MIN + INT_MIN may still overflow despite being done using the long type.

That won't occur on an 'LP64' platform, [0] but we should aim for proper portability and conformance to the C language standard.

[0] https://en.wikipedia.org/wiki/64-bit_computing#64-bit_data_m...


> Meh. Just change the type to long long, and it works well.

C libraries tend to support a lot of exotic platforms. zlib for example supports Unicos, where int, long int and long long int are all 64 bits large.


I always downvote all AI-generated content regardless of whether it’s right or wrong, because I would like to discourage people from posting it.


Waves are quantized (one wave, two waves, ...), so energy transfers by waves are quantized too.


What you are describing is periodicity. That’s different from quantization.

