The first example (the sum in C) is certainly optimized away. I'd be careful about drawing too many conclusions from these examples, because the code is dodgy (it never uses the computed sums, etc.) and the optimization settings are probably not what the results reflect (reasonable compiler flags were apparently added only after the results were generated).
You could think of them as thought experiments, e.g. "how many ADDs can we do on a single thread on an average PC?", rather than "what would the runtime of this C program be on an average PC?". Since the results were generated without optimizations, at least for the C programs, there is not much point in talking about runtime.
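To make the first point concrete, here is a minimal sketch (the function names and loop bound are mine, not from the benchmark) of why an unused sum disappears at -O2 while a used one does not:

    #include <stdio.h>

    /* Sums 0..N-1 but never uses the result. At -O2, GCC and Clang
     * typically delete the whole loop: it has no observable effect. */
    static void sum_unused(void)
    {
        long sum = 0;
        for (long i = 0; i < 100000000L; i++)
            sum += i;
    }

    /* Returning the result forces the work to stay, although a clever
     * compiler may still replace the loop with the closed form
     * n*(n-1)/2 rather than actually iterating. */
    static long sum_used(void)
    {
        long sum = 0;
        for (long i = 0; i < 100000000L; i++)
            sum += i;
        return sum;
    }

    int main(void)
    {
        sum_unused();
        printf("%ld\n", sum_used());
        return 0;
    }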
Compilers often don't optimise loops away because they assume the user put them there for a reason -- after all, such loops are so obviously removable that the user could have removed them herself.
One use for such a loop is a delay -- not so common nowadays, but it used to be a mainstay of DOS-based games and the like.
I bet that if GCC started aggressively optimising out empty loops, it would interact with some subtlety of concurrency to break a spinlock or three in various kernels.
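For what it's worth, a sketch of that delay-loop pattern (names are mine, not from any particular game or kernel) and of the usual volatile escape hatch:

    /* A classic DOS-era delay loop. With optimizations on, a modern
     * compiler is allowed to delete it outright, since the counter is
     * never observed. */
    void delay_naive(unsigned long ticks)
    {
        for (unsigned long i = 0; i < ticks; i++)
            ;                     /* no observable effect -> removable */
    }

    /* Marking the counter volatile makes every access an observable
     * side effect, so the loop must actually execute. */
    void delay_volatile(unsigned long ticks)
    {
        for (volatile unsigned long i = 0; i < ticks; i++)
            ;
    }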
Compilers can only optimize a loop away if they know it has no side effects. Compilers like GCC therefore keep a list of standard functions that they know are pure.
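GCC and Clang also let you extend that knowledge yourself via the pure attribute. A small sketch (the checksum function is hypothetical):

    /* Declaring a function 'pure' promises it has no side effects
     * beyond its return value (it may read, but not write, memory),
     * so the compiler may delete calls whose results are unused and
     * merge repeated calls with the same arguments. */
    __attribute__((pure))
    static long checksum(const char *buf, long len)
    {
        long sum = 0;
        for (long i = 0; i < len; i++)
            sum += buf[i];
        return sum;
    }

    long demo(const char *buf, long len)
    {
        checksum(buf, len);          /* result unused: may be removed */
        return checksum(buf, len)    /* these two calls may be folded */
             + checksum(buf, len);   /* into a single one             */
    }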
This can trip you up sometimes. For example, if you use memset to zero out sensitive data when you're done with it, the compiler can reason that "the result of memset is never used, and the reference to that memory is lost afterwards, so there's no need to actually make the call", and you're left with the secrets still in memory.
But again, you can tell the compiler that you know what you're doing and that it should not perform its normal optimization. I'm not well-versed in compiler design, but I think that's the trade-off you have to make: trust the user's design choices, or don't.
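A hedged sketch of that memset trap and one widely used workaround, the volatile-function-pointer trick (function names here are illustrative; platform-specific alternatives include explicit_bzero, SecureZeroMemory, and C11's optional memset_s):

    #include <string.h>

    /* The trap: 'password' is dead after this memset, so the call may
     * be optimized away and the secret stays in memory. */
    void handle_login(void)
    {
        char password[64];
        /* ... read and use the password ... */
        memset(password, 0, sizeof password);      /* may be elided */
    }

    /* Calling memset through a volatile function pointer means the
     * compiler cannot prove the call does nothing, so it stays. */
    static void *(*volatile memset_ptr)(void *, int, size_t) = memset;

    void handle_login_scrubbed(void)
    {
        char password[64];
        /* ... read and use the password ... */
        memset_ptr(password, 0, sizeof password);  /* not removable */
    }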