
Compilers have become more powerful (opening up new ways to exploit undefined behavior) and the primary C compilers are free software with corporate sponsors, not programmer customers (or else perhaps Andrew Pinski would not have been so blithe about ignoring his customer Felix-gcc in the GCC bug report cited above).

This is the real problem. We have reached a situation where a small number of compilers dominate the space yet do not charge their users, and so do not treat them as customers. The C standard is a product of an era where you would pay for your tools and so would demand a refund from any compiler vendor that would treat undefined behavior in an absurd manner.



> The C standard is a product of an era where you would pay for your tools and so would demand a refund from any compiler vendor that would treat undefined behavior in an absurd manner.

Since current compilers aren’t out to do anything malicious with UB, but instead simply treat it as “assume this can’t happen and proceed accordingly”, it’s not clear at all to me what you think paid compilers would do here instead: refuse to compile vast swaths of code that currently compiles? Or compile it very pessimistically, forgoing any optimization opportunities by instead assuming it can happen and inserting a bunch of runtime checks in order to catch it then?
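
To be concrete about what "assume this can't happen and proceed accordingly" means in practice (a minimal sketch of my own; the function name is made up): because signed overflow is UB, a compiler is free to fold this kind of check to a constant and delete it.

    /* With signed overflow undefined, the compiler may assume x + 1 never
       wraps, so "x + 1 < x" is treated as always false and typically
       becomes "return 0;" at higher optimization levels. */
    int will_overflow(int x) {
        return x + 1 < x;   /* UB if x == INT_MAX */
    }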

In either case, I doubt there’s any real market for “pay money for this compiler and it either won’t build your code, or your code will run more slowly”. I’m just old enough to remember the paid C compiler market, and the thing driving everybody to pay for the latest and greatest upgrade was “how much better is it at optimization than before?”


I see this argument a lot, but it's silly. I don't have to care whether gcc changed the settled semantics without notice or documentation out of a sincere belief that they were helping, out of a desire to beat a benchmark that doesn't matter to me, or out of habit. It's not the intent of the compiler authors that matters, but their disregard for the needs of application programmers.


What are they to do, exactly?

Optimization is a very 'generic' process and application programmers want optimization. The only sensible thing to do is to assume UB cannot occur and optimize accordingly.

What else is there?

I can already predict that whatever you suggest will very shortly end up in Halting Problem territory or will mean no optimization at all. There are a lot of UBs that could only be defined by inserting run-time checks. That wouldn't inhibit optimization per se, but it would ultimately mean slower execution.
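
To make that concrete (a hypothetical hand-written sketch, not something current compilers emit by default): defining signed overflow as a trap would turn every checked addition into a compare-and-branch.

    #include <limits.h>
    #include <stdlib.h>

    /* Sketch: one way to "define" signed-overflow UB with a run-time check.
       Each checked addition now pays for a comparison and a branch, which
       is exactly the slowdown described above. */
    int checked_add(int a, int b) {
        if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
            abort();        /* defined behavior: trap instead of UB */
        return a + b;
    }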


C programmers prefer control to "optimization". And if you assume UB cannot occur, you should not generate code that makes it happen. Radical exploitation of UB is not required for optimization: in fact it appears to mostly do nothing positive. There is not a single paper or study showing significantly better performance for substantial C code that depends on assuming UB can't happen - just a bunch of hand waving.


I don't mean to be snide, but there's no paper on it because it's pretty much something you can learn in compiler 101. Without being able to assume that UB doesn't happen, useful optimizations become impossible very quickly.
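
The textbook case (my sketch, not from any paper): the loop below can only be given a known trip count, and therefore unrolled or vectorized, if the compiler may assume the signed counter never overflows.

    /* Because signed overflow is UB, the compiler may assume "i <= n"
       eventually becomes false, so the loop runs exactly n + 1 times.
       If overflow were defined to wrap, n == INT_MAX would make this an
       infinite loop and the trip-count analysis would not hold. */
    void scale(float *a, int n, float k) {
        for (int i = 0; i <= n; i++)
            a[i] *= k;
    }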


You are being snide and inaccurate. Studies show 80% or more of GCC optimization improvements come from the core simple methods. Trying to compensate for the lack of data to support your argument by claiming (falsely) it's taught in an elementary course is weak.


Please link these studies, or describe your "core simple methods", because it is likely that they rely on programs not exhibiting undefined behavior. Even the simplest optimizations, like eliminating unused variables or inlining functions, fall apart if code is allowed to reach into the stack to detect them, as in the sketch below. These were taught within the first couple of weeks of my compiler optimizations class, and I certainly hope they were part of yours, because I can't imagine what you could have covered without starting there.
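
A minimal sketch of that point (my example; the names are made up): deleting the dead local below is only legal because observing it through an out-of-bounds access would be UB.

    int caller(void) {
        int secret = 42;    /* never read */
        int buf[1] = {0};
        /* Eliminating 'secret' (or never giving it a stack slot at all) is
           valid only because an out-of-bounds read such as buf[1] is UB:
           no conforming program can observe whether 'secret' exists. If
           reaching into the frame were defined, even this trivial
           optimization would be unsound. */
        return buf[0];
    }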


Wouldn't the largest customers (by far) still be the companies funding development of the compilers already?


Are you arguing that e.g. Turbo C and friends from the 80s were higher quality than modern C compilers?


I believe he’s arguing that they were more sane/pragmatic compilers: they would have been inherently less comfortable exploiting UB to do anything other than what people expected or were used to, because there was a more real possibility of retribution (GCC can get away with making demons fly out of your nose and merely lose market share [which it doesn’t directly depend on anyway], whereas a commercial compiler would go out of business).


> I believe he’s arguing that they were more sane/pragmatic compilers

I can’t imagine they have any actual experience with Borland or Symantec’s C or C++ compilers, then. Those compilers had notoriously shaky standards compliance and their own loopy implementations of certain features, along with legions of bugs; it’s not hard to find older C++ libs with long conditional-compilation workarounds for Borland’s brain damage. Microsoft’s C++ compiler was for years out of step with the standard in multiple ways.

Part of the reason GCC ate these compilers’ markets for lunch was its much more rigorous and reliable standards adherence, not just its lack of cost.

This reads like nostalgia for an age that never was.



